# AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Songheng Zhang, Haotian Li, Huamin Qu, Yong Wang

arXiv:2310.11742v1 | Published 2023-10-18 | http://arxiv.org/abs/2310.11742v1
###### Abstract
Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand the reasons for recommending specific visualizations. To fill the research gap, we propose _AdaVis_, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, _AdaVis_ incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. Our extensive evaluations through quantitative metric evaluations, case studies, and user interviews demonstrate the effectiveness of _AdaVis_.
Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph.
## 1 Introduction
Data visualization has become increasingly popular in data analytics and insight communication. It is common to create visualizations for tabular datasets in various domains, including investment, sales, engineering, education, and scientific research [1, 2, 3]. However, creating compelling visualizations requires expertise in data visualization and relies on manual specification through either programming or mouse interactions (e.g., clicking, dragging, and dropping). Visualization tools can generally be categorized into two types: visualization packages (e.g., ggplot2 [4], Vega [5], D3 [6], and Prefuse [7]) and visualization software (e.g., Tableau 1 and Microsoft Power BI 2). The former requires users to program in different languages (e.g., Python, R, Java, and JavaScript), and the latter often asks users to manually drag and drop and specify the mapping between data and visual encodings. As a result, it is often complicated and time-consuming for common users without a background in data visualization to generate effective visualizations.
Footnote 1: https://www.tableau.com/
Footnote 2: https://powerbi.microsoft.com/en-us/desktop/
[MISSING_PAGE_POST]
In this paper, we propose _AdaVis_, an **Ada**ptive and explainable **Vis**ualization recommendation approach for tabular data through logical reasoning over knowledge graphs. Inspired by KG4Vis [20], our approach also leverages a knowledge graph to model the relations between different entities involved in visualization recommendation (Figure 2 **A**), e.g., _data features_, _dataset columns_, _datasets_, and _visualization design choices_. The relations in the knowledge graph define the correspondence between two different types of entities. For example, "(a dataset) is visualized by (a visualization choice)". Such relations intrinsically specify the inference rules in visualization designs. However, instead of employing the widely-used vector embeddings [20, 22] to indicate the inference results, we adopt box embeddings [23] that essentially allow the visualization recommendation results to cover multiple appropriate visualization choices for a given dataset (Figure 2 **C**). The incorporation of box embeddings leads to better _adaptability_ for visualization recommendation, enabling _AdaVis_ to adaptively recommend an appropriate number of visualization choices based on the characteristics of a dataset. Also, we have incorporated an attention mechanism into _AdaVis_, which assesses the importance of different features for visualization recommendations [24]. This mechanism works over the knowledge graph, ensuring that our recommendations (Figure 2 **D**) are informed by relevant data features. Moreover, _AdaVis_ offers fine-grained explanations for the visualization recommendations for a specific dataset (_local interpretation_) by tracing the importance of data features along inference paths, thereby improving the interpretability of our recommendations. The explanations (Figure 2 **E**) are natural language (NL) sentences automatically generated from rule-based templates.
We extensively evaluated the effectiveness and usability of _AdaVis_ by using the dataset-visualization pairs collected by Hu et al. [10]. We first quantitatively compared _AdaVis_ with other state-of-the-art baseline approaches in terms of visualization recommendation accuracy. Then, we showed a gallery of visualization recommendation results and the corresponding natural language explanations to demonstrate the adaptability and explainability of _AdaVis_. Further, we conducted user interviews to invite both data visualization experts and common users to verify whether the recommended visualizations are meaningful and align well with their domain knowledge of visualization design requirements and whether explanations regarding these recommendations are correct.
The paper's main contributions can be summarized as follows:
* We propose _AdaVis_, an adaptive and explainable visualization recommendation approach for tabular data via knowledge graphs. It adaptively recommends multiple appropriate visualizations for a specific dataset, better modeling the real visualization design process. Also, it can provide fine-grained explanations for different datasets.
* We extensively assess _AdaVis_ through quantitative metric comparisons with other baseline approaches, qualitative case studies, and user interviews. The results demonstrate the effectiveness and usability of _AdaVis_ in providing adaptive and explainable visualization recommendations.
## 2 Background: Box Embedding in Knowledge Graphs
**Knowledge Graphs and Relations.** A knowledge graph models human knowledge as a directed graph with entities and relations. Each entity is a graph node, and each relation is a graph edge. The relationship between any two entities is delineated by a triple \((h,r,t)\), where \(h\) represents a head entity, \(t\) represents a tail entity, and \(r\) represents a relation. Most relationships in a knowledge graph are 1-to-1 mappings, with the relation \(r\) between the head entity \(h\) and the tail entity \(t\) being unique. For example, _"US has a citizen named Bob"_ is an example of the 1-to-1 relation, as shown in Figure 3(c), and the corresponding triple is _(US, Has Citizen(s), Bob)_. Besides, there are 1-to-N relations in a knowledge graph.
Figure 2: The workflow of _AdaVis_ recommendation consists of Feature Extraction, Model Structure, Inference, Explanation Generation and Adaptive Recommendation.
For instance, _"Amazon has employees Alice and Bob"_ represents a 1-to-N relation. Figure 3(a, c) delineates the 1-to-N relation, where there are more than one tail entities (i.e., _Alice_ and _Bob_) for the same head entity and relation. The triple is _(Amazon, Has Employee(s), {Alice, Bob}_)._
**Box Embedding.** To facilitate computational manipulations on knowledge graphs, it is often necessary to apply Knowledge Graph Embedding (KGE) to represent entities and relations in KGs as continuous embedding vectors [25]. For the 1-to-1 relations in knowledge graphs, many KGE methods exist to model them, e.g., TransE [26] and PTransE [27]. However, they cannot model the 1-to-N relations that entail a set of tail entities, as shown in Figure 3(a). Also, these methods cannot be used to define the intersection of multiple 1-to-N triples. The intersection of 1-to-N triples yields a set of common tail entities that are relevant to all these 1-to-N triples. For example, Figure 3(c) shows the intersection of two triples, where _"Bob is an employee of Amazon and also a US citizen"_. To handle these challenges in a scalable manner, box embeddings were recently introduced [28]. Rather than representing a point in vector space, a box embedding represents an area and can handle 1-to-N relations and the intersection of 1-to-N relations using two operations: _projection_ and _intersection_. The projection operation maps an entity embedding (i.e., a point) to a box area (i.e., an axis-aligned hyper-rectangle) in the vector space (Figure 3(b)). Tail entities should be enclosed within the projected box and satisfy the following condition:
\[\textit{Box}\equiv\{\textbf{v}\in\mathbb{R}^{d}:\textit{Cen}(\textit{Box})- \textit{Off}(\textit{Box})\preceq\textbf{v}\preceq\textit{Cen}(\textit{Box})+ \textit{Off}(\textit{Box})\}, \tag{1}\]
where \(\preceq\) denotes element-wise inequality, \(\textit{Cen}(\textit{Box})\in\mathbb{R}^{d}\) denotes the center point of the box, and \(\textit{Off}(\textit{Box})\in\mathbb{R}^{d}_{\geq 0}\) denotes the non-negative offset of the box. The offset indicates the size of the projected box, as shown in Figure 3(b).
The intersection operation of box embeddings models the intersection of multiple 1-to-N relations by intersecting several projected boxes. For instance, Figure 3(c) depicts the intersection of two triples, where \(h_{1}\) and \(h_{2}\) have different relations to \(t_{2}\). The intersection of the two box embeddings projected from \(\textbf{h_{1}}\) and \(\textbf{h_{2}}\), shown by the small shadowed box in Figure 3(d), identifies the entity \(t_{2}\) to which both \(h_{1}\) and \(h_{2}\) have relations. \(\textbf{t_{2}}\) lies within the intersected box, indicating that \(t_{2}\) is the tail entity of both triples.
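To make the two operations concrete, below is a minimal NumPy sketch of box containment (Equation 1), projection, and intersection. The function names and the purely geometric intersection are our own illustrative simplifications; in query2box the intersection center and offset are learned rather than computed geometrically.

```python
# A minimal sketch of box-embedding operations, assuming 2-D embeddings.
import numpy as np

def contains(center, offset, v):
    """Equation 1: v lies in the box iff cen - off <= v <= cen + off."""
    return np.all(center - offset <= v) and np.all(v <= center + offset)

def project(head, rel_center, rel_offset):
    """Project a head-entity point to a box: translate the center by the
    relation embedding; the box takes the relation's non-negative offset."""
    return head + rel_center, np.abs(rel_offset)

def intersect(boxes):
    """Intersect boxes as axis-aligned hyper-rectangles (a geometric
    stand-in; query2box learns this operation)."""
    mins = np.max([c - o for c, o in boxes], axis=0)
    maxs = np.min([c + o for c, o in boxes], axis=0)
    return (mins + maxs) / 2, np.maximum((maxs - mins) / 2, 0.0)

# Two heads projected by different relations; their intersection should
# contain only tail entities related to both (e.g., Bob in Figure 3).
box1 = project(np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([1.0, 1.0]))
box2 = project(np.array([2.0, 0.0]), np.array([-0.5, 1.0]), np.array([1.0, 1.0]))
inter = intersect([box1, box2])
print(contains(*inter, np.array([1.2, 1.0])))  # True: a tail inside both boxes
```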
For visualization recommendation, it is common for a dataset to be represented by multiple visualization types, which is a 1-to-N relation. To model these relations accurately, we utilize box embeddings in our approach for adaptive visualization recommendations.
## 3 Related Work
The related work of this paper can be categorized into three groups: visualization recommendation, knowledge graph embedding, and knowledge graph-based explainable recommendation.
### _Visualization Recommendation_
Visualization recommendation aims to automatically suggest or generate appropriate visualizations for a given dataset and generally includes two types of methods [16]: rule-based methods and machine learning (ML)-based approaches.
Rule-based methods leverage visualization rules specified by visualization experts to recommend appropriate visualizations [14, 15, 29]. For example, Mackinlay _et al._ proposed _Show Me_, which can automatically suggest visualizations using predefined visualization guidelines [14]. Using a predefined set of rules, Voyager [29] and Voyager 2 [30] enumerate all potential data columns in a dataset to obtain candidate visualizations and further rank these visualizations to recommend appropriate choices. Additionally, Foresight [31] detects pre-defined statistical features in the dataset, makes recommendations according to these features, and presents them visually through appropriate chart types. Though rule-based methods have been extensively studied, developing a comprehensive list of rules for visualization recommendations is challenging, and the maintenance of such empirical rules is often labour-intensive [20].
ML-based methods learn the mappings between input datasets and visualizations [16] from training examples. For instance, VizDeck [32] trains a linear model for this mapping. DeepEye [33] uses a _learning-to-rank_ [34] model to rank visualization recommendations, then recommends the top-scoring one. Draco [35] employs the statistical model RankSVM to rank possible visualizations. More recently, deep neural networks have also been widely used for visualization recommendations, such as VizML [10], Data2Vis [8] and TableCharts [1]. While these ML-based methods can reduce the manual effort of compiling rules for visualization recommendations, they often operate like a black box, making it
Figure 3: Illustration of the two box embedding operations, projection and intersection, in a knowledge graph (a, c) and in vector space (b, d), respectively. \(h_{1}\) and \(h_{2}\) are the head entities representing a company (Amazon) and a country (US), respectively. Similarly, \(t_{1}\) and \(t_{2}\) are tail entities representing two individuals in a KG. The red and green arrows are different relations between head entities and tail entities. Additionally, **h** and **t** are entities' vector forms in the vector space. **Cen** denotes the center of the box, and **Offset** represents the box range, thereby determining the box's size. **Max** and **Min** are the endpoints of the box's diagonal. (a) displays a 1-to-N relation triple in which _Amazon_ (\(h_{1}\)) has employees named _Alice_ (\(t_{1}\)) and _Bob_ (\(t_{2}\)). (b) demonstrates the 1-to-N triple computation in the 2-D vector space by projection: the relation _(Has Employee(s))_ projects the head entity into a box that contains all its tail entities. (c) indicates the intersection of two 1-to-N triples where _Bob_ (\(t_{2}\)) is an _employee_ of _Amazon_ (\(h_{1}\)) and a _citizen_ of _US_ (\(h_{2}\)). (d) shows the box intersection in the 2-D vector space: the two boxes' intersection gives rise to a smaller box that contains the tail entity (\(\textbf{t_{2}}\)) relevant to both triples, while the irrelevant tail entity (\(\textbf{t_{1}}\)) lies outside the smaller box. The center of the smaller box is computed from the two projected boxes' centers.
difficult for general users to interpret them [16]. Li _et al._[20] proposed a knowledge graph-based recommendation approach. Their approach recommends suitable visualization choices in a data-driven and explainable manner, making it the most relevant study to our work. However, this approach fails to consider relationships between data columns in the dataset, which is crucial for determining visualization choices.
Unlike the above studies, _AdaVis_ takes into account the cross-column relationships of the input dataset and can provide adaptive visualization recommendations and explanations.
### _Knowledge Graph Embedding_
Knowledge Graph (KG) models the relations between different entities [36], and knowledge graph embedding (KGE) maps entities and relations into embedding vectors while preserving their semantic meanings; KGE mainly includes semantic matching models and translational distance models [25]. Semantic matching models evaluate the plausibility of a triple by matching the entities and relations with latent semantics in the vector space. For example, RESCAL [37] assigns a vector embedding to each entity in the knowledge graph, and each relation is interpreted as a matrix that models the semantic interaction between two entities. DistMult [38] restricts the relation matrices of RESCAL to diagonal matrices, thereby simplifying RESCAL's calculation.
Translational distance models use translation operations to represent the relations between any two entities, where the distance between the entity embedding after a translation and the other entity embedding indicates the plausibility of a triple. TransE [26] is one of the most representative translational distance methods. When using TransE, combining the embedding vector of a head entity with that of a relation creates a new embedding in the vector space that approximates the tail entity. The limitation of TransE is that it implicitly assumes only one-to-one relationships between entities and cannot deal with 1-to-N relationships [39]. Therefore, other methods have been proposed to improve the modeling of 1-to-N relationships in knowledge graphs [23, 39, 40, 41]. For example, query2box [23] introduces box embeddings, whereby a head entity embedding can be translated by a relation embedding into a box, rather than a point, in the vector space. When the embedding vector of a tail entity lies inside the projected box embedding of a head entity, the corresponding triple is considered valid, making it possible to represent 1-to-N relations.
Our approach is inspired by query2box [23] and incorporates box embedding in our knowledge graph to model the 1-to-N and intersection of 1-to-N relations in visualization recommendation. Also, we augment the original loss function of query2box to enhance the adaptability of recommended visualizations.
### _Knowledge Graph Based Explainable Recommendation_
Knowledge graphs have been integrated into recommendation systems to enhance their interpretability [36]. According to the survey by Li _et al._[42], the knowledge graph-based explainable recommendation methods can be grouped into two categories: internal route-based methods and external route-based methods. For the internal route-based methods, the recommendation algorithms are designed by explicitly considering the knowledge graphs, including their entities, relations, paths, and rules, to improve the recommendation performance and provide explanations. For instance, Wang _et al._[43] defined the relations between entities as sequential paths and further leveraged a Recurrent Neural Network (RNN) to model the sequential dependencies of entities within a knowledge graph. Also, Ma _et al._[44] directly derived recommendation rules from the knowledge graph and recommended items based on the extracted rules.
In contrast, external route-based recommendation methods are not built upon knowledge graphs. Instead, they only use external knowledge graphs to generate explanations for the recommendation results. For example, the medical knowledge graph has been used to discover possible explanations for previous medical treatments [45]. Also, Sarker _et al._[46] utilized an external knowledge graph to elucidate the behaviors of neural network classifications.
Our approach falls under internal route-based recommendation methods and integrates a knowledge graph into our recommendation framework. Tracing back paths in the knowledge graph can provide meaningful explanations for the recommended visualizations.
## 4 Our Method
We propose _AdaVis_, an adaptive and explainable knowledge-graph-based approach to recommend visualizations for tabular datasets. Given that the choice of standard visualization types (i.e., line chart, bar chart, box plot, and scatter plot) often depends on the two data columns displayed on the chart axes, we formulate visualization recommendation as logical reasoning over the KG to infer visualization types for two-column datasets [23, 47]. The source code for our approach is available.
### _Overview_
When determining appropriate visualizations for a dataset, users often need to consider the characteristics of two data columns of interest and their interrelationships. The design of _AdaVis_ is inspired by the logical reasoning process of humans when they select the right visualizations. Such a reasoning process is modeled by a knowledge graph consisting of entities (i.e., single-column features, cross-column features, data columns, datasets, and visualization choices). In this paper, we refer to an individual column of a dataset as a data column. A single-column feature is a quantified characteristic of a data column. Similarly, a cross-column feature is a quantified interrelationship between data columns.
Fig. 4: An example of transforming a dataset into a knowledge graph. **A** The dataset contains a pair of tabular data and a corresponding visualization. **B** With feature extraction, the individual data columns' characteristics, namely single-column features (SFs), and the interrelationships of two data columns, namely cross-column features (CFs), will be obtained, as well as the mapping between the dataset and visualization choices (VIS Choice). Visualization choices include the visualization type (e.g., line chart) and the axis (e.g., x-axis). **C** The single-column features, cross-column features, data columns, datasets, and visualization choices are represented as entities in the knowledge graph. Only part of the features and entities are shown.
_AdaVis_ comprises feature extraction, knowledge graph construction, box embedding learning, model inference, and explanation generation, which collectively facilitate visualization recommendation. Given a corpus of dataset-visualization pairs, _AdaVis_ can learn the implicit mapping between datasets and visualization choices (i.e., visualization types and an axis), which will be further used for visualization recommendations for new datasets. Specifically, as shown in Figure 4 **A** and **B**, _AdaVis_ extracts related features for columns in a dataset. Data column features, data columns, datasets, and visualization choices will be used to construct knowledge graphs (Figure 4 **B**). Further, we utilize the box embedding technique [23] to learn the embeddings of entities in the knowledge graph, which can well model their relations in visualization recommendations. We will use the learned embeddings of the knowledge graph's entities and relations to infer suitable visualization choices (i.e., visualization types or an axis) for an unseen dataset. Moreover, a template-based explanation module, built upon the knowledge graph, is integrated into _AdaVis_ to generate natural language explanations for the visualization recommendation.
### _Feature Extraction_
Visualization choices depend on the characteristics of the input dataset. To extract quantified characteristics (features) of datasets, we first surveyed prior studies of visualization recommendation [32, 10, 33] and visualization insight discovery [48, 49]. Based on the survey results, we categorized the dataset features into two types: _single-column features_ and _cross-column features_, as shown in Figure 4 **A** and **B**.
The _single-column features_ describe the properties of individual data columns, such as the column length and the mean and variance of a column's values. We extracted 80 distinct single-column features that collectively model the properties of individual data columns from various perspectives. Besides the 80 single-column features, we also extracted 40 well-designed cross-column features in _AdaVis_ by referring to prior studies [33, 48, 10]. The _cross-column features_ capture the interrelationships between two columns, e.g., that the two columns' data types are categorical and numerical. Such cross-column features are also crucial for deciding the visualization choices. For example, it is more appropriate to use a scatter plot than a line chart to visualize a two-column dataset whose two columns exhibit a significant correlation [50]. Furthermore, as shown in Figure 4 **B**, we obtain the visualization choices (i.e., visualization types and axes) of datasets. A complete list of all the features used in _AdaVis_ can be found in the appendix.
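For concreteness, the sketch below computes a few representative single-column and cross-column features with pandas. The specific features shown are a small, illustrative stand-in for the 80 + 40 features listed in the appendix, and the function names are our own.

```python
# A hedged sketch of the feature extraction step, assuming pandas input.
import pandas as pd

def single_column_features(col: pd.Series) -> dict:
    values = col.dropna()
    numeric = pd.api.types.is_numeric_dtype(values)
    return {
        "length": len(values),
        "is_numeric": numeric,
        "is_sorted": bool(values.is_monotonic_increasing
                          or values.is_monotonic_decreasing),
        "is_unique": values.is_unique,
        "mean": values.mean() if numeric else None,
        "variance": values.var() if numeric else None,
    }

def cross_column_features(a: pd.Series, b: pd.Series) -> dict:
    both_numeric = (pd.api.types.is_numeric_dtype(a)
                    and pd.api.types.is_numeric_dtype(b))
    return {
        "type_pair": (str(a.dtype), str(b.dtype)),
        # A strong correlation suggests a scatter plot over a line chart [50].
        "correlation": a.corr(b) if both_numeric else None,
        "shared_unique_ratio": len(set(a.unique()) & set(b.unique()))
                               / max(len(set(a.unique()) | set(b.unique())), 1),
    }

df = pd.DataFrame({"x": range(50), "y": [v * 0.8 for v in range(50)]})
print(single_column_features(df["x"]))
print(cross_column_features(df["x"], df["y"]))
```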
### _Knowledge Graph Construction_
A knowledge graph allows us to model the mapping between datasets and different visualization choices. With a well-designed knowledge graph, we can further recommend appropriate visualizations.
**Definition of Entities.** As shown in Figure 4 **B**, we define five classes of entities that are encoded with different colors: single-column features (\(\mathbb{E}_{SF}\), yellow nodes), data columns (\(\mathbb{E}_{COL}\), gray nodes), datasets (\(\mathbb{E}_{DS}\), brown nodes), cross-column features (\(\mathbb{E}_{CF}\), orange nodes) and visualization choices (\(\mathbb{E}_{VIS}\), green nodes).

As shown in Table I, \(\mathbb{E}_{SF}\) represents features extracted from individual data columns; \(\mathbb{E}_{CF}\) are cross-column features; \(\mathbb{E}_{COL}\) and \(\mathbb{E}_{DS}\) refer to data columns and datasets, respectively; \(\mathbb{E}_{VIS}\) refers to the choices available for visualizations, and consists of four popular charts (i.e., bar chart, line chart, scatter plot, and box plot) [51], as well as the two commonly used axes (i.e., the x-axis and y-axis).
Since single-column and cross-column features can be continuous values, we discretize them into different intervals to represent them as entities in a knowledge graph. Specifically, we utilize the widely-used MDLP approach [52] to transform continuous features into categorical features.
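The paper uses MDLP [52] for this discretization step; as a hedged stand-in, the sketch below uses simple equal-frequency binning to show how a continuous feature becomes a set of interval entities for the knowledge graph.

```python
# Illustrative discretization: each interval becomes one KG feature entity.
# MDLP (the paper's actual method) is supervised; qcut here is a stand-in.
import pandas as pd

def discretize(values: pd.Series, feature_name: str, n_bins: int = 3) -> pd.Series:
    bins = pd.qcut(values, q=n_bins, duplicates="drop")
    # e.g., "column_length in (10, 50]" becomes an entity name.
    return bins.apply(lambda iv: f"{feature_name} in ({iv.left:g}, {iv.right:g}]")

lengths = pd.Series([5, 8, 12, 30, 45, 50, 200, 400])
print(discretize(lengths, "column_length").unique())
```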
**Definition of Relations.** As illustrated in Table I, there are five classes of relations in our knowledge graph. (1) \(\mathbb{R}_{SF\to COL}\) denotes a class of relations that associate single-column features with single data columns, indicating that these features are present in a single data column. (2) \(\mathbb{R}_{COL\to DS}\) represents a class of relations that link a single data column to a dataset, showing that the dataset contains the data column. (3) Similar to \(\mathbb{R}_{SF\to COL}\), \(\mathbb{R}_{CF\to DS}\) is a class of relations indicating that cross-column features exist in datasets. For example, \(\mathbb{R}_{CF\to DS}\) means that _"(one cross-column feature) exists in columns of (a dataset)"_. (4) \(\mathbb{R}_{COL\to VIS_{Axis}}\) shows that single data columns are encoded on a specific axis. For example, \(\mathbb{R}_{COL\to VIS_{Axis}}\) means that _"(one data column) is encoded as (x-axis)"_. (5) Similarly, \(\mathbb{R}_{DS\to VIS_{Type}}\) means that a dataset is encoded as a visualization type.
**Definition of Triples.** After defining entities and relations, we generate triples based on existing dataset-visualization pairs. These triples are instances of the defined knowledge graph relations and can be categorized into two types, 1-to-N and intersections of 1-to-N, as illustrated in Table I. As shown in Figure 4 **C**, a 1-to-N triple is constructed by a relation (an arrow) and two types of entities (two nodes with different colors). It is a 1-to-N triple (N \(\geq\) 1) because one head entity may correspond to multiple tail entities via a relation. For example, in Figure 4 **C**, "(Has Outlier \(\rightarrow\) Column B)" denotes that a single-column feature (i.e., Has Outlier) can exist in many data columns (e.g., Column B); that is, in such a triple, the single-column feature (head) may correspond to many data columns (tails) via a relation (i.e., \(\mathbb{R}_{SF\to COL}\)). There are five types of 1-to-N triples since the knowledge graph contains five types of relations.
Fig. 5: The workflow by which _AdaVis_ infers appropriate visualization types. **A** We extract single-column and cross-column features from a dataset; each node represents a feature entity. **B** The procedure for generating data column and dataset box embeddings during inference. A hollow yellow/orange rectangle represents a box embedding from a single-column/cross-column feature. A gray rectangle denotes a data column obtained from an intersection of box embeddings of single-column features. The brown rectangle represents the dataset generated from an intersection of box embeddings that represent data columns and cross-column features. **C** Box embeddings of data columns and the dataset are used to infer the correct visualization choices (i.e., axis and visualization types).
Besides 1-to-N triples, intersections of 1-to-N triples are also generated from the knowledge graph. For example, in Figure 4, a combination of two triples, i.e., "(Has Outlier \(\rightarrow\) Column B) & (Is unique \(\rightarrow\) Column B)", is an instance of an intersection of 1-to-N triples. The example refers to the set of data columns with both features (i.e., Has Outlier & Is unique).
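A minimal sketch of how one dataset-visualization pair could be turned into triples of the above relation classes follows. The relation strings, helper signature, and feature names are illustrative assumptions, not the paper's actual data format.

```python
# Hedged sketch: building (head, relation, tail) triples from one pair.
def build_triples(dataset_id, column_features, cross_features, vis_type, axes):
    """column_features: {column_id: [discretized feature entities]};
    cross_features: [discretized cross-column feature entities];
    axes: {column_id: 'x-axis' | 'y-axis'}."""
    triples = []
    for col, feats in column_features.items():
        for f in feats:
            triples.append((f, "R_SF->COL", col))       # feature exists in column
        triples.append((col, "R_COL->DS", dataset_id))  # column belongs to dataset
        triples.append((col, "R_COL->VIS_axis", axes[col]))
    for cf in cross_features:
        triples.append((cf, "R_CF->DS", dataset_id))    # cross-column feature in dataset
    triples.append((dataset_id, "R_DS->VIS_type", vis_type))
    return triples

triples = build_triples(
    "ds_001",
    {"col_A": ["is_sorted", "column_length in (10, 50]"],
     "col_B": ["has_outlier"]},
    ["both_numeric"],
    "line",
    {"col_A": "x-axis", "col_B": "y-axis"},
)
print(len(triples), triples[0])
```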
### _Box Embedding Learning_
Box embedding learning guides _AdaVis_ to learn possible visualization choices for a dataset. As introduced in Section 2, a head entity and a relation in a triple are projected onto a box embedding. For example, suppose a single-column feature exists in some data columns; this condition can be regarded as a triple like _"(A single-column feature, Exists in, Some data columns)"_. As illustrated in Figures 5 and 6, the single-column feature (a yellow node) is projected into a box embedding (a hollow yellow rectangle), and its tail entities (i.e., the data columns that have this single-column feature) are supposed to lie within the box embedding. Besides transforming triples into box embeddings, multiple boxes from multiple triples can be intersected to create a smaller box embedding. In Figures 5 and 6, for instance, the box embeddings of columns (gray rectangles) and a cross-column feature (a hollow orange rectangle) are intersected to obtain a smaller box embedding that represents datasets. The dataset entities with those columns and the cross-column feature will lie inside the dataset box embedding. The distance between a box and a tail entity is defined as:
\[\text{dist}_{\text{box}}(t;b)=\text{dist}_{\text{outside}}(t;b)+\alpha\cdot \text{dist}_{\text{inside}}(t;b)+\beta\cdot b_{\text{size}}, \tag{2}\]
where \(b\) denotes the box embedding derived from the head and relation in a triple, and \(t\) represents the embedding of the tail entity. \(\text{dist}_{\text{box}}(t;b)\) serves as a scoring function that measures the distance in vector space between the tail entity's embedding and the box embedding. The distance function can be decomposed into three sub-functions: \(\text{dist}_{\text{outside}}\) identifies whether the tail entity \(t\) is within the box \(b\). If \(t\) is inside the box, the score of \(\text{dist}_{\text{outside}}\) is 0; otherwise, \(\text{dist}_{\text{outside}}\) returns the distance between the tail entity \(t\) and the closest side of the box \(b\). As for \(\text{dist}_{\text{inside}}\), it calculates the distance between the center of the box \(b\) and \(t\) (or, if \(t\) is outside the box, the distance between the center and the box side closest to \(t\)). The hyper-parameter \(\alpha\in[0,1]\) controls the weight of \(\text{dist}_{\text{inside}}\): if \(\alpha=0\), \(\text{dist}_{\text{inside}}\) is nullified, causing the scoring function to consider only the distance of tails from the box \(b\). Furthermore, \(b_{\text{size}}\) is designed to control the box size in case the box grows so large that it includes irrelevant tail entities; \(\beta\in[0,1]\) is a hyper-parameter that controls the weight of the box size. In summary, \(\text{dist}_{\text{box}}(t;b)\) measures how far a tail entity is from the box embedding. \(\text{dist}_{\text{outside}}\), \(\text{dist}_{\text{inside}}\) and \(b_{\text{size}}\) are defined as follows:
\[\text{dist}_{\text{outside}}(t;b)=||\text{Max}(t-b_{\text{max}},0)+\text{ Max}(b_{\text{min}}-t,0)||_{1} \tag{3}\]
\[\text{dist}_{\text{inside}}(t;b)=||\text{Cen}(b)-\text{Min}(b_{\text{max}}, \text{Max}(b_{\text{min}},t))||_{1}, \tag{4}\]
\[b_{\text{size}}(b)=||b_{\text{max}}-b_{\text{min}}||_{2}, \tag{5}\]
where \(b_{\text{max}}\) and \(b_{\text{min}}\) represent the endpoints of the box \(b\), as delineated in Figure 3(b), and \(\text{Cen}(b)\) denotes the box's center point. \(b_{\text{size}}\) represents the box size.
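A direct NumPy transcription of Equations 2-5 may help make the scoring function concrete; the hyper-parameter values chosen here are illustrative.

```python
# Equations 2-5 as code; a box is given by its two endpoints b_min, b_max.
import numpy as np

def dist_outside(t, b_min, b_max):
    # Equation 3: L1 distance from t to the box (0 if t is inside).
    return np.linalg.norm(np.maximum(t - b_max, 0) + np.maximum(b_min - t, 0), ord=1)

def dist_inside(t, b_min, b_max):
    # Equation 4: L1 distance from the box center to t clamped into the box.
    center = (b_min + b_max) / 2
    return np.linalg.norm(center - np.minimum(b_max, np.maximum(b_min, t)), ord=1)

def box_size(b_min, b_max):
    # Equation 5: L2 length of the box diagonal.
    return np.linalg.norm(b_max - b_min, ord=2)

def dist_box(t, b_min, b_max, alpha=0.5, beta=0.1):
    # Equation 2: small when t is inside the box and the box is compact.
    return (dist_outside(t, b_min, b_max)
            + alpha * dist_inside(t, b_min, b_max)
            + beta * box_size(b_min, b_max))

b_min, b_max = np.array([0.0, 0.0]), np.array([2.0, 2.0])
print(dist_box(np.array([1.0, 1.0]), b_min, b_max))  # inside: inside/size terms only
print(dist_box(np.array([3.0, 1.0]), b_min, b_max))  # outside: adds outside term
```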
In each iteration of model training, we sample a set of positive and negative triples from the training dataset. In the knowledge graph, positive triples are correct triples, while negative triples are incorrect triples whose tail entities \(t\) do not correspond to the head and relation. For example, _(US, Has Citizen, UK)_ is a negative sample where the tail entity (_UK_) is not the answer to the head entity (_US_) and relation (_Has Citizen_). We generate \(k\) negative samples for each positive triple; the positive and negative samples constitute a minibatch of training samples, and _AdaVis_ is updated by the loss calculated on this minibatch. The loss function is defined as follows, according to [23]:
\[L=-\log\sigma(\gamma-\text{dist}_{\text{box}}(t;b))-\sum_{i=1}^{k}\frac{1}{k}\log\sigma(\text{dist}_{\text{box}}(t_{i}^{\prime};b)-\gamma), \tag{6}\]
where \(\sigma\) is the Sigmoid function, and \(\gamma\) is a fixed scalar margin. \(t\) denotes a positive tail entity, while \(t_{i}^{\prime}\) refers to a negative tail entity that should be far away from the box embedding \(b\).
TABLE I: Two categories of triples in the knowledge graph: 1-to-N triples and intersections of 1-to-N triples. 1-to-N triples fall into the five relation classes; an intersection triple intrinsically combines multiple 1-to-N relations.

| Type | Relations | Meanings | Examples |
| --- | --- | --- | --- |
| 1-to-N | \(\mathbb{R}_{SF\to COL}\) | Data columns with the specific single-column feature | The column's length is 50 |
| 1-to-N | \(\mathbb{R}_{CF\to DS}\) | Datasets with the specific cross-column feature | A percentage of unique values is shared by a dataset's two columns |
| 1-to-N | \(\mathbb{R}_{COL\to DS}\) | Data columns in the specific dataset | A column representing grades in a dataset about students' grades (Figure 1) |
| 1-to-N | \(\mathbb{R}_{COL\to VIS_{Axis}}\) | The axis on which the data column can be encoded | A data column is encoded on the y-axis in a visualization (Figure 1) |
| 1-to-N | \(\mathbb{R}_{DS\to VIS_{Type}}\) | Visualization types available for the dataset | The dataset about students' grades is visualized as a box plot (Figure 1) |
| Intersection of 1-to-Ns | \(\mathbb{R}_{SF_{1}\to COL}\cap\cdots\cap\mathbb{R}_{SF_{n}\to COL}\) | Data columns with \(n\) single-column features | A set of columns share the same features, e.g., the column length is 50, the column values are sorted, and so on |
| Intersection of 1-to-Ns | \(\mathbb{R}_{COL_{1}\to DS}\cap\mathbb{R}_{COL_{2}\to DS}\cap\mathbb{R}_{CF_{1}\to DS}\cap\cdots\cap\mathbb{R}_{CF_{m}\to DS}\) | Datasets including two specific data columns and a set of \(m\) cross-column features | A set of datasets whose columns share the same characteristics and that share the same cross-column features, e.g., the pairwise columns have overlapping value ranges |
The intuition behind the loss function is that the correct answer (positive tail) should lie inside the box, as close to the box center as possible, while the incorrect answer (negative tail) should be as far away from the box as possible.
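Reusing `dist_box` from the sketch above, the loss of Equation 6 can be written as follows; the margin value and the sampled embeddings are illustrative.

```python
# Hedged sketch of the negative-sampling loss in Equation 6.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(pos_tail, neg_tails, b_min, b_max, gamma=2.0):
    # Positive tail: pushed inside the box (distance below the margin).
    pos_term = -np.log(sigmoid(gamma - dist_box(pos_tail, b_min, b_max)))
    # Negative tails: pushed beyond the margin, averaged over k samples.
    neg_term = -np.mean([np.log(sigmoid(dist_box(t, b_min, b_max) - gamma))
                         for t in neg_tails])
    return pos_term + neg_term

b_min, b_max = np.array([0.0, 0.0]), np.array([2.0, 2.0])
pos = np.array([1.0, 1.0])                            # should fall inside the box
negs = [np.array([5.0, 5.0]), np.array([-4.0, 1.0])]  # should stay far away
print(loss(pos, negs, b_min, b_max))
```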
### _Model Inference_
During training, the model learns the embeddings of entities and relations in the knowledge graph. This section clarifies how _AdaVis_ uses the learned embeddings to infer axes and possible visualization types for an unseen dataset.
_AdaVis_ extracts single-column and cross-column features from the unseen dataset, as illustrated in Section 4.2. Our knowledge graph contains entities for these features, and the corresponding embeddings have been learned. These single-column features are projected to box embeddings, as shown in Figure 5 **B**. _AdaVis_ further obtains the box embedding of a data column from the intersection of the single-column features' box embeddings. Each single-column feature represents a particular characteristic of a data column, so the intersected box embedding of single-column features represents a data column's overall characteristics. Having obtained the data column embedding, we can infer an appropriate axis for the data column. In this paper, a dataset includes two columns, and its visualizations are also two-dimensional, with one x-axis and one y-axis. In other words, one data column corresponds to exactly one axis, so we must determine which axis best suits the data column. To this end, we first infer the axis's box embedding (_BoxAxis_) from the data column's box embedding (_BoxCOL_): _BoxCOL_ represents the data column, and _BoxAxis_ represents the optimal axis for this data column. _BoxAxis_ is obtained from _BoxCOL_ by the relation \(\mathbb{R}_{COL\to VIS_{Axis}}\), which specifies the transformation from a data column to its optimal axis. Since _BoxAxis_ represents the optimal axis for this column, and two axis entities (i.e., the x-axis and y-axis) exist in the constructed knowledge graph, we can calculate the distance between _BoxAxis_ and the embeddings of these two axis entities. The distance, computed by Equation 2, measures the plausibility between the optimal axis and the axis entities, which are denoted by \(\mathbb{E}_{VIS_{Axis}}\) (Table I). A lower score means higher plausibility. For example, if \(\text{dist}_{\text{box}}(\mathbb{E}_{VIS_{x}};BoxAxis)<\text{dist}_{\text{box}}(\mathbb{E}_{VIS_{y}};BoxAxis)\), _AdaVis_ chooses the x-axis for the data column because the data column's optimal axis is more plausible for the x-axis entity than for the y-axis entity.
As for inferring visualization types for a dataset, we identify a set of visualization types that are appropriate for the dataset. Unlike the 1-to-1 mapping between a data column and an axis, a 1-to-N mapping exists between a dataset and multiple visualization types, as multiple visualization types may suit the same dataset (Figure 1). To infer visualization types, we first need to obtain the dataset representation as a box embedding. Similar to how the data column embedding is obtained, the box embedding of the dataset (_BoxDS_) is gained by intersecting the box embeddings of its data columns and cross-column features, as shown in Figure 5 **B**. From _BoxDS_, we infer its optimal visualization types (_BoxType_) in terms of a box embedding by the relation \(\mathbb{R}_{DS\to VIS_{Type}}\). Since one dataset may have more than one suitable visualization type, we need to identify all suitable visualization types at once. To this end, Equation 1 is used to identify which visualization type entities (e.g., line, bar) lie inside _BoxType_: if visualization type entities are appropriate for the dataset, they will be inside _BoxType_.
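Putting the pieces together, the following is a hedged end-to-end sketch of this inference step: the axis is the entity with the smaller `dist_box` score (1-to-1), while every visualization type entity inside the inferred box is recommended (1-to-N, Equation 1). All embeddings here are toy values, and `dist_box` comes from the earlier sketch.

```python
# Hedged sketch of axis and visualization-type inference over toy embeddings.
import numpy as np

def infer_axis(box_axis, axis_entities):
    # 1-to-1 case: choose the axis entity with the smaller dist_box score.
    return min(axis_entities, key=lambda name: dist_box(axis_entities[name], *box_axis))

def infer_types(box_type, type_entities):
    # 1-to-N case (Equation 1): every entity inside the box is recommended.
    b_min, b_max = box_type
    return [name for name, t in type_entities.items()
            if np.all(b_min <= t) and np.all(t <= b_max)]

axis_entities = {"x-axis": np.array([0.2, 0.1]), "y-axis": np.array([1.8, 1.9])}
type_entities = {"line": np.array([0.5, 0.5]), "bar": np.array([0.8, 0.4]),
                 "scatter": np.array([3.0, 3.0]), "box": np.array([-2.0, 0.5])}
box_axis = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))  # BoxAxis as (min, max)
box_type = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))  # BoxType as (min, max)
print(infer_axis(box_axis, axis_entities))   # -> 'x-axis'
print(infer_types(box_type, type_entities))  # -> ['line', 'bar']
```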
### _Explanation Generation_
In this section, we illustrate how _AdaVis_ provides fine-grained explanations for the recommendation of a specific dataset. As mentioned in Section 4.5, the procedure of _AdaVis_ inference is sequential and follows an inference path, such as {Are disordered} \(\rightarrow\) {Column A} \(\rightarrow\) {Dataset} \(\rightarrow\) {Line, Scatter} (Figure 6). This inference path indicates: _if data values are arranged in disorder in Column A, then the Dataset can be visualized as a line or scatter chart_. As an inference path runs from a single-column feature (e.g., the column values are not ordered) or a cross-column feature (e.g., the two data columns' data types are both numerical) to the dataset, each path contributes to the prediction of the visualization type. To quantify each path's importance to the inference result, _AdaVis_ leverages the attention mechanism on the inference paths [53]. We input box embeddings of single-column or cross-column features into a fully-connected neural network (i.e., an MLP) and obtain output values as feature attention scores. A path's importance to the inference result is represented by its attention score, as shown in Figure 6. With the quantified importance, we can reversely trace the important inference paths and reach specific single-column and cross-column feature entities.
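A minimal sketch of this attention scoring follows: a small MLP maps each feature's box embedding to a scalar, and the softmax-normalized scores serve as path importances \(w_i\). The architecture and the random weights are illustrative assumptions; in _AdaVis_ the scorer is learned jointly with the embeddings.

```python
# Hedged sketch of MLP-based attention over feature box embeddings.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # toy weights, not learned here
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def attention_scores(feature_boxes):
    raw = []
    for box in feature_boxes:                 # box = concat(center, offset)
        h = np.maximum(W1 @ box + b1, 0.0)    # ReLU hidden layer
        raw.append((W2 @ h + b2).item())
    e = np.exp(raw - np.max(raw))             # softmax normalization
    return e / e.sum()

boxes = [rng.normal(size=4) for _ in range(3)]  # three feature paths
w = attention_scores(boxes)
print(dict(zip(["is_disordered", "is_numeric", "has_outlier"], w.round(3))))
```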
We designed a rule-based template to automatically translate important features into natural language sentences. The template is a pre-defined sentence with empty slots: "{_VIS_} is recommended if [_Column_] has [_Single-column features_], and cross-column (i.e., the relationship between two columns) has [_Cross-column features_]". _AdaVis_ recommends visualization results for the dataset and determines the features that are critical to the final recommendation. For example, Figure 6 provides an example of the template instance where the crucial features (color-filled texts) replace the slots. Both the recommended visualization types and important features are filled into the template. In particular, following the principles advised by Yuan _et al._ [54], we propose detailed rules for our template to enhance the interpretability of the resulting explanation for humans. (1) We limit an explanation's maximum number of features to four to prevent cognitive overload. Also, to ensure the features in the explanation are not trivial to the recommendation result, we filter out unimportant features based
Fig. 6: An illustration of explanation generation for a visualization type recommendation. **A** A visualization of three paths with quantified importance for the recommendation result. Each path represents a single-column feature (yellow) or cross-column feature (orange). Their quantified importance for the recommendation result is denoted by \(\text{w}_{i}\); the \(\text{w}_{i}\) are obtained by normalizing their attention scores. Only parts of the paths are shown. **B** An explanation for the recommendation. The features are filled into the explanation template's slots.
on their importance scores, which are calculated from the attention scores. (2) We also empirically filter out some features that are statistically informative but could confuse users, such as Moment 9, the Gini coefficient, and kurtosis. (3) We divide numerical features' semantics into several degrees based on their discretized intervals. For example, if a single-column feature such as _column length_ is discretized into two intervals by MDLP, and _"a data column length is equal to 5"_ falls within the lower interval, the column's length is regarded as short. (4) We avoid including categorical features that have a negative value, as they do not relate to an understandable concept for users. For instance, _"There is no linear regression"_ is a negative feature value and does not make sense to users.
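The template instantiation and the four rules above can be sketched as follows; the thresholds, phrases, and feature names are illustrative stand-ins rather than the paper's exact implementation.

```python
# Hedged sketch of rule-based explanation generation.
CONFUSING = {"moment_9", "gini", "kurtosis"}  # rule (2): confusing features

def explain(vis_types, column, features, max_features=4):
    """features: list of (name, phrase, importance, is_negative)."""
    kept = [f for f in sorted(features, key=lambda f: -f[2])  # rule (1): by importance
            if f[0] not in CONFUSING and not f[3]][:max_features]  # rules (2), (4)
    phrases = ", ".join(f[1] for f in kept)
    return (f"{'/'.join(vis_types)} is recommended because {column} "
            f"has these characteristics: {phrases}.")

features = [
    ("is_disordered", "values arranged in disorder", 0.41, False),
    ("kurtosis", "high kurtosis", 0.30, False),                   # filtered: confusing
    ("no_linear_regression", "no linear regression", 0.20, True), # filtered: negative
    ("is_numeric", "numerical values", 0.09, False),
]
print(explain(["Line", "Scatter"], "Column A", features))
```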
## 5 Evaluation
To demonstrate the effectiveness of _AdaVis_, we extensively evaluated _AdaVis_ with quantitative comparisons, case studies, and expert interviews. This section introduces the setup of our experiments (Section 5.1) and the results (Sections 5.2 and 5.3) in detail.
### _Experiment Setup_
This section introduces the dataset used for evaluation and the model settings in our experiments.
**Corpus.** We have used the large-scale corpus of visualization-dataset pairs introduced in VizML [10] to evaluate the effectiveness of _AdaVis_. The VizML corpus is crawled from Plotly Chart Studio 3. In the corpus, each visualization-dataset pair contains one dataset and the corresponding visualization created by users.
Footnote 3: https://plot.ly/online-chartmaker/
We first filtered from the corpus all visualization-dataset pairs whose dataset consists of two data columns. Then, we retained four types of visualizations, i.e., bar charts, scatter plots, line charts, and box plots, since they are commonly used in Plotly [51] and are standard chart types. Our final dataset consists of approximately 30,000 dataset-visualization pairs, which are further randomly divided into training and testing sets at a ratio of 2:1.
### _Quantitative Evaluation_
We conducted experiments to evaluate _AdaVis_ quantitatively from three perspectives: single-class visualization recommendation (i.e., an axis and a visualization type), multiple-class visualization recommendation (i.e., multiple visualization types), and the validity of cross-column features.
#### 5.2.1 Single-class Visualization Recommendation
We conducted an experiment to assess our method. As mentioned in Section 4.1, a visualization consists of axes and a visualization type, so we evaluate _AdaVis_ by carrying out two tasks: (1) axis recommendation for a data column; (2) visualization type recommendation for a dataset. Our corpus was collected from Plotly Chart Studio, where users typically create one visualization for each dataset. Therefore, the visualization choice is single-class in the experiment. To be specific, our experiment took one axis and one visualization type as the ground truth for each data column and dataset, respectively.
**Baseline Models.** We compare _AdaVis_ with three baseline models: KG4Vis [20], GQE [47], and Decision Tree. Among them, KG4Vis [20] is the most relevant to our approach, as it also employs a knowledge graph for recommending visualization choices and further provides explanations for its recommendations. The scores attributed by KG4Vis to each visualization choice were derived by averaging the inference results across all columns of a dataset. Besides KG4Vis, we also compare _AdaVis_ to two other models: GQE [47] and the Decision Tree approach. GQE is a widely used model in knowledge graph-based recommendation tasks [55, 56, 57], which makes it a suitable baseline for comparison with our approach. The Decision Tree model is essentially an explainable ML model and has also been employed in visualization recommendation [10, 33]. The models in our experiment assign a score to each visualization option and rank the options by score; _AdaVis_ employs Equation 2 to calculate its score.
**Metrics.** We applied two widely used metrics to comprehensively evaluate the performance of our method: Mean Rank (MR) and Hits@2 [26]. MR represents the average rank of the correct visualization choice (the lower the MR score, the better the performance), and Hits@2 represents the proportion of correct visualization choices that rank in the top two inference results (the higher the Hits@2 score, the better the performance). Since the axis is either the x-axis or the y-axis, we use accuracy to evaluate this binary prediction (the higher the accuracy score, the better the performance).
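For concreteness, the two ranking metrics can be computed as below from the rank of the ground-truth choice in each test case.

```python
# MR is the mean 1-based rank of the ground truth among the scored choices;
# Hits@k is the fraction of test cases where it ranks in the top k.
def mean_rank_and_hits(rankings, k=2):
    mr = sum(rankings) / len(rankings)
    hits = sum(r <= k for r in rankings) / len(rankings)
    return mr, hits

# e.g., the ground-truth visualization type ranked 1st, 3rd, 2nd, 1st
print(mean_rank_and_hits([1, 3, 2, 1]))  # -> (1.75, 0.75)
```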
**Result and Analysis.** Table II shows that _AdaVis_ outperforms the baseline models in recommending appropriate visualization types, underscoring the effectiveness of _AdaVis_. A contributing factor to _AdaVis_'s performance is its use of cross-column features together with the intersection of box embeddings, which effectively combines all features and extracts critical insights from both single-column and cross-column features. These insights are subsequently employed to recommend suitable visualization types. As for axis prediction, Decision Tree marginally surpasses _AdaVis_. However, it is important to note that retraining Decision Tree for different tasks incurs an additional computational burden. In contrast, _AdaVis_ is trained only once for both tasks.
#### 5.2.2 Multiple-class Visualization Recommendation

We conducted experiments in this subsection to evaluate the adaptability of _AdaVis_ in correctly recommending multiple types of visualizations. A new test set is necessary, in which each dataset has more than one correct visualization type. Since the continuous features of datasets have been transformed into categorical ones by discretization (Section 4.3), we grouped datasets with the same feature values. The datasets with the same single-column and cross-column features were regarded as a group whose visualization types are interchangeable. In other words, if a dataset is within a group and the group has multiple visualization types, these types are considered the ground truth for the dataset. According to our observation, a group of datasets with the same feature values is seldom visualized using all four visualization types. Hence, we tested _AdaVis_'s adaptability with groups of datasets that have two or three ground-truth visualization types. We sampled 406 test datasets with two visualization types and 141 test datasets with three visualization types.
**Baseline Models.** The purpose of the experiment in this subsection is to evaluate the performance of _AdaVis_ in adaptively recommending multiple types of appropriate visualizations. Given that no existing visualization recommendation approaches are explicitly designed for such a purpose, we followed the practice of Section 5.2.1 and also used KG4Vis, GQE, and Decision Tree as the baseline methods in this experiment. _AdaVis_ can suggest adaptive visualization types for an unseen dataset as long as the visualization type entities are within the inference box of _AdaVis_ (Equation 1). The baseline methods were set to recommend the visualization type with the highest prediction score, which is also common practice when they are used in real applications.
**Metrics.** For the adaptability experiment, we identify the recommended visualization types based on whether the visualization type entities are inside the box of model inference (Equation 1). Since there is more than one ground-truth visualization for each test dataset, we used Recall, Precision, and F1 as the metrics in the adaptability experiment [58]. Recall evaluates the model's adaptability: a high Recall score means the model can recommend more of the visualization types suited to a dataset. Precision evaluates the consistency between the model's recommendation results and the ground truth of the dataset: high Precision indicates that a model aligns well with users' design preferences. F1 offers a comprehensive measurement that considers both Recall and Precision.
**Result and Analysis.** Table III shows that _AdaVis_ consistently outperforms the other models in both two and three choices scenarios as indicated by the highest F1 scores, signifying that it effectively balances precision and recall. _AdaVis_ also stands out in terms of recall, suggesting that it can recommend a broad range of suitable visualization types and shows excellent adaptability. In the scenarios with three choices, _AdaVis_'s performance surpasses all baseline models, further reinforcing its adaptability. Despite some models achieving higher precision in certain scenarios, their lower overall F1 scores imply a lack of diversity in their recommendations. In contrast, _AdaVis_ maintains high precision across all scenarios, indicating that its recommendations align well with user selections and can offer a wider range of suitable visualization recommendations.
#### 5.2.3 Ablation Study about Cross-column Features
In this ablation study, we evaluated the effect of cross-column features. Given the crucial role of meaningful relationships between data columns in determining the visualization type [29, 59, 60], _AdaVis_ incorporates cross-column features. To verify their significance, we conducted an ablation experiment: we removed the cross-column features from _AdaVis_ and the baseline models, and then assessed whether the recommended visualizations still align well with human users' visualization choices.
**Baseline Models & Metrics.** To evaluate the effect of cross-column features, we used GQE and Decision Tree as the baseline models. We did not consider KG4Vis as it does not use cross-column features. The metrics used in this experiment are MR and Hits@2.
**Result and Analysis.** Table IV displays the effect of cross-column features on the performance of _AdaVis_ and the baseline models in recommending visualization types. For each model, i.e., _AdaVis_, GQE, and Decision Tree, incorporating cross-column features improved the MR and Hits@2 scores compared with the counterparts obtained without cross-column features. This trend underscores the importance of integrating cross-column features into the visualization recommendation process. It suggests that considering the interrelations between columns (i.e., cross-column features), rather than treating each column in isolation, contributes to more effective and relevant visualization recommendations.
### _Qualitative Evaluation_
To extensively assess _AdaVis_'s adaptability and understand why these charts are recommended, we further conducted case studies and user interviews to examine recommended visualizations and associated explanations.
#### 5.3.1 Adaptability and Explanation
Figure 7 displays the visualization recommendation results produced by _AdaVis_ for four datasets. As introduced in Section 4.6, _AdaVis_ offers explanations for the recommendation results. These explanations highlight the features important to the recommendation results in understandable language. The following paragraphs describe the multiple recommendations for different datasets.
Figure 7A1 and A2 show that two types of visualizations are recommended for a dataset. Figure 7A3 explains the recommendation reason. The two columns' data types are numerical and datetime (values related to date or time). Additionally, the numerical values on the y-axis are not arranged in an orderly manner (i.e., in increasing or decreasing order), which implies that the y-axis values fluctuate. Therefore, bar and line charts are appropriate for the dataset. The explanation is consistent with existing visualization guidelines. For example, Show Me [14] concludes that a bar chart can be used with two columns of data, one categorical (datetime can be regarded as a categorical value) and one numerical.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Visualization Types (Cross-column Features)} \\ \hline & \multicolumn{2}{c}{MR} & \multicolumn{2}{c}{Hits@2} \\ \hline & With & Without & With & Without \\ \hline _AdaVis_ & **1.626** & 1.688 & **0.8421** & 0.8298 \\ \hline GQE & **1.884** & 2.000 & **0.7268** & 0.7033 \\ \hline Decision Tree & **1.893** & 1.9086 & **0.7189** & 0.7133 \\ \hline \hline \end{tabular}
\end{table}
Table IV: Evaluation of the effectiveness of cross-column features. Results are compared between models with and without cross-column features (MR: lower is better; Hits@2: higher is better).
Similarly, Munzner [61] indicates that a line chart is often used to show temporal trends.
Figure 7B1 and B2 present a line chart and a scatter plot as the recommended visualizations for a dataset. Figure 7B3 shows that _AdaVis_ identifies a linear correlation between the data columns and accordingly recommends a line chart and a scatter plot to visualize the given dataset. The explanation is also supported by previous work. According to Cui _et al_. [50], a scatter plot is appropriate if there is a correlation between two columns of the dataset, and line charts are recommended if the dataset fits a linear regression model with a low estimated error.
Figure 7C1 and C2 show that a line chart and a bar chart are recommended for a dataset by _AdaVis_. Figure 7C3 gives the explanations for these two recommendation results: since there is a monotonic trend in the dataset, it is suitable to visualize the dataset using a line chart, because line charts are commonly used to show trends [61]; a bar chart is also recommended because the dataset contains categorical and numerical data [14].
Figure 7D1 and D2 show that a scatter plot and a box plot are recommended to visualize a dataset. Figure 7D3 reveals the important dataset features that led to these recommendations, i.e., varying data distributions and the existence of different clusters. These explanations also align well with observations in prior studies. For example, Cui _et al_. [50] pointed out that a scatter plot is preferred if several clusters exist in the dataset. Munzner [61] mentioned that a box plot is appropriate for a dataset with different data distributions across columns.
These examples show that our explanations for the visualization recommendations align well with the guidelines in prior studies. This confirms the correctness of our visualization recommendations and their adaptability to the requirements of different datasets.
#### 5.3.2 Cross-column Features
We compared recommendations generated by _AdaVis_ with and without cross-column features to demonstrate their necessity. Several recommended visualizations are shown in Figure 8, where the visualizations on the left are recommended by considering cross-column features, and those on the right are recommended without considering cross-column features. To explore how cross-column features lead to different recommendations, we studied the explanations regarding cross-column features.
Figure 8A1 and A2 show that, for the same dataset, _AdaVis_ with cross-column features recommended a scatter plot, whereas _AdaVis_ without cross-column features recommended a bar chart. According to the model's explanation, _"there are clusters between two data columns and columns' data types are numerical-numerical"_. Since there are apparent clusters between the two data columns and both columns are numerical, the scatter plot is appropriate. In contrast, a bar chart is not suitable for a dataset consisting of two numerical data columns [14]. Therefore, the inappropriate visualization type was recommended because the model did not consider the cross-column features.
As shown in Figure 8B1 and B2, _AdaVis_ with cross-column
Figure 8: The figure displays pairs of visualization recommendations for four different datasets. Each row shows a pair of visualizations for the same dataset. Visualizations recommended by the model with cross-column features are on the left; recommendations from the model without cross-column features are on the right.
Figure 7: The figure presents pairs of visualization recommendations for four different datasets. Visualizations in the same column are from the same dataset. The recommendation results are explained at the bottom of each column. Due to space limitations, we only describe the top two features in our explanations to illustrate our recommendations.
features recommended a line chart rather than a bar chart because _"two column data values are significantly correlated, and data types are numerical-numerical"_. The study by Saket _et al._[62] indicates that a line chart is suitable for finding correlations. _AdaVis_ with cross-column features captures the critical interrelationships between data columns in a dataset and thus recommends a line chart.
Figure 8C1 and C2 present a box plot recommended by _AdaVis_ with cross-column features and a line chart recommended by _AdaVis_ without them. Based on the corresponding explanation, we recognized that _"cross-column has a different distribution"_. This important cross-column feature led to the box plot recommendation.
Figure 8D1 and D2 show a bar chart and a box plot, respectively. Based on the explanation, the important cross-column feature in the recommendation is that _"two column data types are categorical-numerical"_, which led to the bar chart rather than the box plot. As established by Mackinlay _et al._[14], a bar chart is well-suited for visualizing data involving two columns with categorical and numerical values. Therefore, a bar chart is an appropriate choice for visualizing the dataset. Additionally, this case illustrates that without cross-column features, _AdaVis_ does not take the data types of the two columns into account, leading to a box plot recommendation that violates the visualization guidance.
These cases demonstrate the importance of cross-column features in visualization recommendation. According to our observations, cross-column features can help _AdaVis_ exclude recommendations that violate the visual design rules imposed by column interrelationships. Since cross-column features reveal the interrelationships between columns, it is necessary to consider them when recommending appropriate visualizations.
#### 5.3.3 User Interview and Feedback
Our quantitative experiments above demonstrate the effectiveness of _AdaVis_ in adaptively recommending multiple appropriate visualizations, but it is also crucial to ask actual users to evaluate the recommended visualizations as well as the natural language explanations provided by _AdaVis_. Thus, we conducted user interviews with two distinct tasks, where each is designed to evaluate one aspect of _AdaVis_: the adaptability of our recommendations and the clarity of the generated explanations:
**Task 1**: **Recommendation Adaptability Assessment.** Participants were asked to view the recommended visualizations for ten randomly sampled datasets with multiple recommended visualization options. Provided with the tabular dataset, participants needed to assess and provide feedback on how well each recommended visualization presented the original tabular data of the dataset.
**Task 2**: **Explanation Clarity Assessment.** Participants were provided with explanations for the recommendations of ten randomly sampled datasets. They were then asked to evaluate whether the explanations helped them understand the recommendation results. Feedback was requested on the clarity of the explanations.
For the user interviews, we recruited 12 participants, all of whom actively use data visualization tools but have varying degrees of expertise in the field. This allowed us to conduct a thorough evaluation of _AdaVis_ across a range of user experiences. For our analysis, we categorized them into two groups. The first group (E1-E6) is composed of participants who have demonstrated a high level of expertise in data visualization, evidenced by their contributions to at least one scientific publication in the field. The second group (C1-C6) includes participants who regularly use data analysis tools, such as Excel, and have a fundamental understanding of data science. The diversity of participants' backgrounds allowed us to assess the effectiveness of _AdaVis_ from different perspectives. Throughout the interview, we encouraged participants to express their thoughts and feedback in a think-aloud manner.
After finishing the interviews, we analyzed all the feedback from participants and categorized it into the two groups accordingly. We then conducted a thematic analysis [63] within each group to identify recurrently raised issues. To highlight the differences between the two groups' feedback, we conducted a cross-comparison of these issues. Consequently, our analysis revealed both convergences and divergences between the perspectives of data visualization experts and common users, which are organized and presented as follows:
**Recommendations of Multiple Visualization Types.** Overall, all participants found that most of the visualization recommendations by _AdaVis_ are appropriate for the given datasets. For instance, E3 endorsed the variety in our recommendations: _"It is reasonable to recommend these multiple types of visualizations for the same dataset. For instance, when examining a trend or investigating correlations and distributions within a dataset, line chart, bar chart, and scatter plot can all be used to visualize the same dataset"_. However, we also observed a discrepancy between the two participant groups. The second group's (C1-C6) acceptance of the recommendations seems to be influenced by their existing knowledge of data visualization, whereas the experts' acceptance is not. For instance, C1 had never used a box plot before, and she was confused when presented with a box plot as a visualization recommendation. Similarly, C6 disagreed with some of the recommended line charts for datasets without a time-variable column, as she insisted that line charts should be predominantly used for time-series data. A significant suggestion from both groups was to further incorporate users' analytical tasks when recommending appropriate types of visualizations for a given dataset.
**Recommendation Explanations.** The clarity of our explanations was confirmed by all the participants regardless of their familiarity with data visualization. They pointed out that these explanations can help them effectively and conveniently understand why specific visualizations were recommended for a given dataset. For example, C6 mentioned that the explanations enhanced her comprehension of the recommended visualizations, as they highlighted the specific features that drove the recommendations. Nevertheless, we observed that a user's knowledge level of visualization significantly influences their requirements for explanations. Users less familiar with data visualization may need more detailed and contextual explanations of the recommended visualization. For example, C4 suggested that the explanation should be task-oriented, displaying a specific scenario, and C2 expressed a desire for justifications in the explanation when visualizations unfamiliar to him are recommended. Conversely, users with more advanced knowledge, like E2, might prefer an explanation that focuses on the characteristics of a dataset. This observation underscores the importance of tailoring explanations to the knowledge level and needs of individual users to facilitate their understanding and acceptance of the recommended visualizations. Participants also provided insightful suggestions for further enhancing _AdaVis_. For instance, a common suggestion from both groups was that our explanation should also illustrate why certain types of visualizations were not recommended, beyond only explaining
why certain visualizations were recommended. More specifically, E1 advised that _AdaVis_ could incorporate a _What-if_ functionality, enabling users to discern which features are most influential to recommendation results. Moreover, some common users (C6, C5) pointed out that certain terms used in the explanation, such as "disorder", should be presented in a more intuitive manner.
## 6 Discussion
### _Lessons_
**Explainability.** Our explanations for visualization recommendations consider both feature importance and intuitiveness. 120 features are used in our approach to comprehensively model the characteristics of the input dataset. However, an increasing number of dataset features also makes it difficult to provide intuitive explanations for the visualization recommendation from the perspective of dataset features. To strike a good balance between visualization recommendation performance and explainability, we integrated an attention mechanism into _AdaVis_ (Section 4.6) that can identify the features critical to the final visualization recommendation result. Our explanation is built upon the most important two or three features. Regarding feature intuitiveness, some dataset features are important for the final recommendation result, but their meanings are difficult to interpret, especially statistical data features (e.g., "the entropy is high" or "the Kolmogorov-Smirnov test result is significant"). Given that the target audience comprises common users, these perplexing features were avoided in the final explanations.
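A minimal sketch of this strategy, assuming per-feature attention weights are available and that hard-to-interpret statistical features are simply absent from the template table (all names are hypothetical):

```python
def explain(feature_names, attention_weights, templates, k=2):
    """Render a short explanation from the k most-attended features.
    `templates` maps an interpretable feature name to a readable clause;
    perplexing statistical features are omitted by leaving them out of it."""
    ranked = sorted(zip(feature_names, attention_weights), key=lambda x: -x[1])
    clauses = [templates[name] for name, _ in ranked if name in templates]
    return "; ".join(clauses[:k])
```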
**Trade-off in Model Training Strategy.**_AdaVis_ performs two kinds of recommendation tasks: axis recommendation and visualization type recommendation. Since single-column features are required in both tasks, the embeddings of the single-column features are influenced by both when the two tasks are learned jointly; such multi-task learning can introduce noise because visualization type recommendation is unrelated to axis prediction. To investigate this effect, we conducted a controlled experiment on multi-task versus single-task learning: one model performed multi-task learning, while two separate models were trained for each task in the control group. According to the experiment results, training a different model for each task led to a few points of improvement. However, the improvement came at the cost of double the computation time and storage, because two individual models need to be trained for the axis and visualization type recommendation tasks, respectively.
**Feature Importance.**_AdaVis_ defines a comprehensive set of features, including 80 single-column features and 40 cross-column features, to capture the diverse characteristics of input datasets. A feature importance analysis revealed that some single-column features, especially those related to column names and data column statistical properties, are particularly impactful. Among these, _data column length_ emerges as a highly influential feature. The significance of data column length can be attributed to its role in reflecting data density, which in turn informs the choice of visualization type. For instance, for a high-density dataset, a line chart may be more suitable than a bar chart or scatter plot, as it represents data points in a less cluttered and more understandable way. Also, our analysis highlights the importance of _digits in column name_. This feature carries semantic information about the data column, which potentially indicates users' design logic for visualization. For example, a column named "Year2017" can imply the need to visualize a trend over time, while column names such as "Method1" or "Method2" only reveal the need for a comparison.
### _Limitations_
**Corpus.** We utilize the dataset-visualization pairs uploaded by Plotly users as the ground truth, and most of their visualization choices align well with general visualization design guidelines. However, there is also a small number of problematic visualization choices for the input datasets, which may negatively impact the performance of _AdaVis_. To further bolster the performance of _AdaVis_, it is important to expand our corpus with more high-quality dataset-visualization pairs. For example, we can collect more data visualization examples created by experienced visualization experts from professional visualization blogs or forums like Observable4.
Footnote 4: [https://observablehq.com/](https://observablehq.com/)
**Visualization Choices.** As an initial step towards adaptive and explainable visualization recommendation, _AdaVis_ is applied to the widely-used standard charts in this paper, like line charts, bar charts, scatter plots, and box plots, and it does not encompass all types of visualizations. Such a choice originates from both the popularity of these visualizations [10] and the fact that other visualizations are scarce in the Plotly corpus. Also, given that these standard charts have an emphasis on the axes' visual encodings [10], _AdaVis_ mainly focuses on recommending appropriate types of data visualizations and the x/y axes. However, with a new dataset-visualization corpus of other types of visualizations, _AdaVis_ can be easily extended to work for new types of visualizations and other detailed visualization encodings like color schemes.
**User-centric Recommendation.**_AdaVis_ effectively maps datasets to visualizations but lacks an explicit consideration of users' specific intents, such as their analytical tasks or preferences. As indicated by user feedback (Section 5.3.3), it will be interesting to further incorporate user intent in visualization recommendations, which can ensure that the recommended visualizations align with the user's specific needs. In this paper, we demonstrate the effectiveness of _AdaVis_ using datasets with two columns, but _AdaVis_ can also recommend appropriate visualizations for datasets with more than two columns. This can be achieved by extracting cross-column features from every possible combination of two columns in the dataset and then finding the intersection of these cross-column box embeddings, which indicates the dataset's characteristics and can be further used to derive the appropriate visualization choices. One possible issue of extending _AdaVis_ to datasets with more than two columns is that the exhaustive search over every possible pair of columns can be time-consuming; this can be mitigated by further considering user intent to narrow down the search space of pairwise column combinations in the visualization recommendation process. In addition, some terminologies used in the natural language explanations by _AdaVis_ may not be easily understood by all users. For instance, technical terms like "a linear regression model" are obvious to machine learning practitioners but can be perplexing for laypersons. It will be helpful to further incorporate more straightforward explanations into _AdaVis_, making it more accessible to a broader range of audiences.
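A sketch of this pairwise extension, assuming axis-aligned boxes given by min/max corners and a hypothetical `embed_pair` helper that returns the cross-column box for a pair of columns:

```python
from itertools import combinations
import numpy as np

def dataset_box(table, embed_pair):
    """Intersect the cross-column boxes of every pair of columns.
    For axis-aligned boxes, the intersection is the elementwise max of
    the mins and min of the maxes (None if the intersection is empty)."""
    boxes = [embed_pair(table[a], table[b])
             for a, b in combinations(table.columns, 2)]
    mins = np.max([b[0] for b in boxes], axis=0)
    maxs = np.min([b[1] for b in boxes], axis=0)
    return (mins, maxs) if np.all(mins <= maxs) else None
```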
**Training Time.** The training time of _AdaVis_ is prolonged due to a large number of extracted features and corpus. _AdaVis_ extracts many features from a dataset. These features enable _AdaVis_ to comprehensively model the dataset characteristics and further increase the generality of _AdaVis_, which can recommend
appropriate visualization types for diverse datasets. However, the large number of features also increases model complexity and thus enlarges the model size. In addition, to better learn the complex mapping from datasets to visualization types, _AdaVis_ is trained on a large corpus. Feature importance analysis can be used to identify unimportant features and reduce the feature number in the model.
## 7 Conclusion and Future Work
In this paper, we propose _AdaVis_, an adaptive and explainable approach for visualization recommendation. Given a dataset, _AdaVis_ can adaptively recommend multiple appropriate visualization choices and provide detailed explanations for the recommendation result. Our approach consists of four modules: feature extraction, knowledge graph construction, model training, and inference. It first extracts the individual column's features and the interrelationship among data features, data columns and visualization choices. With these features and interrelationships, a knowledge graph is constructed to model them. The box embeddings of entities and relations in the knowledge graph can also be learned. With these learned box embeddings, an inference module can adaptively recommend multiple visualizations for an unseen dataset and provide natural language explanations for the recommendations. Quantitative and qualitative evaluations are conducted to evaluate the effectiveness and adaptability of _AdaVis_.
In future work, we will collect more diverse dataset-visualization pairs and extend _AdaVis_ to recommend more different types of data visualizations in an adaptive and explainable manner. Also, it is interesting to investigate how user intent can be integrated into _AdaVis_ to further improve its efficiency and effectiveness in adaptive and explainable visualization recommendations.
## Acknowledgments
This project is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (Proposal ID: T2EP2022-0049) and HK RGC GRF grant 16210722. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore. We are grateful to Xiaolin Wen for his help in figure editing, to the experts who participated in our interviews, and to the anonymous reviewers for their constructive feedback.
|
2310.00897 | Practical Radar Sensing Using Two Stage Neural Network for Denoising
OTFS Signals | Our objective is to derive the range and velocity of multiple targets from
the delay-Doppler domain for radar sensing using orthogonal time frequency
space (OTFS) signaling. Noise contamination affects the performance of OTFS
signals in real-world environments, making radar sensing challenging. This work
introduces a two-stage approach to tackle this issue. In the first stage, we
use a generative adversarial network to denoise the corrupted OTFS samples,
significantly improving the data quality. Following this, the denoised signals
are passed to a convolutional neural network model to predict the values of the
velocities and ranges of multiple targets. The proposed two-stage approach can
predict the range and velocity of multiple targets, even in very low
signal-to-noise ratio scenarios, with high accuracy and outperforms existing
methods. | Ashok S Kumar, Sheetal Kalyani | 2023-10-02T04:29:04Z | http://arxiv.org/abs/2310.00897v2 | # Practical Radar Sensing Using Two Stage Neural Network for Denoising OTFS Signals
###### Abstract
Noise contamination affects the performance of orthogonal time frequency space (OTFS) signals in real-world environments, making radar sensing challenging. Our objective is to derive the range and velocity from the delay-Doppler (DD) domain for radar sensing by using OTFS signaling. This work introduces a two-stage approach to tackle this issue. In the first stage, we use a convolutional neural network (CNN) model to classify the noise levels as moderate or severe. Subsequently, if the noise level is severe, the OTFS samples are denoised using a generative adversarial network (GAN). The proposed approach achieves notable levels of accuracy in the classification of noisy signals and mean absolute error (MAE) for the entire system even in low signal-to-noise ratio (SNR) scenarios.
OTFS, delay-Doppler domain, convolutional neural network, generative adversarial network.
## I Introduction
Orthogonal time frequency space (OTFS) signaling has been identified as a promising signal waveform for fully using the capabilities of the integrated sensing and communication (ISAC) system [1, 2]. The OTFS signal modulates the data in the delay-Doppler (DD) domain. The target's range and velocity characteristics, which may be derived from the DD domain, are the essential parameters to be calculated in radar signal processing.
A 2D correlation-based approach to evaluate the Doppler and delay indices for radar sensing has been studied in [3]. The advantages of using the OTFS waveform for velocity and range estimates in radar sensing applications have been investigated in [4, 5] and [6]. A single target scenario was considered in [4], in which the root mean square error (RMSE) performance of the range and velocity as a function of signal-to-noise ratio (SNR) up to -15 dB was analyzed for radar target estimation. The work in [5] reports the estimation of range and velocity RMSE as a function of radar SNR in a multipath scenario. The work reported in [7] exploited three distinct sparse algorithms for estimating the range and velocity of targets using OTFS signaling. All the studies mentioned above examine the range and velocity RMSE of the target at fixed SNR levels. Unlike the existing state-of-the-art methodologies, we calculate the range and velocity RMSE of each target by using a two-stage noise reduction method for OTFS signals.
For the first stage, a convolutional neural network (CNN) model is proposed in our work for the classification of noise as severe or moderate [8]. The proposed CNN model effectively classifies the noise without incorporating any preprocessing steps for noise removal. In wireless communication systems, deep neural networks (DNNs) significantly improve data interpretation, leading to better signal quality and system performance [9, 10, 11]. GAN-based denoising models have several advantages over traditional denoising methods: they can rival the performance of conventional denoising methods, generalize better, learn complex patterns in the data, and automate the entire process [12, 13, 14]. These advantages serve as the driving force behind the incorporation of a GAN into the second stage of our proposed method for denoising.
In summary, this letter presents a two-stage neural network model comprised of CNN and GAN with the goal of estimating the range and velocity for radar target detection by means of OTFS signaling. In real-world radar applications, the severity of noise in the signal is unpredictable, which makes radar target detection difficult. The existing literature falls short of providing a system dealing with radar target estimation in extremely noisy conditions. Simulation results show that our system has the capability to reduce the values of mean absolute error (MAE), range RMSE, and velocity RMSE of the target significantly. The system can even operate in extremely noisy conditions ranging from 0 to -20 dB SNR, thus expanding the SNR range for radar target detection.
## II System Model
In this section, we first describe the OTFS based system.
### _Otfs_
For each OTFS frame, \(N\) and \(M\) represent the number of time slots and the number of sub-carriers respectively. \(T\) represents the symbol duration and \(\delta f_{s}\) is the subcarrier frequency spacing. For a given \(\delta f_{s}\), the total bandwidth is \(B=M\delta f_{s}\) and the duration of one OTFS frame is given by \(NT\). The information bits are mapped to a symbol set in the DD domain. The information symbols are generally QAM symbols. The symbol set corresponding to \(l^{th}\) delay and \(k^{th}\) doppler bins is \(A_{\mathrm{DD}}[k,l]\), for \(k=0,\ldots,N-1\) and \(l=0,\ldots,M-1\).
The DD domain symbols are mapped to time-frequency (TF) domain symbols using inverse symplectic finite Fourier transform (ISFFT), operation as
\[A_{\mathrm{TF}}[n,m]=\frac{1}{\sqrt{NM}}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}A_{ \mathrm{DD}}[k,l]e^{j2\pi\left(\frac{nk}{N}-\frac{ml}{M}\right)} \tag{1}\]
where \(n=0,\ldots,N-1\) and \(m=0,\ldots,M-1\). The TF symbols are translated to the time domain transmit signal, \(x(t)\) by using the Heisenberg transform,
\[x(t)=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}A_{\mathrm{TF}}[n,m]g_{tx}(t-nT)e^{j2\pi m \delta f_{s}(t-nT)} \tag{2}\]
where \(g_{tx}\) is a pulse-shaping waveform.
The time domain signal \(x(t)\) is passed through the linear time-varying channel, which has \(P\) targets in the DD domain. The \(p^{th}\) target has reflection coefficient \(h_{p}\), delay \(\tau_{p}\) with \(0<\tau_{p}<T\) and Doppler shift \(\nu_{p}\). The complex base-band channel response, \(h(\tau,\nu)\) in the DD domain can be expressed as
\[h(\tau,\nu)=\sum_{p=0}^{P-1}h_{p}\delta\left(\tau-\tau_{p}\right)\delta\left( \nu-\nu_{p}\right) \tag{3}\]
For integer delays and Dopplers, \(\tau_{p}=\frac{l_{p}}{M\delta f_{s}}\) and \(\nu_{p}=\frac{k_{p}}{NT}\), where \(l_{p}\) and \(k_{p}\) denote the corresponding delay and Doppler indices of the \(p^{th}\) target. The received signal \(r(t)\) is given by
\[r(t)=\iint h(\tau,\nu)e^{j2\pi\nu(t-\tau)}x(t-\tau)d\tau d\nu+w(t) \tag{4}\]
where \(w(t)\) denotes the additive white Gaussian noise (AWGN) process with one side power spectral density (PSD), \(N_{0}\). The received signal \(r(t)\) is converted back to the TF domain using Wigner transform,
\[B_{\mathrm{TF}}[n,m]=\int_{-\infty}^{\infty}r(t)g_{rx}^{*}(t-nT)e^{-j2\pi m \delta f_{s}(t-nT)}dt \tag{5}\]
where \(g_{rx}(t)\) is the pulse-shaping filter at the receiver. The TF domain signals \(B_{\mathrm{TF}}[n,m]\) are then converted to DD domain symbols \(B_{\mathrm{DD}}[k,l]\) using symplectic finite Fourier transform (SFFT), which is given by,
\[B_{\mathrm{DD}}[k,l]=\frac{1}{\sqrt{NM}}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}B_{ \mathrm{TF}}[n,m]e^{-j2\pi\left(\frac{nk}{N}-\frac{ml}{M}\right)} \tag{6}\]
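On the integer \((N,M)\) grid, both transforms reduce to FFTs along the two axes. A minimal NumPy sketch, with the normalization chosen to match equations (1) and (6):

```python
import numpy as np

def isfft(A_DD):
    """Equation (1): (N, M) DD grid, indexed [k, l], to the TF grid [n, m].
    IDFT along the Doppler axis, DFT along the delay axis."""
    N, M = A_DD.shape
    return np.sqrt(N / M) * np.fft.fft(np.fft.ifft(A_DD, axis=0), axis=1)

def sfft(B_TF):
    """Equation (6): (N, M) TF grid back to the DD domain."""
    N, M = B_TF.shape
    return np.sqrt(M / N) * np.fft.ifft(np.fft.fft(B_TF, axis=0), axis=1)
```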
In view of the fact that \(B_{\mathrm{DD}}\) contains information symbols, we are not able to identify the target areas of interest directly. Instead, a 2D correlation-based approach has been used between \(B_{\mathrm{DD}}\) and \(A_{\mathrm{DD}}\) to obtain the delay and Doppler index [3]. The matrix \(V\) contains information about the correlation between the transmitted and received signals at different delay and Doppler indices. The accumulated correlation coefficient under different delay and Doppler indices is given by,
\[\begin{split} V[k,l]=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}B_{ \mathrm{DD}}^{*}[n,m]A_{\mathrm{DD}}\left[[n-k]_{N},[m-l]_{M}\right]\\ \times\gamma[n-k,m-l]e^{j2\pi\frac{(m-l)k}{NM}},\end{split} \tag{7}\]
where \(k\in[0,N-1]\) and \(l\in[0,M-1]\), and \(\gamma[k,l]\) is a phase offset given by
\[\gamma[k,l]=\begin{cases}1,&l\geq 0,\\ e^{-j2\pi\frac{k}{N}},&l<0.\end{cases} \tag{8}\]
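A direct, unoptimized NumPy sketch of equations (7)-(8) is given below; the target's Doppler and delay indices can then be read off the peak of \(|V|\), e.g., via `np.unravel_index(np.argmax(np.abs(V)), V.shape)`.

```python
import numpy as np

def correlation_map(B_DD, A_DD):
    """Accumulated 2D correlation V[k, l] of equation (7).
    B_DD, A_DD: (N, M) received/transmitted DD-domain grids.
    Direct O(N^2 M^2) evaluation, kept simple for clarity."""
    N, M = A_DD.shape
    V = np.zeros((N, M), dtype=complex)
    for k in range(N):
        for l in range(M):
            acc = 0j
            for n in range(N):
                for m in range(M):
                    # phase offset gamma[n - k, m - l] from equation (8)
                    gamma = 1.0 if m - l >= 0 else np.exp(-2j * np.pi * (n - k) / N)
                    acc += (np.conj(B_DD[n, m])
                            * A_DD[(n - k) % N, (m - l) % M]
                            * gamma
                            * np.exp(2j * np.pi * (m - l) * k / (N * M)))
            V[k, l] = acc
    return V
```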
### _Dataset Description_
We describe the following datasets which we use to train the proposed deep learning model.
* **Transmitted dataset:** The transmitted dataset contains the transmitted OTFS signals \(x(t)\) that are generated by the transmitter and sent out into the environment. This signal is used to probe the environment and detect the objects or targets that reflect the signal to the receiver.
* **Low Noise dataset:** The low noise dataset has the signal \(r(t)\), which is obtained by using equation (4). Typically, deep learning applications operate under the assumption of a completely clean dataset with no noise. However, in practical scenarios, such datasets are rarely available; hence, we use low noise datasets in our work. We separately created datasets with 5 dB, 20 dB, and 40 dB SNR values, for comparing the MAE, range RMSE, and velocity RMSE of the target at different SNR values. These datasets are used as the input to the GAN only during the training phase.
* **Corrupted/Noisy dataset:** The corrupted dataset has the signals \(r(t)\) after they have been corrupted with AWGN noise, where the SNR ranges from 0 to -20 dB.
* **Label:** The estimated target location is obtained in the DD domain by correlating the corrupted signal with the transmitted signal, using equation (7). The true target location indicates the actual target location. The estimated target value is compared with the true target value to generate the labels. The label '0' denotes a match between the estimated and true target values, which indicates moderate noise. The label '1' denotes a mismatch between the estimated and true target values, which indicates severe noise.
Fig. 1 shows the DD matrices for radar sensing after performing the 2D correlation between a moderately corrupted signal and the transmitted signal in the DD domain. In this example, one target is considered with \(M=N=28\). The delay and Doppler indices of the target are 7 and 12, respectively. The same scenario is considered in Fig. 2 by performing the 2D correlation between a severely corrupted signal and the transmitted signal in the DD domain. It can be seen that the location of the target cannot be identified exactly from the DD domain matrix.
### _Proposed CNN for classification of noise_
In the proposed system, the inputs to the CNN are the transmitted dataset and the corrupted dataset. The proposed CNN architecture used to classify noise as moderate or severe is shown in Fig. 3. The network starts with an image input layer. This layer is succeeded by a convolution layer with 32 filters of size \(13\times 13\). The padding is set to 'same'. A batch normalization layer is inserted between the layers, followed by a ReLU layer. The CNN then continues with a series of similar layers alternating between convolutional layers, batch normalization layers, and ReLU layers. The number of filters gradually doubles from 32 up to 256 in the subsequent layers. A dropout layer with rate 0.5 is added after this stage. Two fully connected layers of
512 nodes and one fully connected layer of 256 nodes follow the convolution layer. In the proposed model, the activation function used at the output layer is the softmax layer. The final classification layer then computes the loss and accuracy of the network during training and evaluation. In this case, the loss function employed is the cross-entropy loss. The CNN architecture classifies the noise as moderate or severe. If the noise is moderate, then by using equation (7), the location of the target can be identified. Consequently, we can derive the velocity and range of the target from the DD domain. On the other hand, if the CNN outputs severe noise as the classification, all the corresponding samples are aggregated and passed through the GAN for denoising.
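A Keras-style sketch of this classifier follows; where the text leaves details open (the kernel size after the first block, the flattening before the dense layers), the choices below are our assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_noise_classifier(input_shape):
    x = inputs = keras.Input(shape=input_shape)
    for filters in (32, 64, 128, 256):       # filter count doubles per block
        x = layers.Conv2D(filters, 13, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Flatten()(x)
    for units in (512, 512, 256):            # three fully connected layers
        x = layers.Dense(units, activation="relu")(x)
    outputs = layers.Dense(2, activation="softmax")(x)  # moderate vs. severe
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```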
### _Denoising OTFS Signals using GAN_
GANs use two neural networks, a generator network \(G\) and a discriminator network \(D\), which compete against each other to create the desired result. The inputs to the discriminator and generator are real data \(u\) and a random variable \(w\), respectively. The discriminator outputs a value \(D(u)\) suggesting the probability that \(u\) is a real sample. The main purpose of the discriminator is to maximize the probability of labeling the real samples as real and the generated fake samples \(G(w)\) as fake. The objective of the generator is to produce fake samples \(G(w)\) that are as close as possible to real samples, so that the discriminator fails to distinguish between fake and real samples. Hence, a GAN can be defined as a minimax game in which \(G\) wants to minimize the value function \(\tilde{V}(D,G)\) while \(D\) wants to maximize it.
\[\min_{G}\max_{D}\tilde{V}(D,G)=\mathbb{E}_{u}[\log D(u)]+\mathbb{E}_{w}[\log(1-D(G(w)))] \tag{9}\]
The GAN is trained using pairs of low noise and corrupted OTFS signals. Fig. 4. shows the block diagram of GAN for denoising OTFS signals. The generator network is trained to generate denoised signals from the corrupted signals. The input to the discriminator network is low noise signals and generated signals from the generator. During training, the generator network attempts to minimize the difference between the generated and low noise signals, while the discriminator network aims to maximize the difference between the generated and low noise signals.
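In implementation terms, equation (9) is commonly optimized with the pair of losses sketched below; the non-saturating generator loss is a standard substitute for directly minimizing \(\log(1-D(G(w)))\) and is an assumption here, not a detail stated by the text.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(real_out, fake_out):
    # D maximizes log D(u) + log(1 - D(G(w)))   (equation 9)
    return (bce(tf.ones_like(real_out), real_out)
            + bce(tf.zeros_like(fake_out), fake_out))

def generator_loss(fake_out):
    # non-saturating form: G maximizes log D(G(w))
    return bce(tf.ones_like(fake_out), fake_out)
```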
The discriminator network starts with an input layer followed by a convolution layer with 32 filters of size (3,3) and stride (2,2). The padding is set to 'same'. The output of the convolution layer is then fed through a Leaky ReLU activation function. Similar convolution layers are repeated with increasing filter counts of 64, 128, 128, 256, and 512. A dropout regularization layer with rate 0.5 is inserted at the output to prevent overfitting. The output layer is a fully connected (Dense) layer with a single unit and a sigmoid activation function. The corrupted signal is preprocessed before being fed to the generator. The preprocessing is done
Fig. 1: The DD matrices for radar sensing after performing the 2D correlation between moderately corrupted signal and transmitted signal in the DD domain. The below plot corresponds to the 3D plot by taking the magnitude of \(V\) along the \(z-\)axis
Fig. 2: The DD matrices for radar sensing after performing the 2D correlation between severely corrupted signal and transmitted signal in the DD domain. The below plot corresponds to the 3D plot by taking the magnitude of \(V\) along the \(z-\)axis
by using Gaussian filtering and the preprocessed signal is normalized to a range of [0,1]. This is then passed through a series of two sets of convolution layers with 32 filters each with size (3,3), each succeeded by a max pool layer. This is repeated with two more batches of convolution layer and max pool layer with the filter size of 64 and 128. Each layer uses the ReLU activation function and the same padding. The upsampling path also consists of 3 sets of 2 convolution layers with filter sizes of 64, 32, and 3 respectively. The output layer produces the denoised signals after training. It represents the generated signals which are then fed to the discriminator.
## III Simulation Results
In the simulations, we consider a single target scenario with \(P=1\). A DD domain grid with \(M=N=28\) and \(\delta f_{s}\) = 150 kHz is considered. The carrier frequency \(f_{c}\) is taken as 60 GHz. The velocity and range resolution can be calculated by \(V_{res}=\frac{Bc}{2MNf_{c}}\) and \(R_{res}=\frac{c}{2B}\), where \(c\) is the speed of light. The maximum velocity and range are given by \(V_{max}=\frac{c\delta f_{s}}{2f_{c}}\) and \(R_{max}=\frac{cT}{2}\). Datasets comprising 50000 complex OTFS samples, each with a length of \(MN\), are taken for the low noise, corrupted, and transmitted signals. We also created three separate datasets corresponding to low noise signals with different SNR levels of 5 dB, 20 dB, and 40 dB. Each dataset contains 50000 samples and is utilized for analyzing the MAE, range RMSE, and velocity RMSE of the target. Out of the 50000 samples, 40000 samples are used for training the system. The plot in Fig. 5 shows the variation in loss across the number of iterations for the CNN. Notably, the proposed CNN achieved a remarkable training accuracy of \(98.8\%\). Concurrently, the loss reduced progressively, reaching a low value of 0.05.
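A quick arithmetic check of these formulas with the stated parameters (values rounded):

```python
c = 3e8                   # speed of light (m/s)
M = N = 28
delta_f = 150e3           # subcarrier spacing (Hz)
f_c = 60e9                # carrier frequency (Hz)
B = M * delta_f           # total bandwidth (Hz)
T = 1 / delta_f           # symbol duration (s)

R_res = c / (2 * B)                  # ~35.7 m
V_res = B * c / (2 * M * N * f_c)    # ~13.4 m/s
R_max = c * T / 2                    # 1000 m
V_max = c * delta_f / (2 * f_c)      # 375 m/s
```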
### _System Validation and Performance Analysis_
For testing the system, a total of 10000 transmitted and corrupted samples are given to the CNN. The test accuracy of the CNN is found to be \(97.89\%\). If the CNN classifies a sample as severely noisy, the associated samples are aggregated and sent through the GAN for denoising. Table 1 shows the comparative analysis of the MAE, range RMSE, and velocity RMSE of the target, utilizing samples of corrupted signals in the 0 to -20 dB SNR range along with low noise signals of different SNR values. In this work, we have achieved
Fig. 4: Block diagram of GAN for denoising
Fig. 3: CNN Architecture for classification of noise
an MAE value of 0.68 in the DD domain by using low noise signals with 40 dB SNR together with corrupted signals. Thus, for a maximum distance of 1000 m and a maximum velocity of 375 m/s, we obtain a range RMSE and a velocity RMSE of 69.5 m and 26.06 m/s, respectively. Similarly, the MAE, range RMSE, and velocity RMSE values of the target under the same scenario using low noise signals with 20 dB and 5 dB SNR are shown in Table 1.
### _Comparison with the state-of-the-art methods_
The works [5] and [6] address modified versions of maximum likelihood (ML) estimation algorithms for target detection using OTFS signaling. In [5] and [6], the maximum range RMSE (at an SNR of up to -15 dB) is determined to be 0.7 m and 12 m, respectively. Similarly, the maximum velocity RMSE at -15 dB SNR is found to be 3 m/s and 10 m/s, respectively. The work in [7] performs target sensing up to -10 dB by utilizing sparse algorithms and they could obtain both the maximum range RMSE and velocity RMSE as 1. These non-zero values of RMSE in [4]-[6] indicate inaccurate target localization in radar sensing. In the SNR range from 0 to -15 dB, our approach delivers highly accurate results, with both range RMSE and velocity RMSE effectively reduced to zero. The work in [6] and [7] does not address SNR levels below -15 dB. Due to our dual-stage technique, we are able to work even with SNR as low as -20 dB. Despite [5] addressing SNR levels below -15 dB, which resulted in range RMSE and velocity RMSE values of 100 m and 120 m/s at -20 dB, our approach demonstrates superior performance with lower values as indicated in Table 1. It is evident that the values of MAE, range RMSE, and velocity RMSE are low when considering corrupted signals ranging from 0 to -20 dB SNR. Hence, radar target detection in the highly challenging, low SNR environments, is now possible due to our proposed approach.
## IV Conclusions
In this work, we have proposed a two-stage approach for denoising OTFS signals for radar sensing. The first stage involves the classification of noisy OTFS samples as moderate or severe with the help of a CNN. The CNN accurately distinguishes the noisy samples and provides a solid foundation for the subsequent denoising process. The second stage focuses on denoising the identified noisy samples with the help of a GAN. The proposed system has yielded promising results, demonstrating its effectiveness in both the classification and denoising processes even in very low SNR environments.
|
2307.03789 | Synthesizing Forestry Images Conditioned on Plant Phenotype Using a
Generative Adversarial Network | Plant phenology and phenotype prediction using remote sensing data is
increasingly gaining the attention of the plant science community to improve
agricultural productivity. This work aims to generate synthetic forestry images
that satisfy certain phenotypic attributes, viz. canopy greenness. We harness a
Generative Adversarial Network (GAN) to synthesize biologically plausible and
phenotypically stable forestry images conditioned on the greenness of
vegetation (a continuous attribute) over a specific region of interest
(describing a particular vegetation type in a mixed forest). The training data
is based on the automated digital camera imagery provided by the National
Ecological Observatory Network (NEON) and processed by the PhenoCam Network.
Our method helps render the appearance of forest sites specific to a greenness
value. The synthetic images are utilized to predict another phenotypic
attribute, viz., redness of plants. The Structural SIMilarity (SSIM) index is
used to assess the quality of the synthetic images. The greenness and redness
indices of the generated synthetic images are compared against that of the
original images using Root Mean Squared Percentage Error (RMSPE) to evaluate
their accuracy and integrity. The generalizability and scalability of our
proposed GAN model is determined by effectively transforming it to generate
synthetic images for other forest sites and vegetation types. | Debasmita Pal, Arun Ross | 2023-07-07T18:28:44Z | http://arxiv.org/abs/2307.03789v2 | # Synthesizing Forestry Images Conditioned on Plant Phenotype Using a Generative Adversarial Network
###### Abstract
Plant phenology and phenotype prediction using remote sensing data is increasingly gaining the attention of the plant science community to improve agricultural productivity. In this work, we generate synthetic forestry images that satisfy certain phenotypic attributes, viz. canopy greenness. The greenness index of plants describes a particular vegetation type in a mixed forest. Our objective is to develop a Generative Adversarial Network (GAN) to synthesize forestry images conditioned on this continuous attribute, i.e., greenness of vegetation, over a specific region of interest. The training data is based on the automated digital camera imagery provided by the National Ecological Observatory Network (NEON) and processed by the PhenoCam Network. The synthetic images generated by our method are also used to predict another phenotypic attribute, viz., redness of plants. The Structural SIMilarity (SSIM) index is utilized to assess the quality of the synthetic images. The greenness and redness indices of the generated synthetic images are compared against that of the original images using Root Mean Squared Error (RMSE) in order to evaluate their accuracy and integrity. Moreover, the generalizability and scalability of our proposed GAN model is determined by effectively transforming it to generate synthetic images for other forest sites and vegetation types.
keywords: Generative Adversarial Network (GAN), synthetic forestry imagery, plant phenology prediction, plant phenotype, canopy greenness (GCC), redness of plants (RCC) +
Footnote †: journal: Pattern Recognition
## 1 Introduction
Phenology is the study of recurring and seasonal biological life cycle events of organisms, primarily driven by complex interactions between environmental and genetic factors [1]. This can be utilized in optimizing crop production, better understanding ecosystem processes like carbon and hydrology cycle, invasive species
and pests management, predicting human health related problems (e.g., seasonal allergies), etc.1 Flag leaf emergence, flowering of plants, insect emergence, animal migration are examples of phenological events in nature. These phenomena are highly sensitive to weather and climate change, specifically to temperature and precipitation. Due to gradual change in the global climate, plant phenology and phenotype (the observable traits and characteristics resulted from the interactions between genotypes and environment)2 prediction occupies a prominent place in the domain of agriculture [1; 2]. It advances the study of phenological trends and reduces the uncertainties associated with ecosystem processes (e.g., carbon cycle) as a consequence of phenological shifts. With recent technological advancements, this area of research is significantly growing due to the availability of remotely sensed near-surface plant phenological observations through satellite and digital camera imagery in lieu of manual measurements. Consequently, image analysis and pattern recognition have been playing an important role in precision agriculture [3; 4]. Specifically, the adoption of deep learning and computer vision techniques in plant research has enabled scientists to learn the representation and regularities in high volume of data in order to increase plant productivity [5; 6].
Footnote 1: [https://www.usanpn.org/about/why-phenology](https://www.usanpn.org/about/why-phenology)
Footnote 2: [https://www.genome.gov/genetics-glossary/Phenotype](https://www.genome.gov/genetics-glossary/Phenotype)
Over the past few years, various deep generative models (viz., energy-based models, variational autoencoders, generative adversarial networks, normalizing flows) [7] have been proposed in the literature to model the distribution of input training patterns and generate new samples. Among these, Generative Adversarial Networks (GANs) have been found to be immensely successful in generating high-quality synthetic images from an existing distribution of sample real images. The purpose of this work is to put forward a GAN architecture for generating realistic-looking synthetic forestry images conditioned on certain phenotypic attributes. We use the greenness of vegetation canopy as a condition in generating synthetic images. Canopy greenness measurements provide information about the foliage present and its colors.3 Tracking canopy greenness is instrumental in the comprehensive understanding of the sources and sinks of carbon to reduce uncertainties in global carbon cycle. Further, (a) the leaf emergence increasing the greenness impacts the hydrologic processes by evapotranspiration; (b) the senescence in autumn, during which the leaves color switches from green to yellow and/or red [8] influences nutrient cycling process by adding nutrients to the forest floor; (c) the amount and the condition of the foliage present affects the surface energy balance. We hypothesize that the generated
synthetic images conditioned on the greenness of vegetation canopy would help render the appearance of the forest sites specific to a given greenness value. Moreover, the synthetic images could be utilized to predict other phenotypic attributes such as redness of plants, leaf area index (LAI), canopy cover, etc. Studies have shown that redness of plants is a better predictor of GPP-based (Gross Primary Productivity) start and end of growing season in some of the vegetation forest sites [9]. LAI quantifies the canopy development and is critical in photosynthesis process.
### Background
Our study is based on the RGB forestry imagery along with the derived greenness index curated by the PhenoCam Network.4 The images are captured by an automated, near-surface, remote-sensing digital camera placed at the top of the canopy at 30-minute intervals throughout the year [10]. Each pixel in an RGB image is represented by a triplet of digital numbers denoting the intensity of the red, green, and blue color channels. These images are processed by the PhenoCam Network to gather statistics about the greenness of the vegetation canopy, measured by the Green Chromatic Coordinate (GCC). GCC is the relative brightness of the green channel, normalized against the overall brightness of the red, green, and blue channels together. Additionally, the PhenoCam Network reports the redness index of plants, measured by the Red Chromatic Coordinate (RCC). It is defined as the relative brightness of the red channel, normalized against the overall brightness of the red, green, and blue channels together.
Footnote 4: [https://phenocam.sr.unh.edu/webcam/](https://phenocam.sr.unh.edu/webcam/)
\[GCC=\frac{G_{DN}}{R_{DN}+G_{DN}+B_{DN}}\quad(1)\qquad RCC=\frac{R_{DN}}{R_{DN}+G_{DN}+B_{DN}}\quad(2)\]
In general, the greenness and redness index are measured over a specific Region of Interest (ROI) on the image to describe a particular vegetation type in a mixed forest, such as Deciduous Broadleaf (DB), Evergreen Needleleaf (EN), Grassland (GR). The PhenoCam Network defines certain ROIs for each of the forest sites to measure the greenness and redness statistics, and each of these ROIs is designated by an ROI ID (e.g., DB_1000, EN_1000 etc.). The first two letters of the ROI ID indicate the vegetation type and the last four digits serve as a unique identifier to distinguish between multiple ROIs of same vegetation type at a given site. The GCC and RCC corresponding to the ROI of an image are calculated by taking the mean of GCC and RCC respectively, of the pixels in that ROI.
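A minimal sketch of this per-ROI computation, assuming an RGB image array and a boolean ROI mask (equations (1) and (2) applied per pixel, then averaged):

```python
import numpy as np

def roi_gcc_rcc(image, roi_mask):
    """image: HxWx3 RGB array; roi_mask: HxW boolean array."""
    rgb = image[roi_mask].astype(float)       # (n_pixels, 3)
    total = rgb.sum(axis=1)
    total[total == 0] = np.nan                # guard against all-black pixels
    gcc = np.nanmean(rgb[:, 1] / total)       # green chromatic coordinate
    rcc = np.nanmean(rgb[:, 0] / total)       # red chromatic coordinate
    return gcc, rcc
```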
We intend to use Type-I PhenoCam sites because of the high quality of the
captured images. The National Ecological Observatory Network (NEON)5 is one of the participating Type-I sites capturing the images of plant canopy across United States following the protocols defined by the PhenoCam Network (NEON Data Product DP1.00033 [11]). They have strategically formulated 20 ecoclimatic "Domains" grounded on the vegetation, landforms and ecosystem dynamics, involving 47 terrestrial field sites and 34 aquatic freshwater sites. In our experiment, we consider the terrestrial sites belonging to the NEON domain "D01-Northeast", which encompasses New England and north-eastern Seaboard states along with the northern end of the Appalachian range. This domain includes the following terrestrial sites:
Footnote 5: [https://www.neonscience.org/](https://www.neonscience.org/)
* Harvard Forest, Massachusetts, USA
* NEON Site ID: HARV
* PhenoCam Site ID: NEON.D01.HARV.DP1.00033
* Latitude: 42.53691; Longitude: -72.17265
* Bartlett Experimental Forest, New Hampshire, USA
* NEON Site ID: BART
* PhenoCam Site ID: NEON.D01.BART.DP1.00033
* Latitude: 44.063889; Longitude: -71.287375
Figure 1 shows some sample mid-day images corresponding to different times in a year for both sites along with the GCC and RCC values of two ROIs describing the vegetation types DB and EN. It can be observed that the greenness of plants varies throughout the year in such a way that it is low in winter, increases sharply in spring, and gradually falls over the summer. In Fall, the redness starts to increase.
### Objective and Contribution
The objective of this paper is to utilize the images of the above-mentioned NEON forest sites along with derived GCC values for a specific ROI to train a generative model, which synthesizes new examples of realistic-looking forestry images satisfying the given GCC value over the given ROI. We exploit the _concept_ of Conditional GAN (CGAN) [12] to generate forestry images conditioned on the _continuous_ attribute GCC and the ROI image rather than conditioning on any categorical attribute. The quality of the synthetic images is evaluated with the help of Structural SIMilarity
(SSIM) index [13], and the accuracy of GCC in the generated images is measured in terms of the Root Mean Squared Error (RMSE). Further, the synthetic images are utilized to predict another phenotypic attribute, viz., RCC, reported by the PhenoCam Network, which is not used to train the model. The predicted RCC values of the synthetic images are compared with the ground-truth RCC of the test images, and the RMSE is calculated. Experimental results indicate that the RMSE of the generated images is 2.1% (GCC) and 9.5% (RCC) for Harvard Forest, and 2.1% (GCC) and 8.4% (RCC) for Bartlett Experimental Forest, when the GAN model is trained individually on each site. In order to assess the efficacy of our proposed approach of predicting another phenotypic attribute from the synthetic images, we study the correlation between GCC and RCC reported by the PhenoCam Network. A negative linear correlation of magnitude approximately 0.2 is observed between these two phenotypic attributes for both forest sites, which indicates that the redness index is not highly correlated with the greenness index; therefore, the prediction of the redness index from the synthetic images generated by our GAN model is not directly controlled by the greenness index input to the model. The GAN model itself plays a significant role in predicting the redness index. Additionally, we verify whether the model trained on one forest site can be effectively adapted to generate images for other forest sites using less computational resource and time, referred to as cross-site training. The scalability of our proposed approach is assessed by extending the model to another vegetation type of the same forest site.

Figure 1: Sample mid-day images of NEON terrestrial sites.
GANs have been utilized to synthesize various kinds of images in the literature either by imposing a condition to control the images being generated or in an unconditional setting. Some examples of GAN-generated images are shown in Figure 2. Most of these applications synthesize objects having well-defined morphological structures. Researchers have also utilized GAN for image-to-image translation (transferring images from a source domain to a target domain), where the image itself is used as a condition [14]. These applications include translating a wide range of images, e.g., aerial to map, summer to winter, day to night, horse to zebra, etc. [15; 16].
To the best of our knowledge, our work is the first attempt to generate forest landscapes satisfying a phenotypic attribute, which is continuous in nature, over a certain portion of the image. As the overall geometry of the forestry images for a particular site always remains the same, the GAN is primarily required to learn the color of the foliage, i.e., green-up and green-down, based on the GCC value. It is always challenging to extract meaningful phenological information from automated plant imagery due to lighting variations, plant rotations, and occlusion [4]. Here, we focus on generating synthetic forestry imagery based on the greenness, which is visually appealing as well as phenotypically stable.

Figure 2: Examples of GAN-generated images available in the literature based on various datasets.
In a nutshell, the contributions of this work are as follows:
* Developing a GAN architecture conditioned on a continuous attribute over a certain portion of the image.
* Application of GAN in the domain of agriculture by generating synthetic forestry images satisfying a given phenotypic attribute over the ROI describing a vegetation type.
* Synthesizing biologically plausible and phenotypically stable images to better portray the appearance of the forest sites.
* Prediction of other phenotypic attributes, which were not used in the generation process, from the synthetic images.
In Section 2, we provide a brief literature review on different GAN frameworks and the application of deep learning in agriculture. Section 3 describes the proposed approach, followed by the GAN architecture developed in this work. The experiments and results are reported in Section 4. Section 5 concludes the paper.
## 2 Related Work
Since our goal is to build a GAN for application in the agricultural domain, the literature review covers both aspects. First, we introduce several GAN architectures proposed in the literature. Then, we present a summary of deep learning approaches, including GANs, that have been used in agriculture.
### Generative Adversarial Network (GAN)
GAN was first proposed by Goodfellow et al. in 2014 [17], using a multi-layer perceptron (MLP) network with a min-max loss function to generate synthetic images, known as the Standard GAN (SGAN). Later on, researchers suggested other loss functions, such as least-squares (LSGAN) [20], Wasserstein distance (WGAN) [23], Wasserstein distance with gradient penalty (WGAN-GP) [24], and hinge loss [25], in order to improve the performance and increase the stability of GAN training. Radford et al. came up with stable architectural guidelines for convolutional GANs, leading to a class of architectures named Deep Convolutional GAN (DCGAN) [26]. Mirza et al. developed CGAN by incorporating auxiliary information (a class label) during training to control the image being generated by the GAN model [12]. In [27], the authors proposed the continuous conditional generative adversarial network (CcGAN), which is built on continuous, scalar conditions (regression labels). Isola et al. designed an image-conditional GAN framework called Pix2Pix for image-to-image translation, using a set of aligned image pairs as training data [15]. Thereafter, the Cycle-consistent GAN (CycleGAN) [16] was proposed, adopting an unsupervised approach for image-to-image translation using unpaired training data with a cycle-consistency loss, eliminating Pix2Pix's requirement of aligned pairs of images. Karras et al. proposed a new training methodology for GANs, utilizing the idea of progressive neural networks (ProGAN) [19] to generate high-resolution images. In [18], Zhang et al. incorporated long-range dependencies by introducing self-attention modules on top of the convolution layers in the GAN architecture, referred to as the Self-Attention GAN (SAGAN). BigGAN was implemented on top of the SAGAN architecture by employing certain techniques (such as the truncation trick and orthogonal regularization), which substantially improve the performance of class-conditional GANs [28]. Liu et al. proposed an adaptive global and local bilevel optimization model (GL-GAN), which embeds a local bilevel optimization technique to improve poor-quality portions of an image, along with a traditional global optimization technique to optimize the whole image [29].
### Applications in Agriculture
In [5], the authors presented a comprehensive overview of the application of deep learning techniques in plant phenological research over the past few years, which indicates that most of the literature studied classification (e.g., phenological stages) and segmentation tasks (e.g., flower or fruit segmentation; counting buds, flowers, and fruits; presence of species, etc.) based on plant imagery using Convolutional Neural Networks (CNNs). Lee et al. utilized a CNN to better represent the features of leaf images for the identification of plants [30]. Cao et al. predicted the leaf phenology of DB forests, in terms of leaf growing dates after the start of the growing season in a year, from PhenoCam images using a CNN [31]. In [32], the authors developed a deep learning based platform, known as Deep Plant Phenomics, to accelerate image-based plant phenotyping.
Due to the non-availability of the large amounts of image data required for CNN implementation, GANs have been used for image data augmentation by synthesizing new realistic images in order to improve machine learning performance for various applications related to precision agriculture and plant phenotyping (plant disease recognition, weed control, fruit detection, leaf counting, leaf segmentation, plant seedling, plant vigor rating, etc.) [33]. Further, a semi-automated pipeline for data augmentation was proposed using GANs for agricultural pest detection [34]. In [35], the authors applied CycleGAN between Sentinel-1 (Synthetic Aperture Radar) and Sentinel-2 (optical) satellite data in order to improve crop type mapping and identification. Miranda et al. suggested modeling plant growth as an image-to-image translation task based on a conditional GAN, to predict a plant's growth stage as a function of its previous growing stage and the diverse environmental conditions which influence plant growth [36].
## 3 Methodologies
A typical GAN architecture consists of two models -- generator and discriminator. The generator learns the distribution of input images and computes a differentiable function, which maps a latent vector space to the input data space in order to generate synthetic images, whereas the discriminator classifies between real and synthetic images. The architecture adopts an adversarial training mechanism in which both the models are trained simultaneously by formulating them as a competition. The discriminator tries to improve its classification accuracy by correctly distinguishing between real and synthetic images, while the generator generates realistic synthetic images from random noise (e.g., spherical Gaussian) in order to deceive the discriminator. While any differentiable network can be used to implement a GAN, it is common to use deep neural networks for its implementation.
In order to develop our GAN model for generating synthetic forestry images conditioned on GCC over a specific ROI, we employ the _concept_ of CGAN [12], i.e., feeding auxiliary information to both the generator and the discriminator to exercise control over the images being generated. The CGAN in [12] utilized an MLP network as the baseline architecture and a categorical attribute as auxiliary information to generate images conditioned on class labels. However, recent advances have revealed the power of using CNNs to synthesize new examples of images. In particular, the DCGAN [26] architecture, implemented in an unconditional setting, has become one of the most popular and successful in the literature. On top of the basic architectural guidelines recommended by DCGAN, we propose a novel GAN architecture, which provides a continuous attribute, viz., GCC, and an ROI image as auxiliary input to the generator and the discriminator. Figure 3 outlines our proposed approach. Here, the notion is to utilize a continuous value as an auxiliary input to the generator so that the generator is conditioned on the continuous attribute. Therefore, during generator training, a random noise vector, the ROI image, and random GCC values within the range of GCC values for the given ROI of the forest site under consideration are input to the generator in order to generate synthetic images satisfying the given GCC over the given ROI. The synthetic images are then passed through the discriminator to estimate the probability of them being real, based on which the generator loss is calculated and the weight parameters of the generator model are updated through back-propagation while keeping the discriminator parameters constant. The discriminator is trained with the real images and their corresponding GCC values and the ROI image to calculate the discriminator loss on real images. Further, the synthetic images generated by the generator given the GCC values present in the training dataset are used to compute the discriminator loss on synthetic images. These two loss components are used to update the weight parameters of the discriminator model. The pre-processing steps of the inputs to the generator and the discriminator are described later in this section.
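The sketch below illustrates, under our own naming conventions, how such conditioning inputs can be assembled in PyTorch: the GCC scalar is broadcast to a constant spatial channel and concatenated with the ROI mask channel.

```python
import torch

def conditional_inputs(real_imgs, gcc, roi_mask):
    """Attach the GCC value and the ROI mask as two extra channels to the
    discriminator input; the same two channels condition the generator.
    Shapes and names are illustrative.

    real_imgs: (B, 3, H, W) images
    gcc:       (B,) GCC scalars
    roi_mask:  (B, H, W) tensor with 1 inside the ROI and 0 elsewhere
    """
    b, _, h, w = real_imgs.shape
    gcc_channel = gcc.view(b, 1, 1, 1).expand(b, 1, h, w)  # constant map
    roi_channel = roi_mask.unsqueeze(1).float()            # (B, 1, H, W)
    cond = torch.cat([gcc_channel, roi_channel], dim=1)    # (B, 2, H, W)
    return torch.cat([real_imgs, cond], dim=1), cond       # (B, 5, H, W)
```

The discriminator scores the five-channel tensor, while the generator receives the noise vector together with `cond`; how the generator consumes the spatial conditioning internally depends on the upsampling architecture of Figure 4. During the generator update, `gcc` would instead be sampled within the site-specific GCC range, as described above.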
Our GAN architecture is shown in Figure 4. We utilize the following guidelines recommended by the DCGAN architecture:
* Removal of fully connected hidden layers on top of convolutional features.
* Using strided convolutions for downsampling in the discriminator and fractional-strided convolutions for upsampling in the generator instead of pooling and scaling, respectively.
* Applying LeakyReLU activation in all the layers of the discriminator and ReLU in all the layers of the generator except the last layer.
* Use of the TanH activation function in the last layer of the generator.

Figure 3: Outline of our proposed approach: the generator inputs a random noise vector, a random GCC value within the range of GCC values for a specific ROI of the forest site under consideration, and the corresponding ROI image to generate a synthetic image, which satisfies the given GCC over the given ROI. The discriminator inputs the real or synthetic image together with its corresponding GCC value and the ROI image to estimate the probability of the input image being real.
Additionally, we integrate self-attention modules, proposed as part of SAGAN [18], in both the generator and the discriminator to enable long-range dependencies. We use spectral normalization in the discriminator, and spectral normalization along with batch normalization in the generator, as suggested by SAGAN, in order to improve training stability. The PhenoCam RGB image is of size \(960\times 1296\). Due to limited computational resources and long training times, the length and width of the PhenoCam images and the ROI image are reduced to half of their original size using bilinear interpolation, and synthetic images of dimension \(480\times 648\) are generated by our GAN architecture. After reducing the size, we validate that the GCC of the resized PhenoCam images, calculated using Equation (1) based on the ROI "DB_1000", is the same as that of the original-sized images provided by the PhenoCam website. Therefore, the computation of the greenness as well as the prediction of the redness index are not affected by the resizing of our generated images.
Figure 4: Our GAN architecture: the generator uses transposed convolution layers for upsampling, whereas the discriminator uses convolution layers for downsampling. Spectral normalization along with batch normalization is used in the generator, and spectral normalization is used in the discriminator, to increase stability during training. Self-attention modules are incorporated in both the generator and the discriminator to improve the quality of the synthetic images (an ablation study is provided in Section 4.1.6).

The input real images are normalized to \([-1,1]\) during training to make them compatible with the output (synthetic images) of the generator, which uses the TanH activation function in its last layer. The GCC value is multiplied by 100 and rounded to 2 decimal digits (referred to as the "adjusted" GCC in this paper) before feeding it to the GAN model, so as to increase the variance in the input GCC values across the training images, consequently improving the model's ability to discriminate between different GCC values. Further, the ROI image given by the PhenoCam website is a black-and-white image where the black portion denotes the region of interest (Figure 1). With the intention of focusing on the ROI, the pixel values in the ROI image are inverted, i.e., they are set to a value of 1 over the ROI and 0 for the rest of the image. The pre-processed ROI image and the adjusted-GCC inputs are added as additional channels to the inputs of the generator and the discriminator. During training, orthogonal initialization is used for the weight parameters in the convolution and linear layers, as suggested in [28]. We use the hinge loss [25] as the adversarial loss for GAN training. The hinge losses for the discriminator and the generator are defined below:
\[L_{D}=-\mathbb{E}_{x\sim p_{data}}[\min(0,-1+D(x))]-\mathbb{E}_{z\sim p_{z}}[\min(0,-1-D(G(z)))] \tag{3}\]

\[L_{G}=-\mathbb{E}_{z\sim p_{z}}[D(G(z))] \tag{4}\]
where \(D(x)\) is the probability that \(x\) comes from the real data distribution \(p_{data}\); \(z\) is the input to the generator; \(G(z)\) is the generator's output on the given input \(z\); and \(D(G(z))\) is the discriminator's estimate that the generated synthetic image is real.
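A minimal PyTorch rendering of Equations (3) and (4), using the identity \(-\min(0,-1+t)=\max(0,1-t)\); the function names are ours.

```python
import torch

def d_hinge_loss(d_real, d_fake):
    """Discriminator hinge loss, Equation (3): the two negated min terms
    reduce to ReLU expressions averaged over the batch."""
    return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    """Generator hinge loss, Equation (4): -E[D(G(z))]."""
    return -d_fake.mean()
```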
To measure the similarity between the real images and the synthetic images, the SSIM index [13] is used. Motivated by the fact that human visual perception is highly adapted to extracting structural information from a scene to identify its difference from a reference, this metric extracts the structural information from the sample and the reference image based on three key features (luminance, contrast, and structure) and provides a score on the scale of \([-1,1]\). The higher the score, the more similar the sample image is to the reference image. We use the built-in SSIM method of the Scikit-Learn [37] library to calculate the SSIM index of the generated synthetic images against the test images. Figure 5 illustrates the overall process used by the generator to generate the synthetic images.
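For reference, the snippet below shows an equivalent SSIM call using the `structural_similarity` function shipped by scikit-image, the library providing the standard SSIM implementation; the wrapper function name is our own.

```python
from skimage.metrics import structural_similarity

def ssim_index(synthetic, reference):
    """SSIM between a synthetic image and a reference real image.

    Both arguments are H x W x 3 uint8 RGB arrays; channel_axis marks
    the color axis (scikit-image >= 0.19)."""
    return structural_similarity(synthetic, reference, channel_axis=2)
```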
## 4 Experiments
As mid-day images are the most significant in understanding green-up and green-down across the year, the images captured between 10:00 a.m. and 2:00 p.m. daily (9 images per day) throughout the year are considered in this work. Based on data availability, the images captured from January 2017 to December 2020 are used for the training of our GAN model, and images from January 2021 to December
2021 are used for testing. Upon filtering the images based on the availability of GCC values provided by the PhenoCam Network, the numbers of training and test images are, respectively, 12,149 and 3,189 for Harvard Forest, and 12,307 and 3,227 for Bartlett Experimental Forest. For training the GAN model, we use the Adam optimizer with a learning rate of 0.0001 for the discriminator and 0.00005 for the generator, which we found to be best suited for our dataset. The beta1 value of the Adam optimizer is set to 0.9 and 0.5 for the Harvard and Bartlett Forests, respectively, and the beta2 value is set to 0.999 for both sites (see the sketch below). The rest of this section describes the set of experiments conducted as part of this work.
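A sketch of this optimizer configuration, with placeholder networks standing in for the Figure 4 models.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for the Figure 4 generator/discriminator.
generator = nn.Sequential(nn.Linear(128, 64))
discriminator = nn.Sequential(nn.Linear(64, 1))

# Learning rates from this section; beta1 is 0.9 for Harvard Forest and
# 0.5 for Bartlett Experimental Forest, and beta2 is 0.999 for both.
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.9, 0.999))
opt_g = torch.optim.Adam(generator.parameters(), lr=5e-5, betas=(0.9, 0.999))
```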
### Training Individual Sites with a Specific Vegetation Type
We first individually train our GAN model on both the forest sites based on the ROI labeled as "DB_1000" (denoting deciduous broadleaf vegetation) with the above-mentioned parameters. Figures 6 (Harvard) and 7 (Bartlett) show (a) examples of real images with GCC values sampled across the entire range from the test dataset and (b) the corresponding synthetic images with their SSIM indices. The GCC is calculated over the ROI of the synthetic images to compute its deviation from the input GCC. Additionally, the RCC is calculated over the ROI of the synthetic images to compare with the ground-truth RCC of the test images.
#### 4.1.1 Assessing Quality of Synthetic Images
In order to assess the overall quality of the synthetic images, we use the SSIM index by comparing the synthetic images with the corresponding test images (left side of Figure 8). However, for a single GCC value, more than one PhenoCam image may be available in the test dataset. Therefore, we recompute the SSIM index by utilizing the maximum SSIM score obtained when comparing each synthetic image with all the test images having the given GCC value. We refer to this recomputed
SSIM index as the _"adjusted"_ SSIM index (right side of Figure 8). Based on the adjusted-SSIM index, the generated synthetic images appear much more similar to the real images for both forest sites. It is possible that our model suffers from the mode collapse problem [23], generating the same image every time for a particular GCC value. To counter this concern, we perform an analysis in Section 4.1.2.

Figure 5: Overall process used by the generator: the generator model produces a synthetic image given an ROI image and a GCC value, and SSIM is used to assess the quality of the synthetic image by comparing it with the real image.
Figure 6: Sample test images and synthetic images for Harvard Forest (SSIM indicates the similarity score of the synthetic image with the corresponding test image. GCC and RCC correspond to the ROI "DB_1000" indicated on the right).

Moreover, to acquire an understanding of the SSIM across real images for a particular GCC, we plot the histogram of SSIM indices for every pair of test images corresponding to a single GCC value (Figure 9). For Harvard Forest, there are 919 unique adjusted-GCC values across 3,189 test images, out of which 523 GCC values have more than one image. Similarly, for Bartlett Forest, 873 unique adjusted-GCC values are present across 3,227 test images, out of which 545 GCC values have more than one image. We consider the GCC value for which the highest number of test images is available for plotting the histogram. This analysis can then be utilized as a benchmark (a lower and an upper bound) for the SSIM score that can be achieved for the PhenoCam images. For Harvard Forest, most of the image pairs corresponding to the GCC value 0.3295 have an SSIM index in the range [0.3, 0.4], and the lowest possible score is 0.18. We observe that 43.5% of the synthetic images for this forest site have an adjusted-SSIM index of more than 0.3, and 83.22% have an adjusted-SSIM index above the lowest possible score (0.18). Similarly, in the case of Bartlett Forest, the range is [0.4, 0.5] for the real images corresponding to the GCC value 0.3428, with the lowest possible score being 0.23; 31% of the synthetic images have an adjusted-SSIM index of more than 0.4, and 99.3% have an adjusted-SSIM index above the lowest possible score (0.23). Therefore, our generated synthetic images are consistent with the quality of the real images.
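The adjusted-SSIM index can be computed as a simple maximum over the same-GCC test images, as in the following sketch (function and argument names are illustrative).

```python
from skimage.metrics import structural_similarity

def adjusted_ssim(synthetic, same_gcc_test_images):
    """Adjusted-SSIM index: the maximum SSIM obtained when comparing a
    synthetic image against every test image sharing its adjusted-GCC
    value."""
    return max(structural_similarity(synthetic, real, channel_axis=2)
               for real in same_gcc_test_images)
```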
#### 4.1.2 Fidelity and Variety of Synthetic Images
In an attempt to judge the fidelity and variety of the synthetic images generated by our GAN model, we present some sample test images corresponding to a single GCC value together with synthetic images generated given that GCC value (Figure 10). Though the overall structure remains the same for all synthetic images corresponding to a particular GCC, the computed GCC and predicted RCC values are not exactly the same, indicating the diversity of the images generated by the model. At the same time, we observe that the GCC of the generated images is very close to the given GCC, confirming the fidelity of the synthetic images.
Figure 7: Sample test images and synthetic images for Bartlett Experimental Forest (SSIM indicates the similarity score of synthetic image with the corresponding test image. GCC and RCC correspond to the ROI “DB_1000” indicated on the right side).
#### 4.1.3 Evaluating Accuracy of GCC and RCC
Apart from assessing the quality, a comparative study using GCC and RCC is performed between the test images and the generated images (Figure 11). The GCC distribution across the test images is similar to the computed GCC distribution across the synthetic images (more so than for the RCC distribution), which in turn establishes that the conditional part of our GAN architecture, based on GCC and the ROI, works.
Figure 8: SSIM and adjusted-SSIM index of synthetic images against test images (SSIM index indicates the score of the synthetic image after comparing it with the corresponding test image and adjusted-SSIM index is the maximum score obtained for the synthetic image after comparing with all the test images corresponding to the given GCC value).
However, our objective is also to predict other phenotypic attributes, such as RCC, from the synthetic images. From that perspective, it can be observed that the range of the predicted RCC of the synthetic images covers the most frequent values of RCC of the test images. The RMSEs of the GCC and RCC values of the generated images with respect to the ground-truth GCC and RCC values of the test images are 0.008 (2.1%) and 0.034 (9.5%), respectively, in the case of Harvard Forest; the corresponding RMSEs for Bartlett Forest are 0.009 (2.1%) and 0.035 (8.4%).

Figure 9: SSIM index across test images corresponding to a single GCC value (obtained by calculating SSIM for each pair of real images having the same GCC value).

Figure 10: Sample test images and synthetic images for Harvard Forest corresponding to a single GCC value over the "DB_1000" ROI, indicating the variety of images generated by our GAN model.
#### 4.1.4 Evaluating Efficacy of our GAN model
As already mentioned, our goal is to build a GAN model conditioned on a continuous attribute over a specific portion of the image. As a means to evaluate this, we choose only those sample images from the test dataset corresponding to GCC values that were not used to train the model (i.e., not present in the training dataset). For Harvard Forest, we find 97 such test images involving 79 adjusted-GCC values; some of the original samples with the corresponding generated images are shown in Figure 12. The RMSEs of GCC and RCC across these 97 synthetic images are 5% and 9.8%, respectively. In the case of Bartlett Forest, there are 42 such images in the test dataset involving 34 unique adjusted-GCC values, and the RMSEs of GCC and RCC across these images are 4.6% and 6%, respectively.
#### 4.1.5 Analyzing Significance of Proposed Work
A kind of blurriness is detected across PhenoCam images for certain GCC values. This work also aims to improve the quality of appearance of the forest sites based on a greenness value. Therefore, we take some of the blurred sample images from the test set of Harvard Forest and generate synthetic images using our GAN model as shown in Figure 13. It is observed that the generated images could be utilized to better visualize the appearance of the forest sites in these cases. Additionally, plant biologists could leverage these synthetic images to gain a better understanding of other phenotypic attributes.
#### 4.1.6 Ablation Study on Self-Attention Modules
In order to measure the impact of self-attention modules on the performance of our GAN architecture, we train the GAN model for the Harvard Forest after excluding self-attention modules. Figure 14 shows the similarity scores obtained between the test images and the corresponding generated images by the model trained without self-attention modules. The comparison with the similarity scores achieved with self-attention modules for Harvard Forest shown on Figure 8 reveals that the addition of self-attention modules drastically improves the quality of the generated images in terms of SSIM index as well as adjusted-SSIM index.
Figure 11: GCC and RCC distribution across test images and corresponding synthetic images over the “DB_1000” ROI.
### Cross-site Training: Harvard Forest to Bartlett Experimental Forest
We conduct a study to verify the generalizability of our model, i.e., to determine whether the model trained on one forest site can be extended to other sites using a small amount of data and a small number of iterations. In this regard, the model trained on the Harvard Forest dataset for 975 epochs (described in Section 4.1) is considered. It is then fine-tuned with 50% of the training data from Bartlett Forest for 100 epochs, with the training parameters derived for this data. We observe that the model effectively adapts to generate synthetic images for the other site. A comparative analysis is performed between (1) the model trained from scratch for Bartlett Forest described in Section 4.1, and (2) the cross-site trained model. In case (1), the RMSEs of the greenness and redness indices of the generated images are 4.8% and 12.6%, respectively, whereas in case (2) we obtain RMSEs of 3% and 12.2%, respectively. The SSIM and adjusted-SSIM scores of the synthetic images compared against the test images are presented in Figure 15. Figure 16 shows some sample images from the test set and the corresponding synthetic images for both cases.

Figure 12: Sample test images with GCC values not used in training and the corresponding synthetic images for Harvard Forest, depicting the ability of our GAN model to generate synthetic images given a GCC value within the range used in training (GCC and RCC correspond to the "DB_1000" ROI indicated on the right).

Figure 13: Sample blurred test images and corresponding synthetic images for Harvard Forest, illustrating the potential of our GAN model to better portray the appearance of the forest based on a greenness value (GCC and RCC correspond to the ROI "DB_1000" indicated on the right).
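A hypothetical sketch of the cross-site procedure: the Harvard-trained weights initialize the networks, which are then fine-tuned on half of the Bartlett training data for 100 epochs. The checkpoint file names, the dataset object, and the `train_step` helper are illustrative placeholders, not artifacts of our implementation.

```python
import torch
from torch.utils.data import DataLoader, Subset

# Hypothetical checkpoints from the Harvard Forest training run.
generator.load_state_dict(torch.load("harvard_generator.pt"))
discriminator.load_state_dict(torch.load("harvard_discriminator.pt"))

# Fine-tune on 50% of the Bartlett training data (every second sample).
half = Subset(bartlett_train_set, range(0, len(bartlett_train_set), 2))
loader = DataLoader(half, batch_size=16, shuffle=True)

for epoch in range(100):                       # 100 fine-tuning epochs
    for real_imgs, gcc, roi_mask in loader:
        train_step(real_imgs, gcc, roi_mask)   # one adversarial update
```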
### Scalability to Other Vegetation Type
Next, we attempt to assess the scalability of our GAN model trained on a particular vegetation type to another vegetation type at the same forest site. For this, the model trained on the Harvard Forest dataset with the "DB_1000" ROI described in Section 4.1 is first examined with another ROI, "EN_1000", at the same forest site; RMSEs of 6.4% (GCC) and 7.11% (RCC) are obtained. Thereafter, the model is further trained for 25 epochs with 25% of the Harvard Forest training data for the ROI "EN_1000", and we obtain RMSEs of 2.4% (GCC) and 7.06% (RCC). Figure 17 shows some sample images from the test dataset and the corresponding synthetic images for each case. We observe that the SSIM index and GCC are improved after the "DB_1000" model is extended using just 25% of the training data from the new ROI "EN_1000".
## 5 Conclusion
In this work, we present a novel GAN architecture for synthesizing forestry images satisfying a specific phenotypic attribute, viz., greenness index over the ROI of an
image. Experiments on the PhenoCam dataset indicated that the synthetic images generated by our GAN model can be utilized to (a) visualize the appearance of a forest site based on the greenness value, and (b) predict other phenotypic attributes (e.g., redness index) that were not used during image synthesis. The SSIM scores between the generated images and real images were observed to be analogous to the SSIM scores between real images, thereby substantiating the quality of the generated images. Further, the proposed model is capable of producing a variety of images pertaining to a particular GCC value. It also has the ability to generate forestry images corresponding to GCC values not used during training but within the specific range defined for the forest site. We also demonstrated that our GAN model trained on one forest site can be fine-tuned to generate images for other forest sites, which in turn establishes the generalization capability of the model. In addition, the model is scalable to other vegetation types within the same forest site in an efficient manner.

Figure 14: SSIM index of synthetic images against test images for Harvard Forest after training without self-attention modules (comparison with Figure 8 shows the improvement of the SSIM score after adding self-attention modules to our GAN model).

Figure 15: Cross-site experiment: Comparison of SSIM index for synthetic images against the test dataset for Bartlett Experimental Forest.
From a broader perspective, this work aims to advance the study on image generation by identifying patterns in images that do not have distinct morphological structure. Rather, our model automatically learned the phenomenon of green-up and green-down based on the colors and textures of images. Additionally, we applied conditioning on a certain portion of the image (ROI), which gave us control over the
image generation process.

Figure 16: Cross-site experiment: Sample test images and synthetic images for Bartlett Experimental Forest (SSIM indicates the similarity score of the synthetic image with the corresponding test image. GCC and RCC correspond to the "DB_1000" ROI indicated on the right).
However, due to the limited size of the training dataset and asymmetric distribution of GCC values across training images, the proposed model was unable to generate high-quality images over some GCC values. It must also be noted that due to computational and time constraints, the size of the generated images was set to be smaller than that of the original PhenoCam images.
Currently, we are working to further improve the quality of the generated forestry images by using stable diffusion models [38]. This work can also be extended to other forest sites belonging to other NEON domains. In addition, other phenological and
phenotypic information (e.g., LAI, canopy cover) could also be extracted from the synthetic images. We believe that the work reported in this paper provides a first step in leveraging generative AI principles from pattern recognition and computer vision for plant phenological research.

Figure 17: Cross-vegetation experiment: Sample test images and synthetic images for Harvard Forest (SSIM indicates the similarity score of the synthetic image with the corresponding test image. GCC and RCC correspond to the ROI "EN_1000" indicated on the right).
## Acknowledgments
This work is supported by the National Science Foundation (NSF Awards 1939945, 1940059, 1940062, 1940330, Harnessing the Data Revolution). We also thank all other project members, especially Dr. Bryan Heidorn, Dr. David LeBauer, Dr. Jessica Guo, Dr. Anne Thessen, Dr. Laurel Cooper, and Dr. Pankaj Jaiswal.
|
2302.01740 | SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification | Deep learning achieves outstanding results in many machine learning tasks. Nevertheless, it is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model. The modified training samples have a secret property, i.e., a trigger. At inference time, the secret functionality is activated when the input contains the trigger, while the model functions correctly in other cases. While there are many known backdoor attacks (and defenses), deploying a stealthy attack is still far from trivial. Successfully creating backdoor triggers depends on numerous parameters. Unfortunately, research has not yet determined which parameters contribute most to the attack performance. This paper systematically analyzes the most relevant parameters for the backdoor attacks, i.e., trigger size, position, color, and poisoning rate. Using transfer learning, which is very common in computer vision, we evaluate the attack on state-of-the-art models (ResNet, VGG, AlexNet, and GoogLeNet) and datasets (MNIST, CIFAR10, and TinyImageNet). Our attacks cover the majority of backdoor settings in research, providing concrete directions for future works. Our code is publicly available to facilitate the reproducibility of our results. | Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, Stjepan Picek, Mauro Conti | 2023-02-03T14:00:05Z | http://arxiv.org/abs/2302.01740v2 |

# SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
###### Abstract
Deep learning achieves outstanding results in many machine learning tasks. Nevertheless, it is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model. The modified training samples have a secret property, i. e., a trigger. At inference time, the secret functionality is activated when the input contains the trigger, while the model functions correctly in other cases. While there are many known backdoor attacks (and defenses), deploying a stealthy attack is still far from trivial. Successfully creating backdoor triggers depends on numerous parameters. Unfortunately, research has not yet determined which parameters contribute most to the attack performance.
This paper systematically analyzes the most relevant parameters for the backdoor attacks, i.e., trigger size, position, color, and poisoning rate. Using transfer learning, which is very common in computer vision, we evaluate the attack on state-of-the-art models (ResNet, VGG, AlexNet, and GoogLeNet) and datasets (MNIST, CIFAR10, and TinyImageNet). Our attacks cover the majority of backdoor settings in research, providing concrete directions for future works. Our code is publicly available1 to facilitate the reproducibility of our results.
backdoor attacks, backdoor triggers, computer vision

Footnote 1: Code will be shared after paper acceptance.
## 1 Introduction
Deep neural networks (DNNs) have gained significant popularity over the past decade due to their performance in various application domains, including computer vision [1], speech recognition [2], and neural translation [3]. One of the key benefits of DNNs is their ability to automatically learn and extract features from raw data, which reduces the need for manual feature engineering and makes them particularly well-suited for tasks where the data is complex or unstructured, such as image and audio processing [4]. Additionally, DNNs can efficiently process large amounts of data, achieving state-of-the-art performance on various tasks. However, DNNs also have some limitations. For example, they require a large amount of labeled training data to perform well [5], and they can be prone to overfitting if not adequately regularized [6]. They also require significant computational resources and can be challenging to interpret due to their complex decision-making processes [7].
The high computational requirements for training DNNs have led to emerging trends such as outsourced training and machine learning as a service [8]. These trends have introduced new threats for deployed models when they are provided as black boxes by third parties. In addition, malicious data samples can be easily embedded in widely-used crowdsourced datasets [9]. One approach to address the high computational requirements for training is _transfer learning_, which involves using pre-trained models as a starting point for training on a new task [10]. This can significantly reduce the amount of labeled training data and computational resources needed, as the pre-trained model has already learned many general features useful for diverse tasks. This has made transfer learning an essential tool in developing DNNs, particularly when labeled training data is limited or expensive. In addition to the transfer learning, there have also been efforts to improve the interpretability of DNNs [7], [11]. This is important for various reasons, including the need to understand how a model makes decisions, the ability to identify and correct errors, and the development of trust in the model's outputs. One approach to improve interpretability uses visualization techniques, which can provide insight into the internal workings of a DNN and help identify patterns in the data the model is learning [7].
Overall, DNNs have shown impressive performance on a wide range of tasks. Still, much work is needed to address their limitations and improve their interpretability and robustness. One particularly concerning threat is the backdoor attack, which results in targeted misclassifications
when a specific trigger is present in the input. Backdoor attacks can be mounted through data poisoning [8], code poisoning [12], or model poisoning [13]. There has been a significant amount of research on backdoor attacks and their defenses in the literature [14, 15]. Still, these works are empirical, based on prior assumptions, and do not cover a wide range of the backdoor parameter space.
Our paper focuses on the intersection between computer vision for image classification and data poisoning, the most common setup for mounting backdoor attacks. In particular, we systematically evaluate the impact of various parameters on the performance of different backdoor attacks. Our work extends previous research in this area [16] by using larger datasets with higher-dimensional images (we upsampled MNIST images to \(64\times 64\), CIFAR10 to \(128\times 128\), and TinyImageNet to \(224\times 224\)) and more classes. We also consider different state-of-the-art backdoor attacks and defenses. See Section 6, Table 4, and Table 5 for a detailed explanation of the differences from previous works. We find that the trigger size is more influential than the poisoning rate and that the performance of backdoor attacks is affected by factors such as the model architecture and the characteristics of the trigger. Finally, we demonstrate that AlexNet is more robust against data-poisoning backdoor attacks, and we conduct experiments to explain this finding.
Our main contributions are summarized as follows:
* We extend the work described in [16] by exploring a more comprehensive range of factors affecting backdoor performance. We also experiment with different state-of-the-art backdoor attacks. This allows us to provide findings that generalize well to the tested datasets and models, which represent the state-of-the-art.
* Based on the extensive experimentation, we extract 1) dataset/model-specific and 2) general findings, which provide valuable insights for understanding the backdoor effect while easing the design of new attacks and defenses.
* We demonstrate that the performance of backdoor attacks is affected by various factors, including the model architecture and the characteristics of the trigger.
* We show AlexNet is more robust against data poisoning backdoor attacks and conduct experiments to explain it.
* We additionally experiment with state-of-the-art defenses, evaluating the viability of the attacks in real-life scenarios.
## 2 Background
### _Deep Neural Networks (DNNs)_
Deep learning algorithms are parameterized functions \(\mathbb{F}_{\theta}\) that map an input \(\mathbf{x}\in\mathbb{R}^{N}\) to some output \(y\in\mathbb{R}^{M}\). \(\theta\) represents the parameters of the function, which are optimized via an iterative process called training. In the image domain, \(\mathbf{x}\) is an image, represented as a vector of pixel values, while \(y\) is the vector of probabilities of the image being of a class \(c\in k\) from a group of classes \(k\). For training, a dataset is needed, i.e., a set of labeled samples \(\mathcal{D}=\{\mathbf{x},y\}^{n}\) of size \(n\). During training, the algorithm tries to find the optimal parameters \(\theta^{\prime}\) by minimizing the "distance" from the predicted labels to the ground truth ones. The distance calculation is done leveraging a loss function \(\mathcal{L}\), which penalizes the algorithm depending on how far the prediction is from the actual label:
\[\theta^{\prime}=\underset{\theta}{\operatorname{argmin}}\sum_{i=1}^{n}\mathcal{L}(\mathbb{F}_{\theta}(\mathbf{x}_{i}),y_{i}).\]
A convolutional neural network (CNN) performs convolutions in the input extracting relevant features linked to fully connected layers. The key intuition is to reduce the input space without losing information, which is easier to process in the consequent layers. This is achieved by kernels that move horizontally and vertically in the input in steps of a predefined value (stride). By doing so, the kernel extracts high-level representations as corners, shapes, or edges. Additionally, CNNs are accompanied by pooling layers that further reduce the computational complexity, extract the most relevant features, and reduce any noise captured by the kernels.
### _Transfer Learning_
Transfer learning is adapting a pre-trained DNN to a related task without retraining the entire model from scratch [10]. It involves adjusting the model's parameters, typically the fully connected layers, while freezing the convolutional layers in the case of CNNs. This approach offers cost-effective benefits when labeled training data is limited or expensive [8]. Transfer learning has wide applications in fields such as computer vision [17], natural language processing [18], and speech recognition [19], allowing DNNs to leverage previous knowledge for efficient learning. Factors such as task similarity, available labeled data, and feature reuse influence transfer learning's effectiveness [20]. It is a powerful tool for adapting DNNs to new tasks, especially in scenarios with limited labeled data.
### _Backdoor Attacks in DNNs_
Backdoor attacks compromise DNNs during training and embed a secret functionality in the deployed model. This secret can be embedded through data poisoning [8, 21], code poisoning [12], or direct modification of the model's weights [13]. In this work, we follow data poisoning by injecting poisoned samples into the training set. A poisoned sample contains a trigger, and its label is usually altered to the target label, which is the output of the model when the backdoor is activated. In the image domain, the trigger is usually (but not limited to) a pixel pattern of a given color, e.g., white or black, placed anywhere over the image, creating a set of poisoned samples \(\hat{\mathbf{x}}\in D_{poison}\). The percentage of poisoned samples in the training set is controlled by \(\epsilon=\frac{m}{n}\), where \(m\) is the number of poisoned samples, \(n\) is the size of the original training set, and \(m\ll n\). A small \(\epsilon\) makes the backdoor harder to embed but keeps it stealthier, as the small number of poisoned samples will not affect the original task much. A large \(\epsilon\) leads to a stronger backdoor, but it could affect the original task substantially, making it somewhat unrealistic. During training with poisoned samples, the backdoor effect is included as follows:
\[\theta^{\prime}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n}\mathcal{L}(\mathbb{F}_{\theta}(\mathbf{x}_{i}),y_{i})+\sum_{j=1}^{m}\mathcal{L}(\mathbb{F}_{\theta}(\hat{\mathbf{x}}_{j}),\hat{y}_{j}).\]
After training, the backdoor is embedded in the DNN. The DNN functions normally on clean inputs, but the backdoor is activated in the presence of the trigger.
As stated, the backdoor trigger is usually a pixel pattern placed on the clean image. We refer to this as the _BadNets attack_. However, more advanced attacks have also been developed. For instance, we additionally focus on SSBA [22] and WaNet [23]--see Section 2.5. SSBA leverages an encoder-decoder model as in [24] to add invisible perturbations in the images. The encoder-decoder takes an image and an attacker-defined string and encodes it, resulting in a slightly perturbed image. Note that the perturbation is sample-specific. Following the standard backdoor training, the malicious behavior gets injected into the model. WaNet generates a trigger for images through a two-stage process. First, it creates a warping field using a normalized, upsampled, and clipped random tensor. Second, it trains the network with three modes: Attack (poisoning 10% of samples with warping), Noise (adding warping and Gaussian noise to 20% of samples, keeping labels unchanged), and Clean (training the remaining dataset without modification).
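As an illustration of the dirty-label data poisoning described above, the following sketch stamps a BadNets-style square patch on a fraction \(\epsilon\) of the training images and relabels them with the target class; the trigger size, color, and position arguments mirror the parameters varied in our experiments, and all names are our own.

```python
import numpy as np

def poison_sample(image, target_label, size=4, color=(255, 255, 255),
                  position="bottom_right"):
    """BadNets-style dirty-label poisoning: stamp a square patch trigger
    on the image and relabel it with the attacker's target class."""
    poisoned = image.copy()
    h, w = poisoned.shape[:2]
    if position == "bottom_right":
        y, x = h - size, w - size
    else:  # e.g., "top_left"
        y, x = 0, 0
    poisoned[y:y + size, x:x + size] = color
    return poisoned, target_label

def poison_dataset(images, labels, target_label, epsilon=0.1, seed=0):
    """Poison a fraction epsilon of the training set in place."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(epsilon * len(images)),
                     replace=False)
    for i in idx:
        images[i], labels[i] = poison_sample(images[i], target_label)
    return images, labels
```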
### _On Backdoor Interpretability_
Interpretability techniques are used to explain the behavior of ML models. The interpretability of DNNs refers to the ability to understand the network's decision, which can be obtained by different methods, such as feature visualization. Typically, there is a trade-off between accuracy, simplicity, and explainability. For instance, shallow models such as linear regression or decision trees are highly interpretable [25, 26]. By using DL models, we sacrifice the interpretability to achieve better performance, which often increases the complexity of the model by adding more layers.
Recently, class activation mapping (CAM) was developed for CNNs; it identifies the regions of an image that are most linked to the model's prediction [27]. CAM modifies the architecture of the target model by changing the convolutional layers for fully connected layers, which are much more interpretable but incur a severe degradation in accuracy. Subsequent work by Selvaraju et al. introduced a generalization of CAM called gradient-weighted CAM (Grad-CAM) [7]. Instead of modifying the model's architecture, it uses the gradient of a given class to produce a localization of the important regions of the image. Precisely, Grad-CAM computes the target class's gradients with respect to the feature map activations of a convolutional layer.
Post-hoc interpretation methods have also been developed to explain individual predictions made by a DNN. These methods are applied after the model has been trained and do not require changes to the model's architecture or training process. One example is LIME (Local Interpretable Model-Agnostic Explanations) [11]. LIME generates explanations by fitting a simple, interpretable model to the predictions made by a DNN in the vicinity of a particular input, allowing the model's behavior to be understood locally.
Other post-hoc interpretation methods are SHAP [28] and DeepLIFT [29]. SHAP uses Shapley values, a concept from game theory, to attribute the prediction made by a DNN to the individual features of the input. DeepLIFT computes the contribution of each feature to the final prediction by comparing the model's output with a reference score, which the user can choose.
Overall, various approaches are available for explaining ML models' behavior, including visualization techniques and post-hoc interpretation methods. The choice of method will depend on the specific requirements and constraints of the task. In this work, we will use Grad-CAM to understand the decisions of the poisoned models and compare their behavior with their clean counterparts, which is a suitable method for understanding the backdoor behavior [26].
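For reference, a minimal sketch of the Grad-CAM computation: the class score's gradients with respect to a convolutional layer's feature maps are globally averaged and used to weight those maps. The hook-based implementation below is our own, not the original authors' code.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Minimal Grad-CAM heat map for one image.

    image: (1, 3, H, W) input tensor; target_layer: a conv module of model.
    """
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o))           # save activations
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))     # save their gradients

    model.eval()
    score = model(image)[0, class_idx]               # target class score
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()      # normalized heat map
```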
### _Motivation_
In recent years, DL has become an extremely popular and rapidly evolving domain as a means of solving various real-world problems. Due to the need for adaptation to diverse tasks, DNNs have become more complex and are often viewed as "black boxes". Indeed, Gilpin et al. [30] established a direct relationship between models' complexity and their (lack of) explainability. Furthermore, efforts to create complex yet explainable models have been ongoing within the research community [31]. At the same time, DNNs have gained rising attention in the security community due to their vast applicability and impact. The ability to understand and explain the inner workings of these models becomes particularly important in the context of DL attacks, as a lack of explainability can hinder our understanding of the root cause of security problems.
One type of DL attack that has garnered significant attention is the backdoor attack. Indeed, it has recently been subject to deep investigation in a wide range of domains [15]. In image recognition, the proposed attacks are heterogeneous in their trigger generation, backdoor injection, or threat model. Thus, comparing these attacks is far from trivial, and even impossible in some cases. For instance, the models, datasets, experimental setups, and attack parameters are only a few of the factors to consider when comparing the performance of different attacks. Furthermore, even if the attacks are comparable, understanding the influence of the attack's parameters on the backdoor performance could still be difficult.
In this paper, we aim to address these issues by systematically investigating the impact of common parameters on the effectiveness of backdoor attacks, in terms of both clean and backdoor performance. We analyze the core group of backdoor attacks in image classification, upon which the rest of the attacks build. For that, we investigated the backdoor attack literature in the image domain, from which we selected the most representative attacks, namely BadNets [8], SSBA [22], and WaNet [23]. Then, we further investigated papers in the literature that fall into one of the categories above. The papers that fulfilled our criteria are shown in Table I. By analyzing those, we found an inconsistency in the parameter selection and in the understanding of these parameters' effect on the backdoor performance. Thus, we propose systematically analyzing the attack proposals based on the same parameters and investigating the influence of these parameters on the main and backdoor tasks' performance. This allows us to efficiently and systematically compare new attacks, enabling fair and traceable comparisons. Our final goal is to provide a comprehensive and systematic analysis of the impact of parameters on backdoor attack performance. By doing so, we hope to contribute to a better understanding of these types of attacks and provide a valuable investigation for comparing and evaluating future research in this area.
For our investigation, we provide a realistic attack configuration, and we have designed our experimental setup to be as simple as possible while still being extendable to future backdoor attacks. To this end, we have surveyed the state-of-the-art to identify a common set of experimental settings that can be used to compare and evaluate future attacks. In line with previous research [15], we have chosen to focus on image recognition, using the MNIST, CIFAR10, and Tiny-ImageNet datasets and models AlexNet, ResNet, VGG, and GoogLeNet. These datasets and models are representative samples of the ones used in the state-of-the-art.
Additionally, it is important to note that the choice of parameters can significantly impact the performance of a backdoor attack. For example, the trigger size, poisoning rate, and type of trigger can affect the attack's success rate. Similarly, the choice of dataset and model architecture can significantly affect the attack's effectiveness. By considering various values of parameters in our experiments, we explored a range of possibilities and identified differences in attack performance that may be related to each parameter. This is important in understanding the underlying mechanisms of backdoor attacks and developing more effective countermeasures.
## 3 Threat Model
We consider a _gray-box_ threat model, as the attacker can freely modify a small portion of the training dataset and has no knowledge about the training algorithms or the models used by the victims. We also assume a _dirty-label_ backdoor attack, meaning that the attacker can alter both the training samples and their labels. Even though this threat model is weaker than its counterpart (the clean-label attack [46]), it is the most popular among the existing works [8, 21, 32, 26, 47, 25]. Additionally, we target only transfer learning, as it has become a very common practice: training from scratch can be very expensive, and the weights of state-of-the-art models like VGG and ResNet trained on ImageNet are publicly available [8]. This threat model is realistic, as large datasets like ImageNet [5] are crowdsourced from untrusted sources, and malicious data can evade human inspection [9].
We consider the following metrics:
1. **Attack Success Rate** (ASR): measures the backdoor performance of the model on a fully poisoned dataset \(D_{poison}\), i.e., \(\epsilon=1\). It can be computed by \(ASR\)
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Paper & \(\epsilon\) & Trigger size & Trigger location & Trigger color \\ \hline Gu et al. [8] & 0.3 & Single pixel; four pixels & Bottom-right; center & White; yellow \\ Salem et al. [26] & 0.1--0.5 & 0.1\%--4\% & Corners; top center; bottom center & Random; dynamic \\ Liu et al. [32] & -- & 4\%, 7\%, 10\% & Bottom-right & Dynamic \\ Kwon et al. [33] & 0.1, 0.25, 0.5 & 25\% & Corners & White \\ Tin et al. [34] & 0.05 & 8\% & Bottom-right & White \\ Feng et al. [35] & 0.01, 0.02 & -- & -- & Dynamic \\ Zhang et al. [36] & 0.1--0.4 & -- & -- & Dynamic \\ Li et al. [22] & 0.02--0.1 & -- & -- & Dynamic \\ Li et al. [37] & 0.05--0.25 & -- & -- & -- \\ Nguyen et al. [23] & 0.2 & -- & -- & Dynamic \\ Zeng et al. [38] & 0.1 & -- & -- & Dynamic \\ Bani et al. [59] & 0.2--0.4 & -- & -- & Dynamic \\ Chen et al. [40] & 0.05, 0.2 & -- & -- & -- \\ Nguyen et al. [25] & 0.1 & -- & -- & Dynamic \\ Duan et al. [41] & 0.01 & -- & -- & Dynamic (trigger trained via Wasserstein distance) \\ Liu et al. [42] & 0.005--0.03 & -- & -- & (Static) reflection \\ Zhao et al. [43] & 0.01, 0.15 & -- & -- & Dynamic \\ Sahu et al. [44] & 0.125, 0.5 & 1\%, 6\% & Random; right corner (CIFAR10) & Random \\ Wang et al. [45] & 0.025, 0.1 & -- & -- & Dynamic \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of the attack settings for different state-of-the-art backdoor attacks that use patch triggers, where “--” means not specified.
\(ASR=\frac{\sum_{i=1}^{N}\mathbb{I}(F_{\theta}(\hat{x}_{i})=y_{t})}{N}\), where \(F_{\theta}\) is the poisoned model, \(\hat{x}_{i}\in D_{poison}\) is a poisoned input, \(y_{t}\) is the target class, and \(\mathbb{I}(x)\) is a function that returns \(1\) if \(x\) is true and \(0\) otherwise.
2. **Clean Accuracy Drop** (CAD): measures the effect of the backdoor attack on the original task. It is calculated by comparing the performance of the poisoned and clean models on a clean holdout validation set \(D_{valid}\), i.e., \(\epsilon=0\). The accuracy drop should be small to keep the attack stealthy.
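To make these two metrics concrete, here is a minimal PyTorch-style sketch of how they can be computed; the function and variable names are ours (not from any released codebase), and we assume standard classification data loaders.

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cuda"):
    """Fraction of samples in `loader` that the model classifies correctly."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        preds = model(x.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def attack_success_rate(model, poisoned_loader, target_class, device="cuda"):
    """ASR: fraction of fully poisoned inputs (eps = 1) mapped to the target class."""
    model.eval()
    hits, total = 0, 0
    for x, _ in poisoned_loader:
        preds = model(x.to(device)).argmax(dim=1)
        hits += (preds == target_class).sum().item()
        total += x.size(0)
    return hits / total

def clean_accuracy_drop(clean_model, poisoned_model, valid_loader, device="cuda"):
    """CAD: accuracy gap between clean and poisoned models on clean validation data."""
    return accuracy(clean_model, valid_loader, device) - accuracy(poisoned_model, valid_loader, device)
```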
## 4 Experiments
We systematically evaluate the backdoor attacks on four DNN models and four poisoning rates for all three attacks. WaNet and SSBA were tested on CIFAR10 and TinyImageNet, while BadNets was also tested on MNIST. For BadNets, we additionally use three trigger sizes, five trigger positions, and three trigger colors. We also train a clean model for each dataset and architecture combination (\(3\times 4\)). Each of these experiments was repeated 5 times. Thus, we train 11,120 backdoored models and 60 clean models in total.
### _Experimental Matrix_
**Datasets.** We evaluated our approach using MNIST, CIFAR10, and TinyImageNet.
* MNIST [48] is a dataset of 70,000 grayscale images of handwritten digits, each \(28\times 28\) pixels in size and belonging to one of 10 different classes. We converted the images to RGB format for our evaluation and resized them to \(64\times 64\) pixels.2 Footnote 2: Data resizing is used to adapt inputs to the chosen networks, which require a minimum input size. Additionally, we experiment with different input sizes to better generalize the results.
* CIFAR10 [49] is a dataset of 60,000 RGB images, each \(32\times 32\) pixels in size and belonging to one of 10 different classes, with 6,000 images per class. Similar to MNIST, we resized the images to \(128\times 128\) pixels for compatibility.
* TinyImageNet [50] is a dataset of 120,000 RGB images belonging to 200 different classes, each \(64\times 64\) pixels in size. We also resized these images to \(224\times 224\) pixels.
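As a sketch, the preprocessing above can be reproduced with torchvision; only the RGB conversion and the target sizes come from the text, while the exact transform pipeline is our assumption.

```python
from torchvision import datasets, transforms

# MNIST: replicate the single gray channel to RGB and upscale to 64x64
# (CIFAR10 is resized to 128x128 and TinyImageNet to 224x224 analogously).
mnist_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
mnist_train = datasets.MNIST(root="data", train=True, download=True, transform=mnist_tf)

cifar_tf = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
cifar_train = datasets.CIFAR10(root="data", train=True, download=True, transform=cifar_tf)
```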
**Model Architectures.** In our experiments, we selected four standard benchmark DNNs for evaluation: AlexNet, GoogLeNet, VGG-19_BN, and ResNet-152. To utilize transfer learning and extract features from these models, we froze the parameters for all layers (except for the last fully connected layer and the batch-normalization layers for ResNet, VGG, and GoogLeNet). This allowed us to leverage the pre-trained models while focusing on the task of interest. AlexNet, however, resists backdoor injection under this transfer learning setup (i.e., we reach low ASR when freezing all layers except the last one). Thus, for a more suitable analysis, our transfer learning setup for AlexNet freezes only the layers up to its classifier module, i.e., the whole classifier module is retrained (more in Section 4.3.2).
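The freezing rule described above can be sketched as follows; the helper name is ours, and the head replacement shown applies to models with an `fc` attribute (ResNet/GoogLeNet), while VGG and AlexNet expose their heads through `classifier` instead.

```python
import torch.nn as nn
from torchvision import models

def freeze_for_transfer(model, num_classes):
    """Freeze everything, then re-enable batch norm and the final FC layer."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():  # keep batch-norm layers trainable
        if isinstance(m, nn.BatchNorm2d):
            for p in m.parameters():
                p.requires_grad = True
    # Replace the classification head so it is freshly initialised and trainable.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = freeze_for_transfer(models.resnet152(weights="IMAGENET1K_V1"), num_classes=10)
```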
These DNNs were selected based on their demonstrated performance on various tasks and widespread use as benchmarks in the field. AlexNet, a CNN introduced by Krizhevsky et al. [51], was the first successful CNN to demonstrate superior performance on the ImageNet dataset [52]. In PyTorch implementation for AlexNet (which we use for our study in this work), it consists of two main modules: the features module, which itself consists of convolutions and pooling layers, and the classifier module, which is composed of fully connected layers for the final classification task. GoogLeNet, introduced by Szegedy et al. [53], is a variant of the Inception architecture that won the 2014 ImageNet Large Scale Visual Recognition Challenge. VGG-19_BN is a variant of the VGG network [54] that incorporates batch normalization [55] and has achieved strong performance on a range of tasks. Finally, ResNet-152 is a residual network [56] with 152 layers that also achieved state-of-the-art performance on several tasks. By using these well-established DNNs, we ensure the reliability and generalizability of our results.
**Trigger Characteristics.** The following trigger characteristics apply to the BadNets attack:
\(\rhd\) Trigger Size: When using the BadNets attack, we focus on the square trigger pattern, a commonly used trigger in backdoor attacks on image classification tasks [8, 16]. This trigger consists of a square patch injected into the training images and used to manipulate the model's behavior. In [16], the square trigger proved the most effective, so we did not consider blending overlay triggers. To evaluate the effectiveness of the attack under different conditions, we varied the width and height of the trigger as a percentage of the width and height of the sampled training image, using values of 4%, 6%, and 8%, which allowed us to assess the trigger size's impact on the attack's performance. These trigger sizes cover most of the trigger sizes considered in the literature while being realistic.
\(\rhd\) Trigger Position: We inject the square trigger into five locations in the poisoned images: the top-left, top-right, middle, bottom-left, and bottom-right positions. This allowed us to evaluate the impact of the trigger position on the performance of the attack and to identify any trends or patterns that may be present. Figure 1 illustrates an example image from the CIFAR10 dataset with triggers embedded at various positions. By studying the attack under these different conditions, we could gain a deeper understanding of the factors that influence the success of a backdoor attack and develop more effective defense strategies.
Fig. 1: Trigger patterns with different trigger positions (top-left, top-right, middle, bottom-left, bottom-right) applied to an image from the CIFAR10 dataset.
\(\rhd\) Trigger Color: The color of the trigger pattern is another critical factor to consider in the design of a backdoor attack. In our experiments, we evaluated the performance of the attack using three different trigger colors: black, white, and green. The green trigger was randomly picked to avoid biases that extreme values like black (0, 0, 0) or white (255, 255, 255) may create. To this end, we used Python's pseudorandom generator to retrieve three RGB values (one for each channel). The values are (102, 179, 92). The MNIST dataset only has one channel, so we run the green-color experiments only on the CIFAR10 and TinyImageNet datasets. By comparing the results obtained with these different trigger colors, we could understand how the color of the trigger affects the effectiveness of the attack and identify any trends or patterns that may be present. This information is valuable for understanding these attacks' behavior and developing more effective defense strategies.
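Putting the three trigger characteristics together, a square patch trigger can be stamped as in the following sketch (the tensor layout and helper name are our assumptions; the green value (102, 179, 92) is the one reported above):

```python
import torch

def stamp_trigger(img, size_frac=0.08, position="bottom-right", color=(102, 179, 92)):
    """Stamp a square trigger on a CHW float image with values in [0, 1].

    `size_frac` is the trigger side length as a fraction of the image side.
    """
    img = img.clone()
    _, h, w = img.shape
    th, tw = max(1, int(h * size_frac)), max(1, int(w * size_frac))
    anchors = {
        "top-left": (0, 0),
        "top-right": (0, w - tw),
        "middle": ((h - th) // 2, (w - tw) // 2),
        "bottom-left": (h - th, 0),
        "bottom-right": (h - th, w - tw),
    }
    r, c = anchors[position]
    patch = torch.tensor(color, dtype=img.dtype).view(-1, 1, 1) / 255.0
    img[:, r:r + th, c:c + tw] = patch  # overwrite the patch region with the color
    return img
```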
**Poisoning Rate.** One key factor that impacts the backdoor's effectiveness is the poisoning rate, which refers to the percentage of training images injected with the backdoor trigger. This parameter is not limited to the BadNets attack but is important in all backdoor attacks. In our experiments, we replaced clean images with their poisoned counterparts to avoid altering the number of training samples in the dataset. We also varied the poisoning rate to study its impact on attack performance. This allowed us to study the effect of different poisoning rates on the attacks' success. Additionally, we chose low poisoning rates because the backdoor should affect the original task as little as possible, and given the amount of data that modern deep learning systems need, large poisoning rates can be unrealistic [21]. Thus, we defined four values: 0.5%, 1%, 1.5%, and 2%.
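The replacement-based poisoning described above can be sketched as follows (names are ours; the key property is that the dataset size stays constant and the selected labels are flipped to the target class):

```python
import random

def poison_dataset(images, labels, eps, target_class, trigger_fn):
    """Replace an eps-fraction of samples in place with poisoned counterparts."""
    n_poison = int(eps * len(images))
    for i in random.sample(range(len(images)), n_poison):
        images[i] = trigger_fn(images[i])   # e.g., stamp_trigger from above
        labels[i] = target_class            # dirty-label: flip to the target class
    return images, labels
```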
### _Experimental Setup_
**Training Procedure.** We chose the Adam algorithm as the optimizer and cross-entropy loss as the criterion in our experiments. However, in one setting (TinyImageNet + VGG), the Adam optimizer yielded poor performance (around 37%), and we had to use SGD, which resulted in an accuracy of around 72%. Furthermore, we experimentally set the learning rate to 0.001 and the number of epochs to 20, with which we achieve training convergence and good generalization on the test set. Each dataset's batch size differs to fit into the GPU's memory: for the small datasets (MNIST and CIFAR10) the batch size is 128, and for TinyImageNet it is 32. Each experiment was repeated five times to reduce the effects of randomness caused by stochastic gradient descent and initialization.
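A sketch of this training configuration follows (Adam, cross-entropy, learning rate 0.001, 20 epochs); only parameters with `requires_grad=True` are optimised, matching the transfer-learning setup above.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=20, lr=1e-3, device="cuda"):
    """Train with Adam and cross-entropy, as in the setup described above."""
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model
```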
### _Results and Analysis_
#### 4.3.1 Clean Accuracy Drop.
The backdoor should remain stealthy in the deployed model to avoid raising any suspicions. Thus, the model's performance on the original task should not be affected by the backdoor insertion. To ensure this is true in our experiments, we calculate the arithmetic mean of the accuracy of all the clean models we trained (\(\epsilon=0\)) and compare it to the mean accuracy of the poisoned models. We show the results in Table II, where bold marks the value with the largest difference from the clean model. We see that the difference introduced by the backdoor is very small: in almost all cases, the accuracy decreases by less than 1%. Only in one case (TinyImageNet + ResNet) do we see a performance drop of around 2%. ResNet is the best-performing model with TinyImageNet (clean accuracy 83.96%), and even a small change in the training data affects the model's generalization. **Additionally, the performance drop is positively correlated with \(\epsilon\): as \(\epsilon\) gets larger, the drop increases as well**.
From Table II, we can also see that our models perform well on the datasets tested. However, AlexNet is not very accurate on TinyImageNet, with an accuracy of 21.73% (\(\pm 0.6067\)). As discussed in Section 4.3.2, the backdoor did not work in this case if we froze all the layers up to the fully connected layers. Thus, we had to unfreeze a few layers of the feature extractor for the backdoor to be more effective. This resulted in lower performance on the original task, as we altered the weights of the feature extractor. If we keep these layers frozen, the model's accuracy on clean inputs is around 46%. **Thus, we conclude that the backdoor attack is more effective if the model is trained from scratch or has a large number of trainable layers**.
#### 4.3.2 Effect of Model Architecture.
In most cases, VGG is the least robust to backdoor attacks, especially on the CIFAR10 dataset, while AlexNet is the most robust to poisoning on all three datasets. For example, in Figure 3, the attack success rate of VGG is always higher than other models. Specifically, the attack performance of VGG with a small poisoning rate, i.e., 0.005, is higher than the other models. VGG has the most neurons among the four models, leading to a larger capacity to learn the backdoor functionality. With the increasing poisoning rate, the ASR of the ResNet and GoogLeNet increases and is similar to VGG's. **We believe that models with larger capacities are more vulnerable to backdoors as they can encode more patterns in their weights even from a very small part of the dataset**.
Additionally, if we freeze the feature extractor layers in AlexNet as for the other three models, the ASR is nearly 0%. Because of this, we decided to unfreeze AlexNet's parameters layer by layer (from index 14 down to 0) to see from which layer it starts to respond to the injected backdoor. Appendix Figure 14 and Figure 2 show the results of our experiments on MNIST and CIFAR10. When we unfreeze the classifier module completely, the network starts to learn the backdoor. This can be observed in the plots once the network is unfrozen up to the 7th parameter: from this point on, there is a surge in most of the plots, showing that the backdoor has started to work. After this experiment, we decided to run the AlexNet experiments by retraining the whole classifier module and freezing the features module. Nonetheless, the results show that, except for triggers placed in the middle for CIFAR10 (Figure 7 and Figure 4), in all other experiments the backdoor attack fails to reach high ASR on AlexNet. Interestingly, the classifier modules in AlexNet and VGG are very similar (both having 3 fully connected layers with 4,096 neurons in each layer). The main difference between the two is that in AlexNet the two dropout layers precede the linear layers, while in VGG they succeed them (this means that in AlexNet the first dropout affects the last convolution layer in the features module, while in VGG both dropouts affect the fully connected neurons before them). VGG is a deeper and more complicated network than AlexNet, making it more vulnerable to backdoor triggers. However, the reason for AlexNet's robustness against backdoors is not merely its smaller capacity, because unfreezing the classifier part improves backdoor learning. From [57], we know that dropout can affect the learning process of a network and cause it to learn a deliberate backdoor. We correspondingly conjecture that the role of the dropout layers and their inactivity at test time may affect the backdoor's success. Nonetheless, we cannot be certain about this, and further experimental studies are needed to uncover the primary reason.
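The layer-by-layer unfreezing sweep can be sketched as follows; we index AlexNet's flat parameter list (16 weight/bias tensors in the PyTorch implementation), and the helper name is ours:

```python
from torchvision import models

def unfreeze_from(model, first_trainable_idx):
    """Freeze parameter tensors [0, first_trainable_idx); train the rest."""
    for i, p in enumerate(model.parameters()):
        p.requires_grad = i >= first_trainable_idx

alexnet = models.alexnet(weights="IMAGENET1K_V1")
for idx in reversed(range(15)):   # sweep from parameter 14 down to 0
    unfreeze_from(alexnet, idx)
    # ... retrain and record the ASR for this configuration ...
```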
**AlexNet has proven to be a very robust network against simple square-shaped backdoor patterns compared to the other three benchmark networks. The most important parameter affecting the ASR on AlexNet appears to be the trigger size.** Figure 15 displays the output of the AlexNet features module on the same poisoned image with different trigger sizes (the dissimilarity of activations based on trigger size can be observed by comparing the two feature-map differences on the right).
#### 4.3.3 Effect of the Trigger Size.
In our experiments, we see that by only changing the trigger size, we can create very effective backdoors. For example, in Figure 6, we see that the ASR for AlexNet and MNIST is very low (around 10%) in all cases when the trigger size is 4% or 6%. However, in the same setting, changing the trigger size to 8% could lead to an ASR as high as 80%. Similar behavior is shown in Figure 4 and in Figure 7 for CIFAR10 and all the models.
For the CIFAR10 dataset, we see that the trigger size is most influential for AlexNet. Since the feature extractor is unfrozen in AlexNet, the model can more easily learn to spot larger triggers. However, there are multiple cases where increasing the trigger size leads to high ASR for the other models as well. For example, in Figure 3, we see that the ASR for ResNet increases from 40% to more than 90% when \(\epsilon=0.01\) and the size is increased from 6% to 8%. Similarly, in Figure 5, the ASR for GoogLeNet increases from around 10% to more than 80%.
When we insert the trigger in the middle, for the MNIST and CIFAR10 datasets, the ASR of all models (except AlexNet) is around 10% with a trigger size less than 8%. However, it increases significantly with a trigger size of 8%. In TinyImageNet, the ASR is low when the trigger is not placed in the middle of the image. However, even in these cases, increasing the size may increase the ASR (Figure 11). **Thus, we conclude that the trigger size can significantly affect the ASR**.
#### 4.3.4 Effect of the Trigger Position.
For the MNIST and CIFAR10 datasets, with a trigger size of less than 8%, there is no noticeable difference in the ASR when the trigger is injected in the corners. However, there is a decrease in the ASR for all models when the trigger is injected in the middle. With the trigger size increasing to 8%, the trigger position has no noticeable impact on the attack performance. On the contrary, for the TinyImageNet dataset, all models are robust to the backdoor attack when the trigger is not injected in the middle. With the trigger in the middle, there is a significant rise in the ASR for all models. This may be because images in TinyImageNet are, in general, not centered, in contrast with those in MNIST and CIFAR10. Therefore, for TinyImageNet, triggers placed in the middle can achieve high ASR without a noticeable degradation of the main task, whereas the model cannot recognize triggers that are placed in the corners or are small, i.e., less than 8%.
**All these show that in our experiments, no position universally leads to a more successful backdoor attack. The most effective position is different for every dataset and depends on the dataset's properties and the way the models learn.**
#### 4.3.5 Effect of the Trigger Color.
For MNIST, the ASR is low for black triggers placed in the corners. The effect is expected as the training images in MNIST contain many black pixels by default, and the model cannot identify our
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{5}{c}{\(\epsilon\) (\%)} \\ & & 0 & 0.5 & 1 & 1.5 & 2 \\ \hline \multirow{4}{*}{MNIST} & AlexNet & 98.50\(\pm\)0.1915 & 98.41\(\pm\)0.1883 & 98.37\(\pm\)0.1876 & **98.30\(\pm\)0.2468** & 98.31\(\pm\)0.2136 \\ & GoogLeNet & 98.75\(\pm\)0.1191 & 98.67\(\pm\)0.1363 & 98.64\(\pm\)0.1654 & 98.62\(\pm\)0.1817 & **98.58\(\pm\)0.2173** \\ & ResNet & 98.83\(\pm\)0.1846 & 98.64\(\pm\)0.3198 & 98.50\(\pm\)0.4217 & 98.33\(\pm\)0.4828 & **98.19\(\pm\)0.6144** \\ & VGG & 99.09\(\pm\)0.1671 & 99.22\(\pm\)0.1597 & **99.34\(\pm\)0.1769** & 99.31\(\pm\)0.4784 & 99.30\(\pm\)0.2782 \\ \hline \multirow{4}{*}{CIFAR10} & AlexNet & 85.17\(\pm\)0.3677 & 84.89\(\pm\)0.4034 & 84.46\(\pm\)0.4217 & 84.20\(\pm\)0.450 & **84.40\(\pm\)0.4397** \\ & GoogLeNet & 92.54\(\pm\)0.1464 & 92.38\(\pm\)0.2023 & 92.33\(\pm\)0.2190 & 92.22\(\pm\)0.1127 & **92.18\(\pm\)0.1961** \\ & ResNet & 96.88\(\pm\)0.1449 & 96.68\(\pm\)0.1983 & 96.61\(\pm\)0.2675 & 96.58\(\pm\)0.3037 & **96.56\(\pm\)0.3779** \\ & VGG & 93.02\(\pm\)0.4733 & 92.87\(\pm\)0.5260 & 92.90\(\pm\)0.4743 & 92.85\(\pm\)0.4712 & **92.77\(\pm\)0.5308** \\ \hline \multirow{4}{*}{TinyImageNet} & AlexNet & 21.73\(\pm\)0.6067 & 21.60\(\pm\)0.6881 & 21.49\(\pm\)0.7208 & 21.71\(\pm\)0.7209 & **20.89\(\pm\)0.4740** \\ & GoogLeNet & 70.07\(\pm\)0.1688 & 70.02\(\pm\)0.2322 & 69.96\(\pm\)0.2569 & 69.86\(\pm\)0.2513 & **69.99\(\pm\)0.2574** \\ & ResNet & 83.96\(\pm\)0.1927 & 82.90\(\pm\)0.5713 & 82.45\(\pm\)0.7705 & 82.09\(\pm\)0.9975 & **81.90\(\pm\)0.9917** \\ & VGG & 72.66\(\pm\)0.2265 & 72.60\(\pm\)0.2211 & 72.51\(\pm\)0.2564 & 72.41\(\pm\)0.2315 & **72.33\(\pm\)0.2301** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Clean accuracy comparison between clean and poisoned models. We show in bold the settings that have the largest difference with the clean model’s (\(\epsilon=0\)) performance.
black trigger as a feature. However, white triggers placed in the corners are effective due to their contrast with the black background. For both colors (black and white), the trigger should be large (8%) to start having an effect on the ASR when placed in the middle (Figure 4 and Figure 7) as it overlaps the sample's main information, i.e., the number.
In TinyImageNet, when the trigger is placed in the corners, in most cases, the ASR is around 0%. However, when the trigger has a size of 8% and is green, the ASR can be increased up to 40% (Figure 11). Additionally, when the trigger is inserted in the middle, the backdoor works in all cases but is more effective when green (Figure 10).
In CIFAR10, we see that in some cases for small triggers (\(<\) 8%), GoogLeNet is more effective with white triggers. For example, comparing Figure 5 and Figure 6 we see that for trigger size 4% and \(\epsilon\) = 0.5%, the ASR increases from 10% to almost 90%. The same is true for small (\(<\) 8%) triggers in top-right (Figure 5 vs. Figure 8). Additionally, for black and white triggers smaller than 8% placed in the middle (Figure 4 and Figure 7), the ASR is low for all models except for AlexNet. In this case, the backdoor works only with a green-colored trigger.
**From all these observations, we conclude that the trigger color can play an important role in the backdoor's effectiveness, but it depends on many factors like the dataset or the model, making its optimization challenging for an attacker**.
#### 4.3.6 Effect of the Poisoning Rate
Generally, with an increasing poisoning rate, all models' ASR increases. This is reasonable because, with more poisoned data, a backdoor attack can perform better. However, the attacker cannot increase \(\epsilon\) indefinitely, as the model's clean accuracy drops further (i.e., the CAD grows) when the poisoning rate grows.
### _On the Interpretability of Backdoors_
Convolutional layers capture spatial information, so the last convolutional layer is expected to offer the best compromise between high-level semantics and detailed spatial information. The neurons of convolutional layers thus look for class-specific semantics, e.g., capturing image parts relevant to the label "dog". Grad-CAM uses this information to obtain an attention map given an image and a target class. Intuitively, one can view Grad-CAM attention maps as highlighting the parts of an image that are critical for the model to classify it with the target label. Grad-CAM has also been widely applied in the image backdoor domain to explain the behavior of backdoor triggers [25, 26]. Accordingly, we leverage Grad-CAM to explain the importance of the trigger location and color. We use CIFAR10 as a test dataset to
Figure 4: Rate vs. size, black color, trigger at middle
Figure 5: Rate vs. size, black color, trigger at top-right
Figure 3: Rate vs. size, black color, trigger at bottom-right
Figure 2: AlexNet on CIFAR10: FreezeLayer effect vs. size and rate, trigger at bottom-right
Figure 8: Rate vs. size, white color, trigger at top-right
Figure 10: Rate vs. size, green color, trigger at middle
Figure 11: Rate vs. size, green color, trigger at top-left
compare the attention of the backdoored models and the clean models for both clean and backdoored samples. We selected CIFAR10 because it is a good candidate: it contains large (upscaled) color images, is representative of TinyImageNet, and is richer in features than MNIST. We select a setting from a successful backdoor attack to ensure that the trigger is actually injected. We experimented with a black trigger of size 8% of the input image placed in the top-left corner. We set the \(\epsilon\) value to 0.02 and train the models for 20 epochs. Simultaneously, we train a clean version of the same model and compute the attention maps for both the clean and poisoned models. These maps show the image's most influential part (in red) for the model's output. Depending on the model used, we observe different behaviors. It is important to note that we use both clean and target labels, i.e., the ground-truth label and the backdoor label, to help understand the label's effect on the model's prediction. Intuitively, we expect a well-trained clean model to resist image perturbations (to some extent) such as input triggers. Therefore, we expect the clean model's attention maps to look similar. However, for a backdoor model trained with clean and backdoor data, we expect a similar attention map (as the clean model's) on clean images. Backdoor images, in contrast, should draw the model's attention toward the trigger.
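The attention maps can be obtained with a minimal hook-based Grad-CAM implementation such as the following sketch (the layer choice, names, and normalisation are ours):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, conv_layer):
    """Grad-CAM attention map of `conv_layer` for a single input and class."""
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    model.eval()
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]   # logit of the class of interest
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # pooled gradients per channel
    cam = F.relu((weights * feats["a"]).sum(dim=1))        # weighted feature-map sum
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
    return (cam / cam.max()).squeeze()                     # normalised attention map
```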
In GoogLeNet, the clean model (see Figure 16 in section A) focuses on the center and center-right locations for clean and target labels. This effect also remains visible in the backdoor model (see Figure 17 in section A), caused by the backdoor "idea" where the attention on clean images does not vary. However, the backdoor model's attention drifts toward the trigger under its presence. In the clean model, the trigger is unnoticed.
In ResNet and VGG, we observe a similar, yet more evident behavior as in GoogLeNet. The clean model (see Figure 18 and Figure 20 in section A) robustly resists the trigger presence without modifying the attention map and maintaining the same as the clean input. The backdoor model also focuses on the exact locations of the images, as the clean model does. On poisoned inputs, the backdoor model easily recognizes the presence of the trigger, directing attention toward it, see Figure 19 and Figure 21 in section A.
AlexNet's attention maps are biased by its poor performance on the backdoor task, which the heatmaps intuitively help explain. AlexNet's predictions are based on observing all areas of the image rather than focusing on a specific area, as done by the aforementioned models. Still, the clean model is robust against perturbations of the input, i.e., the attention map does not vary much; see Figure 22 in section A. Similarly, the backdoor model has a slightly different attention map on clean images. However, on backdoor images the model does not focus on the trigger but on the whole input space; see Figure 23 in section A.
### _Advanced Triggers_
In this section, we investigate the efficacy of more subtle triggers compared to BadNets, with a specific focus on WaNet [23] and SSBA [58]. After a backdoor was inserted with these two attacks, there was no substantial performance drop on the original task for the poisoned models, similar to the BadNets attack. The results for SSBA and WaNet are presented in Figure 12. Notably, both SSBA and WaNet utilize triggers as large as the image itself, and we observe a clear correlation between the ASR and the poisoning rate for these attacks. However, unlike the BadNets attack, a higher poisoning rate is required to successfully inject the backdoor into the model using SSBA and WaNet. For instance, at the highest \(\epsilon\) value tested, SSBA achieves a maximum ASR of 90%, while WaNet achieves 59%. In contrast, BadNets achieves 99% ASR under the same experimental conditions.
Moreover, we observe that a weaker attacker with limited access to a smaller number of data samples would be unable to inject the backdoor using SSBA or WaNet. Specifically, with \(\epsilon=0.005\), which constitutes only 0.5% of the available dataset, SSBA and WaNet achieve low ASR, obtaining only up to 50% and 12% ASR, respectively. This suggests that the viability of these "advanced triggers" may be limited in real-world scenarios where the attacker has limited access to data samples and transfer learning is used.
It is worth mentioning that the primary objective of these "advanced triggers" is to remain inconspicuous and imperceptible to human inspection, underscoring the stealthy nature of these attacks.
### _Discussion_
We discuss several aspects of the backdoor attacks in image classification based on our experimental findings. First, we saw that the backdoor attack is easier when training from scratch. Thus, in future works, authors claiming that their trigger generation technique is stronger than the state-of-the-art should also run experiments in a transfer learning setup.
_Finding 1_. The backdoor attack is easier when training from scratch.
Additionally, we should always use small poisoning rates as the clean accuracy drop increases when the poisoning rate is increased. Our experiments indicate that the drop is more severe for stronger models and larger datasets. Thus, there is
Figure 12: ASR vs \(\epsilon\) for SSBA and WaNet attacks.
no guarantee of a small clean accuracy drop in large datasets if we see no clean accuracy drop in small datasets.
_Finding 2_. The clean accuracy drop increases as the poisoning rate increases. Additionally, we conjecture that the drop can be more severe for large datasets and strong models.
We also saw that models with a large capacity and a large number of weights are more vulnerable to backdoor attacks. These models can overfit to small subsets of their datasets and learn complex patterns even from only a handful of training samples.
_Finding 3_. Large models with big capacities are more vulnerable to backdoor attacks.
From our experiments, we saw that no position, color, or combination of them results in the most effective backdoor across all settings. The best trigger color and position for every setup depends on the dataset, and the model used.
_Finding 4_. No position or color results in the most effective backdoor universally.
Another observation from our experiments is that the ASR can vary for different trigger positions. Even though CNNs are often assumed to be translation invariant and thus unaffected by the feature (trigger) position, it seems that in some cases they exploit the feature's absolute spatial location and learn the trigger more easily. This was also shown in [59], but not in the context of backdoor attacks.
_Finding 5_. The backdoor's performance varies for different trigger positions indicating that in some cases, the CNNs exploit the absolute spatial location of their features.
As shown in Figure 4 for TinyImageNet, variations in the poisoning rate improve the ASR when the trigger size is 4%. However, when the trigger is large, \(\epsilon\) does not much affect the backdoor performance. A similar effect is visible in Figure 7, where with trigger size 4%, variations in \(\epsilon\) can drastically increase the backdoor performance. Again, the poisoning rate is nearly irrelevant when the trigger size is large.
_Finding 6_. The trigger size has a more significant contribution to the ASR than the poisoning rate.
When comparing the efficiency of patch triggers, i.e., BadNets, with "advanced triggers", we observe an overall lower ASR for the latter. Based on the poisoning rate, we observe that "advanced triggers" may not be realistically applicable in real-life scenarios since the attack requires access to a large amount of data, which may not be available. The attack's success also depends on the model, across which the ASR varies drastically. However, they create more subtle perturbations in the images; thus, depending on the countermeasures applied by the model owner (see section 5), "advanced triggers" could be a viable alternative.
_Finding 7_. Although "advanced triggers" could not be viable in realistic scenarios, they are more stealthy than patch triggers.
## 5 On the Defenses
In this section, we evaluate the attacks against state-of-the-art defenses. First, we briefly discuss the existing countermeasures. Then, we explain the chosen methods in more detail. Lastly, we evaluate the attacks against chosen countermeasures.
### _Discussion_
Several defense mechanisms have been proposed in the literature for mitigating backdoor attacks. Neural Cleanse (NC) [60] was among the first, proposing a method that reverse engineers a candidate trigger for each class; an outlier in the _L1 norm_ of the recovered triggers suggests the model has been compromised. However, the optimization process to reverse engineer the trigger is costly: it has to be repeated for each label, which can be unfeasible for datasets with many classes. Recent research has shown that multi-triggers, large triggers, or input-specific triggers can easily bypass NC. Similarly, ABS [61] has shown improved performance by building on NC. ABS stimulates neurons in the network and examines the outputs for deviations from the expected behavior. The authors suggest that a class can be represented as a subspace within a feature space, and a backdoor class will therefore create a distinct subspace in the feature space. ABS then relies on the fact that compromised neurons produce larger outputs than uncompromised neurons.
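As a simplified sketch of NC's per-class optimization (the mask/pattern parametrization, learning rate, and \(\lambda\) below are our assumptions): for each candidate target class, it searches for the smallest mask that flips clean inputs to that class, and the L1 norms of the recovered masks are then compared across classes to flag outliers.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_class, img_shape,
                             steps=500, lam=0.01, device="cuda"):
    """Optimise a (mask, pattern) pair that flips clean inputs to `target_class`."""
    mask_logit = torch.zeros(1, *img_shape[1:], device=device, requires_grad=True)
    pattern_logit = torch.zeros(img_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_logit, pattern_logit], lr=0.1)
    it = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(it)
        except StopIteration:
            it = iter(loader); x, _ = next(it)
        x = x.to(device)
        mask = torch.sigmoid(mask_logit)
        x_adv = (1 - mask) * x + mask * torch.sigmoid(pattern_logit)
        y_t = torch.full((x.size(0),), target_class, device=device)
        # classification loss toward the target class plus L1 penalty on the mask
        loss = F.cross_entropy(model(x_adv), y_t) + lam * mask.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask_logit).detach(), torch.sigmoid(pattern_logit).detach()
```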
Following another approach, Liu et al. [62] proposed combining neuron pruning and fine-tuning. Fine-pruning is a post-training defense that prunes the neurons that are dormant (least active) on clean inputs and then fine-tunes the network for some epochs. The intuition is that some neurons carry the main (clean) task information, others the backdoor task, and the rest a combination of both. Therefore, removing the correct group of neurons will reduce the backdoor effect. Sometimes pruning is unnecessary: fine-tuning alone is enough to reduce the backdoor effect while maintaining high accuracy on the main task [63].
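Fine-pruning can be sketched as below: rank the channels of a late convolutional layer by their mean activation on clean data, zero out the most dormant ones, and then fine-tune briefly (the layer choice and bookkeeping are our assumptions):

```python
import torch

@torch.no_grad()
def prune_dormant_channels(model, conv_layer, clean_loader, prune_frac, device="cuda"):
    """Zero the conv channels with the lowest mean activation on clean data."""
    acts = []
    hook = conv_layer.register_forward_hook(
        lambda m, i, o: acts.append(o.mean(dim=(0, 2, 3))))  # per-channel means
    model.eval()
    for x, _ in clean_loader:
        model(x.to(device))
    hook.remove()
    mean_act = torch.stack(acts).mean(dim=0)
    dormant = mean_act.argsort()[: int(prune_frac * mean_act.numel())]
    conv_layer.weight[dormant] = 0           # remove the dormant channels
    if conv_layer.bias is not None:
        conv_layer.bias[dormant] = 0
# Afterwards, fine-tune on clean data for roughly 10% of the original epochs.
```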
### _Experimentation_
We evaluate the attacks against the most representative defense mechanisms, i.e., NC and fine-pruning. In this section, we evaluate the effect of these two defenses on backdoor performance and investigate whether the attacks with the "best" hyperparameters remain robust against the chosen countermeasures. To be consistent across all the attacks, we perform our evaluation on the CIFAR10 dataset--as the SSBA attack does not provide an evaluation on the
MNIST dataset [37]. For each attack, we selected the setup with the highest ASR. Specifically, for the BadNets attack, we selected a green patch placed in the center with size 8% and a poisoning rate of 2%. For the WaNet attack, we selected a poisoning rate of 0.02 for GoogLeNet, AlexNet, ResNet and a poisoning rate of 0.015 for VGG. For the SSBA attack, we selected the poisoning rate of 0.02 for all models.
NC evaluation against BadNets shows excellent performance on ResNet and AlexNet, successfully detecting the backdoored model and the target label. However, on GoogLeNet, the target label gets erroneously detected, although the model is correctly noticed as malicious. Lastly, on VGG, NC cannot detect the backdoor, i.e., the model is not flagged as malicious. Regarding WaNet, NC can always detect the backdoored model on ResNet and VGG, but it cannot accurately detect the target label. On GoogLeNet and AlexNet, NC is unable to detect the malicious model. For SSBA, NC successfully detects the backdoored models on AlexNet, ResNet, and VGG, although the target label is not precisely detected, i.e., more than one target label is identified. However, NC is unable to detect the backdoored model on GoogLeNet.
We also evaluate the attacks against fine-pruning; see Figure 13. We selected different pruning rates, i.e., fractions of neurons to prune, and retrained the model for 10% of the initial number of epochs, which is a common practice. The BadNets attack is easily mitigated without a noticeable drop in clean accuracy. Moreover, when increasing the pruning rate to 90%, we observe an almost complete elimination of the backdoor behavior. As for the SSBA and WaNet attacks, even with a pruning rate of 0%, the ASR decreases dramatically for both, which means that SSBA and WaNet are not robust even against plain fine-tuning. Moreover, as the pruning rate increases, the ASR remains low, i.e., around 10%.
## 6 Related Work
Backdoor attacks have been widely investigated in different domains in recent years. BadNets [8] was the first paper to address backdoor attacks in computer vision for image classification. Since then, backdoors have also been applied to different domains such as audio [71, 72, 73], graph neural networks [74, 75], spiking neural networks [76], natural language processing [12, 77, 78], or collaborative learning [79, 80, 81]. Specific to the image domain, different approaches have arisen: multi-trigger [33], dynamic [25, 26, 82], or invisible backdoors [22, 37], to name a few.
At the same time, concern over the security of ML grew, and the research community began investigating defense mechanisms to mitigate this threat [15, 14]. Most of these works include ablation studies that show the effects of various parameters on the backdoor's effectiveness. However, the values used differ each time, which makes it challenging to compare the performance of different attacks.
Fixing a parameter while evaluating the rest can provide insightful information about a single parameter. However, ablation studies do not evaluate how parameters combine [16, 8], which is ultimately what defines the backdoor performance. Therefore, to understand which parameter is the most influential in the backdoor performance, both individual and combined parameter evaluations have to be done.
In this work, we focus on computer vision for image classification, the most popular application in the literature, and systematically evaluate the effect of various factors on the backdoor. Moreover, we find the most influential parameters by comparing their impact on the backdoor's effectiveness.
Few systematic evaluations study the effect of different parameters, individually and together, to discover their impact on backdoor success. To the best of our knowledge, [16, 66] are the two works that made notable evaluations in the image domain in a systematic manner.
In [16], a similar work, the authors kept the number of samples per class equal to avoid dataset biases. We followed a more straightforward method that replaces clean samples with their poisoned counterparts and changed labels, because it is more prevalent in the literature [83, 84, 21, 8]. Additionally, we used datasets with larger images and more classes to explore whether the observed behavior generalizes to different settings. Furthermore, based on our results, we extract model/dataset-specific observations alongside more general findings.
In Table IV, we compare the parameters that previous works considered in their investigations. We found that neither of the previous works performed a thorough evaluation. Precisely, [16] only considered two datasets with the same number of classes and three models. Although a backdoor attack with two different trigger shapes (a square and an overlay) was considered, only a single trigger color was used. Contrary to our work, they considered different trigger opacities.
Nevertheless, we find their chosen trigger size (only one setting) and their selection of poisoning rates unrealistic. Indeed, the poisoning rate should be kept small, as the attacker cannot access a large part of the training set. In Table V, we analyze which parameter effects have been considered in previous work. Truong et al. [16] compared the effect of the trigger types in detail by comparing the effects of square, sine, and variance triggers. However, the
Figure 13: Fine-Pruning against BadNets, WaNet, and SSBA (dashed and solid lines are the ASR and clean testing accuracy, respectively).
effect of the poisoning rate, trigger opacity, and regularization as a defense mechanism has not been fully evaluated (they have only been tested for a specific setting). Lastly, evaluations of the trigger size, color, position, and backdoor explainability are missing.
The investigation performed by Rehman et al. [66] only considered traffic-sign datasets, so the results cannot be generalized to the broad image domain. Furthermore, considering only a simple CNN is far from the DL models used in the real world. Although they considered different trigger colors and shapes, different trigger positions were not evaluated, which could yield inaccurate results, as traffic-sign datasets are usually centered. This could lead to a potential misunderstanding of the results. Contrary to the previously analyzed work [16] and ours, the authors did not analyze the effects of the chosen parameters, thus not providing any insight into what matters most for a backdoor trigger in the traffic-sign domain, as seen in Table V.
Considering these evaluations and motivated by prior work, we investigated the gaps encountered in them. Based on the experimental inconsistencies we found, and to provide accurate and unbiased results, we performed 10,800 experiments covering all the models, datasets, and attack settings. Additionally, our further investigation of AlexNet was carefully performed over 1,800 trained models.
## 7 Conclusions and Future Work
This study investigates the impact of backdoor parameters on image classification, intending to identify the most influential parameters for backdoor success. However, we observed that backdoor attacks exhibit heterogeneity, challenging direct comparisons. Therefore, we focused on a core subgroup of backdoor attacks based on the BadNets approach. We conducted an extensive literature review and devised a systematic experimental setup encompassing standard backdoor designs, allowing us to gain insights into which parameters significantly affect backdoor performance. Our research fills a gap in the existing literature by providing model/dataset-specific findings, some of which may be generalizable. Specifically, our empirical findings shed light on i) injecting backdoors in realistic scenarios, such as transfer learning; ii) the reasoning behind the backdoor effect; and iii) efficient backdoor injection through parameter tuning.
Two key findings emerged from our study. First, we found that trigger size has a more significant impact on backdoor success than the poisoning rate. This has important implications for designing countermeasures against backdoor attacks, as larger trigger sizes are more relevant, contrary to previous work that focused on small triggers only [31]. Second, we found that training a model from scratch facilitates more straightforward backdoor injection than transfer learning. This finding has implications for future attack and defense designs, where fine-tuning must be considered to offer a realistic perspective on the proposal.
This paper aims to contribute to the research community by providing a reference framework for systematically comparing backdoor attack baselines, enabling comparable and reproducible results. However, it is important to acknowledge some limitations of our study. For instance, considering other trigger parameters, such as shape or opacity, could yield more robust findings. Additionally, our focus on patch-based triggers may limit the generalizability of our findings to more complex attacks involving dynamic or blending backdoors. Furthermore, we did not evaluate the impact of defense mechanisms on the choice of backdoor parameters, which could yield interesting findings on the best parameters for defense evasion. Our research specifically focuses on identifying the best parameters for backdoor injection.
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & Datasets & Models & Trigger color & Trigger shape & Trigger size & Trigger position & Trigger opacity & Poisoning rate \\ \hline Truong et al. [16] & Flowers [64], CIFAR10 & ResNet50, NasNet Mobile [65] & Black & Square, Overlay & 22 pixels & Top-left & Considered & 1\%, 10\%, 15\%, 20\%, 75\%, 100\% \\ Rehman et al. [66] & Belgian traffic signs [67], French traffic signs [69], German traffic signs [70] & CNN & White, Yellow & Square, Star & Not considered & Fixed & Not considered & 5\%, 10\%, 12.5\% \\ Ours & MNIST, CIFAR10, TinyImageNet & AlexNet, VGG, GoogLeNet, ResNet-152 & White, Black, Green, Dynamic & Square, Whole image & 4\%, 6\%, 8\% & Top-right, Top-left, Middle, Bottom-right, Bottom-left & Not considered & 0.5\%, 1\%, 1.5\%, 2\% \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Comparison of the considered parameters in related works. “Fixed” means that the trigger position is not defined but fixed for all the experiments.
\begin{table}
\end{table} TABLE V: Comparison of the effect of the parameters in related works. Where \(\blacksquare\) means completely considered, \(\blacksquare\) somehow considered, and \(\blacksquare\) not considered.
This study has raised several questions that require further investigation. For example, more research on backdoor explainability or interpretability would enhance the accuracy of our findings. Furthermore, additional studies are needed to establish a solid foundation for comparing other types of backdoor attacks, validating their performance, and pushing the research community towards more robust and comprehensible backdoor attacks. Specifically, future work could explore the following:
1. Different trigger shapes and types, such as dynamic or blending triggers.
2. Consideration of defense mechanisms and the stealthiness of backdoor triggers.
3. As our findings suggest, the optimizer may play a significant role in the performance of backdoor attacks. Therefore, further investigation could provide valuable insights for developing more robust models.
|
2308.14470 | The Logarithmic Quot space: foundations and tropicalisation | We construct a logarithmic version of the Hilbert scheme, and more generally
the Quot scheme, of a simple normal crossings pair. The logarithmic Quot space
admits a natural tropicalisation called the space of tropical supports, which
is a functor on the category of cone complexes. The fibers of the map to the
space of tropical supports are algebraic. The space of tropical supports is
representable by ``piecewise linear spaces'', which are introduced here to
generalise fans and cone complexes to allow non--convex geometries. The space
of tropical supports can be seen as a polyhedral analogue of the Hilbert
scheme. The logarithmic Quot space parameterises quotient sheaves on
logarithmic modifications that satisfy a natural transversality condition. We
prove that the space is a logarithmic algebraic space, is separated, and
universally closed. The logarithmic Hilbert space parameterizes families of
proper monomorphisms, and in this way is exactly analogous to the classical
Hilbert scheme. The new complexity of the space can then be viewed as stemming
from the complexity of proper monomorphisms in logarithmic geometry. Our
construction generalises the logarithmic Donaldson--Thomas space studied by
Maulik--Ranganathan to arbitrary rank and dimension, and the good degenerations
of Quot schemes of Li--Wu to simple normal crossings geometries. | Patrick Kennedy-Hunt | 2023-08-28T10:17:30Z | http://arxiv.org/abs/2308.14470v1 | # The logarithmic quot space: foundations and tropicalisation
###### Abstract.
We construct a logarithmic version of the Hilbert scheme, and more generally the Quot scheme, of a simple normal crossings pair. The logarithmic Quot space admits a natural tropicalisation called the space of tropical supports, which is a functor on the category of cone complexes. The fibers of the map to the space of tropical supports are algebraic. The space of tropical supports is representable by "piecewise linear spaces", which are introduced here to generalise fans and cone complexes to allow non-convex geometries. The space of tropical supports can be seen as a polyhedral analogue of the Hilbert scheme. The logarithmic Quot space parameterises quotient sheaves on logarithmic modifications that satisfy a natural transversality condition. We prove that the space is a logarithmic algebraic space, is separated, and universally closed. The logarithmic Hilbert space parameterises families of proper monomorphisms, and in this way is exactly analogous to the classical Hilbert scheme. The new complexity of the space can then be viewed as stemming from the complexity of proper monomorphisms in logarithmic geometry. Our construction generalises the logarithmic Donaldson-Thomas space studied by Maulik-Ranganathan to arbitrary rank and dimension, and the good degenerations of Quot schemes of Li-Wu to simple normal crossings geometries.
###### Contents
* 1 Piecewise linear geometry
* 2 The space of tropical supports
* 3 Logarithmic flatness for toric varieties
* 4 Proper monomorphisms and logarithmic surjections of coherent sheaves
* 5 Tropical support for constant degeneration
* 6 Tropical support in families
* 7 Flat limits after Tevelev
* 8 The logarithmic Quot space
* 9 Examples
## Introduction
Let \(X\) be a smooth scheme and consider a simple normal crossing degeneration of \(X\). This paper addresses the following basic question.
How to study the Hilbert scheme or Quot schemes of \(X\) under such a degeneration?
Questions about degenerations lead naturally to questions for a pair \((X,D)\) with \(D\) a simple normal crossing divisor on \(X\). We are led to construct the Quot schemes of a pair \((X,D)\). The usual Quot scheme is recovered in the case that \(D\) is empty.
More precisely, let \(\underline{X}\) be a projective variety over \(\mathbb{C}\) and \(D\) a divisor on \(\underline{X}\) with components \(\{\underline{D}_{i}\}\). Assume that \((\underline{X},D)\) is a simple normal crossing pair and write \(X\) for the associated logarithmic scheme. We define a logarithmic scheme \(D_{i}\) by equipping \(\underline{D}_{i}\) with the divisorial logarithmic structure from its intersection with the other components of \(D\). Let \(\mathcal{E}\) be a coherent sheaf on \(X\). We are interested in understanding how the presence of \(D\) affects the Quot scheme associated to \(\mathcal{E}\).
The divisor \(D\) picks out an open subset of the Quot scheme as follows. There are restriction maps between Grothendieck's Quot schemes
\[r_{i}:\mathsf{Quot}(\underline{X},\mathcal{E})\dashrightarrow\mathsf{Quot }(\underline{D}_{i},\mathcal{E}|_{\underline{D}_{i}})\]
defined on an open subset, but which do not extend to a morphism. There is a maximal open subscheme \(U_{X}\) of \(\mathsf{Quot}(\underline{X},\mathcal{E})\) on which all \(r_{i}\) are defined. We define an open subset \(\mathsf{Quot}(X,\mathcal{E})^{o}\) of \(\mathsf{Quot}(\underline{X},\mathcal{E})\) recursively on the dimension of \(X\) as the intersection of the open sets
\[\mathsf{Quot}(X,\mathcal{E})^{o}=U_{X}\cap\bigcap_{i}r_{i}^{-1}(\mathsf{Quot}\left(\underline{D}_{i},\mathcal{E}|_{\underline{D}_{i}}\right)^{o})\,.\]
**Goal:** provide a moduli space and tautological morphism compactifying the data
\[r_{i}^{o}:\mathsf{Quot}(X,\mathcal{E})^{o}\rightarrow\mathsf{Quot}(D_{i}, \mathcal{E}|_{D_{i}})^{o}.\]
In this paper we construct a proper moduli stack on the category of logarithmic schemes, called the _logarithmic Quot space_ and denoted \(\mathsf{Quot}(X,\mathcal{E})\). The morphism \(r\) does exist after replacing source and target with their corresponding logarithmic Quot spaces.
In the special case \(\mathcal{E}=\mathcal{O}_{X}\) the (logarithmic) Quot scheme coincides with the (logarithmic) Hilbert scheme. The morphism \(r\) establishes a link between the rich geometry of the Hilbert scheme of points on a surface, in particular Grojnowski-Nakajima theory [10, 11], and the Hilbert scheme of curves on a threefold. For arbitrary \(\mathcal{E}\) the morphism \(r\) is the key tool for developing a _gluing formula_ generalising similar formulas in Gromov-Witten and Donaldson-Thomas theory [1, 12, 13, 14, 15, 16, 17, 18, 19].
Our logarithmic Quot scheme has good categorical properties, see especially Section 0.5. For example, the logarithmic Hilbert scheme is the moduli space of proper monomorphisms in the category of logarithmic schemes. The logarithmic Quot scheme is the moduli space of _logarithmic surjections of coherent sheaves_, which are closely related to surjections of sheaves in the logarithmic etale topology.
A central idea of this paper is to identify a natural tropical object to associate with a logarithmic surjection of coherent sheaves, called the _tropical support_. Tropical support leads to the correct notion of _minimal or basic logarithmic structure_ for the logarithmic Quot scheme [1, 13, 14, 15].
### The logarithmic Quot space
We define what it means for a coherent sheaf to be _logarithmically flat_ in Section 4.2 and restrict attention to \(\mathcal{E}\) a sheaf on \(X\) logarithmically flat over \(\operatorname{Spec}(\mathbb{C})\). Assume moreover \(X\) is logarithmically flat over \(\operatorname{Spec}(\mathbb{C})\). By \(\operatorname{Spec}(\mathbb{C})\) we will always mean a point equipped with the trivial logarithmic structure.
In Section 8 we construct a diagram of stacks over the strict etale site
and a universal surjection of sheaves on \(\mathcal{X}\)
\[q:\pi_{X}^{\star}\mathcal{E}\to\mathcal{F}.\]
**Theorem A**.: The logarithmic Quot space \(\mathsf{Quot}(X,\mathcal{E})\) contains \(\mathsf{Quot}(X,\mathcal{E})^{o}\) as an open algebraic substack. Moreover \(\mathsf{Quot}(X,\mathcal{E})\) is universally closed and separated. It admits a logarithmic etale cover by Deligne-Mumford stacks with logarithmic structure. Every standard logarithmic point determines an expansion and a surjection of sheaves on this expansion.
Logarithmic algebraic spaces are spaces defined up to a choice of logarithmic birational modification. The logarithmic Picard group and logarithmic multiplicative group have similar representability properties [13, 14]. The choice made in constructing logarithmic Donaldson-Thomas spaces is an alternative way of handling the same phenomenon [13]. Moreover, connected components of the logarithmic Quot scheme are bounded and thus proper, see [11].
An \(S\) valued point of \(\mathsf{Quot}(X,\mathcal{E})\) is an equivalence class of surjections of sheaves on _logarithmic modifications_ of \(X\times S\). We call an equivalence class a _logarithmic surjection of coherent sheaves_ and typically denote it \(q\). See Section 4.1 for details. From a surjection of sheaves on a logarithmic modification, we construct a tropical object called the _tropical support_. The tropical support is combinatorial in nature and records the data of a distinguished class of logarithmic modifications of \(X\times S\).
### Representability and tropicalisation
The logarithmic Quot space is not an algebraic stack with logarithmic structure. There is, however, a strict map
\[\pi:\mathsf{Quot}(X,\mathcal{E})\to\mathit{Supp}(\mathcal{X})\]
with algebraic fibres. The space \(\mathit{Supp}\) is combinatorial in nature and closely related to the theory of Artin fans. We now explain \(\mathit{Supp}\) further.
The logarithmic scheme \(X\) is equipped with a map
\[X\to\mathcal{X}.\]
Here the _Artin fan_ \(\mathcal{X}\) of \(X\) is a zero-dimensional algebraic stack equipped with logarithmic structure [1], see Section 4.1. The category of Artin fans is equivalent to the category of cone stacks [10]. In Section 2 we build a moduli space on the category of cone complexes \(\mathsf{Supp}(\mathcal{X})\) parametrising all possible tropical supports, called the space of tropical supports.
The space of tropical supports is not a cone stack, but it is a _piecewise linear space_. The theory of piecewise linear spaces generalises the theory of cone stacks [1, 10], see Section 0.4 and Section 1 for details.
We make sense of subdivisions of \(\mathsf{Supp}(\mathcal{X})\) in Section 1, and say a subdivision with domain a cone complex is a _tropical model_. Under the equivalence of the previous paragraph, tropical models are those subdivisions which are themselves algebraic. We denote the set of tropical models of \(\mathsf{Supp}(\mathcal{X})\) by
\[S_{\mathcal{X}}=\{\mathsf{Supp}_{\Sigma}(\mathcal{X})\to\mathsf{Supp}( \mathcal{X})\}.\]
A moduli functor on the category of cone complexes lifts to define a moduli problem over the category of logarithmic schemes, see [10, Section 7]. We denote the lift of \(\mathsf{Supp}(\mathcal{X})\) by
\(\operatorname{\mathsf{Supp}}(X)\) and the lift of \(\operatorname{\mathsf{Supp}}_{\Sigma}(X)\) by \(\operatorname{\mathit{Supp}}_{\Sigma}(X)\). There is a unique cone complex \(\operatorname{Trop}(X)\) which lifts to \(X\). We define _proper models of the logarithmic \(\operatorname{\mathsf{Quot}}\) space_ to be morphisms
\[\operatorname{\mathsf{Quot}}_{\Sigma}(X,\mathcal{E})\to\operatorname{ \mathsf{Quot}}(X,\mathcal{E})\]
pulled back along \(\pi\) from tropical models
\[\operatorname{\mathit{Supp}}_{\Sigma}(X)\to\operatorname{\mathit{Supp}}(X).\]
**Slogan:** The failure of the logarithmic \(\operatorname{\mathsf{Quot}}\) scheme to be an algebraic stack is controlled by the failure of \(\operatorname{\mathsf{Supp}}(X)\) to be a cone complex.
Here is one more precise instance of our slogan. Consider an open subfunctor (in the _face topology_) \(V\) of \(\mathsf{Supp}(X)\) which is a cone complex and let \(\mathcal{V}\) be the associated Artin fan. The preimage \(\pi^{-1}(\mathcal{V})\) in \(\mathsf{Quot}(X,\mathcal{E})\) is represented by a Deligne-Mumford stack with logarithmic structure.
A precise version of the following Theorem can be found in Section 8.
Associated to each tropical model \(\mathit{Supp}_{\Sigma}(X)\to\mathit{Supp}(X)\) there is a diagram of Deligne-Mumford stacks with logarithmic structure

\[\begin{CD}\mathcal{X}_{\Sigma}@>{\pi_{X}}>>X\\ @V{\varpi}VV\\ \mathsf{Quot}_{\Sigma}(X,\mathcal{E})@>{r_{i}}>>\mathsf{Quot}_{\Sigma}(D_{i},\mathcal{E}|_{D_{i}}).\end{CD}\]
**Theorem B**.: The above diagrams exhibit the following properties.
**Representability.** The model \(\operatorname{\mathsf{Quot}}_{\Sigma}(X,\mathcal{E})\) is a Deligne-Mumford stack with logarithmic structure. The logarithmic \(\operatorname{\mathsf{Quot}}\) space has a representable cover, in the sense that in the category of stacks on the strict etale site
\[\varinjlim_{\Sigma\in S_{\mathcal{X}}}\mathsf{Quot}_{\Sigma}(X,\mathcal{E})=\mathsf{Quot}(X,\mathcal{E}).\]
**Universally closed.** For every choice of \(\Sigma\), the underlying Deligne-Mumford stack of the moduli space \(\operatorname{\mathsf{Quot}}_{\Sigma}(X,\mathcal{E})\) is universally closed.
**Interpretation as a moduli space.** The logarithmic \(\operatorname{\mathsf{Quot}}\) space is the moduli space of _logarithmic surjections of coherent sheaves_. See Section 4 for definitions. It contains an open subscheme
\[\operatorname{\mathsf{Quot}}(X,\mathcal{E})^{o}\subset\operatorname{\mathsf{ Quot}}(X,\mathcal{E}).\]
### Structure of the paper
The following diagram is useful for understanding how the ideas in this paper connect.
\[\begin{CD}\mathsf{Quot}(X,\mathcal{E})@>{\text{moduli space of}}>>q=[\pi_{\Gamma},q_{\Gamma}]\\ @V{\text{tropicalisation}}VV@VV{\text{tropicalisation}}V\\ \mathit{Supp}@>{\text{moduli space of}}>>\mathscr{T}(q)\end{CD}\]
Here \(q\) is a logarithmic surjection of coherent sheaves, see Section 4; it has an associated tropical support \(\mathscr{T}(q)\), described in Section 5. The moduli space of tropical supports relates to \(\mathit{Supp}\) and is studied in Section 2. To make sense of the tropical support and its moduli we introduce the language of piecewise linear spaces in Section 1.
The reader may now skip to Section 1 without loss of continuity. In the remainder of the introduction we provide an overview of the ideas introduced above.
### Piecewise linear spaces
The main combinatorial innovation of this paper is the _piecewise linear space_. Piecewise linear spaces are the natural language to express the tropicalisation of a logarithmic surjection of coherent sheaves, including the case of subschemes. They are analogous to the set theoretic tropicalisation studied in [10, 11], but are more sensitive to scheme theoretic data such as embedded components. Piecewise linear spaces are also helpful for understanding the tropicalisation of the logarithmic Quot scheme.
Geometry of fine and saturated monoids is captured by sheaves on the category **RPC** of _rational polyhedral cones_, see [13, 14] for background on **RPC** and the _face topology_. We briefly recall _cone complexes_ in Section 1.1. Cone complexes, via their functor of points, give rise to a distinguished class of sheaves on **RPC** which have a concrete geometric description. To make language simpler we will confuse a cone complex or piecewise linear space with its functor of points, a sheaf on **RPC**.
The monoid geometry in our situation is described by piecewise linear spaces, a generalisation of cone complexes. A piecewise linear space, denoted \(\mathscr{T},\mathscr{S}\) or \(\mathscr{G}\), is the data of a sheaf on **RPC** which may be visualised as the functor of points of a geometric object, just as with cone complexes. _Piecewise linear complexes_ are formed by gluing piecewise linear cones; a rational polyhedral cone is precisely a convex piecewise linear cone. The relation between piecewise linear cones and piecewise linear spaces is, in a precise sense, the relation between affine schemes and algebraic spaces.
**Example 0.4.1**.: For \(k\) an integer we define subsets of \(\mathbb{R}^{2}=\mathbb{Z}^{2}\otimes\mathbb{R}\),
\[R_{k}=\{(x,y)\in\mathbb{R}^{2}|y\geq k|x|\}.\]
Let \(J_{k}\) be the category of fan structures with support \(R_{k}\). A fan structure with support \(R_{k}\) defines a cone complex and thus a functor of points on the category of rational polyhedral cones. This functor of points is a sheaf with respect to the _face topology_, see [13]. Consider the colimit \(\mathscr{G}_{k}\) over the functor of points of elements of \(J_{k}\).
For \(k>0\) note \(\mathscr{G}_{k}\) is a rational polyhedral cone: the category \(J_{k}\) has a terminal object, namely the fan whose unique maximal cone has support \(R_{k}\), and so the colimit is a rational polyhedral cone.
If \(k\) is negative then \(J_{k}\) does not have a terminal object and \(\mathscr{G}_{k}\) cannot be a rational polyhedral cone. This is our first example of a piecewise linear cone which is not a rational polyhedral cone. For all values of \(k\) we may think of \(\mathscr{G}_{k}\) as the functor assigning to a cone \(\tau\) the set of monoid maps from \(\tau\) to \(\mathbb{Z}^{2}\) whose image is contained within \(R_{k}\). Figure 1 depicts the piecewise linear cone \(\mathscr{G}_{k}\) for various values of \(k\).

Figure 1. Left is a rational polyhedral cone \(\mathscr{G}_{1}\). Centre-left is the piecewise linear cone \(\mathscr{G}_{-1}\). Centre-right is a tropical model of \(\mathscr{G}_{-1}\). Right is the piecewise linear cone obtained as the colimit over all fan structures on \(\mathbb{R}^{2}\).
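To make the failure of convexity for negative \(k\) concrete, here is a direct check for \(k=-1\) (a routine verification):

\[(1,-1),\ (-1,-1)\in R_{-1}=\{y\geq-|x|\},\qquad\tfrac{1}{2}(1,-1)+\tfrac{1}{2}(-1,-1)=(0,-1)\notin R_{-1}.\]

Thus \(R_{-1}\) is not convex, so no fan structure on \(R_{-1}\) consists of a single cone, and the terminal object available for \(k>0\) has no analogue here.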
#### 0.4.1. How piecewise linear spaces arise

Subdivisions of piecewise linear spaces are defined in Section 1.2. A _tropical model_ of a piecewise linear space \(\mathscr{S}\) is a morphism from a cone complex to \(\mathscr{S}\) which is a subdivision. Under the slogan of Section 0.2, tropical models correspond to logarithmic etale covers. In the example of Figure 1, tropical models of \(\mathscr{G}_{k}\) are fan structures on \(\mathbb{R}^{2}\) with support \(R_{k}\).
Subdivisions of cone complexes correspond to logarithmic etale covers and thus piecewise linear spaces help us track a class of objects which have a logarithmic etale cover by logarithmic stacks, despite these objects not arising from a logarithmic structure on an algebraic stack. In the present paper, piecewise linear spaces arise naturally in the following ways.
1. Consider a surjection of sheaves \(q:\mathcal{O}_{X}^{n}\twoheadrightarrow\mathcal{F}\) on \(X\) such that \(\mathcal{F}\) has no sections supported on \(D\). The logarithmic modifications \[X_{\Gamma}\to X\] under which the strict transform of \(\mathcal{F}\) is logarithmically flat over \(\operatorname{Spec}(\mathbb{C})\) are controlled by a tropical condition on \(\Gamma\). The tropical condition asks that \(\Gamma\) be a tropical model of a certain piecewise linear space \(\mathscr{T}(q)\) subdividing \(\operatorname{Trop}(X)\). See [10] for a proof.
2. The moduli space of subdivisions of cone complexes \(\Gamma\to\operatorname{Trop}(X)\) is represented by a piecewise linear space but not by a cone complex. Moreover \(\operatorname{Supp}(\mathcal{X})\), which we define to be the moduli space of piecewise linear spaces subdividing \(\operatorname{Trop}(X)\), is a piecewise linear space.
3. Our proof that the logarithmic Quot space is proper involves a version of Gröbner theory. Instead of a Gröbner fan we obtain a Gröbner piecewise linear space. These Gröbner piecewise linear spaces are built from the data of the _Gröbner stratification_ defined in [1].
#### 0.4.2. Piecewise linear spaces and tropicalisation
Let \(Z\) be a closed subscheme of \(\underline{X}\) such that \(\mathcal{O}_{Z}\) has no sections supported on \(D\) and \(Z\) is logarithmically flat over \(\operatorname{Spec}(\mathbb{C})\). There is a link between the following objects.
1. Associated to \(Z\) is a subset \(\operatorname{Trop}(Z)\) of the topological realisation \(|\operatorname{Trop}(X)|\). We call \(\operatorname{Trop}(Z)\) the _tropicalisation of \(Z\)_, see [14] for details. Construction 1.2.6 uses this data to define a piecewise linear space which is a subdivision of the cone complex \(\operatorname{Trop}(X)\).
2. The piecewise linear subdivision \(\mathscr{T}(q:\mathcal{O}_{X}\to\mathcal{O}_{Z})\).
The data of (2) is a more refined version of the data of (1) and thus we may view the piecewise linear space associated to a surjection of structure sheaves as a refined version of the usual notion of tropicalisation. A key difference is that (2) detects a tropical analogue of embedded components, as the next example illustrates.
**Example 0.4.2**.: In this example we study a subscheme of \(\mathbb{P}^{4}\) with its toric logarithmic structure. Consider
\[Z=Z_{1}\cup Z_{2}\ \text{where}\ Z_{1}=V(X_{0}+X_{1}+X_{2}+X_{3}+X_{4})\ \text{and}\ Z_{2}=V(X_{0}-X_{1},X_{2}-X_{1},X_{3}-X_{4}).\]
The piecewise linear space \(\mathscr{T}(q)\) associated to \(q:\mathcal{O}_{X}\to\mathcal{O}_{Z}\) is specified by a stratification \(\mathcal{P}\) of \(\mathbb{R}^{4}\). This locally closed stratification is the common refinement of the locally closed stratifications \(\{\operatorname{Trop}(Z_{1}),\operatorname{Trop}(Z_{1})^{c}\}\) and \(\{\operatorname{Trop}(Z_{2}),\operatorname{Trop}(Z_{2})^{c}\}\) defined in [14, Section 2.5]. See Figure 3 for the piecewise linear structure on one cone of the fan of \(\mathbb{P}^{4}\). The tropicalisation of \(Z\) defined in [14] records only the locally closed stratification defined by \(\operatorname{Trop}(Z_{1})\cup\operatorname{Trop}(Z_{2})\) and its complement.
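For orientation we record what these tropicalisations look like; this is a standard computation, stated here for the reader's convenience. Working in the dense torus of \(\mathbb{P}^{4}\) with coordinates \(x_{i}=X_{i}/X_{0}\) and tropical coordinates \(w\in\mathbb{R}^{4}\), Kapranov's theorem gives

\[\operatorname{Trop}(Z_{1})=\{w\in\mathbb{R}^{4}:\min(0,w_{1},w_{2},w_{3},w_{4})\text{ is attained at least twice}\},\]

while \(Z_{2}\) meets the torus in the line \([1:1:1:s:s]\), so \(\operatorname{Trop}(Z_{2})=\{(0,0,t,t):t\in\mathbb{R}\}\). Note \(\operatorname{Trop}(Z_{2})\subset\operatorname{Trop}(Z_{1})\), so the stratification by \(\operatorname{Trop}(Z_{1})\cup\operatorname{Trop}(Z_{2})\) and its complement loses \(Z_{2}\) entirely; the common refinement retains it, which is how \(\mathscr{T}(q)\) detects the tropical analogue of embedded structure.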
### The logarithmic Quot space as a moduli space
The logarithmic Quot space is a moduli space of _logarithmic surjections of coherent sheaves_ of \(\mathcal{E}\), defined in Section 4 as an equivalence class of pairs
\[(\pi_{\Gamma}:X_{\Gamma}\to X,q_{\Gamma}:\pi_{\Gamma}^{\star}\mathcal{E}\to \mathcal{F})\]
where \(\pi_{\Gamma}\) is a logarithmic modification corresponding to the subdivision \(\Gamma\to\operatorname{Trop}(X)\) and \(\mathcal{F}\) is a logarithmically flat coherent sheaf on the underlying scheme of \(X_{\Gamma}\) in the etale topology. We will write \(q\) for such an equivalence class. The equivalence relation is generated by pullback along certain logarithmic modifications.
We associate to a logarithmic surjection of coherent sheaves, say \(q=[X_{\Gamma},q_{\Gamma}]\), a piecewise linear space \(\mathscr{T}(q)\) subdividing \(\operatorname{Trop}(X)\). We call this subdivision the _tropical support_ of \(q\). The subdivision \(\Gamma\) is a tropical model of \(\mathscr{T}(q)\). The tropical support of a logarithmic surjection of sheaves tracks the minimal expansion \(\pi_{\Gamma}:X_{\Gamma}\to X\) such that we may write \(q=[\pi_{\Gamma},q_{\Gamma}]\).
In the special case \(\mathcal{E}=\mathcal{O}_{X}\), Grothendieck's Quot scheme is the _Hilbert scheme_, a moduli space of proper monomorphisms in the category of schemes. Proper monomorphisms in the category of logarithmic schemes are compositions of two flavours of proper monomorphism: strict closed immersions and logarithmic modifications. We write \(\mathsf{Hilb}(X)=\mathsf{Quot}(X,\mathcal{O}_{X})\) and call it the _logarithmic Hilbert space_.
Since both logarithmic modifications and strict closed immersions define proper monomorphisms in the category of logarithmic schemes, a morphism from a logarithmic scheme \(B\) to the logarithmic Hilbert space specifies a proper monomorphism to \(X\times B\).
**Theorem C**.: The logarithmic Hilbert space is the moduli stack whose fibre over a scheme \(B\) is the groupoid of equivalence classes of proper monomorphisms \(Z\to X\times B\) such that the composition
\[Z\to X\times B\to B\]
is logarithmically flat and integral. See Definition 4.3.1 for the equivalence relation.
The logarithmic Quot scheme is not invariant under logarithmic modification. However, given a logarithmic modification \(\pi:\tilde{X}\to X\) and \(\mathcal{E}\) a sheaf on \(X\) there is a logarithmic modification
\[\mathsf{Quot}(\tilde{X},\pi^{\star}\mathcal{E})\twoheadrightarrow\mathsf{Quot}(X,\mathcal{E}).\]
### Relation to literature
In the special case of the logarithmic Hilbert space of dimension one ideal sheaves in a threefold, models of the logarithmic Hilbert space are studied in logarithmic Donaldson-Thomas theory [10]. _Logarithmic Grassmannians_ [23] are components of the logarithmic Quot space.
There is a rich precedent for using logarithmic geometry to understand moduli spaces and their invariants. Originally developed in the context of Gromov-Witten theory [11, 12, 13, 14, 15], more recently attention has turned to moduli spaces of sheaves [12, 13, 14, 15].
The representability properties of the logarithmic Quot scheme reflect a theme explored elsewhere [13, 14]. The flat limit algorithm we use to show the logarithmic Quot scheme is proper relates to the study of tropical compactifications in logarithmic schemes [13, 14].
Our strategy for studying moduli spaces of coherent sheaves in the context of logarithmic geometry relies on expanded degenerations. The stack of expansions was introduced in [14]. Geometry of expansions and the stack of expansions is explored in [14, 15]. In particular [14, Theorem A or Theorem 1.8] provides useful context and motivation for why our moduli space has finite stabilisers. This is part of the motivation for the definition of tropical support.
One motivation for defining the logarithmic Quot space is to develop a degeneration and gluing formalism generalising results in Gromov-Witten theory [1, 2, 3, 4, 5], and more recently Donaldson-Thomas theory [16] to other moduli spaces of sheaves.
Proper monomorphisms in the category of logarithmic schemes play a role in making sense of logarithmic intersection theory [1, 2].
After making a draft of this paper publicly available, the author was made aware of ongoing work on an alternative approach to constructing a logarithmic Hilbert space [14].
### Future work
Our constructions suggest a definition of a logarithmic coherent sheaf: any surjection that arises in a Quot scheme. However, a more complete study of the deformation theory and moduli of these objects is an interesting challenge. Once the right notion has been found, the interaction between logarithmic coherent sheaves and the stability conditions of Gieseker and Bridgeland is likely to be important. In the classical situation, Gieseker stability is essentially extracted from the Quot scheme using GIT. We hope that the logarithmic Quot scheme will shed light on these directions. A related challenge is to explore the precise connection between the logarithmic Quot scheme and logarithmic Picard group [15]. The hope is to use a version of GIT to construct logarithmic versions of other moduli spaces which are not yet understood.
Recent developments permit the study of enumerative geometry by thinking about Quot schemes associated to locally free sheaves on surface geometries [17]. The existence of a virtual fundamental class suggests these enumerative theories can be understood by degenerating the target and studying the logarithmic Quot scheme of the resulting special fibre. Enumerative geometry of moduli of higher rank sheaves provides a further direction [18].
The _derived Quot scheme_ and _derived Hilbert scheme_[14] fit into the program of understanding derived versions of moduli spaces. One can attempt to define a derived logarithmic Quot scheme, fitting our construction in with this story. The result may have fruitful applications to enumerative geometry.
Tropical intersection theory recovers data about intersections in Chow rings from tropical intersections. The tropical support is more refined than tropicalisation of a subscheme and one can hope for a corresponding _tropical \(K\) theory_ and comparison results parallel to [17].
### Acknowledgements
The author would like to express sincere gratitude to his supervisor Dhruv Ranganathan for numerous helpful conversations. The author learned a great deal from numerous conversations with Sam Johnston, Navid Nabijou, Thibault Poiret and Martin Ulirsch. He also benefited greatly from conversations with Dan Abramovich, Francesca Carocci, Robert Crumplin, Samouil Molcho, Bernd Siebert, Calla Tschanz and Jonathan Wise. Martin is owed thanks for informing the author of Tevelev's unpublished proof of Proposition 7.1.2 and recognition for suggesting the term _tropical support_. Bernd is thanked for comments and questions on a first draft of this paper.
## 1. Piecewise linear geometry
In this section we generalise the theory of cone complexes to incorporate non-convex cones, as outlined in Section 0.4. The contents of this section are thus parallel to the theory of cone complexes, see [14]. By tropical geometry we mean the geometry of rational polyhedral cones.
### Cones
We refer the reader to [14, Section 2] for the definition of the category **RPC** of rational polyhedral cones and the category **RPCC** of rational polyhedral cone complexes. There is
a fully faithful embedding of categories
\[\mathbf{RPC}\hookrightarrow\mathbf{RPCC}.\]
A cone complex \(\Sigma\) has an associated topological space called the _topological realisation_ which we denote \(|\Sigma|\). A morphism of cone complexes \(f:\Sigma^{\prime}\to\Sigma\) induces a morphism of topological spaces
\[|f|:|\Sigma^{\prime}|\to|\Sigma|\]
called the _topological realisation of \(f\)_.
A _subdivision_ of cone complexes is a morphism
\[f:\Sigma_{1}\to\Sigma_{2}\]
of cone complexes such that \(|f|\) is an isomorphism of topological spaces and induces a bijection of lattice points.
Let \(\sigma\) be a cone and \(\Sigma,\Theta\) cone complexes. Consider a morphism \(\Sigma\xrightarrow{h}\Theta\times\sigma\to\sigma\) where \(h\) is a subdivision. Such a composition is said to be _combinatorially flat_ if the image of each cone of \(\Sigma\) is a cone of \(\sigma\). We do not impose any condition on the lattice in our definition of combinatorial flatness and so a combinatorially flat morphism of fans need not correspond to a flat map of toric varieties.
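As a simple illustration (an example of our own choosing, taking \(\Theta\) to be the trivial cone complex so that \(\Theta\times\sigma=\sigma\)): let \(\sigma=\mathbb{R}^{2}_{\geq 0}\) and let \(\Sigma\to\sigma\) be the stellar subdivision along the ray through \((1,1)\). The maximal cones

\[\langle(1,0),(1,1)\rangle\quad\text{and}\quad\langle(1,1),(0,1)\rangle\]

of \(\Sigma\) are not cones of \(\sigma\), so the composition \(\Sigma\to\sigma\) is not combinatorially flat. This matches the algebraic picture: the corresponding toric morphism \(\mathrm{Bl}_{0}\,\mathbb{A}^{2}\to\mathbb{A}^{2}\) is not flat.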
We consider the category of rational polyhedral cones \(\mathbf{RPC}\) a site by declaring the inclusion of any face to be an open morphism. We call this Grothendieck topology the _face topology_. Similarly define the face topology on \(\mathbf{RPCC}\).
In the remainder of this section we introduce the category \(\mathbf{PLS}\) of piecewise linear spaces. To do so we define piecewise linear complexes by gluing piecewise linear cones. The situation is analogous to that of affine schemes, schemes and algebraic spaces, as described in Table 1.
### Piecewise linear cone complexes
The local model for a piecewise linear complex is a piecewise linear cone. To make progress we make an auxiliary definition.
**Definition 1.2.1**.: A _local cone_\(\mathfrak{s}\) of dimension \(k\) is a pair \((N_{\mathfrak{s}},U_{\mathfrak{s}})\) consisting of a finitely generated rank \(k\) abelian group \(N_{\mathfrak{s}}\) and a connected open subset \(U_{\mathfrak{s}}\) of \(N_{\mathfrak{s}}\otimes\mathbb{R}\cong\mathbb{R}^{k}\) such that there is a fan on \(\mathbb{R}^{k}\) in which \(U_{\mathfrak{s}}\) is the union of interiors of cones. For local cones \(\mathfrak{s}_{1},\mathfrak{s}_{2}\) a morphism \(\iota:\mathfrak{s}_{1}\to\mathfrak{s}_{2}\) of local cones is a monoid morphism
\[N_{\mathfrak{s}_{1}}\to N_{\mathfrak{s}_{2}}\]
inducing a map from \(U_{\mathfrak{s}_{1}}\) to the closure of \(U_{\mathfrak{s}_{2}}\).
The interior of any rational polyhedral cone defines a local cone and any morphism of cones induces a morphism of local cones. Not all local cones arise in this way, indeed local cones need not be convex. In the sequel the closure of a subset \(\kappa\) of a topological space will be denoted \(\overline{\kappa}\) with the ambient topological space understood from context.
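For instance (in the notation of Example 0.4.1): for \(k<0\) the pair \((\mathbb{Z}^{2},R_{k}^{o})\), with \(R_{k}^{o}\) the interior of \(R_{k}\), is a local cone. One checks that the fan on \(\mathbb{R}^{2}\) with rays through \((1,k)\), \((0,1)\) and \((-1,k)\) exhibits \(R_{k}^{o}\) as a union of interiors of cones, while \(R_{k}^{o}\) is not convex and hence is not the interior of any rational polyhedral cone.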
**Definition 1.2.2**.: A _piecewise linear complex_\(\mathscr{S}\) is a tuple \((|\mathscr{S}|,\mathcal{P}_{\mathscr{S}},A_{\mathscr{S}},C_{\mathscr{S}})\) consisting of the following data.
1. A topological space \(|\mathscr{S}|\) equipped with a locally closed stratification \(\mathcal{P}_{\mathscr{S}}\).
2. The set \(A_{\mathscr{S}}\) consists of a homeomorphism for each \(\kappa\) in \(\mathcal{P}_{\mathscr{S}}\), written \[f_{\kappa}:\overline{\kappa}\to\overline{U}_{\mathfrak{s}_{\kappa}}\] where \(\mathfrak{s}_{\kappa}=(N_{\mathfrak{s}_{\kappa}},U_{\mathfrak{s}_{\kappa}})\) is a local cone and we use a bar to denote closure. This homeomorphism identifies the closure of the stratum \(\kappa\) with the closure of \(U_{\mathfrak{s}_{\kappa}}\) in \(N_{\mathfrak{s}_{\kappa}}\otimes\mathbb{R}\). We require that the restriction of \(f_{\kappa}\) to \(\kappa\) define a homeomorphism onto \(U_{\mathfrak{s}_{\kappa}}\).
3. The set \(C_{\mathscr{S}}\) consists of morphisms of local cones \[g_{\kappa^{\prime},\kappa}:\mathfrak{s}_{\kappa^{\prime}}\to\mathfrak{s}_{\kappa}\] for each pair \(\kappa,\kappa^{\prime}\) in \(\mathcal{P}_{\mathscr{S}}\) with \(\kappa^{\prime}\) a subset of \(\overline{\kappa}\). We require that the topological realisation of \(g_{\kappa^{\prime},\kappa}\) be compatible with the inclusion of \(\kappa^{\prime}\) as a subset of \(\overline{\kappa}\). Moreover, whenever \(\kappa^{\prime\prime}\subset\overline{\kappa^{\prime}}\subset\overline{\kappa}\) we have \[g_{\kappa^{\prime},\kappa}\circ g_{\kappa^{\prime\prime},\kappa^{\prime}}=g_{\kappa^{\prime\prime},\kappa}.\]
A morphism of piecewise linear complexes \(\varphi:\mathscr{S}_{1}\to\mathscr{S}_{2}\) consists of the following data.
1. A continuous map \[|\varphi|:|\mathscr{S}_{1}|\to|\mathscr{S}_{2}|\] such that for each \(\kappa_{1}\) in \(\mathcal{P}_{\mathscr{S}_{1}}\) there is some \(\kappa_{2}\) in \(\mathcal{P}_{\mathscr{S}_{2}}\) containing \(|\varphi|(\kappa_{1})\).
2. For each pair \(\kappa_{1},\kappa_{2}\) with \(|\varphi|(\kappa_{1})\subset\kappa_{2}\) a monoid morphism \(\varphi_{\kappa_{1},\kappa_{2}}:N_{\mathfrak{s}_{\kappa_{1}}}\to N_{ \mathfrak{s}_{\kappa_{2}}}\) inducing a map of local cones \(\mathfrak{s}_{\kappa_{1}}\to\mathfrak{s}_{\kappa_{2}}\). The \(\varphi_{\kappa_{1},\kappa_{2}}\) must be compatible with passing to closures.
The reader is issued with three warnings. First, a local cone is not a piecewise linear cone complex; this follows from the definition of \(A_{\mathscr{S}}\). Second, the category of piecewise linear complexes is not simply obtained by inverting subdivisions in **RPCC**. Finally, the definition of stratification states that whenever \(\kappa,\kappa^{\prime}\) are strata of \(\mathcal{P}_{\mathscr{S}}\) and \(\kappa^{\prime}\) intersects the closure \(\overline{\kappa}\) of \(\kappa\), then \(\kappa^{\prime}\) is contained in \(\overline{\kappa}\). The category of piecewise linear cone complexes is denoted **PLCC**.
**Definition 1.2.3**.: A _piecewise linear cone_ is a piecewise linear complex \(\mathscr{S}\) such that \(\mathcal{P}_{\mathscr{S}}\) contains a stratum which is dense in \(|\mathscr{S}|\).
**Example 1.2.4**.: A cone complex \(\Sigma\) specifies a piecewise linear complex \(\mathscr{S}\). Writing \(|\sigma|^{o}\) for the interior of the topological realisation of a cone \(\sigma\), we specify
\[|\mathscr{S}|=|\Sigma|\quad\text{and}\quad\mathcal{P}_{\mathscr{S}}=\left\{|\sigma|^{o}:\sigma\ \text{a cone of}\ \Sigma\right\}.\]
The closure of the stratum corresponding to \(\sigma^{o}\) is identified with \(|\sigma|\) to give the set \(A_{\mathscr{S}}\) and the set \(C_{\mathscr{S}}\) is defined by face inclusions. This assignment is functorial, giving a fully faithful embedding of the category of cone complexes into the category of piecewise linear complexes.
In this way we obtain fully faithful embeddings of categories

\[\mathbf{RPC}\hookrightarrow\mathbf{RPCC}\hookrightarrow\mathbf{PLCC}.\]
We say a piecewise linear complex \(\Sigma\) is a _cone complex_ if \(\Sigma\) lies in the image of the embedding of Example 1.2.4. We say a morphism \(\varphi:\mathcal{T}_{1}\to\mathcal{T}_{2}\) of piecewise linear complexes is a _subdivision_ if \(|\varphi|\) is an isomorphism of topological spaces; the maps of local cones \(\varphi_{\kappa_{1},\kappa_{2}}\) induce a bijection between lattice points in \(\kappa_{1}\) and lattice points in \(\kappa_{2}\cap|\varphi|(\kappa_{1})\), and the preimage under \(|\varphi|\) of every stratum of \(\mathcal{P}_{\mathcal{T}_{2}}\) is a finite union of elements of \(\mathcal{P}_{\mathcal{T}_{1}}\). It follows that a morphism of cone complexes is a subdivision if and only if it is a subdivision in the usual sense. We say a subdivision is a _tropical model_ if the domain is a cone complex.
#### 1.2.1. Subdivisions from conical stratifications
For \(\mathscr{S}\) a piecewise linear space, we say a locally closed stratification \(\mathcal{P}\) of \(|\mathscr{S}|\) refining \(\mathcal{P}_{\mathscr{S}}\) is _conical_ if there is a tropical model
\[\Sigma\to\mathscr{S}\]
such that each stratum of \(\mathcal{P}\) is the union of interiors of images of cones of \(\Sigma\).
**Proposition 1.2.5**.: There is an initial subdivision of piecewise linear spaces
\[\mathscr{S}(\mathcal{P})\to\mathscr{S}\]
such that given any tropical model \(\Sigma\to\mathscr{S}\) for which each stratum of \(\mathcal{P}\) is a union of interiors of cones of \(\Sigma\), we may factor
\[\Sigma\to\mathscr{S}(\mathcal{P})\to\mathscr{S}.\]
**Construction 1.2.6**.: We define \(\mathscr{S}(\mathcal{P})\). Set \(|\mathscr{S}(\mathcal{P})|=|\mathscr{S}|\) and let \(\sim\) be the equivalence relation on points of \(|\mathscr{S}(\mathcal{P})|\) generated by \(p\sim q\) whenever there is a tropical model
\[\Sigma\to\mathscr{S}\]
such that each stratum of \(\mathcal{P}\) is a union of interiors of cones of \(\Sigma\), and \(p\) and \(q\) lie in the interior of the same cone. We obtain a locally closed stratification \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\) of \(|\mathscr{S}|\) by declaring two points to lie in the same stratum if they are related by \(\sim\). Since each stratum of \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\) is an open subset of a linear subspace of a local cone, each stratum inherits the structure of a local cone. The sets \(\mathcal{A}_{\mathscr{S}(\mathcal{P})}\) and \(\mathcal{C}_{\mathscr{S}(\mathcal{P})}\) are inherited from \(\mathscr{S}\). \(\Diamond\)
To prove Proposition 1.2.5 it suffices to verify that Construction 1.2.6 yields a valid piecewise linear space. The universal property is then clear. The key claim to check is that each stratum of \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\) is an open subset of a linear subspace of a local cone.
Proof of Proposition 1.2.5.: A definition will help us. Suppose we are given a conical locally closed stratification \(\mathcal{Q}\) of \(U_{\mathsf{s}}\) for some local cone \((N_{\mathsf{s}},U_{\mathsf{s}})\) and a ray \(\rho\) inside a stratum \(\kappa\) of \(\mathcal{Q}\). We think of \(\kappa\) as a subset of \(U_{\mathsf{s}}\). We define a subgroup \(G_{\rho}\) of \(N_{\mathsf{s}}\) to be the intersection of the (necessarily saturated) subgroups \(G_{i}\leq N_{\mathsf{s}}\) maximal with the following property: there is an open subset \(V_{i}\) of \(U_{\mathsf{s}}\) containing \(\rho\) such that \(G_{i}\otimes\mathbb{R}\cap V_{i}\) lies inside \(\kappa\cap V_{i}\).
To understand why each stratum of \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\) is an open subset of a linear subspace one can use the following two facts.
1. Each stratum of \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\) is a union of interiors of cones of any tropical model \(\Sigma\to\mathscr{S}\) provided the image of every cone of \(\Sigma\) lies within a stratum of \(\mathcal{P}\).
2. Suppose lattice points \(p,q\) lie in the same stratum of \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\). Necessarily \(p,q\) are mapped to the same local cone \((N_{\mathsf{s}},U_{\mathsf{s}})\) in \(\mathscr{S}\) and specify rays \(\rho_{p},\rho_{q}\) inside \(N_{\mathsf{s}}\). Then we have an equality of subgroups of \(N_{\mathsf{s}}\) \[G_{\rho_{q}}=G_{\rho_{p}}.\]
The first fact is clear. The second fact follows because if there is a tropical model \(\Sigma\) in which \(p\) and \(q\) lie in the interior of the same cone then the cones of \(\Sigma\) which contain \(p\) and \(q\) are the same.
Armed with these facts we will show a stratum \(\kappa\) of \(\mathcal{P}_{\mathscr{S}(\mathcal{P})}\) is an open subset of a linear subspace of a local cone. Fix a tropical model \(\Sigma\) and choose a face \(\gamma\) maximal with the property that \(\gamma\) is a subset of \(\kappa\). The aforementioned local cone is \((N_{\kappa},U_{\kappa})\) and the linear subspace
\[\gamma^{\mathrm{gp}}\otimes\mathbb{R}\subset U_{\kappa}.\]
If \(\kappa\) did not lie within this linear subspace then we could find \(p,q\in\kappa\) in the interior of the same cone of a tropical model \(\Sigma^{\prime}\) and fact (2) would force \(\gamma\) not to be maximal. Fact (2) also forces openness.
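To see Construction 1.2.6 in a low dimensional case (an example of our own): take \(\mathscr{S}=\sigma=\mathbb{R}^{2}_{\geq 0}\) and let \(\mathcal{P}\) refine \(\mathcal{P}_{\sigma}\) by splitting \(\sigma^{o}\) into the open ray through \((1,1)\) and its complement. The tropical models compatible with \(\mathcal{P}\) are exactly the fan structures refining the stellar subdivision of \(\sigma\) at \((1,1)\), and \(\mathscr{S}(\mathcal{P})\) is this stellar subdivision, which happens to be a cone complex. Compare the caption of Figure 2: in dimension two every piecewise linear subdivision of a cone complex is a cone complex.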
#### 1.2.2. Transversality and tropical support
In this subsection we use the language of logarithmic geometry recalled in Section 4.1: the reader may skip this subsection without loss of continuity. We restate [11, Theorem II.1.2] in the language of piecewise linear spaces. See also [11, Theorem 12.3] and [10] for related theorems.
Let \(X\) be a projective variety with simple normal crossing divisor \(D\). Denote the complement of \(D\) in \(X\) by \(X_{0}\). Associated to \(X\) is a cone complex \(\operatorname{Trop}(X)\), see Section 4.1. Let \(Z\) be a subscheme of \(X\) which is the scheme theoretic closure of \(Z\cap X_{0}\). Ulirsch defines a subset \(|\operatorname{Trop}(Z)|\) of \(|\operatorname{Trop}(X)|\), see [11, Section II.1].
By [11, Theorem II.1.1], \(|\operatorname{Trop}(Z)|\) defines a conical locally closed stratification
\[\mathcal{P}_{Z}=\{|\operatorname{Trop}(Z)|,(|\operatorname{Trop}(X)|\backslash |\operatorname{Trop}(Z)|)\}\]
of \(|\operatorname{Trop}(X)|\) and thus a piecewise linear space \(\mathscr{T}(Z)\) via Construction 1.2.6.
We explain in Section 4.1 that a tropical model \(\Gamma\to\operatorname{Trop}(X)\) gives rise to a birational modification \(X_{\Gamma}\to X\) called a _logarithmic modification_. All logarithmic modifications arise in this way. The underlying scheme \(X_{\Gamma}\) has a locally closed stratification with strata \(V(\gamma)\) indexed by cones \(\gamma\) of \(\Gamma\). Dimension \(k\) cones index codimension \(k\) strata.
**Theorem 1.2.7** (Ulirsch, Tevelev).: Consider a logarithmic modification
\[\pi_{\Gamma}:X_{\Gamma}\to X.\]
The strict transform \(\pi_{\Gamma}^{!}Z\) of \(Z\) intersects each stratum of \(X_{\Gamma}\) in the expected dimension, that is
\[\dim(V(\gamma)\cap\pi_{\Gamma}^{!}Z)=\dim(Z)-\dim(\gamma)\]
if and only if the tropical model \(\Gamma\to\operatorname{Trop}(X)\) factors through the subdivision \(\mathscr{T}(Z)\to\operatorname{Trop}(X)\).
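By way of illustration, here is a standard example written in our own notation (we take the fan of \(\mathbb{P}^{2}\) to have rays \(e_{1}\), \(e_{2}\) and \(-e_{1}-e_{2}\)). Let \(X=\mathbb{P}^{2}\) with its toric boundary and \(Z=V(X_{1}+X_{2})\), so that \(|\operatorname{Trop}(Z)|\) is the union of the rays through \((1,1)\) and \((-1,-1)\) in \(|\operatorname{Trop}(X)|=\mathbb{R}^{2}\), and \(\mathscr{T}(Z)\) subdivides the cone \(\langle e_{1},e_{2}\rangle\) along the ray through \((1,1)\). The identity model does not factor through \(\mathscr{T}(Z)\): correspondingly \(Z\) passes through the zero dimensional stratum \(V(X_{1})\cap V(X_{2})\), which is excess dimension. The stellar subdivision at \((1,1)\), corresponding to blowing up this point, factors through \(\mathscr{T}(Z)\), and the strict transform of \(Z\) then meets every stratum in the expected dimension.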
#### 1.2.3. Examples of piecewise linear spaces
We record two examples of piecewise linear spaces that arise in nature.
**Example 1.2.8**.: Consider the cone \(\sigma\) in \(\mathbb{R}^{3}\) with rays
\[\{(1,0,0),(0,1,0),(0,0,1)\}.\]
Define a piecewise linear complex which is a subdivision of \(\sigma\) obtained by replacing the dense stratum \(\sigma^{o}\) with two strata: the ray in direction \((1,1,1)\) and its complement in \(\sigma^{o}\). See Figure 2. This is a piecewise linear subdivision of a certain cone in the fan of \(\mathbb{P}^{4}\) which appears when taking the tropical support in Example 0.4.2.
**Example 1.2.9**.: Let \(\sigma\) be the rational polyhedral cone complex whose maximal cone is \(\mathbb{R}^{2}_{\geq 0}\). This is simply the fan of \(\mathbb{A}^{2}\). The space of two labelled points \(p,q\) in \(\sigma\) bijects with points \((p_{1},p_{2},q_{1},q_{2})\in\mathbb{R}^{4}_{\geq 0}\). The first two coordinates specify the location of \(p\) and the second two give the location of \(q\).
We define a piecewise linear structure on \(\mathbb{R}^{4}_{\geq 0}\) by specifying a conical locally closed stratification of \(\mathbb{R}^{4}_{\geq 0}\). Two points \((p_{1},p_{2},q_{1},q_{2})\) and \((p_{1}^{\prime},p_{2}^{\prime},q_{1}^{\prime},q_{2}^{\prime})\) lie in the same stratum if and only if:
1. The points \((p_{1},p_{2})\) and \((p_{1}^{\prime},p_{2}^{\prime})\) lie in the interior of the same face of \(\sigma\). Similarly replacing \(p\) by \(q\).
2. We require \((p_{1},p_{2})=(q_{1},q_{2})\) if and only if \((p_{1}^{\prime},p_{2}^{\prime})=(q_{1}^{\prime},q_{2}^{\prime})\).
The resulting locally closed stratification is readily seen to be conical. The output of Construction 1.2.6 is not a cone complex. Indeed, this piecewise linear space has a stratum \(\kappa\) of dimension two corresponding to the locus where \(p=q\) lies in the maximal dimension face of \(\sigma\). This stratum does not lie in the closure of any stratum of dimension three. Since \(\kappa\) lies in the closure of a stratum of dimension four, we do not have a cone complex.
### Base change and flatness for piecewise linear complexes
Let \(\Theta\) be a cone complex.
#### 1.3.1. Fibre products
Proposition 1.3.1 is closely related to the discussion in [13, Section 2.2].
**Proposition 1.3.1**.: Fibre products exist in the category of piecewise linear complexes.
Note we may characterise a piecewise linear complex as a colimit over piecewise linear cones glued along inclusions of piecewise linear cones.
Proof.: The statement is local in the face topology so it suffices to consider a pre-fibre square of piecewise linear cones
\[\begin{CD}{}@.\mathcal{T}_{1}\\ @.@VVV\\ \mathcal{T}_{2}@>>>\mathcal{T}_{3}.\end{CD}\]
Write \(\mathfrak{s}_{i}=(N_{\mathfrak{s}_{i}},U_{\mathfrak{s}_{i}})\) for the maximal local cone of \(\mathcal{T}_{i}\). Thus \(\mathcal{T}_{i}\) is the data of a conical locally closed stratification of a closed subset of \(N_{\mathfrak{s}_{i}}\otimes\mathbb{R}\). There is an associated pre-fibre square of monoids whose fibre product is
\[N_{\mathfrak{s}_{1}}\times_{N_{\mathfrak{s}_{3}}}N_{\mathfrak{s}_{2}}=N.\]
Taking the common refinement of the pullbacks of the locally closed stratifications \(\mathcal{P}_{\mathcal{T}_{1}}\) and \(\mathcal{P}_{\mathcal{T}_{2}}\) defines a conical locally closed stratification of \(N\otimes\mathbb{R}\) and thus a piecewise linear cone. Note any conical locally closed stratification pulls back to a conical locally closed stratification: the linear (in)equalities cutting each stratum out pull back to linear (in)equalities. It is easy to see this piecewise linear cone is the fibre product.
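For a toy instance of this proof (our own example): take \(\mathcal{T}_{3}=\sigma=\mathbb{R}^{2}_{\geq 0}\) and let \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) be the subdivisions of \(\sigma\) along the rays through \((1,1)\) and \((1,2)\) respectively. All three lattices are \(\mathbb{Z}^{2}\) and the maps on lattices are the identity, so \(N=\mathbb{Z}^{2}\), and the fibre product is the common refinement: the subdivision of \(\sigma\) with rays through \((1,0)\), \((1,1)\), \((1,2)\) and \((0,1)\).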
#### 1.3.2. Combinatorial flatness for piecewise linear spaces
To make sense of combinatorial flatness for morphisms of piecewise linear spaces we require Construction 1.3.2.
Let \(\sigma\) be a cone, \(\Theta\) a cone complex and consider the cone complex \(\sigma\times\Theta\). Let \(\mathcal{P}^{o}\) be a locally closed stratification of the open subset \(|\sigma|^{o}\times|\Theta|\) which refines the locally closed stratification pulled back from \(\Theta\) and such that there exists a subdivision of \(\sigma\times\Theta\) for which each stratum of \(\mathcal{P}^{o}\) is a union of interiors of cones.
Figure 2. Examples of piecewise linear complexes can be obtained by taking the cones over the above (subdivided) triangles. Left is the piecewise linear space discussed in Examples 0.4.2 and 1.2.8. Right gives a different example. The only examples of piecewise linear complexes of dimension two which are subdivisions of cone complexes are cone complexes.
**Construction 1.3.2**.: We define a locally closed stratification \(\mathcal{P}\) of \(|\sigma\times\Theta|\). Two points lie in the same locally closed stratum of a locally closed stratification \(\mathcal{P}^{\prime}\) if they lie in the interior of the same cone of \(\sigma\times\Theta\) and they lie in the closure of precisely the same strata of \(\mathcal{P}^{o}\). Define the final stratification \(\mathcal{P}\) by replacing strata of \(\mathcal{P}^{\prime}\) with their connected components. \(\Diamond\)
Observe there is a cone complex structure on \(\sigma\times\Theta\) such that each stratum of \(\mathcal{P}\) is a union of cones. We need a notion of continuity for families of piecewise linear spaces. Our definition generalises the definition of combinatorial flatness, discussed for example in [13, proof of Lemma 3.3.5].
**Definition 1.3.3**.: Fix \(\sigma\) a rational polyhedral cone and \(\Theta\) a cone complex. Consider a subdivision \(\mathscr{T}\) of \(\Theta\times\sigma\) and denote the composition
\[p:\mathscr{T}\to\Theta\times\sigma\to\sigma.\]
1. We say \(p\) satisfies axiom F1 if for every stratum \(\kappa\) in \(\mathcal{P}_{\mathscr{T}}\) the image \(|p|(\kappa)\) lies in \(\mathcal{P}_{\sigma}\).
2. Consider faces \(\tau_{1}\leq\tau_{2}\) of \(\sigma\) and denote the restriction of \(p\) to the preimage of \(\tau_{i}\) by \[p_{i}:\mathscr{T}_{i}\to\Theta\times\tau_{i}\to\sigma.\] The restriction of \(\mathcal{P}_{\mathscr{T}_{i}}\) to the preimage of the interior of \(\tau_{i}\) defines a locally closed stratification \(\mathcal{P}^{o}_{\mathscr{T}_{i}}\). We say \(p\) satisfies axiom F2 if for all choices of \(\tau_{1}\leq\tau_{2}\) the output of \(\mathcal{P}^{o}_{\mathscr{T}_{2}}\) under Construction 1.3.2 restricts on the preimage of the interior of \(\tau_{1}\) to \(\mathcal{P}^{o}_{\mathscr{T}_{1}}\).
We say \(p\) is _combinatorially flat_ if it satisfies axioms F1 and F2.
Note if \(\mathscr{T}\) is a cone complex then axiom F1 simply asks the image of each cone is a cone and axiom F2 is automatic. We simplify checking axiom F2 with a lemma.
**Lemma 1.3.4**.: Assume
\[p:\mathscr{T}\to\Theta\times\sigma\to\sigma\]
satisfies axiom F1. Assume moreover that axiom F2 holds for \(\tau_{2}=\sigma\) and any choice of \(\tau_{1}\). Then \(p\) is combinatorially flat.
In the sequel for \(\tau\) a face of \(\sigma\) we write \(\mathcal{P}_{\mathscr{T}}(\tau)\) for the subset of \(\mathcal{P}_{\mathscr{T}}\) mapped under \(p\) to the interior of \(\tau\).
Proof.: If the lemma were false then we could find a pair of faces \(\tau_{1}\leq\tau_{2}\) of \(\sigma\) and strata \(\kappa,\kappa^{\prime}\in\mathcal{P}_{\mathscr{T}}(\tau_{1})\) such that every stratum \(\kappa^{\prime\prime}\in\mathcal{P}_{\mathscr{T}}(\tau_{2})\) contains \(\kappa\) in its closure if and only if it contains \(\kappa^{\prime}\) in its closure.
Since F2 holds for \(\tau_{2}=\sigma\) and \(\tau_{1}\) there exists a stratum \(\hat{\kappa}\) in \(\mathcal{P}_{\mathscr{T}}(\sigma)\) containing \(\kappa^{\prime}\) in its closure but not containing \(\kappa\) (swapping \(\kappa,\kappa^{\prime}\) if needed). Write \(\overline{\kappa}\) for the closure of \(\hat{\kappa}\) in \(\Theta\times\sigma\). We will show that \(\overline{\kappa}\cap p^{-1}(|\tau_{2}^{o}|)\) contains \(\kappa^{\prime}\) in its closure. Note \(\kappa\) cannot lie in the closure of \(\overline{\kappa}\cap p^{-1}(|\tau_{2}^{o}|)\) as it does not lie in \(\overline{\kappa}\). This completes the proof because certainly \(\overline{\kappa}\cap p^{-1}(|\tau_{2}^{o}|)\) is a union of strata so by the definition of piecewise linear space, \(\kappa^{\prime}\) must lie entirely within the closure of one such stratum.
It remains to show that the closure in \(\Theta\times\sigma\) of \(\overline{\kappa}\cap p^{-1}(|\tau_{2}^{o}|)\) contains \(\kappa^{\prime}\). Without loss of generality replace \(\hat{\kappa}\) with a stratum of \(\mathcal{P}_{\mathscr{T}}(\sigma)\) lying in the closure of \(\hat{\kappa}\) which is minimal with the property that its closure contains \(\kappa^{\prime}\). We think of \(\hat{\kappa}\) as a subset of \(\mathbb{R}^{n}\) for some \(n\), obtained by removing polyhedral cones \(\alpha_{1},...,\alpha_{m}\) from the intersection of a polyhedral cone \(\alpha_{0}\) with \(p^{-1}(|\sigma|^{o})\). Note \(\alpha_{0}\) necessarily surjects to \(\sigma\) and since axiom F2 holds for morphisms of cones we know \(\kappa^{\prime}\) lies in the closure of \(\alpha_{0}\cap p^{-1}(|\tau_{2}^{o}|)\). To complete the proof note minimality of \(\hat{\kappa}\) means \(\kappa^{\prime}\) does not lie in the closure of \(\alpha_{i}\) for any \(i>0\). We are done because \(\kappa^{\prime}\) lies in the closure of \(\alpha_{0}\cap p^{-1}(|\tau_{2}^{o}|)\), but not in the closure of any other \(\alpha_{i}\).
#### 1.3.3. Combinatorial flatness and base change
Let \(\tau\to\sigma\) be a morphism of rational polyhedral cones. Let \(\mathscr{T}\) be a subdivision of \(\sigma\times\Theta\) and consider the following diagram where all squares are cartesian.
\[\begin{CD}\mathscr{T}_{\tau}@>>>\tau\times\Theta@>>>\tau\\ @VVV@VVV@VVV\\ \mathscr{T}@>>>\sigma\times\Theta@>>>\sigma.\end{CD}\]

If the composition \(\mathscr{T}\to\sigma\times\Theta\to\sigma\) is combinatorially flat then so is its base change \(\mathscr{T}_{\tau}\to\tau\times\Theta\to\tau\). In this sense combinatorial flatness is stable under pullback; this is the property used in the proof of Lemma 2.1.2.
### Piecewise linear spaces

We now define a _piecewise linear space_ to be a geometric space in the context \((\mathbf{PLCC},\tau_{\mathrm{str}},\mathbb{S})\), see [10, Definition 2.7]. A morphism of piecewise linear spaces
\[f:\mathcal{T}\to\mathcal{S}\]
is called a _subdivision_ (respectively a _tropical model_) if, given any morphism from a piecewise linear cone \(\mathcal{S}^{\prime}\to\mathcal{S}\), the pullback of \(f\) is a subdivision (respectively a tropical model) of piecewise linear complexes.
### Properties of piecewise linear spaces
In this section we prove basic properties of piecewise linear spaces.
#### 1.5.1. Subdividing piecewise linear spaces
A _subcomplex_ of a piecewise linear complex \(\mathcal{S}\) is a morphism of piecewise linear complexes \(\mathcal{T}\to\mathcal{S}\) which is an isomorphism to its image. A subcomplex of a piecewise linear space is a morphism of piecewise linear spaces such that the base change along any morphism from a piecewise linear complex is a subcomplex in the above sense.
**Proposition 1.5.1**.: Every piecewise linear space \(\mathscr{G}\) admits a tropical model \(\Sigma\to\mathscr{G}\). Moreover if \(\mathscr{G}\leq\mathscr{G}^{\prime}\) is a subcomplex and \(\Sigma\to\mathscr{G}\) a tropical model then there exists a tropical model \(\Sigma^{\prime}\to\mathscr{G}^{\prime}\) such that the following diagram commutes:

\[\begin{CD}\Sigma@>>>\mathscr{G}\\ @VVV@VVV\\ \Sigma^{\prime}@>>>\mathscr{G}^{\prime}.\end{CD}\]

In particular the vertical morphisms are both inclusions of subcomplexes.
A basic example of a piecewise linear space is obtained by defining the free action of a finite group \(G\) on a piecewise linear cone complex \(\mathcal{T}_{G}\). This induces a groupoid presentation of \([\mathcal{T}_{G}/G]\). A _quotient piecewise linear cone_ is defined to be a piecewise linear space which may be written \([\mathcal{T}_{G}/G]\) such that the image of some stratum of \(\mathcal{P}_{\mathcal{T}_{G}}\) is dense. Any piecewise linear space may be obtained by taking a colimit over a diagram of quotient piecewise linear cones where morphisms are inclusions of subcomplexes. This follows from the argument of [14, TAG 0262].
**Lemma 1.5.2**.: The following local analogue of Proposition 1.5.1 holds.

1. Let \(\mathscr{G}\) be a quotient piecewise linear cone. There exists a tropical model \(\Sigma\to\mathscr{G}\).

2. Consider a subcomplex \[\Sigma^{\prime}\hookrightarrow\mathscr{G}\] where \(\Sigma^{\prime}\) is a cone complex. There is a tropical model \[\Sigma\to\mathscr{G}\] extending the original embedding of cone complexes.
Proof of Proposition 1.5.1.: The subdivision can be constructed recursively on the dimension of each quotient piecewise linear cone in our colimit, starting with cones of dimension zero (where no subdivision is needed). To handle cones of dimension \(k\) subdivide each quotient piecewise linear cone using Lemma 1.5.2 without further subdividing any strata of dimension less than \(k\). We avoid further subdivision of lower dimensional strata by including the previously constructed subdivision of these strata in the data of \(\Sigma^{\prime}\).
Proof of Lemma 1.5.2.: We induct on the dimension of \(\mathcal{G}\). For the base case note dimension zero cones are a single point and there is nothing to do. For the inductive step choose a presentation
\[\mathscr{G}_{G}\to\mathscr{G}=[\mathscr{G}_{G}/G]\]
for \(G\) a finite group. To specify a tropical model of \(\mathscr{G}\) it suffices to specify a \(G\) equivariant tropical model of \(\mathscr{G}_{G}\). The subcomplex \(\Sigma^{\prime}\to\mathscr{G}\) pulls back to specify a G equivariant subcomplex \(\Sigma^{\prime}_{G}\to\mathscr{G}_{G}\). There is a subcomplex \(\partial\mathscr{G}\) of \(\mathscr{G}\) obtained as the quotient \(\partial\mathscr{G}_{G}\to\partial\mathscr{G}=[\partial\mathscr{G}_{G}/G]\) where \(\partial\mathscr{G}_{G}\) is the subcomplex of \(\mathscr{G}_{G}\) obtained by discarding all strata of maximal dimension in \(\mathcal{P}_{\mathscr{G}_{G}}\). Define \(\partial\Sigma^{\prime}\) as the preimage of \(\partial\mathscr{G}\) in \(\Sigma^{\prime}\). By the inductive hypothesis find a subdivision \(\partial\Sigma\to\partial\mathscr{G}\) extending \(\partial\Sigma^{\prime}\). Write \(\partial\Sigma_{G}\) for the \(G\) equivariant subcomplex of \(\mathscr{G}_{G}\) obtained by pullback.
To complete the inductive step fix a maximal stratum of \(\mathscr{G}_{G}\) and let \(\mathscr{T}\) be the corresponding piecewise linear cone. Write \(\Sigma^{\prime}_{\mathscr{T}},\partial\Sigma_{\mathscr{T}}\) for the restriction to \(\mathscr{T}\) of \(\Sigma^{\prime}_{G},\partial\Sigma_{G}\) respectively. We think of \(\mathscr{T}\) as a subdivision of \(N_{\mathfrak{t}}\otimes\mathbb{R}\) where the local cone of the dense stratum of \(\mathscr{T}\) is \((N_{\mathfrak{t}},U_{\mathfrak{t}})\). We may assume that \(\Sigma^{\prime}_{\mathscr{T}}\) contains \(\partial\Sigma_{\mathscr{T}}\) as a subcomplex.

We now have in particular the data of a cone complex \(\Sigma^{\prime}_{\mathscr{T}}\) embedded in \(N_{\mathfrak{t}}\otimes\mathbb{R}\). Since every toric variety admits an equivariant compactification [10], we can extend this to a complete fan on \(N_{\mathfrak{t}}\otimes\mathbb{R}\). Restricting this fan to the closure of \(U_{\mathfrak{t}}\) defines a tropical model \(\Sigma_{\mathscr{T}}\to\mathscr{T}\). This tropical model defines a subdivision of \(\mathscr{G}_{G}\) by subdividing \(\mathscr{T}\) and leaving the remainder unchanged. This subdivision need not be equivariant; however, applying the _averaging trick_ explained in [11, Lemma 3.3.7] we obtain an equivariant tropical model of \(\mathscr{G}_{G}\). Since \(\partial\Sigma_{G}\) and \(\Sigma^{\prime}_{G}\) are \(G\) equivariant, the averaging trick does not subdivide them.
#### 1.5.2. Sheaves on \(\mathbf{RPC}\)
Piecewise linear spaces are a geometric way to understand a particular collection of sheaves on \(\mathbf{RPC}\). We denote the category of sheaves on \(\mathbf{RPC}\) with the face topology by \(\mathbf{Sh}(\mathbf{RPC})\).
**Proposition 1.5.3**.: There are fully faithful embeddings of categories
\[\mathbf{RPC}\hookrightarrow\mathbf{PLS}\hookrightarrow\mathbf{Sh}(\mathbf{RPC}).\]
Proof.: \(\mathbf{RPC}\) to \(\mathbf{PLS}\). The assignment of Example 1.2.4 defines a fully faithful embedding
\[\mathbf{RPC}\to\mathbf{PLS}.\]
Indeed Definition 1.2.2 specialises to the definition of a morphism of cone complexes. The topology \(\tau_{\mathrm{str}}\) restricts to the face topology.
\(\mathbf{PLS}\) to \(\mathbf{Sh}(\mathbf{RPC})\). We denote the image of a piecewise linear space \(\mathscr{S}\) under the Yoneda embedding by \(H_{\mathscr{S}}\), a sheaf on \(\mathbf{PLS}\). We denote the restriction of \(H_{\mathscr{S}}\) to \(\mathbf{RPC}\) by \(h_{\mathscr{S}}\). Since the topology on \(\mathbf{RPC}\) is pulled back from the topology on the category of piecewise linear spaces, \(h_{\mathscr{S}}\) is a sheaf. The assignment
\[\mathscr{S}\to h_{\mathscr{S}}\]
defines our second map. We now verify this map is fully faithful.
Given two morphisms in \(\mathbf{PLS}\)
\[\varphi_{1},\varphi_{2}:\mathscr{S}_{1}\to\mathscr{S}_{2}\]
there are tropical models
\[\Sigma_{1}\to\mathscr{S}_{1},\Sigma_{2}\to\mathscr{S}_{2}\]
for which there are commutative squares

\[\begin{CD}\Sigma_{1}@>{\varphi^{\prime}_{i}}>>\Sigma_{2}\\ @VVV@VVV\\ \mathscr{S}_{1}@>{\varphi_{i}}>>\mathscr{S}_{2}.\end{CD}\]

Note \(\varphi_{1}=\varphi_{2}\) if and only if \(\varphi^{\prime}_{1}=\varphi^{\prime}_{2}\) and since the Yoneda embedding \(\mathbf{RPC}\hookrightarrow\mathbf{Sh}(\mathbf{RPC})\) is faithful, our functor is faithful.
We verify our embedding is full. Since tropical models are monomorphisms and the definition of morphism is local on cones, it suffices to consider a natural transformation \(h_{\sigma}\to h_{\mathscr{T}}\) where \(\sigma\) is a cone complex consisting of a single cone and its faces. The image of the identity under \(h_{\sigma}(\sigma)\to h_{\mathscr{T}}(\sigma)\) defines the morphism from \(\sigma\) to the piecewise linear space \(\mathscr{T}\).
#### 1.5.3. Prorepresentability and piecewise linear spaces
Proposition 1.5.3 implies the functor \(h_{\mathscr{S}}\) is represented by a cone complex if and only if \(\mathscr{S}\) is a cone complex. The next lemma shows in general a piecewise linear space is prorepresentable by cone complexes. We denote the system of tropical models of a piecewise linear complex \(\mathscr{S}\) by \(S_{\mathscr{S}}\).
**Lemma 1.5.4**.: There is an equality of sheaves on **RPC** with the face topology
\[\varinjlim_{\Sigma\in S_{\mathscr{S}}}h_{\Sigma}=h_{\mathscr{S}}.\]
Proof.: There is a compatible system of morphisms from \(S_{\mathscr{S}}\) to \(\mathscr{S}\) so the universal property of colimit defines a morphism
\[\varinjlim_{S_{\mathscr{S}}}h_{\Sigma}\to h_{\mathscr{S}}.\]
To prove the lemma we write down the inverse morphism. Let \(\sigma\) be the piecewise linear complex associated to an element of **RPC**. Any morphism \(\sigma\to\mathscr{S}\) factors through a tropical model \(\Sigma\to\mathscr{S}\) by Proposition 1.5.1. In this way a map from \(\sigma\) to \(\mathscr{S}\) defines an element of \(h_{\Sigma}(\sigma)\). Composing with the canonical map \(h_{\Sigma}\to\varinjlim_{\Sigma\in S_{\mathscr{S}}}h_{\Sigma}\) we have defined an element of \(\varinjlim_{\Sigma\in S_{\mathscr{S}}}h_{\Sigma}(\sigma)\). The element is readily seen to be independent of the choice of \(\Sigma\). In this way we define a morphism of sheaves

\[h_{\mathscr{S}}\to\varinjlim_{\Sigma\in S_{\mathscr{S}}}h_{\Sigma}.\]
Our two maps are inverse.
## 2. The space of tropical supports
Logarithmic modifications of the logarithmic Quot space are controlled by a tropical object called the _moduli space of tropical supports_. This tropical object represents a moduli problem on the category of rational polyhedral cones. The moduli space of tropical supports is analogous to a Hilbert scheme. The role of a projective variety is performed by a fixed cone complex \(\Theta\) and the role of a flat family of closed subschemes is performed by a combinatorially flat subdivision of piecewise linear spaces. To orient the reader we provide a goal in the introduction to each of Sections 2 to 7.
**Goal 2.0.1**.: Define the moduli space of tropical supports by specifying a sheaf on the category of rational polyhedral cones. Prove this sheaf is the functor of points of a piecewise linear space.
### Moduli of tropical supports
In this section we define our tropical moduli problem. Let \(\Theta\) be a cone complex.
#### 2.1.1. Definition of moduli problem
We now define our moduli problem.
**Definition 2.1.1**.: A _family of tropical supports_ over a cone \(\sigma\) is a subdivision \(\mathscr{T}\) of \(\Theta\times\sigma\) combinatorially flat over \(\sigma\).
Define the _moduli space of tropical supports_, denoted \(\mathsf{Supp}(\Theta)\), to be the sheaf on **RPC** assigning to a cone \(\sigma\) the collection of families of tropical supports over \(\sigma\).
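To illustrate the definition, here is a minimal toy example. Take \(\Theta=\sigma=\mathbb{R}_{\geq 0}\), write \(x\) for the coordinate on \(\Theta\) and \(y\) for the coordinate on \(\sigma\), and let \(\mathscr{T}\) be the subdivision of \(\Theta\times\sigma=\mathbb{R}_{\geq 0}^{2}\) along the ray spanned by \((1,1)\), with strata
\[\{0\},\quad\{x=0,\,y>0\},\quad\{0<x<y\},\quad\{x=y>0\},\quad\{0<y<x\},\quad\{y=0,\,x>0\}.\]
Each stratum maps onto a stratum of \(\sigma\), and one checks the remaining axioms of combinatorial flatness directly, so \(\mathscr{T}\) is a family of tropical supports: over a point \(t>0\) of \(\sigma\) the fibre is \(\Theta\) subdivided at the point \(t\), while over \(0\) the vertex collides with the cone point. We return to this toy family below.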
**Lemma 2.1.2**.: The moduli space of tropical supports is a well defined sheaf.
Proof.: Section 1.3.3 shows the moduli space of tropical supports is a pre-sheaf. Descent properties with respect to the face topology are immediate. In particular axioms F1 and F2 refer only to faces of a fixed cone. Given a morphism of cones \(\sigma\to\tau\) and a family of tropical supports over \(\tau\), we obtain a family of tropical supports over \(\sigma\) by taking a fibre product.
### The piecewise linear space of tropical supports
In this section we prove the moduli space \(\mathsf{Supp}(\Theta)\) is a piecewise linear space. We routinely confuse the space of tropical supports with its functor of points. Example 1.2.9 demonstrated the space of tropical supports is rarely a cone complex.
**Theorem 2.2.1**.: There is a diagram of piecewise linear spaces
\[\mathcal{X}\to\Theta\times\mathsf{Supp}(\Theta)\to\mathsf{Supp}(\Theta)\]
such that given a family of tropical supports on \(\Theta\) over a rational polyhedral cone \(\sigma\), say
\[\mathcal{X}_{\sigma}\to\Theta\times\sigma\to\sigma,\]
there is a unique morphism \(\sigma\to\mathsf{Supp}(\Theta)\) along which \(\mathcal{X}\) pulls back to \(\mathcal{X}_{\sigma}\).
#### 2.2.1. PL polyhedral structures
Given a morphism of cone complexes \(\Theta\times\sigma\to\sigma\) there is a correspondence between tropical models of \(\Theta\times\sigma\) and polyhedral subdivisions of \(\Theta\) with edge lengths metrised by \(\sigma\). We now explain the piecewise linear analogue of this correspondence.
Consider a combinatorially flat piecewise linear subdivision \(\mathcal{S}\to\Theta\times\mathbb{R}_{\geq 0}\) and write \(\pi_{2}:\Theta\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) for the second projection map.
**Definition 2.2.2**.: A _PL polyhedral structure_ on \(\Theta\) is a locally closed stratification \(\mathcal{T}_{1}\) of \(|\Theta|\) for which there exists a subdivision \(\mathcal{S}\) as in the previous paragraph such that \(\mathcal{P}_{\mathcal{S}}\) pulls back along the inclusion
\[|\Theta|\times\{1\}\hookrightarrow|\Theta\times\mathbb{R}_{\geq 0}|\]
to \(\mathcal{T}_{1}\).
Observe a family of tropical supports on \(\Theta\), written
\[\mathcal{T}\xrightarrow{\pi}\Theta\times\sigma\to\sigma\]
specifies for each point \(p\) of \(|\sigma|\) a PL polyhedral structure \(\mathcal{T}_{p}=\mathcal{P}_{\mathcal{T}}\cap|\pi^{-1}(p)|\). Points of the topological realisation of the piecewise linear space representing \(\mathsf{Supp}(\Theta)\) biject with PL polyhedral structures of \(\Theta\).
Note if \(\mathcal{T}\) is a cone complex then \(\mathcal{T}_{p}\) is a polyhedral subdivision. A stratum of a PL polyhedral structure on \(\Theta\) consisting of a single point is called a _finite corner_. In the special case \(\mathcal{T}\) is a polyhedral structure on \(\Theta\), finite corners are the vertices.
#### 2.2.2. Tropical degree
Given a family of tropical supports \((\mathcal{T},\varphi)\) on \(\Theta\) over a cone \(\sigma\) we obtain a piecewise linear subdivision of \(\Theta\) by pulling back \(\mathcal{T}\) over the cone point \(0\) of \(\sigma\). We call the resulting tropical support \((\Delta,\varphi_{\Delta})\) over a point the _tropical degree_. Clearly tropical degree is constant in families.
The tropical degree of a PL polyhedral structure \(\mathscr{T}_{p}\) is the tropical degree of any family of tropical supports over some cone \(\sigma\) for which \(\mathscr{T}_{p}=\mathscr{T}_{q}\) for some point \(q\) in \(\sigma\). The _infinite corners_ of \(\mathscr{T}_{p}\) are the dimension one strata in \(\mathcal{P}_{\Delta}\), where \((\Delta,\varphi_{\Delta})\) is the tropical degree of \(\mathscr{T}_{p}\). The infinite corners are thus the minimal unbounded strata.
The set of _corners_ of a polyhedral structure is the union of the finite corners and the set of infinite corners.
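In the toy family above, the fibre over \(1\in\sigma\) is the PL polyhedral structure on \(\Theta=\mathbb{R}_{\geq 0}\) with strata
\[\{0\},\quad(0,1),\quad\{1\},\quad(1,\infty);\]
its finite corners are \(\{0\}\) and \(\{1\}\). The tropical degree is the fibre over \(0\), the subdivision of \(\Theta\) with strata \(\{0\}\) and \((0,\infty)\), so there is a single infinite corner corresponding to the unbounded stratum, and the corner set has three elements.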
#### 2.2.3. Discrete data - a first shot at combinatorial type
We construct the piecewise linear space \(\mathsf{Supp}\) by gluing piecewise linear cones. The gluands are indexed by data called the _combinatorial type_ of a tropical support. It is difficult to write down discrete data specifying the combinatorial type directly. For this reason we now define the _discrete data_ of a tropical support. Fixing discrete data specifies a finite disjoint union of combinatorial types.
Let \(\mathscr{T}\to\Theta\times\sigma\) be a family of tropical supports on \(\Theta\). Choose a point \(q\) and let \(\mathscr{T}_{q}\) be the associated PL polyhedral structure.
**Definition 2.2.3**.: The _discrete data of a PL polyhedral structure_ on \(\Theta\) is the following information.
1. The set \(C\) of corners of \(\mathscr{T}_{q}\).
2. An assignment of corners of \(\mathscr{T}_{q}\) to cones of \(\Theta\). A corner \(p\) is taken to the cone \(\theta\) for which \(p\) lies in the preimage of the interior of \(\theta\) under the projection map to \(\Theta\).
3. A subset \(W\) of the power set of \(C\). For a stratum \(\kappa\) in \(\mathcal{P}_{\mathscr{T}}\) which maps to the interior of \(\sigma\) we define \(S_{\kappa}\) to be the collection of corners in the closure of \(\kappa\). The set \(W\) is the set of \(S_{\kappa}\) for \(\kappa\) in \(\mathcal{P}_{\mathscr{T}}\).
4. For each set \(S_{\kappa}\), a subgroup of \(\theta^{\mathrm{gp}}\), where \(\theta\) is the cone of \(\Theta\) containing the interior of \(\kappa\). This subgroup is generated by \(\{s-s^{\prime}\mid s,s^{\prime}\in N_{S_{\kappa}}\}\).
The discrete data of PL polyhedral structures \(\mathscr{T}_{1}\) and \(\mathscr{T}_{2}\) are the same if there is a bijection of finite corner sets \(\varphi:C_{1}\to C_{2}\) satisfying the hypotheses
1. The corners \(\varphi(c)\) and \(c\) are assigned the same cone of \(\Theta\).
2. The map \(\varphi\) induces a well defined bijection \[W_{1}\to W_{2}\] which preserves the associated subgroups of \(\theta^{\mathrm{gp}}\).
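Continuing the toy family, the discrete data of the fibre over \(1\in\sigma\) may be recorded as follows. The corner set is \(C=\{c_{0},c_{1},c_{\infty}\}\) with \(c_{0},c_{1}\) the finite corners at \(0\) and \(1\) and \(c_{\infty}\) the infinite corner; \(c_{0}\) is assigned the cone point of \(\Theta\) while \(c_{1}\) and \(c_{\infty}\) are assigned the ray; and
\[W=\big\{\{c_{0}\},\ \{c_{0},c_{1}\},\ \{c_{1}\},\ \{c_{1},c_{\infty}\}\big\},\]
one set for each stratum of \(\mathcal{P}_{\mathscr{T}}\) mapping to the interior of \(\sigma\). We do not spell out the subgroups of item (4) in this one dimensional illustration.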
We now verify that discrete data is constant on the interiors of cones.
**Proposition 2.2.4**.: Consider a family of tropical supports
\[\mathscr{T}\to\Theta\times\sigma\to\sigma\]
and let \(p,q\) be points in the interior of \(|\sigma|\). The discrete data of \(\mathscr{T}_{p}\) and \(\mathscr{T}_{q}\) coincide.
Proof.: It suffices to consider the finite corner sets. Finite corners of \(\mathscr{T}_{p}\) are paired with finite corners of \(\mathscr{T}_{q}\) when they lie in the same stratum of \(\mathcal{P}_{\mathscr{T}}\); this induces a bijection \(\varphi\) from the finite corners of \(\mathscr{T}_{p}\) to the finite corners of \(\mathscr{T}_{q}\) which preserves the assigned cone of \(\Theta\). The second property in the definition of isomorphism of discrete data follows from continuity.
**Lemma 2.2.5**.: Strata of a piecewise linear subdivision of a cone complex \(\Theta\times\sigma\) are of the form
\[\bigcup\operatorname{Int}(\operatorname{Conv}(C_{i}))\]
for strictly convex sets \(C_{i}\) of rays.
Proof.: Consider a stratum \(s\) equipped with a homeomorphism to a local cone \(\mathfrak{s}_{1}\), itself an open subset of \(\mathbb{R}^{k}\) for some \(k\). By part (1) of the definition of piecewise linear complexes, the closure of \(s\) is a union of strata. The convex hull of \(s\) is the convex hull of finitely many points \(p_{1},...,p_{k}\). All points \(p_{i}\) are strata of \(\mathcal{P}_{\mathcal{S}}\); this follows by induction on dimension. The stratum \(s\) is an open subset of the interior of \(\operatorname{Conv}(p_{1},...,p_{k})\). Necessarily any such open set is the complement of closed sets which are themselves convex hulls of rays.
**Corollary 2.2.6**.: A family of tropical supports on \(\Theta\) over a cone \(\sigma\)
\[\mathscr{T}\to\Theta\times\sigma\to\sigma\]
is uniquely specified by two pieces of data.
1. The discrete data of \(\mathscr{T}_{p}\) for \(p\) any point in the interior of \(\sigma\).
2. The elements of \(\mathcal{P}_{\mathscr{T}}\) corresponding to finite corners of \(\mathscr{T}_{p}\).
Proof.: Such a family of tropical supports is specified by the data of a locally closed stratification of
\[|\Theta\times\sigma|.\]
By Lemma 2.2.5 such a locally closed stratification is specified by the corners and which convex hulls are the same strata. This latter datum is specified by the discrete data.
#### 2.2.4. Local model
Let \(D\) be a discrete datum. We say two finite corners \(c_{1},c_{2}\) of \(D\) are _similar_ if there is an isomorphism of discrete data with associated bijection of finite corner sets \(\varphi\) satisfying \(\varphi(c_{1})=c_{2}\). Suppose PL polyhedral structures with discrete data \(D\) have \(n\) finite corners and let \(\operatorname{Sym}(D)\) be the subgroup of the symmetric group on \(n\) letters which maps each corner to a similar corner. The position of each finite corner gives a map of sets
\[\Phi:\{\operatorname{PL-polyhedral\ structures\ with\ discrete\ data}\ D\}\to|\Theta|^{n}/ \operatorname{Sym}(D).\]
Our strategy is to upgrade this map of sets to a map of piecewise linear spaces.
**Proposition 2.2.7**.: There is a disjoint union of local cones \(\sqcup_{i}\mathfrak{s}_{i}\) and a \(\operatorname{Sym}(D)\) equivariant embedding
\[\bigsqcup_{i}\mathfrak{s}_{i}\hookrightarrow\Theta^{n}\]
such that the image of the induced inclusion of topological realisations
\[\Big|\bigsqcup_{i}\mathfrak{s}_{i}\Big|/\operatorname{Sym}(D)\hookrightarrow|\Theta|^{n}/\operatorname{Sym}(D)\]
is the image of \(\Phi\). Moreover the image is disjoint from the big diagonal of \(|\Theta|^{n}\).
Proof.: Write \(r:|\Theta^{n}|\to|\Theta|^{n}/\operatorname{Sym}(D)\) for the quotient map. We will show \(r^{-1}(\operatorname{Im}(\Phi))\) defines a finite union of local cones. Since the action of \(\operatorname{Sym}(D)\) preserves which cone a point lies in, there is a sequence of cones \(\sigma_{1},...,\sigma_{n}\) of \(\Theta\) such that \(\mathscr{U}_{D}=r^{-1}(\operatorname{Im}(\Phi))\) lies in the topological realisation of the product
\[\sigma_{1}\times\sigma_{2}\times...\times\sigma_{n}\subset\Theta^{n}.\]
The product of cones is a cone and thus we may consider
\[\sigma=\sigma_{1}\times\sigma_{2}\times...\times\sigma_{n}\subset N\]
for some finitely generated free abelian group \(N\).
The image of \(\Phi\) is not the whole of \(\sigma\). The third and fourth points in the discrete data cut out a subset. The fourth piece of data defines linear equalities. We thus determine a (saturated) subgroup \(N_{\mathsf{s}}\subset N\). We will take \(N_{\mathsf{s}}\) to be the torsion free abelian group in the definition of the local cones we construct.
We are left to identify open subsets \(U_{\mathfrak{s}_{i}}\) of \(N_{\mathfrak{s}}\otimes\mathbb{R}\). The set \(W\) imposes an open condition. Indeed fixing which sets lie in the closure of a stratum is an open condition: we are asking that no corner lies in the closure of a stratum it should not lie in the closure of. Thus we have specified an open subset \(U_{\mathfrak{s}}\) of \(N_{\mathfrak{s}}\otimes\mathbb{R}\). The open subset is not obviously connected; set \(U_{\mathfrak{s}_{i}}\) to be its connected components.
We define \(\mathfrak{s}_{i}=(N_{\mathfrak{s}},U_{\mathfrak{s}_{i}})\) and claim we have specified local cones. By induction on dimension of \(\Theta\) we know the closed subset removed to define \(U_{\mathfrak{s}_{i}}\) is the support of a cone complex, and thus \(U_{\mathfrak{s}}\) is the union of interiors of cones.
We say PL subdivisions \(\mathscr{T}_{p}\) and \(\mathscr{T}_{q}\) are of the same _combinatorial type_ if we can take \(p,q\) to lie in the same \(\mathfrak{s}_{i}\). Write \([\mathscr{T}_{p}]=[\mathscr{T}_{q}]\) for the combinatorial type of \(\mathscr{T}_{p}\).
Associated to any local cone \(\mathfrak{s}=\mathfrak{s}_{i}\) is a piecewise linear complex which we now describe. Take the closure of \(\mathfrak{s}\) in \(\mathbb{R}^{k}\). This topological space has a conical locally closed stratification with two strata, one of which is \(\mathfrak{s}\). The output of Construction 1.2.6 is a piecewise linear cone \(\mathscr{S}_{[\mathscr{T}]}\).
We must subdivide the piecewise linear complex \(\mathscr{S}_{[\mathscr{T}]}\) without subdividing its dense stratum. This is because multiple combinatorial types may appear in a single boundary stratum. To obtain the right subdivision we need to construct a universal family. In the sequel whenever \(\mathscr{T}\) has discrete data \(D\) we write \(\operatorname{Sym}(D)=\operatorname{Sym}([\mathscr{T}])\).
#### 2.2.5. Universal family over local model
We now construct the universal family associated to \(\mathscr{S}_{[\mathscr{T}]}\) as a subdivision of \(\mathscr{S}_{[\mathscr{T}]}\times\Theta\). In light of Construction 1.3.2 it suffices to construct a conical locally closed stratification \(\mathcal{P}_{uni}\) of
\[|\mathscr{S}_{[\mathscr{T}]}\times\Theta|\subset|\Theta^{n}\times\Theta|.\]
This locally closed stratification, and thus the universal family we construct, is \(\operatorname{Sym}([\mathscr{T}])\) equivariant.
We now explain how to construct \(\mathcal{P}_{uni}\). The map
\[\Theta^{n}\times\Theta\to\Theta^{n}\]
has \(n\) universal sections \(s_{1},...,s_{n}\). We have fixed a bijection between \(\{s_{i}\}\) and the set of finite corners in the discrete data \(D\) of \(\mathscr{T}_{p}\) when we thought of \(\mathscr{S}_{[\mathscr{T}]}\) as a subset of \(\Theta^{n}\). Infinite corners naturally biject with dimension one strata of \(\mathcal{P}_{\mathscr{T}_{0}}\). Thus associated to each corner we have a locally closed stratum inside \(\Theta^{n}\times\Theta\).
Write \(\{x_{i}\}\) for the set of strata corresponding to corners. We say a subset \(T\subset\{x_{i}\}\) is strictly convex if for all \(x_{i}\in T\) the convex hull of \(T\backslash\{x_{i}\}\) is not equal to the convex hull of \(T\). We take strata over the interior of \(\mathscr{S}_{[\mathscr{T}]}\) to be unions of convex hulls of the sections corresponding to the strictly convex subsets. Which unions to take is specified by the discrete data \(D\). Strata which lie neither over the interior of \(\mathscr{S}_{[\mathscr{T}]}\) nor the zero stratum in \(\mathscr{S}_{[\mathscr{T}]}\) are dictated by combinatorial flatness.
Denote the resulting map of piecewise linear spaces by \(\mathscr{X}_{[\mathscr{T}]}\to\mathscr{S}_{[\mathscr{T}]}\). This map is not in general combinatorially flat because the image of each stratum of \(\mathcal{P}_{\mathscr{X}_{[\mathscr{T}]}}\) may not be an entire stratum, violating axiom F1. Pushing forward the stratification on \(\mathscr{X}_{[\mathscr{T}]}\) defines a conical refinement of the stratification \(\tilde{\mathcal{P}}\) on \(\mathscr{S}_{[\mathscr{T}]}\). Pulling back \(\tilde{\mathcal{P}}\) defines a conical refinement of \(\mathcal{P}_{\mathscr{X}_{[\mathscr{T}]}}\). Applying Construction 1.2.6 to both conical stratifications, we obtain a new morphism of piecewise linear spaces
\[\varpi:\tilde{\mathscr{X}}\to\tilde{\mathscr{S}}_{[\mathscr{T}]}\]
This map is combinatorially flat: axiom F1 holds by construction and we deduce F2 from Lemma 1.3.4. On the interior of \(\mathscr{S}_{[\mathscr{T}]}\) no subdivision was made to yield \(\tilde{\mathscr{S}}_{[\mathscr{T}]}\).
### Passing to the quotient
Observe \(\operatorname{Sym}([\mathscr{T}])\) acts on \(|\tilde{\mathscr{S}}_{[\mathscr{T}]}|\). Intuitively we need to understand the quotient by this action. More formally we specify a piecewise linear space by giving a groupoid presentation and noting the result is fibred in setoids.
Consider \(|\operatorname{Sym}([\mathscr{T}])|\) copies of \(\tilde{\mathscr{S}}_{[\mathscr{T}]}\) labelled by the elements of \(\operatorname{Sym}([\mathscr{T}])\). Glue these complexes as follows. Whenever \(\sigma_{1},\sigma_{2}\) are elements of \(\operatorname{Sym}([\mathscr{T}])\) such that \(\sigma_{1}\sigma_{2}^{-1}\) fixes a subcomplex \(\mathscr{H}\hookrightarrow\tilde{\mathscr{S}}_{[\mathscr{T}]}\), we glue the copies labelled \(\sigma_{1}\) and \(\sigma_{2}\) along \(\mathscr{H}\). Denote the resulting piecewise linear complex \(\mathscr{G}\) and observe \(\operatorname{Sym}([\mathscr{T}])\) acts freely upon it.
Define a subcomplex \(R\) of \(\mathscr{G}\times\mathscr{G}\) whose strata are pairs \((\kappa,\sigma(\kappa))\) whenever \(\kappa\) lies in \(\mathcal{P}_{\mathscr{G}}\) and \(\sigma\) in \(\operatorname{Sym}([\mathscr{T}])\). This is a strict equivalence relation and thus the quotient is a piecewise linear space \(\mathscr{U}_{[\mathscr{T}]}\).
#### 2.3.1. Gluing local patches
We now construct a diagram
\[g:J\to\mathbf{PLS}\]
of piecewise linear spaces with all morphisms open: the colimit over this diagram will be \(\mathsf{Supp}(\Theta)\) and since all morphisms will be inclusions of subcomplexes, the result is a piecewise linear space.
Objects of \(J\) are pairs consisting of a copy of \(\mathscr{U}_{[\mathscr{T}]}\) and a \(\operatorname{Sym}([\mathscr{T}])\) equivariant weight function
\[\varphi_{[\mathscr{T}]}:\mathcal{P}_{\tilde{\mathscr{S}}_{[\mathscr{T}]}} \to\bigcup_{\theta}G_{\theta}\]
where \(\varphi_{[\mathscr{T}]}\) satisfies the conditions of Definition 2.1.1. There is a morphism of pairs
\[(\mathscr{U}_{[\mathscr{T}]},\varphi_{[\mathscr{T}]})\to(\mathscr{U}_{[\mathscr{T}^{\prime}]},\varphi_{[\mathscr{T}^{\prime}]})\]
whenever \([\mathscr{T}^{\prime}]\) arises as the combinatorial type of \(\mathscr{T}_{p}\) for \(p\) not in the dense stratum of \(\tilde{\mathscr{S}}_{[\mathscr{T}]}\).
The image of a pair \((\mathscr{U}_{[\mathscr{T}]},\varphi_{[\mathscr{T}]})\) under \(g\) is the piecewise linear space \(\mathscr{U}_{[\mathscr{T}]}\). We now define the image of a morphism
\[(\mathscr{U}_{[\mathscr{T}]},\varphi_{[\mathscr{T}]})\to(\mathscr{U}_{[\mathscr{T}^{\prime}]},\varphi_{[\mathscr{T}^{\prime}]})\]
under \(g\). Certainly we know \([\mathscr{T}]\neq[\mathscr{T}^{\prime}]\). The proof of Proposition 2.2.7 shows that there is a subcomplex
\[\iota:\tilde{\mathscr{S}}_{[\mathscr{T}^{\prime}]}\to\tilde{\mathscr{S}}_{[\mathscr{T}]}\]
which descends to the quotient to define a subcomplex \(\iota^{\prime}:\mathscr{U}_{[\mathscr{T}^{\prime}]}\hookrightarrow\mathscr{U}_{[\mathscr{T}]}\). This is our morphism: a subcomplex is an open morphism.
The colimit over the piecewise linear spaces yields a new piecewise linear space \(\mathsf{Supp}(\Theta)\). There is a corresponding diagram of universal families with colimit denoted \(\mathscr{X}\) and a morphism \(\mathscr{X}\to\Theta\times\mathsf{Supp}(\Theta)\to\mathsf{Supp}(\Theta)\). There is a unique function
\[\varphi:\mathcal{P}_{\mathscr{X}}\to\bigcup_{\theta}G_{\theta}\]
which pulls back to each of the \(\varphi_{[\mathscr{T}]}\).
Proof of Theorem 2.2.1.: We check the piecewise linear space \(\mathsf{Supp}(\Theta)\) satisfies the universal property of Theorem 2.2.1. Consider a family of tropical supports over a cone \(\sigma\), say
\[(\mathscr{X}_{\sigma}\to\sigma\times\Theta\to\sigma,\ \varphi_{\sigma}).\]
Enumerating the \(n\) corners of the PL subdivision corresponding to \(\mathscr{X}_{\sigma}\) gives a section of the map \(\Theta^{n}\times\sigma\to\sigma\) which we think of as a map \(\sigma\to\Theta^{n}\). The combinatorial type of a PL polyhedral structure arising from a point in the interior of \(\sigma\) dictates a choice of \(\mathscr{U}_{D}\) and we have the composition
\[\sigma\to\Theta^{n}\to\mathscr{U}_{D}.\]
The weight function thus defines a map from \(\sigma\) to the image under \(g\) of a unique pair \((\mathscr{U}_{D},\varphi)\). The function \(\varphi\) is determined uniquely by \(\varphi_{\sigma}\). In this way we obtain a map to \(\mathsf{Supp}(\Theta)\) along which universal families pull back.
## 3. Logarithmic flatness for toric varieties
In this section we develop our understanding of the transversality condition captured by _logarithmic flatness_. In the sequel we will define what it means for a coherent sheaf \(\mathcal{F}\) on a logarithmic \(S\) scheme \(X/S\) to be logarithmically flat over \(S\). In this section we restrict to the case that \(S\) is a point with the trivial logarithmic structure and \(X\) is a toric variety equipped with divisorial logarithmic structure from the toric boundary.
**Situation 3.0.1**.: Let \(X\) be an affine toric variety with dense torus \(X^{o}\) and cocharacter lattice \(M_{X}\). Define \(\mathcal{E}=\mathcal{O}_{X}^{n}\) a locally free coherent sheaf on \(X\). Consider a short exact sequence of sheaves on \(X\)
\[0\to\mathcal{G}\to\mathcal{O}_{X}^{n}\xrightarrow{q}\mathcal{F}\to 0\ \mathrm{with\ global\ sections}\ 0\to G\to\mathbb{C}[X]^{n}\xrightarrow{q}F\to 0.\]
Write \(G^{o}\) for \(\mathcal{G}(X^{o})\) and note since open immersions are flat, \(\mathcal{G}(X^{o})\) is a submodule of \(\mathbb{C}[X^{o}]^{n}\). Assume \(\mathcal{F}\) has no non-zero sections with support contained within the toric boundary of \(X\).
### The tropical support of \(q\)
The tropical support of a surjection of sheaves \(q\) on a toric variety \(X\) is a piecewise linear space \(\mathscr{T}\) subdividing \(\mathbb{R}^{\dim(X)}=M_{X}\otimes\mathbb{R}\). Tropical support will play an important role in the sequel, where it is defined in terms of torus actions. In Situation 3.0.1, tropical support may be understood most readily in terms of Gröbner theory. In this section we take the Gröbner theory approach.
#### 3.1.1. Monomials
Observe both \(\mathbb{C}[X^{o}]^{n}=\mathcal{O}_{X}^{n}(X^{o})\) and \(\mathbb{C}[X]^{n}=\mathcal{O}_{X}^{n}(X)\) admit natural actions of \(X^{o}\). A monomial of \(X^{o}\) (respectively \(X\)) is a character occurring in the \(X^{o}\) representation \(\mathbb{C}[X^{o}]^{n}\) (respectively \(\mathbb{C}[X]^{n}\)). Write \(\mathrm{mon}(\mathcal{E}),\mathrm{Mon}(\mathcal{E})\) for the set of monomials of \(X^{o}\) and \(X\) respectively.
#### 3.1.2. Term orders
A _term order_ on \(\mathrm{mon}(\mathcal{E})\) respectively \(\mathrm{Mon}(\mathcal{E})\) is a total pre-order.
**Remark 3.1.1**.: This is weaker than the notion of term order considered in Gröbner theory in two ways. First, the relation need not be anti-symmetric. A more important subtlety is that for \(M_{1},M_{2},P\) monomials with \(M_{1}\leq M_{2}\) it need not be true that
\[M_{1}P\leq M_{2}P.\]
Instead of unpacking this definition we construct all term orders which will be of interest to us in Example 3.1.2.
**Example 3.1.2**.: The cocharacter/character pairing assigns to a cocharacter \(w\) of \(X^{o}\) functions
\[\mathrm{Mon}(\mathcal{E})\to\mathbb{Z}\quad\mathrm{mon}(\mathcal{E})\to \mathbb{Z}.\]
Pulling back the total order on \(\mathbb{Z}\) assigns to \(w\) a term order on \(\mathrm{Mon}(\mathcal{E})\) and \(\mathrm{mon}(\mathcal{E})\) respectively. For \(k\) a positive integer the term orders associated to \(w\) and \(kw\) coincide. This allows us to define a term order associated to every \(w^{\prime}\in M_{X}\otimes\mathbb{Q}\): choose a positive integer \(\ell\) such that \(\ell w^{\prime}\) lies in \(M_{X}\) and use the term order associated to \(\ell w^{\prime}\).
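To make this concrete, take \(X=\mathbb{A}^{2}\) and \(\mathcal{E}=\mathcal{O}_{X}^{2}\), so that the monomials of \(X\) include the eigenvectors \(x^{a}y^{b}e_{i}\) for \(i\in\{1,2\}\). The cocharacter \(w=(1,0)\) induces the function
\[x^{a}y^{b}e_{i}\longmapsto a,\]
and the associated term order satisfies both \(ye_{1}\leq y^{2}e_{2}\) and \(y^{2}e_{2}\leq ye_{1}\) although \(ye_{1}\neq y^{2}e_{2}\), exhibiting the failure of anti-symmetry noted in Remark 3.1.1.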
#### 3.1.3. Initial forms
Any element \(g^{o}\in\mathbb{C}[X^{o}]^{n}\) respectively \(g\in\mathbb{C}[X]^{n}\) may be expressed as a \(\mathbb{C}\) linear combination of monomials
\[g^{o}=\sum_{m\in\operatorname{mon}(\mathbb{C}[X^{o}]^{n})}a^{o}_{m}m,\quad g= \sum_{m\in\operatorname{Mon}(\mathbb{C}[X]^{n})}a_{m}m\text{ where }a^{o}_{m},a_{m}\in\mathbb{C}.\]
We define new elements of \(\mathbb{C}[X^{o}]^{n}\) and \(\mathbb{C}[X]^{n}\) respectively
\[\operatorname{in}_{w}(g^{o})=\sum_{w(m)\text{ maximal}}a^{o}_{m}m\quad\text{ and }\quad\operatorname{In}_{w}(g)=\sum_{w(m)\text{ maximal}}a_{m}m.\]
**Remark 3.1.3**.: The coefficients \(a^{o}_{m},a_{m}\) are not well defined. Indeed for \(\mathbb{C}[X]=\mathbb{C}[x]\) and \(n=2\) we have
\[(x,0)+(0,x)=(x,x)\]
and note \((x,x),(x,0)\) and \((0,x)\) are all monomials. The initial form \(\operatorname{In}_{w}(g)\) is nonetheless well defined.
**Remark 3.1.4**.: The initial forms need not be monomials. For example setting \(\mathbb{C}[X]=\mathbb{C}[x,y]\), \(n=1\) and \(w=(1,1)\) we find
\[x+y=\operatorname{In}_{w}(x+y).\]
#### 3.1.4. Initial submodules
Given a submodule
\[G\leq\mathbb{C}[X]^{n}\text{ or }G^{o}\leq\mathbb{C}[X^{o}]^{n}\]
we define \(\operatorname{In}_{w}(G)\) the submodule of \(\mathbb{C}[X]^{n}\) generated by \(\operatorname{In}_{w}(g)\) for \(g\in G\). Similarly \(\operatorname{in}_{w}(G^{o})\) is the submodule of \(\mathbb{C}[X^{o}]^{n}\) generated by \(\operatorname{in}_{w}(g)\) for \(g\in G^{o}\).
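As a sample computation, take \(X=\mathbb{A}^{2}\), \(n=1\) and \(G=(x+y)\subset\mathbb{C}[x,y]\). Every \(g=f\cdot(x+y)\) has \(\operatorname{In}_{w}(g)=\operatorname{In}_{w}(f)\cdot\operatorname{In}_{w}(x+y)\), so
\[\operatorname{In}_{(1,0)}(G)=(x),\qquad\operatorname{In}_{(0,1)}(G)=(y),\qquad\operatorname{In}_{(1,1)}(G)=(x+y).\]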
#### 3.1.5. The Gröbner stratification
We now work in Situation 3.0.1. As \(w\) varies in \(M_{X}\otimes\mathbb{Q}\) the initial submodule \(\operatorname{in}_{w}(G^{o})\) also varies. There is a conical locally closed stratification of \(M_{X}\otimes\mathbb{Q}\) on which this initial submodule is constant. Following [1], we call this stratification the _Gröbner stratification_. The _tropical support_ of \(q\) is the piecewise linear space \(\mathscr{T}(q)\) associated to the conical stratification obtained from the common refinement of the fan of \(X\) and the Gröbner stratification.
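For example, in Situation 3.0.1 take \(X=\mathbb{A}^{2}\), \(n=1\) and \(q\) the quotient by \(G=(x+y+1)\). On the torus, \(\operatorname{in}_{w}(x+y+1)\) is a single monomial, hence a unit, for \(w\) away from the rays spanned by \((1,1)\), \((-1,0)\) and \((0,-1)\); on those open rays the initial forms are \(x+y\), \(y+1\) and \(x+1\) respectively, and \(\operatorname{in}_{0}(G^{o})=G^{o}\). The Gröbner stratification is thus a tropical line: its strata are the origin, the three open rays above, and the three connected components of their complement. The tropical support \(\mathscr{T}(q)\) is the common refinement of this stratification with the fan of \(\mathbb{A}^{2}\).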
### Notions of transversality
We begin by stating several notions of transversality. The remainder of this section is largely dedicated to the connections between these definitions and how we can ensure they hold.
#### 3.2.1. Logarithmic flatness
Logarithmic flatness will be defined in general in Section 4.2.2. For now we handle the special case relevant to this section. Denote the projection and torus multiplication maps
\[X\xleftarrow{\pi_{1}}X\times X^{o}\xrightarrow{m}X.\]
A sheaf \(\mathcal{F}\) on a toric variety \(X\) is called _logarithmically flat_ over \(\operatorname{Spec}(\mathbb{C})\) if \(m^{\star}\mathcal{F}\) is flat over \(X\) along the map \(\pi_{1}\).
#### 3.2.2. Transversality
For \(\sigma\) a cone in the fan of \(X\) denote the associated torus orbit \(O(\sigma)\) and set \(V(\sigma)\) the closure of \(O(\sigma)\). We say a sheaf \(\mathcal{F}\) on \(X\) is _transverse_ if the pullback of \(\mathcal{F}\) to \(V(\sigma)\) has no sections supported on the complement of \(O(\sigma)\).
**Example 3.2.1**.: Set \(X=\mathbb{A}^{2}\), and note the fan of \(X\) has two rays, one zero cone and one two dimensional cone. In Figure 3, the left curve has transverse structure sheaf. The centre curve is not transverse: taking \(\sigma\) either ray, the structure sheaf has a section supported on the orbit of the maximal cone, i.e. \((0,0)\). The right curve is not transverse, taking \(\sigma\) the zero cone.
#### 3.2.3. Strict and total transform
We say a sheaf \(\mathcal{F}\) on \(X\) has the strict and total transform property if given any toric modification
\[\pi:X_{\Gamma}\to X\]
the strict transform \(\pi^{!}\mathcal{F}\) and total transform \(\pi^{*}\mathcal{F}\) coincide.
#### 3.2.4. Tropical support
Our final condition requires that the fan of \(X_{\Gamma}\) is a tropical model of the tropical support of \(q\).
### Tevelev's theorem
In the sequel we focus on the following upgrade of Situation 3.0.1.
**Situation 3.3.1**.: Continuing Situation 3.0.1, let \(\pi_{\Gamma}:X_{\Gamma}\to X\) be a toric modification. Writing \(\mathcal{F}_{\Gamma}\) for the strict transform of \(\mathcal{F}\) there is a short exact sequence of coherent sheaves on \(X_{\Gamma}\)
\[0\to\mathcal{G}_{\Gamma}\to\pi_{\Gamma}^{*}\mathcal{E}\xrightarrow{q_{\Gamma}}\mathcal{F}_{\Gamma}\to 0\ \mathrm{with\ global\ sections}\ 0\to G_{\Gamma}\to\mathbb{C}[X_{\Gamma}]^{n}\xrightarrow{q_{\Gamma}}F_{\Gamma}\to 0.\]
We assume that \(\mathcal{F}_{\Gamma}\) is logarithmically flat over \(\mathrm{Spec}(\mathbb{C})\).
The importance of this situation reflects the following theorem first proved by Tevelev in unpublished notes [10], generalising his published result [10]. The theorem was first stated in this form in the literature by Ulirsch [11]. We recall Tevelev's proof as we are unable to find a suitable reference.
**Theorem 3.3.2**.: Let \(\mathcal{F}\) be a sheaf on a toric variety \(X\) with no sections supported on the toric boundary. There is a logarithmic modification \(X_{\Gamma}\to X\) such that the strict transform of \(\mathcal{F}\) is logarithmically flat.
Proof.: We show that \(m^{\star}\mathcal{F}\) can be flattened by strict transform under an equivariant blowup of \(X\). We think of \(\mathbb{G}_{m}^{k}\) as the dense torus of \(\mathbb{P}^{k}\) and we consider the projection map
\[\mathbb{P}^{k}\times X\to X\]
The pushforward of \(m^{\star}\mathcal{F}\) to \(\mathbb{P}^{k}\times X\) is equivariant and quasicoherent but may not be coherent. It contains a coherent equivariant subsheaf with the same restriction to the dense torus. We denote this equivariant coherent sheaf \(\mathcal{G}\).
We now consider Grothendieck's Quot scheme
\[Q=\mathsf{Quot}(\mathbb{P}^{k}\times X/X,\mathcal{G})\]
which is a union of projective varieties over \(X\).
By generic flatness \(\mathcal{G}\) is flat over an open subscheme \(U\subset X\) and so the map \(Q\to X\) admits a section \(U\to Q\) over \(U\). Take \(\overline{X}\) the scheme theoretic image of \(U\) in \(Q\) and observe the map \(\overline{X}\to X\) is birational: it is an isomorphism over \(U\) and the domain is integral because \(U\) is integral. By a theorem of Grothendieck it is a blowup [10, 2.3.5] of a closed subscheme.
The morphism \(\overline{X}\to Q\) carries a universal surjection \(\overline{\mathcal{G}}\to\mathcal{N}\) whose kernel is supported outside of \(U\) and thus \(\mathcal{N}\) is the strict transform of \(\mathcal{G}\) and is necessarily flat.
Figure 3. Curves in \(\mathbb{A}^{2}\) depicted in red. The divisors \(X=0,Y=0\) are depicted as black lines.
It remains to check the map \(\overline{X}\to X\) is a logarithmic modification. We know it is a blowup in a torus equivariant subscheme. Any such blowup is a logarithmic modification.
### Types of transversality
We explain the connection between the transversality conditions introduced in Section 3.2.
**Theorem 3.4.1**.: The following are equivalent.
1. The sheaf \(\mathcal{F}_{\Gamma}\) is logarithmically flat, see Section 3.2.1.
2. The sheaf \(\mathcal{F}\) has the strict and total transform property, see Section 3.2.3.
Either of these conditions implies the next condition.
3. The sheaf \(\mathcal{F}_{\Gamma}\) is transverse, see Section 3.2.2.
Any of the above implies our final condition.
4. The fan of \(X_{\Gamma}\) is a subdivision of \(\mathscr{T}(q)\), see Section 3.2.4.
The following theorem is helpful for building intuition about tropical support. Its proof does not appear in this paper.
**Theorem 3.4.2** ([14]).: Condition (4) implies Condition (1) and thus the conditions in Theorem 3.4.1 are equivalent.
The proof of Theorem 3.4.1 will occupy the remainder of this subsection.
#### 3.4.1. Consequences of logarithmic flatness
We start our proof of Theorem 3.4.1 by showing that logarithmic flatness implies two other conditions.
Proof \((1)\Rightarrow(2)\).: This is a cosmetic modification of [14, TAG 080F].
Proof \((1)\Rightarrow(3)\).: Suppose \(\mathcal{F}_{\Gamma}\) has a section \(s\) supported on the complement of \(O(\sigma)\) in \(V(\sigma)\) for some cone \(\sigma\). Since flatness and transversality are local conditions, without loss of generality \(V(\sigma)\) is affine. Choose a non-zero function \(f\) pulled back from the algebraic stack \([V(\sigma)/\mathbb{G}_{m}^{\dim(X)}]\) such that \(V(f)\) contains the toric boundary of the toric variety \(V(\sigma)\). Necessarily the zero set of such an \(f\) is supported on the toric boundary. By flatness the map \((f^{k})\otimes\mathcal{F}_{\Gamma}\to\mathcal{F}_{\Gamma}\) is injective. This implies that \(f^{k}s\) is non-zero for every integer \(k\), and consequently the localisation map from \(\mathcal{O}_{V(\sigma)}\) obtained by inverting \(f\) does not send \(s\) to zero. It follows that the restriction of \(s\) to \(O(\sigma)=V(\sigma)\backslash V(f)\) is non-zero, so the support of \(s\) was not contained within the toric boundary.
#### 3.4.2. Test cocharacters
Given a cocharacter \(w\in M_{X}\) we define a morphism
\[\varphi_{w}^{o}:S^{o}=\operatorname{Spec}(\mathbb{C}((t)))\to X^{o}\]
which on the level of coordinate rings is specified by
\[(\varphi_{w}^{o})^{\#}:\mathbb{C}[X_{1}^{\pm 1},...,X_{n}^{\pm 1}]\to\mathbb{C}((t)) \quad X_{i}\mapsto t^{w_{i}}.\]
Assuming \(w\) lies in the support of the fan of \(X\), the map \(\varphi_{w}^{o}\) extends to a map
\[\phi_{w}:S=\operatorname{Spec}(\mathbb{C}[[t]])\to X.\]
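For example, with \(X=\mathbb{A}^{2}\) and \(w=(1,2)\), which lies in the support of the fan, the arc is \(X_{1}\mapsto t\), \(X_{2}\mapsto t^{2}\) and \(\phi_{w}\) sends the special point of \(S\) to the origin. For \(w=(1,-1)\), which lies outside the support, \(X_{2}\mapsto t^{-1}\) and no extension \(\phi_{w}\) exists.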
The short exact sequence
\[0\to\mathcal{G}\to\mathcal{O}_{X}^{n}\to\mathcal{F}\to 0\]
pulls back along \(m\) to a short exact sequence
\[0\to\mathcal{G}^{\prime}\to\mathcal{O}_{X\times X^{o}}^{n}\to\mathcal{F}^{ \prime}\to 0.\]
We set \(s\) the special point of \(S\), mapping to \(X\) via \(\phi_{w}\), and base change the sequence along \(s\to X\) to obtain a short exact sequence of sheaves
\[0\to\mathcal{G}_{w}\to\mathcal{O}_{X^{o}}^{n}\to\mathcal{F}_{w}\to 0\]
on \(X^{o}\). Taking global sections we obtain
\[0\to G_{w}\to\mathbb{C}[X^{o}]^{n}\to F_{w}\to 0.\]
**Lemma 3.4.3**.: There is an equality of submodules of \(\mathbb{C}[X^{o}]^{n}\)
\[G_{w}=\mathrm{in}_{w}(G^{o}).\]
Proof.: This is a special case of Proposition 5.2.1.
Proof \((1)\Rightarrow(4)\).: The submodule \(G_{w}\) depends only upon the image of \(s\) in \(X\). This image depends only upon the cone \(\sigma\) of the fan of \(X\) whose interior contains \(w\). The result now follows from Lemma 3.4.3.
#### 3.4.3. Transversality and the strict/total transform property
We work in Situation 3.3.1 but impose that Condition (4) of Theorem 3.4.1 is false for \(q_{\Gamma}\) and impose Condition (3) instead of logarithmic flatness. We will show that Condition (3) implies Condition (4) by deriving a contradiction. Pick \(\sigma\) a minimal cone in the fan of \(X\) such that \(\mathrm{in}_{w}(G^{o})\) is not constant for \(w\) in the interior of \(\sigma\) and let \(\tau\) be any facet of \(\sigma\).
Pull \(q\) back to \(O(\tau)\) and take global sections to obtain a surjection of modules
\[\mathbb{C}[O(\tau)]^{n}\to F_{\tau}\]
with kernel denoted \(G_{\tau}\), a \(\mathbb{C}[O(\tau)]\) module. Write \(G_{\sigma}\) for the analogous submodule of \(\mathbb{C}[O(\tau)]^{n}\) obtained, as in Section 3.4.2, from a test cocharacter through the interior of \(\sigma\). Let \(w_{D}\) be a point in the interior of \(\sigma\) lying in the stratum whose closure contains \(\tau\).
**Lemma 3.4.4**.: If \(\mathcal{F}_{\Gamma}|_{V(\tau)}\) has no sections supported on \(V(\sigma)\) then
\[G_{\sigma}=\mathrm{in}_{w_{D}}(G_{\tau})\]
Proof.: It suffices to handle the case that \(\tau\) is the zero cone and \(\sigma\) a ray. The result is now Lemma 3.4.3.
Proof \((3)\Rightarrow(4)\).: In the above setup we claim that \(\mathcal{F}_{\Gamma}|_{V(\tau)}\) has a section supported on \(V(\sigma)\), which lies in the complement of \(O(\tau)\). Thus whenever (4) is false, (3) is also false. The contrapositive proves our claim.
We argue \(G_{\sigma}<\mathrm{in}_{w_{D}}(G_{\tau})\) is a proper submodule, contradicting the conclusion of Lemma 3.4.4. Indeed for any \(w^{\prime}\) in the interior of \(\sigma\) we have \(G_{\sigma}\leq\mathrm{in}_{w^{\prime}}(G_{\tau})\). Choose \(w^{\prime}\) not in the same stratum \(\kappa\) as \(w_{D}\), but ask that \(w^{\prime}\) lies in the closure of \(\kappa\). Lemma 3.4.3 applied to a sufficiently fine toric modification of \(X_{\Gamma}\) now shows \(\mathrm{in}_{w^{\prime}}(G_{\tau})\) admits a degeneration to \(\mathrm{in}_{w_{D}}(G_{\tau})\), and so the containment \(\mathrm{in}_{w_{D}}(G_{\tau})\leq\mathrm{in}_{w^{\prime}}(G_{\tau})\) is impossible unless they are equal. Since we assumed no equality, there is some element of \(\mathrm{in}_{w_{D}}(G_{\tau})\) which is not in \(\mathrm{in}_{w^{\prime}}(G_{\tau})\) and thus not in \(G_{\sigma}\).
#### 3.4.4. Strict transforms, total transforms and logarithmic flatness
We deduce that \((2)\Rightarrow(1)\) from the following proposition.
**Proposition 3.4.5**.: Assume that in Situation 3.3.1 the strict and total transform of \(\mathcal{F}\) under \(\pi_{\Gamma}\) coincide. Then \(\mathcal{F}\) is logarithmically flat.
Proof.: The valuative criterion for flatness [11, 10, 12] states that to check \(m^{\star}\mathcal{F}\) is flat over \(X\) it suffices to check that for \(S\) any trait and any map \(p:S\to X\) we have the following property. The sheaf \(\mathcal{F}_{S}\) on \(S\times X^{o}\) obtained by pulling back \(m^{\star}\mathcal{F}\) along \(p\) is flat over \(S\).
In this paragraph we show that it suffices to handle the case that the generic point of \(S\) maps to the dense torus of \(X\). Let \(\sigma\) be a cone of \(X\). We claim that \(\mathcal{F}\) satisfying the hypotheses of the proposition implies that \(\mathcal{F}|_{V(\sigma)}\) also satisfies these hypotheses. It suffices to handle the case that \(\sigma\) is a ray as the argument may be iterated for the general case. For \(\sigma\) a ray note that the logarithmic modification \(X_{\Gamma}\to X\) induces a logarithmic modification \(Y\to V(\sigma)\). Since flatness is preserved by base change, the fact that \(\mathcal{F}_{\Gamma}\) is logarithmically flat implies that the pullback of \(\mathcal{F}|_{V(\sigma)}\) to \(Y\) is logarithmically flat. This implies that the strict and total transforms coincide, because logarithmic flatness implies there are no sections supported on the complement of \(X^{o}\).
To finish the proof we appeal to the valuative criterion for flatness. A map \(S\to X\) such that the generic point of \(S\) intersects the dense torus of \(X\) specifies a cocharacter of \(X\) and thus a ray \(\rho\) in the fan of \(X\). Choose a logarithmic modification
\[X_{\Gamma^{\prime}}\to X_{\Gamma}\to X\]
such that \(\rho\) is a ray of \(\Gamma^{\prime}\). By the universal property of blowup the map from \(S\to X\) factors through \(X_{\Gamma^{\prime}}\). Since pullback is functorial we know \(\mathcal{F}_{S}\) is pulled back from a sheaf \(\mathcal{F}^{\prime\prime}\) on \(X_{\Gamma^{\prime}}\times X^{o}\). Observe \(\mathcal{F}^{\prime\prime}\) is flat over \(X\) by construction. Flatness is preserved under base change and we have verified the valuative criterion for flatness.
Proof \((2)\Rightarrow(1)\).: Theorem 3.3.2 implies that there is a logarithmic modification \(\pi:X_{\Gamma}\to X\) on which the strict transform of \(\mathcal{F}\) is logarithmically flat. The strict and total transform property implies we are then in the situation of Proposition 3.4.5.
## 4. Proper monomorphisms and logarithmic surjections of coherent sheaves
Grothendieck's Hilbert scheme is the moduli space of closed immersions in the category of schemes. Proper monomorphisms in the category of schemes are precisely closed immersions. The logarithmic Hilbert scheme is a moduli space of proper monomorphisms in the category of logarithmic schemes.
More generally, Grothendieck's Quot scheme is a moduli space of quotients of a fixed _coherent_ sheaf \(\mathcal{E}\). There is no established definition of coherent sheaf in the setting of logarithmic geometry, although a logarithmic version of \(\mathbb{G}_{m}\) torsors has been studied [14].
Our strategy for the Quot scheme is to work with equivalence classes of sheaves on logarithmic modifications. Up to issues of descent, we are studying a geometrically meaningful collection of sheaves of modules in the logarithmic etale topology. Our perspective is compatible with the object studied in [14].
An equivalence class of quotients of \(\mathcal{O}_{X}\) on logarithmic modifications gives rise to an equivalence class of proper monomorphisms. Moreover, for \(S\) an etale stalk, all proper monomorphisms to \(X\times S\) arise in this way. Thus, the logarithmic Hilbert space is recovered as a special case of the logarithmic Quot space by setting \(\mathcal{E}=\mathcal{O}_{X}\).
**Goal 4.0.1**.: Define a _logarithmic surjection of coherent sheaves_, the logarithmic analogue of a surjection of coherent sheaves. Make sense of flatness for logarithmic surjections of coherent sheaves by adapting Kato's definitions of _logarithmically flat_ and _integral_.
In Section 8 we define the logarithmic Quot scheme as a moduli space of (logarithmically flat and integral) logarithmic surjections of coherent sheaves. We begin this section by recalling fundamental notions of logarithmic geometry and the theory of _Artin fans_. We will have cause to fix a sheaf \(\mathcal{E}\) on \(X\). To simplify notation we use the same symbol \(\mathcal{E}\) to denote the pullback of the fixed sheaf \(\mathcal{E}\) on \(X\times S\).
### Artin fans and logarithmic modification
See [14, Definition 1.2] for the definition of fine and saturated logarithmic schemes and further background. In this paper all logarithmic schemes are fine and saturated; in the sequel we write logarithmic scheme with fine and saturated being understood.
#### 4.1.1. Tropical geometry and Artin fans
Stacks over \(\mathbf{RPC}\) can be lifted to define stacks on the category of logarithmic schemes, see [13, Section 6]. Given a category fibered in groupoids \(C\to\mathbf{RPC}\) one defines a category fibered in groupoids \(\mathcal{A}_{C}\) over the category of logarithmic schemes by setting
\[\mathcal{A}_{C}(S)=C(\Gamma(S,M_{S})).\]
Stackifying \(\mathcal{A}_{C}\) yields \(\mathcal{C}\). If \(C\) is a cone stack, see [13, Section 2], then \(\mathcal{C}\) is a zero dimensional algebraic stack. We call algebraic stacks arising in this way _Artin fans_, see [1, 1] for the theory of Artin fans. The map from cone stacks to Artin fans defines an equivalence of categories.
For a piecewise linear space \(\mathcal{T}\) we denote the corresponding zero dimensional stack by \(a^{*}\mathcal{T}\). In the special case of the moduli space of tropical supports we write \(a^{*}\mathsf{Supp}(\mathcal{X})=\mathit{Supp}(\mathcal{X})\). We do not assert that this zero dimensional stack is algebraic, although it has a logarithmic etale cover by Artin fans.
#### 4.1.2. The Artin fan of a logarithmic scheme
Let \(X\) be a logarithmic scheme with locally connected logarithmic strata. There is an initial strict morphism from \(X\) to an Artin fan with faithful monodromy [1]. We denote this map
\[X\to\mathcal{X}\]
and call \(\mathcal{X}\) the _Artin fan_ of \(X\). Write \(\mathrm{Trop}(X)\) for the stack on \(\mathbf{RPC}\) such that \(a^{*}\mathrm{Trop}(X)=\mathcal{X}\).
The assignment of an Artin fan to a logarithmic scheme is not functorial in general, but there is a substitute for functoriality, see [1, Section 5]. Given a morphism of logarithmic schemes \(X\to Y\) there is a relative Artin fan \(\mathcal{X}_{Y}\) and a commutative diagram
\[\begin{CD}X@>{}>{}>\mathcal{X}_{Y}\\ @V{}V{}V@V{}V{}V\\ Y@>{}>{}>\mathcal{Y}.\end{CD}\]
Indeed \(X\to\mathcal{X}_{Y}\) is the initial strict map from \(X\) to an Artin fan (with faithful monodromy), through which the map \(X\to\mathcal{Y}\) factors.
#### 4.1.3. Logarithmic modification
A _logarithmic modification_ \(Y\to X\) of \(X\) is a morphism that is etale locally on \(X\) the base change of a morphism
\[a^{*}\Gamma\to a^{*}\sigma\]
along a strict map \(X\to a^{*}\sigma\), where \(\Gamma\to\sigma\) is a morphism of cone complexes which is an isomorphism of topological realisations. Notice the morphism \(\Gamma\to\sigma\) need not be a tropical model and thus our definition of logarithmic modification permits lattice changes.
#### 4.1.4. From Artin fan to locally closed stratification
Assume \(X\) has locally connected logarithmic strata. There is a bijection between cones \(\sigma\) of \(\mathrm{Trop}(X)\) and (stacky) points \(a_{\sigma}\) of the Artin fan \(\mathcal{X}=a^{*}\mathrm{Trop}(X)\). Thus the map \(X\to\mathcal{X}\) defines, by pulling back points, a locally closed semi-stratification of \(X\). We denote the stratum associated to a cone \(\sigma\) by \(X_{\sigma}\). Fixing a tropical model \(\Gamma\to\mathrm{Trop}(X)\) defining a logarithmic modification \(X_{\Gamma}\to X\), note to each cone \(\gamma\) of \(\Gamma\) there is a locally closed stratum \(O(\gamma)\) of \(X_{\Gamma}\). We denote the preimage of the closure of \(a_{\gamma}\) by \(V(\gamma)\).
#### 4.1.5. Star piecewise linear spaces
Consider a subdivision \(\mathscr{G}\to\operatorname{Trop}(X)\) and let \(\gamma\) be an element of \(\mathcal{P}_{\mathscr{G}}\) such that the image of \(\gamma\) lies in the interior of a cone \(\sigma\) of \(\operatorname{Trop}(X)\). Suppose \(\sigma\) has associated local cone \((N_{\sigma},U_{\sigma})\) and note
\[N_{\sigma}\cong\mathbb{Z}^{\dim(\sigma)}\cong\sigma^{\operatorname{gp}}.\]
We let \(\gamma\) have associated local cone \((N_{\gamma},U_{\gamma})\) and observe there is a natural inclusion \(N_{\gamma}\hookrightarrow N_{\sigma}\). The _star piecewise linear space_\(\operatorname{St}_{\gamma}\) of \(\gamma\) is the data of a piecewise linear structure on \(N_{\sigma}(\gamma)=N_{\sigma}/N_{\gamma}\). To specify such a piecewise linear structure it is enough to specify the associated locally closed stratification \(\mathcal{P}_{\gamma}\) of \(N_{\sigma}(\gamma)\otimes\mathbb{R}\).
Strata of \(\mathcal{P}_{\gamma}\) biject with strata \(\kappa\) of \(\mathcal{P}_{\mathscr{G}}\) such that \(\gamma\) lies in the closure of \(\kappa\). The stratum associated to \(\kappa\) in \(\mathcal{P}_{\mathscr{G}}\) is the image of \(\kappa\) under the quotient map \(N_{\sigma}\otimes\mathbb{R}\to N_{\sigma}(\gamma)\otimes\mathbb{R}\). Write \(\operatorname{St}_{\gamma}(\kappa)\) for the cone of \(\operatorname{St}_{\gamma}\) corresponding to \(\kappa\) in \(\mathcal{P}_{\mathscr{G}}\). For example, \(\operatorname{St}_{\gamma}(\gamma)\) is the zero cone. We warn the reader that if \(\sigma\) is a face of a cone \(\sigma^{\prime}\) then the star fan of \(\gamma\) in \(\mathscr{G}(\sigma)\) does not detect cones in \(\mathscr{G}(\sigma^{\prime})\) which contain \(\gamma\) in their closure.
### Logarithmic Flatness
Example 4.2.1 demonstrates our claim from the introduction that pullback does not describe a map
\[r_{i}:\operatorname{Quot}(X,\mathcal{E})\to\operatorname{Quot}(D_{i}, \mathcal{E}|_{D_{i}}).\]
In this section we understand the open set on which \(r_{i}\) is well defined. The correct technical condition to ask for is _logarithmic flatness_. An important subtlety is that being logarithmically flat over a point is a non-trivial condition generalising strong transversality [14]. The special fibre in Example 4.2.1 does not satisfy this condition.
**Example 4.2.1**.: Consider a strict map
\[V(X+t(Y+Z))\to\mathbb{P}^{2}\times\mathbb{A}^{1}\]
where the target is a logarithmic scheme with toric logarithmic structure. This morphism is logarithmically flat over \(\mathbb{A}^{1}\) away from \(0\), where logarithmic flatness fails. Pulling back the universal surjection
\[\mathcal{O}_{X}\to\mathcal{O}_{Z}\]
to the divisor \(X=0\) defines a subscheme of \(\mathbb{P}^{1}\times\mathbb{A}^{1}\). This subscheme is not flat (in the usual sense) over \(\mathbb{A}^{1}\): the fibre over \(0\in\mathbb{A}^{1}\) has dimension one whereas the fibre over every other closed point of \(\mathbb{A}^{1}\) has dimension zero.
#### 4.2.1. Logarithmic flatness for subschemes over a point
Let \(Z\) be a strict closed subscheme of \(X\). Every such scheme is flat over \(\operatorname{Spec}(\mathbb{C})\) but not all choices of \(Z\) are _logarithmically flat_ in the sense of the following rephrasing of the definition presented in [13, Section 1.10].
**Definition 4.2.2**.: We say \(Z\) is logarithmically flat over \(\operatorname{Spec}(\mathbb{C})\) if \(Z\) is flat over the Artin fan of \(X\).
In the special case that \(X\) is a toric variety with divisorial logarithmic structure from its toric boundary, Definition 4.2.2 coincides with the notion from Section 3.2.1. There is also a version of the transversality of Section 3.2.2.
**Definition 4.2.3**.: A subscheme \(Z\) of \(X\) is _transverse_ if for every logarithmic stratum \(O(\sigma)\) the closure of \(Z\cap O(\sigma)\) in \(V(\sigma)\) coincides with \(V(\sigma)\cap Z\).
Definition 4.2.3 asks that \(Z\cap V(\sigma)\) has no embedded component supported on the complement of \(O(\sigma)\). Imposing that \(Z\) is logarithmically flat over \(S\) is a transversality condition closely related to strong transversality defined in [14]. Figure 3 has examples of how transversality can fail.
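For example, equip \(X=\mathbb{A}^{2}\) with its toric logarithmic structure. The subscheme \(Z=V(x+y+1)\) is transverse: for either ray \(\rho\), the intersection \(Z\cap V(\rho)\) is a single point of the open orbit \(O(\rho)\), and \(Z\) misses the closed orbit. By contrast \(Z=V(x+y)\) fails Definition 4.2.3 at either ray \(\rho\): the intersection \(Z\cap V(\rho)\) is the origin while \(Z\cap O(\rho)\) is empty.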
#### 4.2.2. Logarithmic flatness in the language of Artin fans
Let \(\pi:X\to B\) be a morphism of logarithmic schemes and let \(\mathcal{F}\) be a coherent sheaf on \(X\). We upgrade Kato's definitions of logarithmic flat and integral for a morphism of logarithmic schemes to define what it means for a coherent sheaf to be logarithmically flat and integral over a base, see [11, Section 1.10]. We also recast Kato's definition in the language of Artin fans.
**Definition 4.2.4**.: We say \(\mathcal{F}\) is _logarithmically flat_ over \(B\) if \(\mathcal{F}\) is flat over
\[B_{X}=B\times_{\mathcal{B}}\mathcal{X}_{B}\]
where \(X\) is considered a \(B_{X}\) scheme in the obvious way. We say \(\mathcal{F}\) is _integral_ over \(B\) if the map \(\mathcal{X}_{B}\to\mathcal{B}\) is flat as a morphism of underlying algebraic stacks.
An important special case is to understand sheaves logarithmically flat over the standard log point \(\mathsf{pt}^{\dagger}\). These are certain sheaves on logarithmic modifications of \(X\times\mathsf{pt}^{\dagger}\). We call the underlying scheme of such a logarithmic modification an _expansion_. The space of tropical supports will be upgraded to become something very similar to the stack of expansions introduced in [13]. The link between the geometry of the stack of expansions and tropical geometry is explained in [10, 10].
We define a log point to be a logarithmic scheme with a single point. Our next lemma gives a necessary condition for a sheaf on \(X\times S\) to be logarithmically flat over \(S\). For \(\sigma\) a cone of \(\operatorname{Trop}(X)\) we write \(V(\sigma)\) for the preimage in \(X\times S\) of the closure of the point \(a_{\sigma}\) of \(\mathcal{X}\).
**Lemma 4.2.5**.: Let \(\mathcal{F}\) be a sheaf on the underlying scheme of \(X\), where \(X\) is a logarithmic scheme over a log point \(S\). Assume \(\mathcal{F}\) and \(\mathcal{O}_{X}\) are logarithmically flat over \(S\). Then we have the following.
1. For every cone \(\sigma\) of \(\operatorname{Trop}(X)\), the restriction \(\mathcal{F}|_{V(\sigma)}\) has no sections with support contained in the complement of \(O(\sigma)\).
2. In the special case \(\mathcal{F}=\mathcal{O}_{Z}\) is the structure sheaf of a closed subscheme \(Z\), if \(\mathcal{O}_{Z}\) is logarithmically flat then \(Z\cap V(\sigma)\) is the closure of \(Z\cap O(\sigma)\).
Proof.: We can investigate support locally, so restrict attention to a cone \(\tau\) of \(\operatorname{Trop}(X)\) such that \(\sigma\leq\tau\). Flatness is preserved under base change so we can assume \(\sigma\) is the zero cone. Write \(Y_{\tau}\) for the toric variety corresponding to the cone \(\tau\) and observe there is a natural map \(Y_{\tau}\to\operatorname{Trop}(X)\). Since flatness is preserved under base change we consider a Cartesian square
If \(\mathcal{F}\) has a non-zero section supported on \(V(\sigma)\backslash O(\sigma)\) then for the right choice of \(\tau\) this pulls back to a non-zero section \(s\) of \(g^{\star}\mathcal{F}\). Define \(Y_{\tau}^{o}\) the dense torus of \(Y_{\tau}\). The support of \(s\) is contained in the complement of the preimage in \(X\) of \(S\times Y_{\tau}^{o}\).
Assume for contradiction that the support of \(s\) is contained in the preimage \(Z\) of a closed subscheme \(V(f)\) of \(S\times Y_{\tau}\). Since the stalk of \(s\) vanishes away from \(Z\), we know \(s\) must vanish on the complement of \(Z\). Without loss of generality \(X_{\tau}\) is affine and global sections of \(\mathcal{F}\) are a module \(M\). Thus passing to an affine patch \(\operatorname{Spec}(A)\) of \(S\times Y_{\tau}\), \(s\) vanishes in the localisation of \(M\) (considered an \(A\) module) obtained by inverting \(f\). In other words \(f^{k}s=0\) for some \(k\). But by flatness the morphism \((f^{k})\otimes M\to M\) sending \(f^{k}a\otimes b\mapsto f^{k}ab\) is injective.
For \(S\) a log point, we say a sheaf on \(X\times S\) is _transverse_ if it satisfies the first condition in Lemma 4.2.5. For \(S\) any logarithmic scheme say \(\mathcal{F}\) is transverse over \(S\) if the pullback of \(\mathcal{F}\) is
transverse over every strict closed point of \(S\). Transversality is an interesting condition first because it is easier to check than logarithmic flatness. Second, we will see that transversality is the weakest condition to ensure the morphism \(r_{i}\) exists. Third a theorem of Tevelev shows that the difference between transversality and logarithmic flatness is not something our moduli space detects.
**Remark 4.2.6**.: Lemma 4.2.5 is similar to the proof that \((1)\Rightarrow(3)\) in Theorem 3.4.1 except that \(X\) need not be toric and we work in the relative situation.
#### 4.2.3. From logarithmic flatness to morphisms between Quot schemes
Let \(X\) be a scheme and \(D\) a divisor on \(X\). For \(S\) noetherian consider a coherent sheaf \(\mathcal{F}\) on \(X\times S\) which is flat over \(S\). Let
\[\iota_{D}:D\times S\to X\times S\]
be the natural inclusion.
**Proposition 4.2.7**.: The sheaf \(\iota_{D}^{\star}\mathcal{F}\) is flat over \(S\) if for each closed point \(s\) of \(S\) the pullback \(\mathcal{F}_{s}\) has no sections supported on \(D\).
Proof.: Flatness is local so we pass to affine patches. Let \(M\) be a \(B\) module where \(B\) is an \(A\) algebra. Assume \(A,B\) are Noetherian and \(M\) is finitely generated over \(B\). Let \(f\in B\) be such that for any maximal ideal \(m\) of \(A\) multiplication by \(f\) is an injective map on \(M/mM\). Then flatness of \(M\) over \(A\) implies \(M/fM\) is flat over \(A\) by [16, Theorem 22.6].
Proposition 4.2.7 is evidence that transversality is the natural condition to consider if one hopes to extend the morphisms
\[r_{i}:\mathsf{Quot}(X,\mathcal{E})\to\mathsf{Quot}(D,\mathcal{E}|_{D})\]
discussed in the introduction. In the sequel we fix a sheaf \(\mathcal{E}\) on \(X\).
**Corollary 4.2.8**.: Let \(\mathsf{Quot}(X,\mathcal{E})^{o}\) be the moduli space whose fibre over a logarithmic scheme \(S\) consists of surjections of sheaves \(q:\mathcal{E}\to\mathcal{F}\) on \(X\times S\) such that \(\mathcal{F}\) is transverse over \(S\).
1. There is an open inclusion \[\mathsf{Quot}(X,\mathcal{E})^{o}\hookrightarrow\mathsf{Quot}(\underline{X}, \mathcal{E}).\]
2. Pullback to \(D_{i}\) defines a morphism \[\mathsf{Quot}(X,\mathcal{E})^{o}\to\mathsf{Quot}(D_{i},\iota_{D_{i}}^{\star} \mathcal{E})\]
Proof.: For statement (1) we need to verify the map is open, which follows from [16, Theorem 53]. Statement (2) follows directly from Proposition 4.2.7.
### Logarithmic surjections of coherent sheaves
Let \(\mathcal{E}\) be a coherent sheaf on a fine and saturated logarithmic scheme \(X\) defined over \(\operatorname{Spec}(\mathbb{C})\). The moduli space \(\mathsf{Quot}(X,\mathcal{E})^{o}\) is not proper. The solution is to study a moduli space whose \(S\) points are _logarithmic surjections of coherent sheaves_ from the pullback \(\mathcal{E}_{S}\) on \(X\times S\), flat over \(S\). To simplify notation we drop the subscript \(S\) from \(\mathcal{E}_{S}\).
**Definition 4.3.1**.: A logarithmic surjection of coherent sheaves on \(X\times S\) which is flat over \(S\) is written \([\pi_{\Gamma},q_{\Gamma}]\) and is defined to be an equivalence class of pairs
\[(\pi_{\Gamma}:(X\times S)_{\Gamma}\to X\times S,q_{\Gamma}:\pi_{\Gamma}^{ \star}\mathcal{E}\twoheadrightarrow\mathcal{F}_{\Gamma})\]
where \(\pi_{\Gamma}\) is a logarithmic modification and \(q_{\Gamma}\) is a surjection of coherent sheaves on \((X\times S)_{\Gamma}\). We require both \((X\times S)_{\Gamma}\) and \(\mathcal{F}_{\Gamma}\) to be logarithmically flat and integral over \(S\).
Given two pairs \(((X\times S)_{\Gamma},q_{\Gamma})\) and \(((X\times S)_{\Gamma^{\prime}},q_{\Gamma^{\prime}})\) set \(\Gamma^{\prime\prime}\) to be the common refinement of \(\Gamma,\Gamma^{\prime}\). There are two logarithmic modifications
\[\pi_{\Gamma^{\prime\prime},\Gamma}:X_{\Gamma^{\prime\prime}}\to X_{ \Gamma}\quad\pi_{\Gamma^{\prime\prime},\Gamma^{\prime}}:X_{\Gamma^{\prime \prime}}\to X_{\Gamma^{\prime}}.\]
The equivalence relation is the smallest which identifies \((\pi_{\Gamma},q_{\Gamma})\) with \((\pi_{\Gamma^{\prime}},q_{\Gamma^{\prime}})\) whenever
\[\pi_{\Gamma^{\prime\prime},\Gamma}^{\star}q_{\Gamma}=\pi_{\Gamma^{\prime\prime},\Gamma^{\prime}}^{\star}q_{\Gamma^{\prime}}.\]
We remark that the logarithmic modification of \(X\times S\) corresponding to the common refinement of \(\Gamma\) and \(\Gamma^{\prime}\) need not be integral over \(S\). The next proposition shows that logarithmic flatness is preserved.
**Proposition 4.3.2**.: Let \(V\) be logarithmically flat over \(S\) and let \(\mathcal{F}\) be a coherent sheaf on \(V\) which is logarithmically flat over \(S\). Given a logarithmic modification

\[V_{\Gamma}\to V\text{ such that }a^{\star}\Gamma\to\mathcal{S}\text{ is flat},\]

the pullback of \(\mathcal{F}\) to a sheaf \(\mathcal{F}_{\Gamma}\) on \(V_{\Gamma}\) is also logarithmically flat over \(S\).
Proof.: Logarithmic flatness of \(\mathcal{F}\) implies \(\mathcal{F}\) is flat over \(\mathcal{V}\times_{\mathcal{S}}S\). We are required to check that \(\mathcal{F}_{\Gamma}\) is flat over \(a^{\star}\Gamma\times_{\mathcal{S}}S\). A diagram chase shows all squares in the following diagram are cartesian.
Since flatness is preserved under base change [12, TAG 01U8], we deduce \(\mathcal{F}_{\Gamma}\) is flat over \(a^{\star}\Gamma\times_{\mathcal{S}}S\) and the result is proved.
### Proper monomorphisms and logarithmic surjections of coherent sheaves
The functor of points of Grothendieck's Hilbert scheme associated to \(X\) assigns to a scheme \(S\) the set of closed immersions to \(X\times S\) which are flat over \(S\). In the category of schemes closed immersions are precisely proper monomorphisms. Thus the data of an ideal sheaf is the same as the data of a proper monomorphism. In the category of logarithmic schemes a strict closed immersion is a proper monomorphism [13, Proposition 1.4, Property (v)], but there is a second class of proper monomorphisms.
**Lemma 4.4.1**.: Logarithmic modifications are proper monomorphisms.
Proof.: A logarithmic modification is the base change of a tropical model. It is easy to see tropical models are monomorphisms and being a monomorphism is preserved under base change. The map of logarithmic stacks corresponding to a tropical model is always proper and since properness is preserved under base change, the result is proved.
Let \(X\) be a logarithmic scheme whose Artin fan is \(a^{\star}\mathrm{Trop}(X)\) for some cone complex \(\mathrm{Trop}(X)\). The next lemma shows that any proper monomorphism to \(X\) can be understood in terms of logarithmic modification and strict closed immersion.
**Lemma 4.4.2**.: Let \(Z\to X\) be a proper monomorphism in the category of logarithmic schemes. There is a commutative square
where \(\pi_{Z}\) and \(\pi_{X}\) are logarithmic modifications and \(\iota_{\tilde{Z}}\) is a strict closed immersion.
Lemma 4.4.2 is asserting that an injection of logarithmic coherent sheaves on \(X\)
\[I_{Z}\hookrightarrow\mathcal{O}_{X}\]
is the same data as an equivalence class of proper monomorphisms to \(X\).
Proof.: If \(Z\) is atomic, the morphism \(g\) induces a morphism of cone complexes
\[\operatorname{Trop}(Z)\to\operatorname{Trop}(X)\]
and thus a piecewise linear subdivision of \(\operatorname{Trop}(X)\). If \(Z\) is not atomic, choose an open cover of \(Z\) by atomic schemes \(Z_{i}\). Each \(Z_{i}\) induces a subdivision. Take the common refinement to obtain the subdivision
\[\mathscr{T}(Z)\to\operatorname{Trop}(X).\]
Choose a tropical model of \(\mathscr{T}(Z)\) say
\[\Sigma\to\operatorname{Trop}(X)\]
and write \(\pi_{X}:\tilde{X}\to X\) for the corresponding logarithmic modification. Consider the fibre product of cone stacks of \(\operatorname{Trop}(Z)\) with \(\Sigma\) over \(\operatorname{Trop}(X)\) and write \(\pi_{Z}:\tilde{Z}\to Z\) for the corresponding logarithmic modification. Note the map \(\tilde{Z}\to X\) factors through \(\tilde{X}\) by the universal property of fibre product and by construction the map \(\tilde{Z}\to\tilde{X}\) is strict.
We know both \(g\) and \(\pi_{Z}\) are monomorphisms, so the composite \(\pi_{X}\circ\iota_{\tilde{Z}}=g\circ\pi_{Z}\) is a monomorphism and hence \(\iota_{\tilde{Z}}\) is a monomorphism. Thus \(\iota_{\tilde{Z}}\) is a strict proper monomorphism and is therefore a closed immersion [14, Proposition 1.4].
We have thus shown that a logarithmic surjection of coherent sheaves with domain \(\mathcal{O}_{X}\) is the data of an equivalence class of proper monomorphisms.
### The space of tropical supports
We turn our attention to \(a^{*}\mathsf{Supp}(X)\) which we denote \(\mathit{Supp}(\mathcal{X})\). Where there is no ambiguity, we drop \(\mathcal{X}\) when writing \(\mathit{Supp}\) and \(\mathsf{Supp}\). Tropical models of \(\mathsf{Supp}\) are denoted by a subscript,
\[\mathsf{Supp}_{\Sigma}\to\mathsf{Supp}.\]
We will write \(a^{*}\mathsf{Supp}_{\Sigma}=\mathit{Supp}_{\Sigma}\).
Consider a diagram with all horizontal morphisms tropical models and all vertical morphisms combinatorially flat.
#### 4.5.1. A flat map of Artin fans
The morphism of Artin fans
\[a^{*}\pi:a^{*}\mathcal{X}_{\Sigma}\to a^{*}\mathsf{Supp}_{\Sigma}\]
need not be flat. In this subsection we modify \(\mathcal{X}_{\Sigma}\) in a canonical way to make the map flat.
Our modification is specified on the level of cone complexes. In light of [14, Theorem 2.1.4] a morphism of Artin fans is flat if the corresponding morphism of cone complexes \(\mathcal{X}_{\Sigma}\to\mathsf{Supp}_{\Sigma}\) has the following two properties.
1. The image of every element of \(\mathcal{P}_{\mathcal{X}_{\Sigma}}\) lies in \(\mathcal{P}_{\mathsf{Supp}_{\Sigma}}\).
2. Every cone \(\sigma\) of \(\mathcal{X}_{\Sigma}\) mapping to a cone \(\tau\) of \(\mathsf{Supp}_{\Sigma}\) induces a surjection from the lattice of \(\sigma\) onto the lattice of \(\tau\).
Our definition of combinatorial flatness ensures that (1) always holds. We define a diagram

\[\mathcal{X}_{\Sigma}\leftarrow\mathcal{X}_{\Sigma}^{\prime}\to\mathit{Supp}_{\Sigma},\]

where the first morphism is a logarithmic modification defined by modifying the lattice and the second has properties (1) and (2).
**Construction 4.5.1**.: We define the morphisms

\[\mathcal{X}_{\Sigma}\leftarrow\mathcal{X}_{\Sigma}^{\prime}\to\mathit{Supp}_{\Sigma}.\]

We do so by replacing the local cones of \(\mathcal{X}_{\Sigma}\). Fix \(\mathfrak{s}=(N_{\mathfrak{s}},U_{\mathfrak{s}})\) a local cone of \(\mathcal{X}_{\Sigma}\). The map to \(\mathit{Supp}_{\Sigma}\) defines a map from \(\mathfrak{s}\) to a piecewise linear cone \(\mathfrak{t}=(N_{\mathfrak{t}},U_{\mathfrak{t}})\). This is the data of a monoid morphism
\[N_{\mathfrak{s}}\to N_{\mathfrak{t}}.\]
We define a new piecewise linear cone \((N_{\mathfrak{s}^{\prime}},U_{\mathfrak{s}^{\prime}})\) as follows. Whenever the image of \(\mathfrak{s}\) in \(\mathsf{Supp}\) has the same dimension as \(\mathfrak{s}\), we define the group \(N_{\mathfrak{s}^{\prime}}\) to be the subgroup of \(N_{\mathfrak{s}}\otimes\mathbb{Q}\) whose image in \(N_{\mathfrak{t}}\otimes\mathbb{Q}\) lies in \(N_{\mathfrak{t}}\). Identifying \(N_{\mathfrak{s}}\otimes\mathbb{R}=N_{\mathfrak{s}^{\prime}}\otimes\mathbb{R}\), we define \(U_{\mathfrak{s}^{\prime}}=U_{\mathfrak{s}}\). The abelian groups corresponding to the other cones are taken to be minimal compatible with the above choices.
We define the topological space \(|\mathcal{X}_{\Sigma}^{\prime}|=|\mathcal{X}_{\Sigma}|\) and set \(\mathcal{P}_{\mathcal{X}_{\Sigma}^{\prime}}=\mathcal{P}_{\mathcal{X}_{\Sigma}}\). We replace the homeomorphisms

\[f_{\kappa}:\overline{\kappa}\to\overline{U}_{\mathfrak{t}_{\kappa}}\text{ with }f_{\kappa}^{\prime}:\overline{\kappa}\to\overline{U}_{\mathfrak{t}_{\kappa}^{\prime}}.\]

The remaining data of the piecewise linear space \(\mathcal{X}_{\Sigma}^{\prime}\) is inherited from \(\mathcal{X}_{\Sigma}\).
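**Example**.: As a minimal illustration of the lattice change, suppose \(N_{\mathfrak{s}}=N_{\mathfrak{t}}=\mathbb{Z}\) and the monoid morphism is \(n\mapsto 2n\), so condition (2) fails since the map of lattices is not surjective. The construction replaces \(N_{\mathfrak{s}}\) by

\[N_{\mathfrak{s}^{\prime}}=\{x\in N_{\mathfrak{s}}\otimes\mathbb{Q}:2x\in N_{\mathfrak{t}}\}=\tfrac{1}{2}\mathbb{Z},\]

leaving \(U_{\mathfrak{s}}\) unchanged. The induced map \(N_{\mathfrak{s}^{\prime}}\to N_{\mathfrak{t}}\) sends \(\tfrac{1}{2}\mapsto 1\) and is surjective, as condition (2) requires.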
The algebro-geometric justification for making this change of lattice is that in the definition of logarithmic surjection of coherent sheaves we work up to logarithmic modification, and in particular up to changing the lattice. Thus, applying a logarithmic modification to the universal family over \(\mathit{Supp}\) is merely a convenience provided it preserves combinatorial flatness.
#### 4.5.2. Moduli of logarithmic modification
Suppose one is interested in studying the moduli problem of proper monomorphisms. Our logarithmic Hilbert scheme has nice properties, but we pay the price of working up to logarithmic modification.
A related question is to study the moduli space of logarithmic modifications of \(X\) with no equivalence relation (and not allowing more general proper monomorphisms). The resulting moduli space is the open subfunctor of \(\mathsf{Supp}\) obtained by discarding all strata admitting a map from a cone \(\sigma\) such that the universal family does not pull back to a cone complex over \(\sigma\).
## 5. Tropical support for constant degeneration
Consider the projective, logarithmically flat and integral morphism of fine and saturated logarithmic schemes \(V=X\times W\to W\). Let \(\mathcal{E}\) be a sheaf on \(V\) logarithmically flat over \(W\). In this section we restrict attention to the case that \(W=\operatorname{Spec}(A)\) is an affine atomic logarithmic scheme with Artin fan \(\mathcal{W}=a^{\star}\mathrm{Trop}(W)\) for \(\mathrm{Trop}(W)\) a cone. We moreover impose that the image of \(W\) in its Artin fan \(\mathcal{W}\) is a single point. In particular there exists a cone complex \(\mathrm{Trop}(V)\) such that there is an equality of Artin fans
\[\mathcal{V}=\mathcal{X}\times\mathcal{W}=a^{\star}\mathrm{Trop}(V).\]
**Goal 5.0.1**.: To a logarithmic surjection of coherent sheaves
\[q=[\pi_{\Gamma}:V_{\Gamma}\to V,q_{\Gamma}:\pi_{\Gamma}^{\star}\mathcal{E}\to \mathcal{F}_{\Gamma}]\]
on \(V\) which is flat over \(W\) we associate a subdivision
\[\mathscr{T}(q)\to\mathrm{Trop}(V).\]
We call the piecewise linear space \(\mathscr{T}(q)\) the _tropical support_ of \(q\). A useful heuristic is that tropical support tracks the minimal logarithmic modification
\[\pi_{\Gamma}:V_{\Gamma}\to V\text{ such that we can write }q=[\pi_{\Gamma},q_{\Gamma}].\]
Morally none of our hypotheses on \(W\) are necessary and we handle the general case in Section 6.
### Defining tropical support
Fix a representative \((\pi_{\Gamma},q_{\Gamma})\) for \(q\) such that \(V_{\Gamma}\to W\) is projective and write \(\Gamma\to\mathrm{Trop}(V)\) for the tropical model corresponding to \(\pi_{\Gamma}\). Note \(V\) has a locally closed stratification pulled back from its Artin fan with strata \(V_{\sigma}\) of \(V\) indexed by cones \(\sigma\) of \(\mathrm{Trop}(V)\). Similarly \(V_{\Gamma}\) has a locally closed stratification with strata \(O(\gamma)\) indexed by cones \(\gamma\) of \(\Gamma\). We write \(\Gamma(\sigma)\) for the set of cones of \(\Gamma\) whose interior is mapped to the interior of a cone \(\sigma\) of \(\mathrm{Trop}(V)\).
#### 5.1.1. Torus actions and subdivisions
Fix a dimension \(k\) cone \(\gamma\) of \(\Gamma(\sigma)\) and note there is an inequality \(\dim(\sigma)=\ell\geq k\). There is a cartesian diagram
The cocharacter lattice of \(\mathbb{G}_{m}^{k}\) may be identified with \(\gamma^{\mathrm{gp}}\). The inclusion of \(\gamma^{\mathrm{gp}}\) into \(\sigma^{\mathrm{gp}}\) defines a morphism of tori \(\mathbb{G}_{m}^{k}\to\mathbb{G}_{m}^{\ell}\) inducing the map \(g\). There is a free action of a torus \(T_{\gamma}\cong\mathbb{G}_{m}^{\ell-k}\) on \(O(\gamma)\) defined by automorphisms of \(\mathbb{G}_{m}^{\ell}\) up to the action of \(\mathbb{G}_{m}^{k}\). The cocharacter lattice of the torus \(T_{\gamma}\) is thus identified with the lattice \(M_{\sigma}(\gamma)=\sigma^{\mathrm{gp}}/\gamma^{\mathrm{gp}}\).
A subtorus of \(T_{\gamma}\) is then the data of a saturated subgroup of \(M_{\sigma}(\gamma)\). The torus \(T_{\gamma}\) acts on the set of surjections
\[\{q_{\Gamma,\gamma}:\pi_{\Gamma}^{\star}\mathcal{E}|_{V(\gamma)}\to\mathcal{F} |_{V(\gamma)}\}\]
by pullback. Denote the maximal subtorus of \(T_{\gamma}\) stabilising \(q_{\Gamma,\gamma}\) by \(T_{\gamma}(q_{\Gamma})\) and the corresponding subgroup of \(M_{\sigma}(\gamma)\) by \(L_{\gamma}(q_{\Gamma})\). Observe there is an identification between cones in \(\Gamma(\sigma)\) of which \(\gamma\) is a face and cones of the star fan \(\mathrm{St}_{\gamma}\).
**Theorem/Definition 5.1.1**.: Fix \(q_{\Gamma}:\pi_{\Gamma}^{\star}\mathcal{E}\to\mathcal{F}_{\Gamma}\) a surjection of sheaves on a logarithmic modification \(\pi_{\Gamma}:V_{\Gamma}\to V\) with \(\mathcal{F}_{\Gamma}\) logarithmically flat over \(W\). There is an initial piecewise linear subdivision \(\mathscr{T}(q_{\Gamma})\to\mathrm{Trop}(V)\) with the following properties.
1. It is possible to factor \(\Gamma\to\mathscr{T}(q_{\Gamma})\to\mathrm{Trop}(V)\).
2. Let \(\gamma\leq\gamma^{\prime}\) be cones of \(\Gamma(\sigma)\) in \(\mathrm{Trop}(V)\) where \(a_{\sigma}\) lies in the image of the map \(V\to\mathcal{X}\times\mathcal{W}\). The cone \(\mathrm{St}_{\gamma}(\gamma^{\prime})\) is contained in \(L_{\gamma}(q)\) if and only if \(\gamma^{\prime}\) and \(\gamma\) lie in the same stratum of \(\mathcal{P}_{\mathscr{T}(q_{\Gamma})}\).
Moreover \(\mathscr{T}(q_{\Gamma})\) depends only on the logarithmic surjection of coherent sheaves \(q=[\pi_{\Gamma},q_{\Gamma}]\). We define the piecewise linear complex \(\mathscr{T}(q)=\mathscr{T}(q_{\Gamma})\) to be the _tropical support_ of \(q\).
Proof of this theorem occupies the remainder of this section. It will suffice to check the theorem Zariski locally on \(V\). Indeed, given \(\mathscr{T}(q|_{U_{i}})\) for \(U_{i}\) a cover of \(V\), the piecewise linear space \(\mathscr{T}(q)\) is the common refinement of the piecewise linear subdivisions \(\mathscr{T}(q|_{U_{i}})\). The reason we imposed that the subdivision be initial is that the map \(V\to\mathcal{V}\) may not be surjective; initiality ensures uniqueness.
### Logarithmic flatness and Grobner theory
We consider the following diagram in which the vertical maps are flat
Choosing cones \(\gamma\leq\gamma^{\prime}\) of \(\Gamma(\sigma)\), we are interested in understanding the relation between \(q|_{O(\gamma)}\) and \(q|_{O(\gamma^{\prime})}\). Our strategy is to work in coordinates where the problem reduces to Grobner theory.
#### 5.2.1. Notation
The cone \(\operatorname{St}_{\gamma}(\gamma^{\prime})\) defines an affine toric variety \(Y=Y(\gamma^{\prime})\) over \(\mathbb{C}\). Note \(\gamma^{\prime}\) identifies an open subset \(U=U(\gamma^{\prime})\) in \(V(\gamma)\) which contains \(O(\gamma^{\prime})\) as a locally closed subscheme. Shrinking \(V\) such that \(V_{\sigma}=\operatorname{Spec}(A)\) is a sufficiently small affine, we may write a map of rings
\[A\to A\otimes\mathbb{C}[Y]\text{ corresponding to }U\to V_{\sigma}.\]
The pullback of \(\mathcal{E}\) to \(V_{\sigma}\) has global sections defined by an \(A\) module \(E\). In the sequel we often write \(A[Y]=A\otimes\mathbb{C}[Y]\).
Restricting the surjection \(q\) to \(U(\gamma^{\prime})\) and taking global sections defines a short exact sequence
\[0\to G\to E\otimes_{A}A[Y]\xrightarrow{q|_{U}}F\to 0.\]
Here \(G\) is defined such that the sequence is exact. Write \(Y^{o}\) for the dense torus of \(Y\) and let \(G^{o}\) be the \(A[Y^{o}]\) module generated by the image of \(G\) under the localisation map
\[E\otimes_{A}A[Y]\to E\otimes_{A}A[Y^{o}].\]
The global sections of \(q|_{O(\gamma)}\) fit into the short exact sequence
\[0\to G^{o}\to E\otimes_{A}A[Y^{o}]\xrightarrow{\Gamma(q|_{O(\gamma)})}F^{o} \to 0.\]
Write \(A[Y]=A[X_{1},...,X_{k},X_{k+1}^{\pm 1},...,X_{n}^{\pm 1}]\) and \(A[\overline{Y}]=A[X_{k+1}^{\pm 1},...,X_{n}^{\pm 1}]\). We can then write a short exact sequence
\[0\to G^{\prime}\to E\otimes_{A}A[\overline{Y}]\xrightarrow{\Gamma(q|_{O( \gamma^{\prime})})}F^{\prime}\to 0.\]
Our task is to relate \(G^{o}\) and \(G^{\prime}\).
#### 5.2.2. Initial degenerations for modules
We define a _monomial_ of \(A[Y]=A[X_{1},...,X_{k},X_{k+1}^{\pm 1},...,X_{n}^{\pm 1}]\) to be an element of the form \(aX_{1}^{j_{1}}\cdots X_{n}^{j_{n}}\) for some \(a\in A\), and a _primitive monomial_ if \(a=1\). A _monomial_ of \(E\otimes_{A}A[Y]\) is any element of the form \(e\otimes m\) for \(e\in E\) and \(m\) a monomial of \(A[Y]\).
We define a _term order_ \(w\) to be a morphism of monoids from the set of monomials under multiplication to the real numbers under addition. Let \(f=\sum_{i}m_{i}\in E\otimes_{A}A[Y^{o}]\), where the \(m_{i}\) are monomials, and let \(G^{o}\subset E\otimes_{A}A[Y^{o}]\) be a submodule. The _initial forms_ of \(f\) and of \(G^{o}\) with respect to a term order \(w\) are respectively

\[\operatorname{in}_{w}(f)=\sum_{w(m_{i})\text{ maximal}}m_{i}\in E\otimes_{A}A[Y]\text{ and }\operatorname{in}_{w}(G^{o})\text{ the }A[Y]\text{ module generated by }\{\operatorname{in}_{w}(f)\,|\,f\in G^{o}\}.\]
The same definitions work replacing \(G^{o}\) by \(G\).
Note a monomial on a toric variety \(Y\) specifies a character of \(Y^{o}\). Given a cocharacter \(w\) the pairing between characters and cocharacters assigns to every monomial an integer. An element of the cocharacter lattice of \(Y^{o}\) thus specifies a term order on monomials of \(\mathbb{C}[Y^{o}]\) and thus on monomials of \(E\otimes_{A}A[Y^{o}]\).
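**Example**.: For a concrete instance of these definitions, take \(A=\mathbb{C}\), \(E=A\) and \(Y=\mathbb{A}^{2}\), so \(A[Y^{o}]=\mathbb{C}[X_{1}^{\pm 1},X_{2}^{\pm 1}]\). The cocharacter \(w=(1,0)\) assigns the monomial \(X_{1}^{a}X_{2}^{b}\) the value \(a\), so for \(f=X_{1}+X_{2}\) we find

\[\operatorname{in}_{(1,0)}(f)=X_{1},\quad\operatorname{in}_{(0,1)}(f)=X_{2},\quad\operatorname{in}_{(1,1)}(f)=X_{1}+X_{2},\]

the last because both monomials have maximal weight.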
#### 5.2.3. Pullback to a trait
Let \(S=\operatorname{Spec}(R)\) be a trait with generic point \(\eta\) and closed point \(s\). For a cone \(\gamma\) of \(\Gamma\) write \(a_{\gamma}\) for the associated point of \(a^{\star}\Gamma\). Choose a morphism \(\varphi:S\to a^{\star}\Gamma\) such that the image of the generic point is \(a_{\gamma}\) and the image of the special point is \(a_{\gamma^{\prime}}\). We probe flatness by studying the following enhancement of the diagram considered at the start of this section.
This diagram is defined by declaring all squares cartesian.
We will assume \(\gamma\) is the zero cone. The argument is identical in the general case, except that every short exact sequence must be tensored by the flat \(\mathbb{C}\) module \(\mathbb{C}[T_{1}^{\pm 1},...,T_{\dim(\gamma)}^{\pm 1}]\) at every step. This has no effect on our argument except to complicate notation.
We use two facts about this diagram. First \(\varpi\) is flat because it is the base change of a flat map. Second pulling back \(q\) to \(V_{s}\) yields a short exact sequence
\[0\to G_{\varphi}=G^{\prime}\otimes_{\mathbb{C}}M\to E\otimes_{A}A[\overline{Y} ]\otimes_{\mathbb{C}}M\xrightarrow{q_{s}}F^{\prime}\otimes_{\mathbb{C}}M\to 0\]
Here \(M\) is the coordinate ring of a torus of rank \(\dim(\gamma^{\prime})-\dim(\gamma)\). It arises because the image of \(s\) is stacky. Note that \(G_{\varphi}\) depends only on the restriction of \(\varphi\) to its special point. We choose an identification
\[A[\overline{Y}]\otimes M\to A[Y^{o}]\]
extending the natural map \(A[\overline{Y}]\to A[Y^{o}]\). In the sequel we think of \(G_{\varphi}\) as a submodule of \(E\otimes A[Y^{o}]\).
#### 5.2.4. Initial degenerations and traits
The morphism \(\varphi\) specifies a cocharacter of the torus \(Y^{o}\) and thus an element of the cocharacter lattice \(m_{\varphi}\in M_{Y^{o}}\), inducing a term order \(w_{\varphi}\).
**Proposition 5.2.1**.: If \(\mathcal{F}\) is logarithmically flat then there is an equality of submodules
\[G_{\varphi}=\operatorname{in}_{w_{\varphi}}(G^{o})\leq\Gamma(Y_{s},\iota^{\star}\pi_{\Gamma}^{\star}\mathcal{E})=E\otimes_{A}A[Y^{o}].\]
Proof.: Let \(m^{\prime}:U\times Y^{o}\to U\times Y^{o}\) be the torus multiplication map. Consider a diagram
Here we have used the fact \(U=V_{\sigma}\times Y\) to define \(\pi_{Y}\) as the composition \(U\times Y^{o}\to U\to W\times Y\). Pulling back \(q\) along \(\pi_{U}\) defines a short exact sequence of sheaves which on the level of global sections is given by
\[0\to\ker(\pi_{U}^{\star}q)\to E\otimes_{A}A[Y]\otimes_{\mathbb{C}}\mathbb{C}[ Y^{o}]\xrightarrow{\pi_{U}^{\star}q}F\to 0\]
Observe \(\ker(\pi_{U}^{\star}q)\) is the submodule of \(E\otimes_{A}A[Y]\otimes_{\mathbb{C}}\mathbb{C}[Y^{o}]\) generated by \(g\otimes f\) where \(g\in G\) and \(f\) lies in \(\mathbb{C}[Y^{o}]\). On the level of coordinate rings \(m^{\prime}\) is specified by
\[m^{\prime\#}:A[Y]\otimes_{\mathbb{C}}\mathbb{C}[Y^{o}]\to A[Y]\otimes_{ \mathbb{C}}\mathbb{C}[Y^{o}].\]
\[m\otimes f\mapsto m\otimes mf\text{ whenever }m\text{ is a monomial in }\mathbb{C}[Y].\]
Consequently we have a short exact sequence of global sections
\[0\to\ker(m^{\prime\star}q)\to E\otimes_{A}A[Y]\otimes_{\mathbb{C}}\mathbb{C}[Y^{o}]\to F\otimes_{A}A[Y^{o}]\to 0\]
where we have
\[\ker(m^{\prime\star}q)=\left\langle\sum_{i}(e_{i}\otimes a_{i}m_{i})\otimes m_{i}\,\middle|\,m_{i}\text{ a monomial and }\sum_{i}e_{i}\otimes a_{i}m_{i}\in G^{o}\right\rangle.\]
Pulling back along \(\varphi\) yields the short exact sequence of global sections of sheaves on \(S\times Y^{o}\)
\[0\to G_{S}\to E\otimes_{A}A[Y^{o}]\otimes R\xrightarrow{q_{S}}F_{S}\to 0. \tag{1}\]

and we learn \(\ker(q_{S})\) is generated by \(\sum_{i}e_{i}\otimes a_{i}m_{i}\otimes\varphi^{\#}(m_{i})\) where \(\sum_{i}e_{i}\otimes a_{i}m_{i}\) is an element of \(G\) and the \(m_{i}\) are monomials. Write \(\pi\) for the uniformiser of \(R\). Flatness of \(F_{S}\) over \(R\) ensures that if \(\pi g\in G_{S}\) for \(g\in E\otimes_{A}A[Y^{o}]\otimes R\) then \(g\in G_{S}\). Restricting to the special fibre of \(S\), that is quotienting by the ideal \((\pi\otimes 1)\), yields the submodule \(\operatorname{in}_{w_{\varphi}}(G^{o})\).
**Corollary 5.2.2**.: As \(w\) varies within the interior of any cone of the star fan of \(\gamma\) the module \(\operatorname{in}_{w}(G^{o})\) is constant.
Proof.: Consider \(w\) a cocharacter of \(Y^{o}\) specifying a point of \(M_{Y}\) within the fan of \(Y\). This is the data of a morphism \(S^{o}\to(\mathbb{C}^{\star})^{n}\). Since \(w\) is contained in the support of the fan of \(Y\), the morphism from \(S^{o}\) extends to a morphism from \(S\) to \(Y\). We thus identify \(G_{\varphi}\) with \(\operatorname{in}_{w}(G^{o})\), but the former depends only upon the cone in which \(w\) lies.
### Grobner theory and torus actions
There is a close link between Grobner theory and torus actions. We continue notation from the previous section. Cocharacters of a torus specify the data of a one dimensional subtorus. Given cocharacter \(w\) of \(Y^{o}\) we write \(T_{w}\) for the associated subtorus.
**Lemma 5.3.1**.: The morphism \(q|_{V(\gamma)}:\pi_{\Gamma}^{\star}E|_{V(\gamma)}\rightarrow\mathcal{F}_{ \Gamma}|_{V(\gamma)}\) is invariant under the action of the one dimensional subtorus \(T_{w}\) if and only if
\[\operatorname{in}_{w}\big{(}\Gamma(O(\gamma),\ker(q|_{V(\gamma)}))\big{)}= \Gamma(O(\gamma),\ker(q|_{V(\gamma)})).\]
Proof.: First note \(\Gamma(O(\gamma),\ker(q|_{V(\gamma)}))=G^{o}\). The morphism \(q|_{V(\gamma)}\) is characterised by its kernel, so we need to understand when \(\ker(q|_{V(\gamma)})\) is fixed by our torus action. The submodule \(G^{o}\) is fixed if and only if \(\operatorname{in}_{w}(G^{o})=G^{o}\).
**Corollary 5.3.2**.: The set of points \(w\) in the cocharacter lattice of \(Y^{o}\) such that \(\operatorname{in}_{w}(G^{o})=G^{o}\) forms a saturated subgroup. If \(\mathcal{F}\) is logarithmically flat then this subgroup is a union of cones.
Proof.: The first sentence follows from Lemma 5.3.1. The second sentence also uses Corollary 5.2.2.
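**Example**.: Returning to the torus \(Y^{o}=\mathbb{G}_{m}^{2}\) of the example above, take \(G^{o}=(X_{1}-X_{2})\subset\mathbb{C}[X_{1}^{\pm 1},X_{2}^{\pm 1}]\). Since the ring is a domain, \(\operatorname{in}_{w}\) is multiplicative, so for \(w=(a,b)\)

\[\operatorname{in}_{w}(G^{o})=\begin{cases}(X_{1})&a>b\\ (X_{2})&a<b\\ (X_{1}-X_{2})&a=b.\end{cases}\]

The locus \(\{w:\operatorname{in}_{w}(G^{o})=G^{o}\}\) is the diagonal \(\{(a,a)\}\): a saturated subgroup of \(\mathbb{Z}^{2}\) which is the union of the two cones \(\mathbb{R}_{\geq 0}(1,1)\) and \(\mathbb{R}_{\geq 0}(-1,-1)\), as Corollary 5.3.2 predicts. The corresponding subtorus is the diagonal \(\mathbb{G}_{m}\), which visibly stabilises the ideal \((X_{1}-X_{2})\).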
We work in the setup of Section 5.1. The next lemma is important because it suggests how to prove Theorem 5.1.1. Meditating on the consequences of this lemma suggests the right definition of piecewise linear space. Lemma 5.3.3 says we can do a sort of global version of Grobner theory: thinking about cones without thinking explicitly about coordinate patches.
**Lemma 5.3.3**.: There are cones \(\kappa_{1},...,\kappa_{n}\) in \(\operatorname{St}_{\gamma}\) such that there is an equality of submonoids
\[\bigcup_{i}(\kappa_{i}\cap M_{\sigma}(\gamma))=L_{\gamma}(q)\leq M_{\sigma}( \gamma).\]
Proof of Lemma 5.3.3.: Observe \(V_{\sigma}\) contains an open subset \(V_{\sigma}^{o}\) which is the preimage of the point \(a_{\sigma}\) in \(\mathcal{V}\) associated to the cone \(\sigma\). The preimage \(V^{\prime}(\gamma)\) of \(V_{\sigma}^{o}\) in \(V(\gamma)\) is a toric variety bundle over \(V_{\sigma}^{o}\). Take an open cover \(\{U_{i}\}\) of \(V_{\sigma}^{o}\) whose preimage in \(V(\gamma)\) is isomorphic to \(U_{i}\times Y\) for \(Y\) a toric variety. Let \(L_{i}\) be the maximal subtorus of \(T_{\gamma}\) which stabilises \(q|_{U_{i}}\). In the case \(V_{\sigma}^{o}=V_{\sigma}\) we know \(T_{\gamma}(q)=\cap_{i}L_{i}\); in fact this holds in general since being torus fixed is a closed condition and \(V\) is logarithmically flat. It thus suffices to check each \(L_{i}\) is a union of cones. Without loss of generality \(U_{i}\) is affine and the result follows from Corollary 5.3.2.
### Construction
We now explain how to construct the piecewise linear space \(\mathscr{T}(q_{\Gamma})\). We do so by specifying a locally closed stratification of the interior of each cone \(\sigma\) of \(\operatorname{Trop}(V)\).
**Construction 5.4.1**.: For each choice of \(\sigma\) a cone of \(\operatorname{Trop}(V)\) with \(a_{\sigma}\) in the image of \(V\to\mathcal{X}\times\mathcal{W}\) we define a locally closed stratification \(\mathcal{P}_{\sigma}\) of \(|\sigma|^{o}\) as follows. Define an equivalence relation \(\sim\) on polyhedra of \(\Gamma(\sigma)\) such that \(\gamma\sim\gamma^{\prime}\) if \(\gamma\) is a face of \(\gamma^{\prime}\) and the image of \(\gamma^{\prime}\) in \(\operatorname{St}_{\gamma}\) lies in \(L_{\gamma}(q_{\Gamma})\). We specify the stratification \(\mathcal{P}_{\sigma}\) of the interior of \(\sigma\) by declaring \(p,q\in|\sigma|^{o}\) to lie in the same stratum if and only if \(p,q\) lie in the interiors of cones related by \(\sim\).
For each choice of \(\sigma\) a cone of \(\operatorname{Trop}(V)\) with \(a_{\sigma}\) not in the image of \(V\to\mathcal{X}\times\mathcal{W}\) we define a locally closed stratification \(\mathcal{P}_{\sigma}\) of \(|\sigma|^{o}\) as follows. Define an equivalence relation \(\sim\) on polyhedra of \(\Gamma(\sigma)\) such that \(\gamma\sim\gamma^{\prime}\) if, whenever \(\sigma\) is a face of \(\sigma^{\prime}\) and \(\sigma^{\prime}\) is as in the first paragraph, \(\gamma\) and \(\gamma^{\prime}\) lie in the closure of the same stratum of \(\mathcal{P}_{\sigma^{\prime}}\).
We define \(\mathscr{T}(q)\) to be the unique piecewise linear space such that the subset of \(\mathcal{P}_{\mathscr{T}(q)}\) mapping to the interior of \(\sigma\) is \(\mathcal{P}_{\sigma}\) for all \(\sigma\).
We next verify that the piecewise linear space \(\mathscr{T}(q)\) built in Construction 5.4.1 exists. Our task is to check the \(\mathcal{P}_{\sigma}\) fit together to define a locally closed stratification. We first handle strata in \(\Gamma(\sigma)\) for a fixed \(\sigma\).
**Lemma 5.4.2**.: Fixing any cone \(\sigma\), there is a piecewise linear space subdividing \(\sigma\) whose restriction to the interior of \(\sigma\) is \(\mathcal{P}_{\sigma}\).
Proof.: It suffices to handle the case that \(a_{\sigma}\) lies in the image of \(V\to\mathcal{X}\times\mathcal{W}\). Lemma 5.3.3 implies each stratum of \(\mathcal{P}_{\sigma}\) naturally carries the structure of a piecewise linear cone. It remains to check \(\mathcal{P}_{\sigma}\) is a bona fide stratification. This follows by observing that if \(\gamma\leq\gamma^{\prime}\) are elements of \(\Gamma(\sigma)\) and \(q_{\Gamma}|_{O(\gamma)}\) is fixed by a subtorus of \(Y_{\gamma}^{o}\), then \(q_{\Gamma}|_{O(\gamma^{\prime})}\) is fixed by the same subtorus. This property can be checked on any affine cover of \(X\) and thus follows from our Grobner theory characterisation of being torus fixed.
It remains to check that the locally closed stratifications \(\mathcal{P}_{\sigma}\) fit together to form a piecewise linear complex as claimed. The only work we must do is verify that the locally closed stratification \(\mathcal{P}_{\mathscr{T}(q)}\) is a stratification, not just a semistratification.
**Lemma 5.4.3**.: Let \(\kappa,\kappa^{\prime}\) be strata of \(\mathcal{P}_{\mathscr{T}(q)}\) such that \(\kappa\) intersects the closure of \(\kappa^{\prime}\). Then \(\kappa\) lies in the closure of \(\kappa^{\prime}\).
Proof.: We split the proof into three cases.
**Case I.** If \(\kappa,\kappa^{\prime}\) lie in the interior of the same cone \(\sigma\) of \(\operatorname{Trop}(V)\) then the result follows from Lemma 5.4.2.
**Case II.** We consider the case that \(\kappa,\kappa^{\prime}\) lie in the interiors of different cones \(\sigma\leq\sigma^{\prime}\) both of whose interiors map to the interior of the same cone of \(\operatorname{Trop}(W)\). Assume moreover that the associated point of \(\mathcal{W}\) lies in the image of the map \(W\to\mathcal{W}\). We must check that there are no cones \(\gamma\leq\gamma^{\prime}\) of \(\Gamma(\sigma),\Gamma(\sigma^{\prime})\) respectively with the following property. Observe a one dimensional sublattice of \(\sigma^{\mathsf{gp}}\) specifies a one dimensional sublattice of \((\sigma^{\prime})^{\mathsf{gp}}\) by inclusion. Thus it makes sense to define our property as \(L_{\gamma^{\prime}}(q)<L_{\gamma}(q)\).
We define \(K=\ker(\pi_{\Gamma}^{\star}\mathcal{E}\to\mathcal{F}_{\Gamma})\) and note this kernel characterises \(q_{\Gamma}\). We know there are no sections of \(\mathcal{F}_{\Gamma}|_{V(\gamma)}\) whose support is contained within \(V(\gamma^{\prime})\). Knowing \(K|_{O(\gamma)}\) thus determines \(K|_{V(\gamma)}\) as the maximal subsheaf of \(\pi_{\Gamma}^{\star}\mathcal{E}\) whose restriction to \(O(\gamma)\) is \(K|_{O(\gamma)}\). Thus if \(K|_{O(\gamma)}\) is preserved when pulling back along some isomorphism, so is \(K|_{V(\gamma)}\) and thus \(K|_{V(\gamma^{\prime})}\).
**Case III.** Now suppose \(\kappa,\kappa^{\prime}\) lie in different cones \(\sigma\leq\sigma^{\prime}\) which map to different cones of \(\operatorname{Trop}(W)\). Then the result is a property of piecewise linear spaces following from point set topology.
### Proof of Theorem 5.1.1
We are now in a position to verify that the piecewise linear space of Construction 5.4.1 satisfies the properties stated in Theorem 5.1.1.
Proof of Theorem 5.1.1.: Properties (1) and (2) are clear. We check the tropical support is independent of the choice of \(\Gamma\). Consider a refinement of subdivisions
\[\Gamma^{\prime}\to\Gamma\to\operatorname{Trop}(V)\]
with corresponding logarithmic modification
\[V_{\Gamma^{\prime}}\xrightarrow{\pi_{\Gamma^{\prime},\Gamma}}V_{\Gamma} \xrightarrow{\pi_{\Gamma}}V\]
It suffices to check \(\mathscr{T}(\pi_{\Gamma^{\prime},\Gamma}^{\star}q_{\Gamma})=\mathscr{T}(q_{\Gamma})\). Equivalently the stratifications \(\mathcal{P}_{\mathscr{T}(q_{\Gamma})}\) and \(\mathcal{P}_{\mathscr{T}(\pi_{\Gamma^{\prime},\Gamma}^{\star}q_{\Gamma})}\) of \(|\operatorname{Trop}(V)|\) coincide. Let \(\gamma^{\prime}\) be a stratum of \(\Gamma^{\prime}\) with image contained in \(\gamma\) a stratum of \(\Gamma\). If \(\gamma\) has the same dimension as \(\gamma^{\prime}\) the strata \(O(\gamma)\) and \(O(\gamma^{\prime})\) are isomorphic and the associated surjections of sheaves may be identified. Consequently \(L_{\gamma}(q)=L_{\gamma^{\prime}}(q^{\prime})\). In general, since \(q_{\Gamma^{\prime}}\) is pulled back from \(q_{\Gamma}\) we have \(L_{\gamma}(q)=L_{\gamma^{\prime}}(q^{\prime})\).
## 6. Tropical support in families
We continue with the notation introduced at the start of Section 5.1 except we drop all assumptions on \(W\). For expository convenience we pretend throughout that \(W\) has locally connected logarithmic strata and thus admits an Artin fan. In Remark 6.3.5 we explain how to adapt our discussion to remove this hypothesis.
**Goal 6.0.1**.: To a logarithmic surjection of coherent sheaves
\[q=[\pi_{\Gamma}:(X\times W)_{\Gamma}\to X\times W,q_{\Gamma}:\pi_{\Gamma}^{ \star}\mathcal{E}\to\mathcal{F}_{\Gamma}]\]
on \(X\) flat over \(W\) we associate a map
\[\mathcal{W}\to\mathit{Supp}.\]
### Tropical support over atomic logarithmic schemes
All of the difficulty in achieving Goal 6.0.1 appears when \(W\) is atomic and has locally connected logarithmic strata. In this situation the Artin fan of \(X\times W\) is \(\mathcal{X}\times\mathcal{W}\). Our strategy is to use the universal property of \(\mathsf{Supp}\). The work we do in this section comes down to checking the definition of tropical support generalises nicely from the case that the image of \(W\) in \(\mathcal{W}\) is a single point.
We now define tropical support for \(W\) atomic with Artin fan \(a^{\star}\sigma\) for some cone \(\sigma\). The preimage of the closed point of \(a^{\star}\sigma\) is a non-empty closed subscheme \(V_{W}(\sigma)\) of \(W\). For \(q\) a logarithmic surjection of coherent sheaves flat over \(W\) define \(\mathscr{T}(q)\) to be \(\mathscr{T}(q|_{V_{W}(\sigma)})\) as defined in the last section. By definition this is a combinatorially flat morphism of piecewise linear spaces, defining a map
\[\mathcal{W}\to\mathit{Supp}.\]
It is far from clear that this construction satisfies descent in the strict etale topology as we vary \(W\). In the remainder of this section we check strict descent.
### The relative Quot scheme trick
The relative Quot scheme trick is a local version of Maulik and Ranganathan's technique for constructing logarithmic Donaldson-Thomas spaces [14]. Assume \(W\) is atomic and fix a tropical model \(\Gamma\to\operatorname{Trop}(V)\) such that \(V_{\Gamma}\) is integral over \(W\). There is a cartesian diagram
and the sheaf \(\pi_{\Gamma}^{\star}\mathcal{E}\) is pulled back from a sheaf \(\mathcal{E}_{\Gamma}\) on \(\mathcal{X}_{\Gamma}\). Being logarithmically flat and integral over a base implies being flat, so a representative of a logarithmic surjection of coherent sheaves on \(V\) is specified by a \(\underline{W}\) point of Grothendieck's relative Quot scheme \(Q=\operatorname{Quot}(\underline{\mathcal{X}_{\Gamma}}/\mathcal{W},\mathcal{E}_{\Gamma})\). Here we adopt the notation that for any logarithmic scheme \(V\), \(\underline{V}\) denotes the underlying scheme.
Not all \(\underline{W}\) points of \(Q\) give rise to logarithmic surjections of coherent sheaves because logarithmic flatness is stronger than flatness. We will often restrict attention to the open substack \(Q^{o}\) of \(Q\) whose points specify logarithmically flat quotients. This technique is useful because tropical support can be understood in terms of locally closed substacks of \(Q^{o}\).
### Strict descent and base change
A strict map of atomic logarithmic schemes \(f:V\to W\) induces a map of Artin fans \(a^{\star}f_{\operatorname{trop}}:\mathcal{V}=a^{\star}\tau\to\mathcal{W}=a^{ \star}\sigma\). A family of tropical supports
\[\mathscr{T}\to\operatorname{Trop}(X)\times\sigma\to\sigma\]
over \(\sigma\) pulls back along the map of cones \(f_{\operatorname{trop}}\) to define a family of tropical supports
\[f_{\operatorname{trop}}^{\star}\mathscr{T}\to\operatorname{Trop}(X)\times\tau\to\tau\]
over \(\tau\).
**Proposition 6.3.1**.: The subdivision \(\mathscr{T}(q)\) respects strict base change. More precisely, in the situation of the previous paragraph
\[f_{\operatorname{trop}}^{\star}\mathscr{T}(q)=\mathscr{T}(f^{\star}q).\]
To prove Proposition 6.3.1 we develop the relative Quot scheme trick introduced in the last section.
#### 6.3.1. Constant degeneration implies constant tropical support
Assume now that the image of \(W\) in \(\mathcal{W}\) consists of a single (stacky) point \(a_{s}\). There is a diagram with all squares cartesian
To avoid stack issues pull \(\mathcal{X}_{s}\) back along a map from \(\operatorname{Spec}(\mathbb{C})\) to \(a_{s}\) to define a scheme \(X_{s}\). In the relative Quot scheme trick we consider morphisms \(W\to\operatorname{Quot}(X_{s},\mathcal{E}_{\Gamma})=Q\). Observe any map from a scheme \(S\) to the open subset \(Q^{o}\) of \(Q\) includes the data of a map from \(S\) to \(\mathcal{W}\) which specifies a logarithmic structure on \(S\) by pullback. In this way a point of \(Q^{o}\) gives the data of a logarithmic surjection of coherent sheaves flat over a log point. The tropical support of a point of the scheme \(Q^{o}\) is the tropical support of this logarithmic surjection of coherent sheaves.
**Lemma 6.3.2**.: Proposition 6.3.1 holds whenever the image of \(W\) in \(\mathcal{W}\) consists of a single point.
Lemma 6.3.2 reinterprets and generalises [14, Lemma 4.3.2]. It establishes that, even without the hypothesis that the image of \(W\) is a point, there is a locally closed stratification of \(Q^{o}\) on which tropical support is well understood. Understanding tropical support in families then amounts to understanding how these locally closed pieces fit together.
**Remark 6.3.3**.: Since the image of \(W\) in \(\mathcal{W}\) is a single point, we know \(W\) is connected. The data of \(q\) gives a map \(\underline{W}\to Q^{o}\). The tropical support of \(W\) could be characterised as the tropical support of any point in the connected component of \(Q^{o}\) in which the image of \(\underline{W}\) lies.
The remainder of Section 6.3.1 is a proof of Lemma 6.3.2. We start by sketching the proof idea. Note first the substack of \(Q^{o}\) consisting of points with fixed tropical support is constructible. Indeed, we are imposing that \(q_{\Gamma}\) is fixed by certain torus actions (a closed condition) and not fixed by other torus actions (an open condition). A constructible subset is closed/open if and only if it is preserved under specialisation/generization respectively. Being torus fixed is a closed condition, so our task is to show that being torus fixed is also open in our situation.
Write \(V(\gamma)\) for the closed stratum of \(X_{\Gamma}\) corresponding to cone \(\gamma\) and \(O(\gamma)\) for the locally closed stratum. The automorphism torus associated to \(O(\gamma)\) is denoted \(T_{\gamma}\) and we denote the restriction of \(\pi_{\Gamma}\) to \(V(\gamma)\) by
\[\pi_{\gamma}:V(\gamma)\to X.\]
The restriction of a quotient of \(\mathcal{E}\) denoted \(q_{\Gamma}\) to \(V(\gamma)\) is written
\[q_{\gamma}:\pi_{\gamma}^{\star}\mathcal{E}\to\mathcal{F}_{\gamma}.\]
We write \(\overline{Q}_{\gamma}=\operatorname{Quot}(\underline{V(\gamma)},\pi_{\Gamma}^{\star}\mathcal{E}|_{V(\gamma)})\) and \(Q_{\gamma}^{o}\) for the open logarithmically flat subscheme. By Corollary 4.2.8, there is a map

\[Q^{o}\to Q_{\gamma}^{o}\text{ sending }q_{\Gamma}\mapsto q_{\gamma}.\]
Let \(S\) be a trait and choose a morphism from \(S\) to \(Q^{o}\). Write \(s\) for the special point of \(S\) and \(\eta\) for the generic point. Our map specifies a surjection of sheaves on \(X\times S\) written \(q(S):\pi_{X}^{\star}\mathcal{E}\to\mathcal{F}_{S}\). The restriction of \(q(S)\) to \(V(\gamma)\) is \(q_{\gamma}(S)\). The restrictions of \(q(S),q_{\gamma}(S)\) to the special and generic fibres are denoted \(q(s),q_{\gamma}(s)\) and \(q(\eta),q_{\gamma}(\eta)\) respectively. We must verify the tropical supports of \(q(s)\) and \(q(\eta)\) coincide. Since being torus fixed is closed, if these tropical supports did not coincide then we could find a cone \(\gamma\) of \(\Gamma\) with the following property.
**Property \(\star\)**: There is a one parameter subgroup \(T^{o}\) of \(T_{\gamma}\) which fixes \(q_{\gamma}(s)\) but does not fix \(q_{\gamma}(\eta)\).
The proof of Lemma 6.3.2 is completed with the following steps. Assume for contradiction that there exists a cone \(\gamma\) and torus \(T^{o}\) as in property \(\star\). We write \(\eta=\operatorname{Spec}(K)\).
**Step I**. Construct a second map from the trait \(S\) to \(\overline{Q}_{\gamma}\). We write \(S=S^{\prime\prime}\) when referring to this second family to avoid confusion: the special point of \(S^{\prime\prime}\) is \(s^{\prime\prime}\) and the generic point \(\eta^{\prime\prime}\). The construction ensures that \(q_{\gamma}(s^{\prime\prime})=q_{\gamma}(s)\) and
\[q_{\gamma}(\eta^{\prime\prime}):\mathcal{E}^{\prime\prime}\to\mathcal{F}^{ \prime\prime}_{\eta^{\prime\prime}}\]
is invariant under the action of \(T^{o}\). Deduce the images of \(S\) and \(S^{\prime\prime}\) lie in the same connected component of \(\overline{Q}_{\gamma}\).
**Step II**. Use the action of \(T^{o}\) to construct a map \(S^{\prime}=\operatorname{Spec}(K[[t]])\to\overline{Q}_{\gamma}\). The construction has the property that the restriction of the universal surjection to the special fibre fits into a diagram
\[\mathcal{E}^{\prime\prime}\to\mathcal{F}^{\prime}_{s^{\prime}}\overset{p}{ \to}\mathcal{F}^{\prime\prime}_{\eta^{\prime\prime}}\]
where the kernel of \(p\) is non trivial. Use this to argue that \(\mathcal{F}^{\prime\prime}_{\eta^{\prime\prime}}\) and \(\mathcal{F}^{\prime}_{s^{\prime}}\) have different Hilbert polynomials and thus the images of \(S^{\prime\prime}\) and \(S^{\prime}\) do not lie in the same connected component of \(\overline{Q}_{\gamma}\).
**Step III.** Observe the map \(\eta^{\prime}\to\overline{Q}_{\gamma}\), for \(\eta^{\prime}\) the generic point of \(S^{\prime}\), admits a lift to a map \(R=\operatorname{Spec}(K[T^{\pm 1}])\to\overline{Q}_{\gamma}\) and we may factor
\[\eta\to R\to\overline{Q}_{\gamma}.\]
Thus the images of \(S\) and \(S^{\prime}\) lie in the same connected component of \(\overline{Q}_{\gamma}\).
The final sentences of each step cannot all be true and we obtain a contradiction.
Proof.: We execute the steps in the proof outlined above. Note \(T^{o}\) specifies a direction in the star fan of \(\gamma\). After subdividing we may assume \(\rho\) is a ray in this star fan and that moreover the projection map induces a combinatorially flat map of smooth fans \(\operatorname{St}_{\gamma}\to\rho\). The closed stratum associated to \(\operatorname{St}_{\rho}\) is denoted \(V(\rho)\).
**Step I**. Transversality means we can restrict the universal surjection \(q_{\gamma}(S)\) to \(V(\rho)\) and still have a family flat over \(S\). The map \(V(\gamma)\to V(\rho)\) is flat because it is pulled back from a combinatorially flat map of smooth fans. Thus pulling \(q_{\gamma}(S)|_{V(\rho)}\) back to \(V(\gamma)\) gives our new map from \(S\) to \(\overline{Q}_{\gamma}\).
**Step II.** The action of \(T^{o}\) on \(q\) defines a map \(R\to Q^{o}\) and hence a map \(\eta^{\prime}\to Q^{o}\). By properness we can take a limit, obtaining \(S^{\prime}\to\overline{Q}_{\gamma}\).
We can understand the surjection of sheaves associated to \(s\) on affine patches via Grobner theory. Choose an affine patch \(U_{i}\) of \(X_{\sigma}\). Assume \(q\) restricted to the affine patch of \(X_{\Gamma}\) formed by intersecting \(U_{i}\) with the locally closed stratum associated to \(\rho\) fits into a short exact sequence of global sections
\[0\to G\to E\otimes_{A}A[T,Y_{1}^{\pm 1},...,Y_{n}^{\pm 1}]\to F\to 0.\]
Here we pick coordinates such that \(T^{o}\) acts trivially on the \(Y_{i}\). Associated to \(T^{o}\) is then a term order \(w\) assigning each \(Y_{i}\) weight \(0\) and \(T\) weight \(1\).
With the above setup, \(q_{\gamma}(s^{\prime})\) on our affine patch has global sections fitting into the short exact sequence
\[0\to G^{\prime}\to E\otimes_{A}A[T,Y_{1}^{\pm 1},...,Y_{n}^{\pm 1}]\to F_{s^{\prime}}\to 0\]
where \(G^{\prime}\) is \(\operatorname{in}_{w}(G)\) for \(w\) the term order defined by \(T^{o}\). On the same affine patch we know \(q(\eta^{\prime\prime})\) is pulled back from a surjection of sheaves on \(V(\rho)\) and so fits into the short exact sequence of global sections
\[0\to G^{\prime\prime}\to E\otimes_{A}A[T,Y_{1}^{\pm 1},...,Y_{n}^{\pm 1}]\to F_{\eta^{\prime \prime}}\to 0.\]
Here \(G^{\prime\prime}=\operatorname{in}_{-w}(G)\). Note \(G^{\prime}\) and \(G^{\prime\prime}\) can only coincide if \(G\) were generated by elements homogeneous in \(T\). If \(G\) were so generated then by transversality \(G\) is generated by polynomials in which \(T\) does not appear. Thus \(q_{\gamma}\) restricted to the preimage of \(U_{i}\) is torus fixed, which cannot happen for all affine patches \(U_{i}\) lest \(q\) be torus fixed.
The Hilbert polynomial is additive in short exact sequences. There is a short exact sequence

\[0\to\ker(p)\to\mathcal{F}^{\prime}_{s^{\prime}}\to\mathcal{F}^{\prime\prime}_{\eta^{\prime\prime}}\to 0\]
and we now know \(\ker(p)\) is not the zero sheaf because we saw this on affine patches. Thus the Hilbert polynomials of \(\mathcal{F}^{\prime}_{s^{\prime}}\) and \(\mathcal{F}^{\prime\prime}_{\eta^{\prime\prime}}\) differ.
**Step III.** Immediate from above analysis.
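**Example**.: A minimal instance of the dichotomy in Step II: take \(A=\mathbb{C}\), \(E=A\) and \(G=(T+Y_{1})\subset A[T,Y_{1}^{\pm 1}]\), with \(w\) the term order of Step II, so \(w(Y_{1})=0\) and \(w(T)=1\). Then

\[G^{\prime}=\operatorname{in}_{w}(G)=(T)\quad\text{while}\quad G^{\prime\prime}=\operatorname{in}_{-w}(G)=(Y_{1})=(1),\]

since \(Y_{1}\) is a unit, so the two quotients have different Hilbert polynomials. By contrast, for a generator homogeneous in \(T\), such as \(TY_{1}\), both initial modules return the same submodule.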
#### 6.3.2. Dropping the constant degeneration hypothesis
We extend the analysis of Section 6.3.1 by dropping the assumption that the image of \(W\) in \(\mathcal{W}\) is a single point. We still assume that \(W\) admits a strict map to an Artin cone. Fix a piecewise linear subdivision \(\mathscr{T}\) of \(\operatorname{Trop}(W)\times\operatorname{Trop}(X)\) such that \(\Gamma\) is a tropical model of \(\mathscr{T}\). We say a point \(p\) of \(Q^{o}\) is _compatible with_ \(\mathscr{T}\) if the tropical support associated to \(p\) is obtained by restricting \(\mathscr{T}\) to the preimage of the appropriate face of \(\operatorname{Trop}(W)\).
**Proposition 6.3.4**.: Being compatible with \(\mathscr{T}\) is an open condition on \(Q^{o}\).
Proof.: The proof of Lemma 6.3.2 carries through almost verbatim. The difference is constructing the map
\[S^{\prime\prime}\to\overline{Q}_{\gamma}\]
in Step I. We explain how to adapt the construction to the present setting. The image of \(s\) in \(\mathcal{W}\) is \(a^{\star}\sigma_{s}\) and \(\eta\) maps to \(a^{\star}\sigma_{\eta}\). There are cones \(\gamma_{1},...,\gamma_{q}\) of \(\Gamma\) minimal with the property that the interior of \(\gamma_{i}\) maps to the interior of \(\sigma_{\eta}\) and \(\gamma\) is a face of \(\gamma_{i}\).
After possibly subdividing we may assume the star fan of each \(\gamma_{i}\) is equivariant and smooth. For each \(\gamma_{i}\) there is a cone \(\rho_{i}\) of relative dimension one over \(\operatorname{Trop}(W)\) corresponding to the direction defined by \(T^{o}\) in the star fan of \(\gamma_{i}\). Subdividing further we may assume the map from \(V(\gamma_{i})\) to \(V(\rho_{i})\) is combinatorially flat and thus flat. The union of the \(V(\gamma_{i})\) is then flat over the union of the \(V(\rho_{i})\). Indeed for fixed \(i\) this follows as in the constant degeneration case; it suffices to check for each \(\rho_{i}\) by the valuative criterion for flatness.
Note the \(\rho_{i}\) are precisely the cones which contain a fixed cone \(\rho\) lying over \(\eta\) in their image. There is now a flat map \(V(\gamma)\to V(\rho)\) and the proof goes through as before.
#### 6.3.3. When \(W\) is not atomic
We finish this subsection by removing the hypothesis that \(W\) is atomic. Any logarithmic scheme \(W\) admits an etale cover by atomic logarithmic schemes \(W_{i}\to W\). The expanded sheaf can be pulled back to \(W_{i}\) and thus we have already constructed a morphism from each \(\mathcal{W}_{i}\) to _Supp_. The Artin fan \(\mathcal{W}\) of \(W\) is the colimit of the Artin fans \(\mathcal{W}_{i}\) of the \(W_{i}\).
Proof of Proposition 6.3.1.: Tropical support is controlled in the relative Quot scheme trick by the image of \(W\) in the relative Quot scheme. We need to show each logarithmic stratum of \(W\) is mapped to the locally closed subscheme compatible with \(\mathscr{T}(q)\).
If this were not true there would be a locally closed subscheme \(W^{\prime}\) of \(W\) which is mapped to \(a_{\sigma^{\prime}}\) for some \(\sigma^{\prime}\) such that the tropical support over \(W^{\prime}\) is not the restriction of \(\mathscr{T}(q)\) to \(\sigma^{\prime}\). Pick \(\sigma^{\prime}\) maximal with this property and observe \(W^{\prime}\) must contain a point of \(W\) in its closure mapped to \(a_{\sigma^{\prime\prime}}\) such that \(\sigma^{\prime}\) is a proper face of \(\sigma^{\prime\prime}\) (else \(W\) is not atomic). This contradicts Proposition 6.3.4 unless \(\sigma^{\prime}\) is maximal. For \(\sigma^{\prime}\) maximal, \(V(\sigma^{\prime})\) is connected (else \(W\) is not atomic).
**Remark 6.3.5**.: Throughout we have assumed that \(W\) has locally connected logarithmic strata and thus we can speak of the Artin fan of \(W\). This was convenient for stating clean results but not necessary to obtain a natural map
\[W\to\textit{Supp}\]
which etale locally on \(W\) factors through a strict map from \(W\) to an Artin fan. As in the case \(W\) admits an Artin fan it suffices to work etale locally on \(W\) so we may assume our logarithmic modification of \(W\times X\) is pulled back from a morphism of Artin fans
\[\Gamma\to a^{\star}\sigma\times\mathcal{X}.\]
Here \(a^{\star}\sigma\) is any Artin cone such that there is a strict map \(W\to a^{\star}\sigma\). Replacing \(\mathcal{W}\) by \(a^{\star}\sigma\) the proof of Proposition 6.3.4 remains valid. However the map from \(W\) to _Supp_ need not factor through \(a^{\star}\sigma\) as tropical support need not be constant. Instead we use Proposition 6.3.4 to obtain a strict etale open cover of \(W\) on which Proposition 6.3.1 holds. The map \(W\to\textit{Supp}\) can now be defined as in the proof of Proposition 6.3.1.
## 7. Flat limits after Tevelev
The goal of this section is to develop the techniques needed to show the logarithmic Quot scheme is proper. Let \(\underline{S}\) be a trait with generic point \(\underline{\eta}\) and consider a sheaf \(\mathcal{F}\) on \(X\times\underline{S}\) which is flat over \(\underline{S}\). Define a logarithmic scheme \(\eta\) by equipping \(\underline{\eta}\) with either of the following logarithmic structures
1. Case 1: equip \(\eta\) with the trivial logarithmic structure.
2. Case 2: equip \(\eta\) with logarithmic structure with ghost sheaf \(\mathbb{N}\).
Assume the pullback of \(\mathcal{F}\) to \(X\times\eta\) is logarithmically flat over \(\eta\).
**Theorem 7.0.1**.: In both Case 1 and Case 2 there is a logarithmic structure on \(\underline{S}\) extending the logarithmic structure on \(\eta\) and defining a logarithmic scheme \(S\) with the following property. There is a logarithmic modification
\[\pi_{\Gamma}:(X\times S)_{\Gamma}\to X\times S\]
such that the strict transform of \(\mathcal{F}\) under \(\pi_{\Gamma}\) is logarithmically flat and integral over \(S\).
Theorem 7.0.1 is the technical ingredient required to show that the logarithmic Quot scheme is universally closed. Our argument follows [16, Theorem 4.6.1, especially Section 7].
### Generically trivial logarithmic structure after Tevelev
We first handle Case 1 where the generic point of \(S\) has trivial logarithmic structure.
#### 7.1.1. Logarithmic structure on \(S\)
Define the logarithmic scheme \(S\) by equipping \(\underline{S}\) with the divisorial logarithmic structure from its special point. We are left to find the logarithmic modification \(\pi_{\Gamma}\).
#### 7.1.2. Reduction to the toric case
In this section we reduce the proof of Theorem 7.0.1 in Case 1 to the situation that \(X\) is toric equipped with its toric boundary divisors.
**Lemma 7.1.1**.: To prove Theorem 7.0.1, Case 1 it suffices to check the case \(X\) is a toric variety.
Proof.: Being logarithmically flat is a local property so it suffices to ensure logarithmic flatness on an open cover. Observe \(\mathbb{A}^{n}\) may be covered by very affine varieties. It follows that any variety admits an open cover by very affine varieties. Take such an open cover \(\{U_{i}\hookrightarrow X\}\). Since \(D\) is simple normal crossing, without loss of generality \(D|_{U_{i}}=V(f_{1}\cdots f_{k})\) for some regular sequence \(f_{1},...,f_{k}\). The functions \(f_{1},...,f_{k}\) define a morphism from \(U_{i}\) to \(\mathbb{A}^{k}\). Combining this with the fact \(U_{i}\) is very affine we can consider \(U_{i}\) a closed subscheme of \((\mathbb{C}^{*})^{\ell}\times\mathbb{A}^{k}\). This embedding has the property that the toric boundary of \(\mathbb{A}^{k}\) pulls back to the divisor \(D\cap U_{i}\).
We now consider a coherent sheaf \(\mathcal{F}\) on \(U_{i}\). Any such coherent sheaf can be pushed forward to a coherent sheaf on the toric variety \((\mathbb{C}^{*})^{\ell}\times\mathbb{A}^{k}\). Suppose we find a toric modification \(U_{i}(\Gamma)\) of \((\mathbb{C}^{*})^{\ell}\times\mathbb{A}^{k}\) such that the strict transform of \(\iota_{*}\mathcal{F}\) is logarithmically flat over \(S\). The strict pull-back of a logarithmic modification is a logarithmic modification and thus we obtain a logarithmic modification of \(X\times S\).
Any dominant toric morphism to \(\mathbb{A}^{1}\) is flat and thus every logarithmic modification of \(X\times S\) is integral over \(S\) (whose logarithmic structure corresponds to the cone \(\mathbb{R}_{\geq 0}\)). Logarithmic flatness of \(\mathcal{F}\) over \(S\) follows from the same statement for \(\iota_{*}\mathcal{F}\).
#### 7.1.3. Toric case
We prove Case 1 of Theorem 7.0.1 in the case \(X\) is toric equipped with divisorial logarithmic structure from its toric boundary. In light of Lemma 7.1.1 this completes our analysis of Case 1.
**Proposition 7.1.2**.: Given a coherent sheaf \(\mathcal{F}\) on \(X\times S\), flat over \(S\), whose restriction to \(X\times\eta\) is logarithmically flat over \(\eta\), there is a logarithmic modification
\[\pi_{\Gamma}:(X\times S)_{\Gamma}\to X\times S\]
such that the strict transform of \(\mathcal{F}\) is logarithmically flat.
Proposition 7.1.2 is a translation of a mild upgrade of Theorem 3.3.2.
Proof.: For the strict transform of \(\mathcal{F}\) to be logarithmically flat we require first that the morphism of Artin fans
\[a^{\star}(\operatorname{Trop}(X)\times\operatorname{Trop}(S))_{\Gamma}\to a^{ \star}\operatorname{Trop}(S)\]
is flat. This is true for any morphism of Artin fans to \(a^{\star}\operatorname{Trop}(S)\), where \(\operatorname{Trop}(S)=\mathbb{R}_{\geq 0}\), mapping the generic point to the generic point.
Thus it suffices to ensure that the sheaf \(\mathcal{F}\) on \(X\times S\) is flat in the usual sense over
\[X\times S\to(\mathcal{X}\times\mathcal{S})\times_{\mathcal{S}}S=\mathcal{X}\times S.\]
The map
\[X\times S\to[X\times S/\mathbb{G}_{m}^{k}]=\mathcal{X}\times S\]
is a global quotient, where \(X\) is a toric variety of dimension \(k\) and \(\mathbb{G}_{m}^{k}\) acts as the dense torus of \(X\).
We first rephrase the requirement that \(\mathcal{F}\) is flat over \(\mathcal{X}\times S\) without the language of stacks. Consider the map
\[\Psi:\mathbb{G}_{m}^{k}\times(X\times S)\to X\times S.\] \[(g,y)\mapsto g^{-1}y.\]
Note \(\mathcal{F}\) is flat over \(\mathcal{X}\times S\) if and only if \(\Psi^{\star}\mathcal{F}\) is flat with respect to the projection map \(\mathbb{G}_{m}^{k}\times(X\times S)\to X\times S\).
We must now show that \(\Psi^{\star}\mathcal{F}\) can be flattened by an equivariant blowup of \(X\times S\). Replacing \(S\) with \(\mathbb{A}^{1}\) this follows from Theorem 3.3.2. The same proof works for \(S\) a trait.
### The general case
We now handle Case 2 where the generic point of \(S\) has logarithmic structure with ghost sheaf \(\mathbb{N}\). The basic strategy is to reduce to Case 1 which we already know how to handle.
#### 7.2.1. Logarithmic structures on \(S\)
The first part of Theorem 7.0.1 requires us to specify a logarithmic structure on \(S\). A sufficient class of such logarithmic structures, extending the logarithmic structure on \(\eta\) with ghost sheaf \(\mathbb{N}\), is the class of _logarithmic extensions_ introduced in [14, Section 7.1].
#### 7.2.2. Construction
In this subsection we construct a logarithmic structure on \(\underline{S}\) defining a logarithmic scheme \(S\), and a logarithmic modification of \(X\times S\). Together these prove Theorem 7.0.1.
**Setup, notation and link to Case 1.** We are given a logarithmic modification \((X\times\eta)_{\Gamma}\) of \(X\times\eta\). This is the same data as a polyhedral subdivision \(\Gamma_{1}\to\operatorname{Trop}(X)\). Taking the cone over \(\Gamma_{1}\) recovers a subdivision \(\Gamma\) of \(\operatorname{Trop}(X)\times\operatorname{Trop}(\eta)=\operatorname{Trop}(X) \times\mathbb{R}_{\geq 0}\).
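**Example**.: For instance, take \(\operatorname{Trop}(X)=\mathbb{R}_{\geq 0}\) and let \(\Gamma_{1}\) be the subdivision of \(\mathbb{R}_{\geq 0}\) at the point \(1\). The cone over \(\Gamma_{1}\) is the subdivision \(\Gamma\) of \(\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\) into the two cones

\[\langle(0,1),(1,1)\rangle\quad\text{and}\quad\langle(1,1),(1,0)\rangle,\]

obtained by inserting the ray through \((1,1)\); the vertex \(1\) of \(\Gamma_{1}\) corresponds to this ray.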
Each vertex of \(\Gamma_{1}\) specifies a ray \(\gamma\) in \(\Gamma\) and thus a closed subscheme \(V(\gamma)\times\eta\) of \((X\times\eta)_{\Gamma}\). We now define a new logarithmic structure on \(V(\gamma)\times\eta\) and a morphism from the resulting logarithmic scheme \(V(\gamma)\times\eta^{\dagger}\) to \(\eta^{\dagger}\). Here we define \(\eta^{\dagger}\) to be the logarithmic scheme obtained by equipping the underlying scheme of \(\eta\) with the trivial logarithmic structure.
**Defining new logarithmic structure.** Let \(\mathcal{A}\) be the algebraic stack underlying an Artin fan. The stack \(\mathcal{A}\) carries a natural logarithmic structure as an Artin fan. Thus specifying a map from a scheme \(W\) to \(\mathcal{A}\) specifies a logarithmic structure on \(W\). This is the logarithmic structure pulled back from the natural logarithmic structure on \(\mathcal{A}\).
Our strategy for defining a new logarithmic structure on the scheme underlying \(V(\gamma)\times\eta\) is to give a map to the underlying algebraic stack of an Artin fan. We already have one such map given by restricting the underlying map of schemes
\[(X\times S)_{\Gamma}\to a^{\star}\Gamma.\]
We know the image of \(V(\gamma)\times\eta\) lies in the closed substack corresponding to the Star fan of \(\gamma\), denoted \(a^{\star}\mathrm{St}_{\gamma}\). This closed substack is itself the stack underlying an Artin fan. The logarithmic structure on \((V(\gamma)\times\eta)^{\dagger}\) is pulled back from the logarithmic structure of the Artin fan with underlying stack \(a^{\star}\mathrm{St}_{\gamma}\). We can express this logarithmic scheme as a product
\[(V(\gamma)\times\eta)^{\dagger}=V(\gamma)^{\dagger}\times\eta^{\dagger}.\]
**Data of Case 1.** There is a natural map
\[(V(\gamma)\times\eta)^{\dagger}\to\eta^{\dagger}.\]
On underlying schemes this is the projection map. Since \(\eta^{\dagger}\) has the trivial logarithmic structure, there is a unique way to upgrade this to a morphism of logarithmic schemes.
**Applying Case 1.** Write \(S^{\dagger}\) for the logarithmic scheme obtained by equipping \(S\) with logarithmic structure from its special point. By Proposition 7.1.2 there is a logarithmic modification
\[\pi^{\dagger}_{\gamma^{\prime}}:(V(\gamma)^{\dagger}\times S^{\dagger})_{ \gamma^{\prime}}\to V(\gamma)^{\dagger}\times S^{\dagger}\]
such that the strict transform of \(\mathcal{F}\) is logarithmically flat over \(S^{\dagger}\). The data of \((V(\gamma)^{\dagger}\times S^{\dagger})_{\gamma^{\prime}}\) is a subdivision
\[\gamma^{\prime}\to\mathrm{Trop}(V(\gamma)^{\dagger})\times\mathbb{R}_{\geq 0}.\]
**Upgrading the rank.** We now upgrade \(\pi^{\dagger}_{\gamma^{\prime}}\) to
\[\pi^{\prime}_{\gamma}:(V(\gamma)\times S_{\sigma})_{\gamma^{\prime}}\to V( \gamma)\times S_{\sigma}\]
where the logarithmic scheme \(S_{\sigma}\) is the logarithmic extension of \(\eta\) corresponding to the cone \(\mathbb{N}^{2}\). Indeed we simply take the fibre product of \(\pi^{\dagger}_{\gamma^{\prime}}\) with \(\mathrm{pt}^{\dagger}\). We adopt the convention \(\sigma=\mathbb{N}^{2}\) with the first copy of \(\mathbb{N}\) being the ghost sheaf at the generic point and the second copy corresponding to the uniformiser of the special point of \(\underline{S}\).
**Gluing data associated to each vertex.** Write \(3\varepsilon\) for the rational number which is the smallest lattice distance between two vertices in \(\Gamma_{1}\). For each vertex \(\gamma_{i}\) in \(\Gamma_{1}\) identify a two dimensional cone \(\sigma_{i}\) in \(\sigma\) containing the ray \(\{(n,0):n\in\mathbb{N}\}\).
By our previous discussion, for each vertex \(\gamma_{i}\) of \(\Gamma_{1}\) we get a subdivision \(\gamma^{\prime}_{i}\) of \(\mathrm{St}(\gamma_{i})\times\mathbb{N}^{2}\) which is the trivial subdivision over the ray \(\{(0,n):n\in\mathbb{N}\}\). By taking a fibre we can think of each point \(p\) in \(\mathbb{N}^{2}\) as specifying the data of a polyhedral subdivision of \(\mathrm{St}(\gamma_{i})\). For \(p\) on the \(x\)-axis the polyhedral subdivision is the same. By shrinking \(\sigma_{i}\) we may assume that whenever the \(x\) coordinate is at most one, every vertex in the subdivision \(\gamma^{\prime}_{i}\) lies within lattice distance \(\varepsilon\) of the vertex associated to the zero cone in \(\mathrm{St}_{\gamma_{i}}\).
Set \(\sigma_{0}\subset\sigma\) a smooth subcone contained in each \(\sigma_{i}\). We now define \(S=S_{\sigma_{0}}\). To finish our construction we must specify a subdivision of \(\mathrm{Trop}(X)\times\sigma_{0}\). A piecewise linear subdivision on an \(\varepsilon\) neighbourhood of each cone \(\gamma_{i}\) corresponding to a vertex of \(\Gamma_{1}\) is specified by the subdivision \(\gamma^{\prime}_{i}\). Each such subdivision extends to a subdivision of \(\mathrm{Trop}(X)\times\sigma_{0}\). Take the common refinement of the piecewise linear subdivisions induced by the \(\gamma_{i}\), and choose any smooth subdivision \(\Gamma\to\mathrm{Trop}(X)\times\sigma_{0}\) refining this common refinement. Shrinking \(\sigma_{0}\) if needed and pulling back the resulting subdivision as in [11], without loss of generality the image of each stratum of \(\Gamma\) is a cone of \(\sigma_{0}\).
#### 7.2.3. Verifying construction works
First observe the map of Artin fans
\[a^{*}\Gamma\to a^{*}\sigma_{0}\]
is flat by miracle flatness. By construction the image of each cone is a cone and thus fibre dimension is constant. The base is regular because \(\sigma_{0}\) was chosen to be smooth. This handles being integral and we are left to check the pullback of \(\mathcal{F}\) is logarithmically flat.
It remains to check the strict transform \(\mathcal{F}_{\Gamma}\) of \(\mathcal{F}\) to \((X\times S)_{\Gamma}\) is logarithmically flat over \(S\). This is the same as checking \(\mathcal{F}_{\Gamma}\) is flat in the usual sense over the codomain of the morphism
\[(X\times S)_{\Gamma}\to a^{\star}\Gamma\times_{S}S.\]
This morphism is of finite presentation over a noetherian base (since \(S\) is a trait) so we may appeal to the valuative criterion for flatness [1, 11.8.1]. The target has a cover by schemes \(V(\gamma_{i})\) where \(\gamma_{i}\) is a cone corresponding to a vertex of \(\Gamma_{1}\). The image of any trait is contained within one of these closed substacks. Thus it suffices to verify the restriction of the map to the preimage of each \(V(\gamma_{i})\) is flat over \(V(\gamma_{i})\). This holds by construction.
## 8. The logarithmic Quot space
In the sequel we take \(X\) to be a projective (fine and saturated) logarithmic scheme which is logarithmically flat over a point with the trivial logarithmic structure. Fix also a Hilbert polynomial \(\Phi\). Let \(\mathcal{E}\) be a coherent sheaf on \(X\).
**Definition 8.0.1**.: The logarithmic Quot space \(\operatorname{Quot}(X,\mathcal{E})\) is the groupoid valued sheaf on the category of logarithmic schemes obtained by sheafifying the presheaf which assigns to a logarithmic scheme \(S\) the groupoid of logarithmic surjections of coherent sheaves on \(X\times S\) which are logarithmically flat over \(S\).
Since logarithmic flatness and being integral are preserved under strict base change, we can think of the logarithmic Quot scheme as a (not necessarily algebraic) stack in the strict etale site. The next two sections constitute a proof of Theorem A and Theorem B.
### Representability
In Section 5.1 we showed that a morphism from \(S\) to the presheaf used to define the logarithmic Quot scheme specifies in particular a map from \(S\) to the stack of tropical supports \(\mathit{Supp}\). Proposition 6.3.1 shows this assignment descends to define a morphism
\[\operatorname{Quot}(X,\mathcal{E})\to\mathit{Supp}(X).\]
Given a tropical model \(\operatorname{Supp}_{\Sigma}\to\operatorname{Supp}\) the associated _proper model_ of \(\operatorname{Quot}(X,\mathcal{E})\) is the fibre product
\[\operatorname{Quot}_{\Sigma}(X,\mathcal{E})=\operatorname{Quot}(X,\mathcal{E })\times_{\mathit{Supp}}\mathit{Supp}_{\Sigma}\xrightarrow{\pi_{\Sigma}} \operatorname{Quot}(X,\mathcal{E}).\]
Representability properties of the logarithmic Quot space are captured by the proper models.
#### 8.1.1. Open cover
Given a cone complex \(\operatorname{Supp}_{\Lambda_{2}}\) embedded in \(\operatorname{Supp}\), and given moreover a combinatorially flat tropical model \(\mathcal{X}_{\Lambda_{1}}\) of the universal piecewise linear space, we apply Construction 4.5.1 to define a morphism of piecewise linear spaces whose corresponding morphism of stacks on the category of logarithmic schemes
\[\mathcal{X}_{\Lambda_{1}}\to\mathit{Supp}_{\Lambda_{2}}\]
is flat.
Now set \(U_{\Lambda}\) to be the open substack of the relative Quot scheme \(\operatorname{Quot}(\mathcal{X}_{\Lambda_{1}}/\mathit{Supp}_{\Lambda_{2}})\) whose \(S\) valued points satisfy two conditions.
1. **Logarithmic flatness**: the logarithmic surjection of coherent sheaves \([\pi_{\Gamma},q_{\Gamma}]\) on \(S\times X\) is logarithmically flat over \(S\).
2. **Stability**: The family of tropical supports associated to \([\pi_{\Gamma},q_{\Gamma}]\) defined in Section 6 coincides with the family pulled back along the morphism \(S\to\mathit{Supp}_{\Lambda_{2}}\).
The first condition is open because flatness is an open condition. The fact that stability is open requires more work, and this claim appears as Proposition 6.3.4. Note since \(\pi:\mathscr{X}_{\Lambda_{1}}\to\mathsf{Supp}_{\Lambda_{2}}\) is proper, the relative Quot scheme \(\mathrm{Quot}(\mathscr{X}_{\Lambda_{1}}/\mathsf{Supp}_{\Lambda_{2}})\) is an algebraic space, and thus \(U_{\Lambda}\) is also an algebraic space. We say \(\Lambda\) is _compatible_ with \(\Sigma\) if \(\mathsf{Supp}_{\Lambda_{2}}\) is a subcomplex of \(\Sigma\).
**Lemma 8.1.1**.: Whenever \(\Lambda\) is compatible with \(\Sigma\), the natural morphism
\[U_{\Lambda}\to\mathsf{Quot}_{\Sigma}(X,\mathcal{E})\]
is an open immersion.
It follows that \(U_{\Lambda}\to\mathsf{Quot}(X,\mathcal{E})\) is logarithmically etale as it is the composition of logarithmically etale morphisms.
Proof.: Consider two tropical models of \(\mathscr{X}\), say \(\Lambda_{1},\Lambda_{1}^{\prime}\), which are both integral over a fixed base \(\mathsf{Supp}_{\Lambda_{2}}\). Write \(\overline{\Lambda}\) for the common refinement of \(\Lambda_{1},\Lambda_{1}^{\prime}\) and note there are corresponding maps of universal expansions
\[\mathcal{X}_{\overline{\Lambda}}\to\mathcal{X}_{\Lambda_{1}},\mathcal{X}_{ \overline{\Lambda}}\to\mathcal{X}_{\Lambda_{1}^{\prime}}.\]
We now show that the transverse and stable locus in \(\mathrm{Quot}(\mathcal{X}_{\Lambda_{1}}/\mathit{Supp}_{\Lambda_{2}},\mathcal{E})\) is canonically identified with the transverse and stable locus in \(\mathrm{Quot}(\mathcal{X}_{\Lambda_{1}^{\prime}}/\mathit{Supp}_{\Lambda_{2}},\mathcal{E})\). Indeed a surjection of sheaves on \(S\times_{\mathit{Supp}_{\Lambda_{2}}}\mathcal{X}_{\Lambda_{1}}\) can be pulled back to a surjection of sheaves on \(S\times_{\mathit{Supp}_{\Lambda_{2}}}\mathcal{X}_{\overline{\Lambda}}\). Stability ensures this surjection is pulled back from a surjection of sheaves on \(S\times_{\mathit{Supp}_{\Lambda_{2}}}\mathcal{X}_{\Lambda_{1}^{\prime}}\). Since both \(\Lambda_{1}\) and \(\Lambda_{1}^{\prime}\) are models of the universal tropical support, this operation sends transverse sheaves to transverse sheaves. A surjection \(q\) of sheaves is determined by its restriction to the strata of the tropical support which have relative dimension zero over the base. Both \(\Lambda_{1}\) and \(\Lambda_{1}^{\prime}\) are combinatorially flat over \(\Sigma\), so the common refinement does not affect those cones of relative dimension zero over the base which are strata of the tropical support. This ensures the above assignment is a bijection.
Write \(Q^{o}\) for the logarithmically flat and stable locus in \(\mathrm{Quot}(\mathcal{X}_{\Lambda_{1}}/\mathit{Supp}_{\Lambda_{2}},\mathcal{E})\). Note \(Q^{o}\) is open in the transverse and stable locus of \(\mathrm{Quot}(\mathcal{X}_{\Lambda_{1}}/\mathit{Supp}_{\Lambda_{2}},\mathcal{E})\). In light of the previous paragraph we can identify \(Q^{o}\) with an open inside \(\mathrm{Quot}(\mathcal{X}_{\Lambda_{1}^{\prime}}/\mathit{Supp}_{\Lambda_{2}},\mathcal{E})\).
**Corollary 8.1.2**.: The logarithmic Quot scheme is a logarithmic space in the sense of [11, Definition 4.11.1].
Proof.: The logarithmic etale morphisms \(U_{\Lambda}\to\mathrm{Quot}(X,\mathcal{E})\) form the requisite cover.
#### 8.1.2. Prorepresentability and cover by proper models
Denote the set of tropical models
\[S_{\mathcal{X}}=\{\mathsf{Supp}_{\Sigma}(\mathcal{X})\to\mathsf{Supp}( \mathcal{X})\}.\]
**Proposition 8.1.3**.: Taking colimits in the category of stacks over \(\mathbf{LogSch}\) there is an equality of moduli stacks
\[\varinjlim_{\Sigma\in S_{\mathcal{X}}}\mathsf{Quot}_{\Sigma}(X,\mathcal{E})=\mathsf{Quot}(X,\mathcal{E}).\]
Proof.: The morphisms \(\pi_{\Sigma}\) specify a map
\[\varinjlim_{\Sigma\in S_{\mathcal{X}}}\mathsf{Quot}_{\Sigma}(X,\mathcal{E})\to\mathsf{Quot}(X,\mathcal{E}).\]
We write down an inverse on the level of functors of points. Note the proposition is false if one takes colimits in the category of prestacks.
A morphism \(B\to\mathsf{Quot}(X,\mathcal{E})\) is the data of an etale cover
\[\{f_{i}:U_{i}\to B\},\text{ where we denote }U_{ij}=U_{i}\cap U_{j},\]
and a surjection of expanded logarithmic sheaves \(q_{i}:\mathcal{E}\to\mathcal{F}\) on \(X\times U_{i}\). Refining the open cover if necessary, \(\mathcal{F}|_{U_{i}}=[\varphi^{\star}\mathcal{X}_{\Sigma_{i}},\mathcal{F}_{\Gamma}]\) for some choice of \(\Sigma_{i}\) and \(\varphi:U_{i}\to\mathsf{Supp}_{\Sigma_{i}}(X)\). Pulling \(q_{i}\) back to a morphism \(f_{i}^{\star}q_{i}\) of sheaves on \(U_{i}\) we obtain a morphism
\[g_{i}:U_{i}\to\mathsf{Quot}_{\Sigma_{i}}(X).\]
Moreover denoting the common subdivision of \(\Sigma_{i}\) and \(\Sigma_{j}\) by \(\Sigma_{ij}\) the restriction of \(g_{i}\) to \(U_{ij}\) factors
\[g_{i}|_{U_{ij}}:U_{ij}\to\mathsf{Quot}_{\Sigma_{ij}}(X)\xrightarrow{h_{ij}} \mathsf{Quot}_{\Sigma_{i}}(X).\]
Since \(h_{ij}=h_{ji}\) the compositions
\[U_{i}\to\mathsf{Quot}_{\Sigma_{i}}(X)\to\varinjlim\mathsf{Quot}_{\Sigma}^{ \log}(X)\]
glue to define a morphism
\[g:B\to\varinjlim\mathsf{Quot}_{\Sigma}^{\log}(X).\]
**Theorem 8.1.4**.: The model \(\mathsf{Quot}_{\Sigma}(X,\mathcal{E})\) is an algebraic stack of Deligne-Mumford type with logarithmic structure.
Proof.: First observe the morphism \(U_{\Lambda}\to\mathrm{Quot}_{\Sigma}(X,\mathcal{E})\) is a strict open immersion as in Lemma 8.1.1. We are left to check stabilisers are finite, which may be done locally, and thus we check our claim for the \(U_{\Lambda}\). A map from a point to \(U_{\Lambda}\) can be thought of as two pieces of data: first a map from the point \(p\) to the Artin stack \(\mathrm{Supp}_{\Lambda}\), and second a surjection of sheaves \(q\) on some logarithmic modification \(X_{\Gamma}\). The image of \(p\) in \(\mathrm{Supp}_{\Lambda}\) has stabiliser group \(\mathbb{G}_{m}^{k}\) for some \(k\). The stabiliser of \(p\) considered as a point of \(U_{\Lambda}\) is the subgroup \(G\) of \(\mathbb{G}_{m}^{k}\) whose action fixes \(q\). This follows by comparing the definition of tropical support with [13, Theorem 1.8]. Now observe \(G\) is an algebraic subgroup of \(\mathbb{G}_{m}^{k}\) containing no one dimensional subtori by Theorem 5.1.1. All such groups are finite.
### Open subscheme
We observe \(\mathrm{Quot}(X,\mathcal{E})^{o}\) is an open subscheme of \(\mathrm{Quot}(X,\mathcal{E})\). Indeed \(\mathrm{Quot}(X,\mathcal{E})^{o}\) comes equipped with universal data of a surjection of sheaves \(\mathcal{E}\to\mathcal{F}\) on \(\mathrm{Quot}(X,\mathcal{E})^{o}\times X\). This universal surjection of sheaves does not obviously define a logarithmic surjection of coherent sheaves because it is not clear \(\mathcal{F}\) is logarithmically flat over \(\mathrm{Quot}(X,\mathcal{E})^{o}\). However we know by Theorem 3.3.2 that there is a logarithmic modification of \(\mathrm{Quot}(X,\mathcal{E})^{o}\times X\) such that the strict transform of \(\mathcal{F}\) is logarithmically flat. Being integral is automatic because \(\mathrm{Quot}(X,\mathcal{E})^{o}\) has the trivial logarithmic structure, and thus we have constructed a valid logarithmic surjection of coherent sheaves over \(\mathrm{Quot}(X,\mathcal{E})^{o}\).
The universal property of \(\mathrm{Quot}(X,\mathcal{E})\) now defines a map \(\iota:\mathrm{Quot}(X,\mathcal{E})^{o}\to\mathrm{Quot}(X,\mathcal{E})\). The image of \(\iota\) is the locus in \(\mathrm{Quot}(X,\mathcal{E})\) with trivial tropical support. This tropical support corresponds to the zero dimensional cone in \(\mathsf{Supp}\) and thus \(\iota\) is open. Transversality ensures the map is an injection. Thus we have defined an open immersion.
### Universally closed and separated
We have shown the model \(\mathrm{Quot}_{\Sigma}(X,\mathcal{E})\) is an algebraic Deligne-Mumford stack \(\underline{\mathrm{Quot}}_{\Sigma}(X,\mathcal{E})\) equipped with a logarithmic structure.
**Theorem 8.3.1**.: Connected components of the Deligne-Mumford stack \(\underline{\mathrm{Quot}}_{\Sigma}(X,\mathcal{E})\) are universally closed and separated.
By [11, Theorem 2.2.5.2] it follows that each model \(\operatorname{Quot}_{\Sigma}(X,\mathcal{E})\) satisfies the _unique right lifting property_ defined in the same theorem statement. An exercise in abstract nonsense shows \(\operatorname{Quot}(X,\mathcal{E})\) satisfies the same right lifting property.
We check the valuative criterion. Let \(\underline{S}=\operatorname{Spec}(R)\) be a trait with generic point \(\eta\) and special point \(s\). Consider a commutative square with top arrow \(\eta\to\operatorname{Quot}_{\Sigma}(X,\mathcal{E})\), left arrow the inclusion \(\eta\hookrightarrow S\), and a dashed lift \(f:S\to\operatorname{Quot}_{\Sigma}(X,\mathcal{E})\) (the diagram itself is omitted here).

We must check that for any \(S\) at most one map \(f\) exists, and moreover that after replacing \(R\) by a ramified base change, the morphism \(f\) exists. It is also necessary to check that the logarithmic Quot space is bounded; we defer the boundedness proof to a later section.
Proof of valuative criterion.: We first show existence after a ramified base change. A morphism from \(\eta\) to \(\operatorname{Quot}\) specifies a morphism \(\phi:\eta\to\mathscr{Supp}_{\Sigma}\) and a surjection of sheaves on \(\mathscr{X}\times_{\mathscr{Supp}_{\Sigma}}\eta\). Since \(\eta\) is a single point the underlying scheme of \(\mathscr{X}\times_{\mathscr{Supp}_{\Sigma}}\eta\) is a product \(X_{\Gamma}\times\eta\) for some scheme \(X_{\Gamma}\). Properness of Grothendieck's Quot scheme \(\operatorname{Quot}(\underline{X}_{\Gamma},\mathcal{E})\) defines a surjection of sheaves on \(S\times X_{\Gamma}\). By Theorem 7.0.1, possibly after replacing \(R\) by a base change, there is a logarithmic scheme \(S\) containing \(\eta\) as a subscheme, and a logarithmic modification of \(X_{S}\) such that the strict transform of \(\mathcal{F}\) is logarithmically flat. The strict transform of \(\mathcal{E}\) is already logarithmically flat. By miracle flatness the expansion is integral. Thus we have the data of a logarithmic surjection of coherent sheaves on \(X\times S\) which is logarithmically flat over \(S\).
We have thus defined a map from \(S\) to \(\operatorname{Quot}(X,\mathcal{E})\) and must verify the map factors through \(\operatorname{Quot}_{\Sigma}(X,\mathcal{E})\). This is true after a ramified base change of \(R\) provided the cone \(\sigma_{i}\) used to define \(S\) is sufficiently small.
It remains to check uniqueness. Suppose we are given two logarithmic surjections of coherent sheaves on \(X\times S\) which agree on \(X\times\eta\). Modifying the logarithmic structure if necessary, we may choose representatives
\[(\pi_{\Gamma}:X_{\Gamma}\to X,\,q_{\Gamma})\quad\text{and}\quad(\pi_{\Gamma}:X_{\Gamma}\to X,\,q^{\prime}_{\Gamma}),\]
where both logarithmic modifications are the same.
Note the logarithmic structure on \(S\) is one of the \(S_{\sigma}\) introduced in [11, Section 7.1] and discussed in Section 7.2.1. Therefore the fact that common refinements of logarithmic modifications need not be integral is not an issue: simply shrink the cone \(\sigma\) and we can be sure that the map is integral. By the Quot scheme trick, the surjections \(q_{\Gamma},q^{\prime}_{\Gamma}\) are the data of morphisms to the relative Quot scheme \(\operatorname{Quot}(X_{\Gamma}/S,\mathcal{E})\). This relative Quot scheme is separated (over \(S\)) by a theorem of Grothendieck. Since \(q_{\Gamma}\) and \(q^{\prime}_{\Gamma}\) agree on the generic point, separatedness of Grothendieck's Quot scheme implies \(q_{\Gamma}=q^{\prime}_{\Gamma}\).
## 9. Examples
The goal of this section is to describe examples of the logarithmic Quot space and computations of tropical support. It is instructive to focus on the logarithmic Hilbert scheme. The key takeaway is that studying the logarithmic Quot scheme does not pose substantially more difficulty than studying Grothendieck's Quot scheme.
### Trivial logarithmic structure
Let \(X\) be the logarithmic scheme obtained by equipping a scheme \(\underline{X}\) with the trivial logarithmic structure. Then \(X\) is automatically logarithmically flat over a point with the trivial logarithmic structure and any coherent sheaf \(\mathcal{E}\) on \(X\) is logarithmically flat.
In this situation logarithmic modifications of \(X\times S\) which are integral over \(S\) are Kummer and are unimportant for understanding additional logarithmic surjections of coherent sheaves. Thus the logarithmic Quot scheme of \(\mathcal{E}\) on \(X\) is Grothendieck's Quot scheme \(\operatorname{Quot}(\underline{X},\mathcal{E})\) equipped with the trivial logarithmic structure.
### Tropical support
We provide two instructive examples of computing the tropical support. Tropical support can be accessed either through torus actions or cohomologically. Here we take the torus action perspective.
#### 9.2.1. Points on \(\mathbb{A}^{2}\)
We give an example of computing the tropical support. First consider a logarithmic modification of \(\mathbb{A}^{2}\times\mathfrak{pt}^{\dagger}\). Such a logarithmic modification is the data of a polyhedral subdivision of \(\mathbb{R}^{2}_{\geq 0}\), see the left of Figure 4.
A point of the logarithmic Hilbert scheme of \(\mathbb{A}^{2}\) is specified by a closed subscheme of such an expansion. Two distinct points in a logarithmic modification of \(\mathbb{A}^{2}\) give a length two subscheme of \(\mathbb{A}^{2}\). See the right of Figure 4.
The tropical support records only the two blue vertices. These vertices correspond to components containing subschemes which are not fixed by the action of the two dimensional torus associated to each component.
#### 9.2.2. Curves and points in \(\mathbb{P}^{2}\)
We give an example of tropical support for mixed dimensional subschemes. The example in Figure 5 occurs when studying the moduli space of curves and points in \(\mathbb{P}^{2}\).
Once again the blue data is the tropical support. In particular we see a tropical version of an embedded point in a tropical curve. Algebraically this arises from a point in the same component as a tropical curve fixed by a one dimensional torus.
### The logarithmic linear system
The logarithmic Hilbert scheme of divisors in a toric variety is the first non-trivial example of a logarithmic Hilbert scheme where the sub-schemes have dimension at least three.
Figure 4. Left: a polyhedral subdivision of \(\mathbb{R}^{2}_{\geq 0}\); right: the associated expansion containing a subscheme (purple dots). The tropical support is obtained from the left hand diagram by keeping only the data of the blue vertices. These are the vertices corresponding to components which contain a point, and whose subschemes are thus not fixed by the associated tori.
The _logarithmic linear system_ of hypersurfaces in a toric variety is a toric stack. For toric surfaces the situation is described in detail in [14, Sections 1 and 2] following an observation of Maulik and Ranganathan [13]. The construction is identical for moduli of hypersurfaces in toric varieties of every dimension. The only difference is to increase the dimension of the polytope. The fan of the logarithmic linear system is closely related to the Gelfand-Kapranov-Zelevinsky secondary polytope [11].
### Logarithmic Hilbert scheme of two points on \(\mathbb{A}^{2}\)
The logarithmic strata of the logarithmic Hilbert scheme of two points in \(\mathbb{A}^{2}\) are governed by the possible tropical supports. Up to permuting the \(x\) and \(y\) axes all possible tropical supports are depicted in Figure 6.
We now describe the isomorphism class of the scheme underlying some of these logarithmic strata. The top logarithmic stratum is the locus of points in \(\mathbb{A}^{2}\) supported away from the toric boundary. The ghost sheaf of the associated logarithmic stratum is zero.
We now turn our attention to the locus \(X_{1}\) associated to the tropical diagrams in the blue circle of Figure 6. The logarithmic stratum associated to the upper diagram has ghost sheaf \(\mathbb{N}^{2}\). The underlying locally closed stratum is the stack quotient of the Hilbert scheme of two points on \((\mathbb{C}^{\star})^{2}\) by the action of \((\mathbb{C}^{\star})^{2}\). Note this space is a Deligne-Mumford stack and not a scheme since the ideal \((X^{2}-1,Y-1)\) is fixed by the action sending \(X\mapsto-X\) and \(Y\mapsto Y\).
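To spell out the check: the substitution \(X\mapsto-X\), \(Y\mapsto Y\) preserves the generators,
\[X^{2}-1\longmapsto(-X)^{2}-1=X^{2}-1,\qquad Y-1\longmapsto Y-1,\]
so the ideal is fixed even though the two points of \(V(X^{2}-1,Y-1)=\{(1,1),(-1,1)\}\) are exchanged. The corresponding point of the Hilbert scheme therefore has a non-trivial stabiliser (here \(\mathbb{Z}/2\)), which is exactly why the quotient is a Deligne-Mumford stack rather than a scheme.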
Associated to the bottom tropical diagram is a product, with one factor for each dot. Assigned to each dot is the quotient of the Hilbert scheme of one point on \((\mathbb{C}^{\star})^{2}\) by the action of \((\mathbb{C}^{\star})^{2}\). The stratum is therefore a product of two copies of \(\operatorname{Spec}(\mathbb{C})\), and is thus a single point.
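Concretely, the Hilbert scheme of one point on \((\mathbb{C}^{\star})^{2}\) is \((\mathbb{C}^{\star})^{2}\) itself, on which the torus acts simply transitively, so
\[\left[(\mathbb{C}^{\star})^{2}/(\mathbb{C}^{\star})^{2}\right]\cong\operatorname{Spec}(\mathbb{C}),\]
and the product of two such factors is again a single reduced point.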
On the level of underlying schemes, the closure of \(X_{1}\) is thus a single point. One can think of \(X_{1}\) as a blowup of \((\mathbb{C}^{\star})^{2}\) in the point \((1,1)\). There are no one-point compactifications of this blowup. Indeed blowing down the exceptional curve in such a compactification yields a one-point compactification of \((\mathbb{C}^{\star})^{2}\). There are no one-point compactifications of \((\mathbb{C}^{\star})^{2}\) which are algebraic spaces. The author thanks Bernd Siebert for a conversation about this example.
Figure 5. Left: a polyhedral subdivision of \(\mathbb{R}^{2}_{\geq 0}\); right: the associated expansion containing a subscheme (purple). The tropical support is obtained from the left hand diagram by keeping only the data of the blue tropical curve. Note vertex \(v_{4}\) is not seen by the tropical support because the associated subscheme is fixed by a one dimensional torus. By contrast the tropical support detects vertex \(v_{9}\) because the subscheme in \(X_{v_{9}}\) is not fixed by any one parameter subgroup of the torus.
### Two choices of subdivision
From the perspective of our paper, Maulik and Ranganathan's version of the logarithmic Hilbert scheme of curves requires two choices. They fix both a subdivision of \(\operatorname{Supp}\) and a subdivision of the universal family \(\mathscr{X}\). The goal of this section is to explain how these choices are connected and the ramifications for choices of tropical model of the logarithmic Quot scheme.
Fix a combinatorial type of tropical support in the universal expansion. To subdivide \(\mathscr{X}\), it suffices to choose for each point of \(\operatorname{Supp}\) a polyhedral subdivision of \(\operatorname{Trop}(X)\) refining the PL structure induced by \(\mathscr{X}\). This combinatorial model must be chosen in a particular way in order to define a valid tropical model of \(\operatorname{Trop}(X)\). The key criterion is that the slant of each ray in the one-skeleton is locally constant.
Figure 6. Tropical supports which appear when studying the logarithmic Hilbert scheme of two points on \(\mathbb{A}^{2}\). The height records the rank of the logarithmic stratum (rank zero for the first row and rank four on the bottom). There is a red line from diagram A to diagram B whenever the logarithmic stratum associated to B lies in the closure of the logarithmic stratum associated to A. The blue circle is explained in the text.
Fixing such a choice on each stratum of \(\mathsf{Supp}\) gives rise to a subdivision of the universal expansion. This subdivision need not be combinatorially flat, but there is a universal way to flatten it (by pushing forward the locally closed stratification as in [14]). This defines a subdivision \(\mathsf{Supp}^{\prime}\to\mathsf{Supp}\) which need not be a tropical model.
Maulik and Ranganathan make a second choice of tropical model \(\mathsf{Supp}_{\Sigma}\to\mathsf{Supp}\). If one fixes a subdivision of the universal expansion up front then the choice of \(\Sigma\) is not arbitrary. Indeed combinatorial flatness implies we can factor
\[\mathsf{Supp}_{\Sigma}\to\mathsf{Supp}^{\prime}\to\mathsf{Supp}.\]
Thus the two choices are linked, although neither choice determines the other.
|
2310.01551 | Harnessing the Power of Choices in Decision Tree Learning | We propose a simple generalization of standard and empirically successful
decision tree learning algorithms such as ID3, C4.5, and CART. These
algorithms, which have been central to machine learning for decades, are greedy
in nature: they grow a decision tree by iteratively splitting on the best
attribute. Our algorithm, Top-$k$, considers the $k$ best attributes as
possible splits instead of just the single best attribute. We demonstrate,
theoretically and empirically, the power of this simple generalization. We
first prove a {\sl greediness hierarchy theorem} showing that for every $k \in
\mathbb{N}$, Top-$(k+1)$ can be dramatically more powerful than Top-$k$: there
are data distributions for which the former achieves accuracy $1-\varepsilon$,
whereas the latter only achieves accuracy $\frac1{2}+\varepsilon$. We then
show, through extensive experiments, that Top-$k$ outperforms the two main
approaches to decision tree learning: classic greedy algorithms and more recent
"optimal decision tree" algorithms. On one hand, Top-$k$ consistently enjoys
significant accuracy gains over greedy algorithms across a wide range of
benchmarks. On the other hand, Top-$k$ is markedly more scalable than optimal
decision tree algorithms and is able to handle dataset and feature set sizes
that remain far beyond the reach of these algorithms. | Guy Blanc, Jane Lange, Chirag Pabbaraju, Colin Sullivan, Li-Yang Tan, Mo Tiwari | 2023-10-02T18:45:46Z | http://arxiv.org/abs/2310.01551v2 | # Harnessing the Power of Choices in Decision Tree Learning
###### Abstract
We propose a simple generalization of standard and empirically successful decision tree learning algorithms such as ID3, C4.5, and CART. These algorithms, which have been central to machine learning for decades, are greedy in nature: they grow a decision tree by iteratively splitting on the best attribute. Our algorithm, Top-\(k\), considers the \(k\) best attributes as possible splits instead of just the single best attribute.
We demonstrate, theoretically and empirically, the power of this simple generalization. We first prove a greediness hierarchy theorem showing that for every \(k\in\mathds{N}\), Top-\((k+1)\) can be dramatically more powerful than Top-\(k\): there are data distributions for which the former achieves accuracy \(1-\varepsilon\), whereas the latter only achieves accuracy \(\frac{1}{2}+\varepsilon\). We then show, through extensive experiments, that Top-\(k\) outperforms the two main approaches to decision tree learning: classic greedy algorithms and more recent "optimal decision tree" algorithms. On one hand, Top-\(k\) consistently enjoys significant accuracy gains over greedy algorithms across a wide range of benchmarks. On the other hand, Top-\(k\) is markedly more scalable than optimal decision tree algorithms and is able to handle dataset and feature set sizes that remain far beyond the reach of these algorithms.
The code to reproduce our results: [https://github.com/SullivanC19/pydl8.5-topk](https://github.com/SullivanC19/pydl8.5-topk).
## 1 Introduction
Decision trees are a fundamental workhorse in machine learning. Their logical and hierarchical structure makes them easy to understand and their predictions easy to explain. Decision trees are therefore the most canonical example of an interpretable model: in his influential survey [14], Breiman writes "On interpretability, trees rate an A+"; much more recently, the survey [12] lists decision tree optimization as the very first of 10 grand challenges for the field of interpretable machine learning. Decision trees are also central to modern ensemble methods such as random
forests [11] and XGBoost [12], which achieve state-of-the-art accuracy for a wide range of tasks.
Greedy algorithms such as ID3 [14], C4.5 [15], and CART [1] have long been the standard approach to decision tree learning. These algorithms build a decision tree from labeled data in a top-down manner, growing the tree by iteratively splitting on the "best" attribute as measured with respect to a certain heuristic function (e.g., information gain). Owing to their simplicity, these algorithms are highly efficient and scale gracefully to handle massive datasets and feature set sizes, and they continue to be widely employed in practice and enjoy significant empirical success. For the same reasons, these algorithms are also part of the standard curriculum in introductory machine learning and data science courses.
The trees produced by these greedy algorithms are often reasonably accurate, but can nevertheless be suboptimal. There has therefore been a separate line of work, which we review in Section2, on algorithms that optimize for accuracy and seek to produce optimally accurate decision trees. These algorithms employ a variety of optimization techniques (including dynamic programming, integer programming, and SAT solvers) and are completely different from the simple greedy algorithms discussed above. Since the problem of finding an optimal decision tree has long been known to be NP-hard [17], any algorithm must suffer from the inherent combinatorial explosion when the instance size becomes sufficiently large (unless P=NP). Therefore, while this line of work has made great strides in improving the scalability of algorithms for optimal decision trees, dataset and feature set sizes in the high hundreds and thousands remain out of reach.
This state of affairs raises a natural question:
Can we design decision tree learning algorithms that improve significantly on the accuracy of classic greedy algorithms and yet inherit their simplicity and scalability?
In this work, we propose a new approach and make the case that it provides a strong affirmative answer to the question above. Our work also opens up several new avenues for exploration in both the theory and practice of decision tree learning.
### Our contributions
#### 1.1.1 Top-\(k\): a simple and effective generalization of classic greedy decision tree algorithms
We introduce an easily interpretable greediness parameter to the class of all greedy decision tree algorithms, a broad class that encompasses ID3, C4.5, and CART. This parameter, \(k\), represents the number of features that the algorithm considers as candidate splits at each step. Setting \(k=1\) recovers the fully greedy classical approaches, and increasing \(k\) allows the practitioner to produce more accurate trees at the cost of only a mild training slowdown. The focus of our work is on the regime where \(k\) is a small constant--preserving the efficiency and scalability of greedy algorithms is a primary objective of our work--although we mention here that by setting \(k\) to be the dimension \(d\), our algorithm produces an optimal tree. Our overall framework can thus be viewed as interpolating between greedy algorithms at one extreme and "optimal decision tree" algorithms at the other, precisely the two main and previously disparate approaches to decision tree learning discussed above.
We will now describe our framework. A feature scoring function \(\mathcal{H}\) takes as input a dataset over \(d\) binary features and a specific feature \(i\in[d]\), and returns a value quantifying the "desirability" of this feature as the root of the tree. The greedy algorithm corresponding to \(\mathcal{H}\) selects as the root of
the tree the feature that has the largest score under \(\mathcal{H}\); our generalization will instead consider the \(k\) features with the \(k\) highest scores.
**Definition 1** (Feature scoring function).: _A feature scoring function \(\mathcal{H}\) takes as input a labeled dataset \(S\) over a \(d\)-dimensional feature space, a feature \(i\in[d]\), and returns a score \(\nu_{i}\in[0,1]\)._
See Section 3.1 for a discussion of the feature scoring functions that correspond to standard greedy algorithms ID3, C4.5, and CART. Pseudocode for Top-\(k\) is provided in Figure 1. We note that from the perspective of interpretability, the trained model looks the same regardless of what \(k\) is. During training, the algorithm considers more splits, but only one split is eventually used at each node.
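To make the recursion of Figure 1 concrete, here is a minimal Python sketch. The names (`Leaf`, `Node`, `top_k_tree`) and the pure-leaf early exit are ours; this is an illustrative reimplementation under stated assumptions, not the authors' PyDL8.5-based code, and it accepts any feature scoring function \(\mathcal{H}\) in the sense of Definition 1.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple, Union

Dataset = List[Tuple[Tuple[int, ...], int]]  # (binary feature vector, binary label)


@dataclass
class Leaf:
    label: int

    def __call__(self, x) -> int:
        return self.label


@dataclass
class Node:
    feature: int
    left: "Tree"   # subtree for x[feature] == 0
    right: "Tree"  # subtree for x[feature] == 1

    def __call__(self, x) -> int:
        return self.right(x) if x[self.feature] == 1 else self.left(x)


Tree = Union[Leaf, Node]


def accuracy(tree: Tree, S: Dataset) -> float:
    return sum(tree(x) == y for x, y in S) / len(S) if S else 1.0


def top_k_tree(H: Callable[[Dataset, int], float], S: Dataset, h: int, k: int) -> Tree:
    """Grow a depth-<=h tree: recurse on each of the k highest-scoring
    features and keep the candidate with the best accuracy on S."""
    labels = [y for _, y in S]
    majority = Leaf(max(set(labels), key=labels.count)) if labels else Leaf(0)
    if h == 0 or not S or len(set(labels)) == 1:  # early exit on pure leaves
        return majority
    d = len(S[0][0])
    # the k features with the highest scores under H
    top_feats = sorted(range(d), key=lambda i: H(S, i), reverse=True)[:k]
    best, best_acc = majority, accuracy(majority, S)
    for i in top_feats:
        S0 = [(x, y) for x, y in S if x[i] == 0]
        S1 = [(x, y) for x, y in S if x[i] == 1]
        candidate = Node(i, top_k_tree(H, S0, h - 1, k), top_k_tree(H, S1, h - 1, k))
        cand_acc = accuracy(candidate, S)
        if cand_acc > best_acc:
            best, best_acc = candidate, cand_acc
    return best
```

Setting `k=1` recovers the fully greedy recursion; the returned tree is the most accurate one among the candidates explored, mirroring Lemma 3.2.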
#### 1.1.2 Theoretical results on the power of Top-\(k\)
The search space of Top-\((k+1)\) is larger than that of Top-\(k\), and therefore its training accuracy is certainly at least as high. The first question we consider is: is the test accuracy of Top-\((k+1)\) only marginally better than that of Top-\(k\), or are there examples of data distributions for which even a single additional choice provably leads to huge gains in test accuracy? Our first main theoretical result is a sharp greediness hierarchy theorem, showing that this parameter can have dramatic impacts on accuracy, thereby illustrating its power:
Figure 1: The Top-\(k\) algorithm. It can be instantiated with any feature scoring function \(\mathcal{H}\), and when \(k=1\), recovers standard greedy algorithms such as ID3, C4.5, and CART.
**Theorem 1** (Greediness hierarchy theorem).: _For every \(\varepsilon>0\), \(k,h\in\mathds{N}\), there is a data distribution \(\mathcal{D}\) and sample size \(n\) for which, with high probability over a random sample \(\mathbf{S}\sim\mathcal{D}^{n}\), \(\operatorname{Top-}(k+1)\) achieves at least \(1-\varepsilon\) accuracy with a depth budget of \(h\), but \(\operatorname{Top-}k\) achieves at most \(\frac{1}{2}+\varepsilon\) accuracy with a depth budget of \(h\)._
All of our theoretical results, Theorems 1 to 3, hold whenever the scoring function is an _impurity-based heuristic_. This broad class includes the most popular scoring functions (see Section 3.1 for more details). Theorem 1 is a special case of a more general result that we show: for all \(k<K\), there are data distributions on which \(\operatorname{Top-}K\) achieves maximal accuracy gains over \(\operatorname{Top-}k\), even if \(\operatorname{Top-}k\) is allowed a larger depth budget:
**Theorem 2** (Generalization of Theorem 1).: _For every \(\varepsilon>0\), \(k,K,h\in\mathds{N}\) where \(k<K\), there is a data distribution \(\mathcal{D}\) and sample size \(n\) for which, with high probability over a random sample \(\mathbf{S}\sim\mathcal{D}^{n}\), \(\operatorname{Top-}K\) achieves at least \(1-\varepsilon\) accuracy with a depth budget of \(h\), but \(\operatorname{Top-}k\) achieves at most \(\frac{1}{2}+\varepsilon\) accuracy even with a depth budget of \(h+(K-k-1)\)._
The proof of Theorem 2 is simple and highlights the theoretical power of choices. One downside, though, is that it is based on data distributions that are admittedly somewhat unnatural: the labeling function has embedded within it a function that is the XOR of certain features, and real-world datasets are unlikely to exhibit such adversarial structure. To address this, we further prove that the power of choices is evident even for monotone data distributions. We defer the definition of monotone data distributions to Section 4.2.
**Theorem 3** (Greediness hierarchy theorem for monotone data distributions).: _For every \(\varepsilon>0\), depth budget \(h\), \(K\) between \(\tilde{\Omega}(h)\) and \(\tilde{O}(h^{2})\) and \(k\leq K-h\), there is a monotone data distribution \(\mathcal{D}\) and sample size \(n\) for which, with high probability over a random sample \(\mathbf{S}\sim\mathcal{D}^{n}\), \(\operatorname{Top-}K\) achieves at least \(1-\varepsilon\) accuracy with a depth budget of \(h\), but \(\operatorname{Top-}k\) achieves at most \(\frac{1}{2}+\varepsilon\) accuracy with a depth budget of \(h\)._
Many real-world data distributions are monotone in nature, and relatedly, they are a common assumption and the subject of intensive study in learning theory. Most relevant to this paper, recent theoretical work has identified monotone data distributions as a broad and natural class for which classical greedy decision tree algorithms (i.e., \(\operatorname{Top-}1\)) provably succeed [1, 10]. Theorem 3 shows that even within this class, increasing the greediness parameter can lead to dramatic gains in accuracy. Compared to Theorem 2, the proof of Theorem 3 is more technical and involves the use of concepts from the Fourier analysis of Boolean functions [11].
We note that a weaker version of Theorem 3 is implicit in prior work: combining [10, Theorem 7b] and [10, Theorem 2] yields the special case of Theorem 3 where \(K=O(h^{2})\) and \(k=1\). Theorem 3 is a significant strengthening as it allows for \(k>1\) and much smaller \(K-k\).
#### 1.1.3 Experimental results on the power of Top-\(k\)

We provide extensive empirical validation of the effectiveness of Top-\(k\) when trained on real-world datasets, and provide an in-depth comparison with both standard greedy algorithms as well as optimal decision tree algorithms.
We first compare the performance of Top-\(k\) for \(k=1,2,3,4,8,12,16\) (Figure 2), and find that increasing \(k\) does indeed provide a significant increase in test accuracy--in some cases, Top-8 already achieves accuracy comparable to the test accuracy attained by DL8.5 [2], an optimal decision
tree algorithm. We further show, in Figures 3 and 6, that Top-\(k\) inherits the efficiency of popular greedy algorithms and scales much better than the state-of-the-art optimal decision tree algorithms MurTree and GOSDT [10].
Taken as a whole, our experiments demonstrate that Top-\(k\) provides a useful middle ground between greedy and optimal decision tree algorithms: it is significantly more accurate than greedy algorithms, but still fast enough to be practical on reasonably large datasets. See Section 5 for an in-depth discussion of our experiments. Finally, we emphasize the benefits afforded by the simplicity of Top-\(k\). Standard greedy algorithms (i.e. Top-1) are widely employed and easily accessible. Introducing the parameter \(k\) requires modifying only a tiny amount of source code and gives the practitioner a new lever to control. Our experiments and theoretical results demonstrate the utility of this simple lever.
## 2 Related work
**Provable guarantees and limitations of greedy decision tree algorithms.** A long and fruitful line of work seeks to develop a rigorous understanding of the performance of greedy decision tree learning algorithms such as ID3, C4.5, and CART and to place their empirical success on firm theoretical footing [11, 12, 13, 14, 15, 16, 17, 18, 19]. These works identify feature and distributional assumptions under which these algorithms provably succeed; they also highlight the limitations of these algorithms by pointing out settings in which they provably fail. Our work complements this line of work by showing, theoretically and empirically, how these algorithms can be further improved with a simple new parameter while preserving their efficiency and scalability.
**The work of [1].** Recent work of Blanc, Lange, Qiao, and Tan also highlights the power of choices in decision tree learning. However, they operate within a stylized theoretical setting. First, they consider a specific scoring function that is based on a notion of influence of features, and crucially, computing these scores requires query access to the target function (rather than from random labeled samples as is the case in practice). Furthermore, their results only hold with respect to the uniform distribution. These are strong assumptions that limit the practical relevance of their results. In contrast, a primary focus of this work is to be closely aligned with practice, and in particular, our framework captures and generalizes the standard greedy algorithms used in practice.
**Optimal decision trees.** Motivated in part by the surge of interest in interpretable machine learning and the highly interpretable nature of decision trees, there have been numerous works on learning optimal decision trees [1, 13, 15, 16, 17, 18, 19, 20, 21, 22]. A related line of work studies soft decision trees, whose internal nodes split on weighted combinations of features rather than on a single feature. A major
advantage of our work over these soft trees is in interpretability. With Top-\(k\), since the splits are hard (and not soft), to understand the classification of a test point, it is sufficient to look at only one root-to-leaf path, as opposed to a weighted combination across many.
## 3 The Top-\(k\) algorithm
### Background and context: Impurity-based algorithms
Greedy decision tree learning algorithms like ID3, C4.5 and CART are all instantiations of Top-\(k\) in Figure 1 with \(k=1\) and an appropriate choice of the feature-scoring function \(\mathcal{H}\). All three algorithms use impurity-based heuristics as their feature-scoring function:
**Definition 2** (Impurity-based heuristic).: _An impurity function \(\mathcal{G}:[0,1]\to[0,1]\) is a function that is concave, symmetric about \(0.5\), and satisfies \(\mathcal{G}(0)=\mathcal{G}(1)=0\) and \(\mathcal{G}(0.5)=1\). A feature-scoring function \(\mathcal{H}\) is an impurity-based heuristic if there is some impurity function \(\mathcal{G}\) for which:_
\[\mathcal{H}(S,i)=\mathcal{G}\left(\operatorname*{\mathds{E}}_{\boldsymbol{x},\boldsymbol{y}\sim S}[\boldsymbol{y}]\right)-\Pr_{\boldsymbol{x},\boldsymbol{y}\sim S}[\boldsymbol{x}_{i}=0]\cdot\mathcal{G}\left(\operatorname*{\mathds{E}}_{\boldsymbol{x},\boldsymbol{y}\sim S}[\boldsymbol{y}\mid\boldsymbol{x}_{i}=0]\right)-\Pr_{\boldsymbol{x},\boldsymbol{y}\sim S}[\boldsymbol{x}_{i}=1]\cdot\mathcal{G}\left(\operatorname*{\mathds{E}}_{\boldsymbol{x},\boldsymbol{y}\sim S}[\boldsymbol{y}\mid\boldsymbol{x}_{i}=1]\right)\]
_where in each of the above, \((\boldsymbol{x},\boldsymbol{y})\) is a uniformly random point from within \(S\)._
Common examples for the impurity function include the binary entropy function \(\mathcal{G}(p)=-p\log_{2}(p)-(1-p)\log_{2}(1-p)\) (used by ID3 and C4.5), the Gini index \(\mathcal{G}(p)=4p(1-p)\) (used by CART), and the function \(\mathcal{G}(p)=2\sqrt{p(1-p)}\) (proposed and analyzed in [10]). We refer the reader to [10] for a theoretical comparison, and [11] for an experimental comparison, of these impurity-based heuristics.
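As a concrete instance of Definition 2, here is a sketch of the scoring function with \(\mathcal{G}\) the binary entropy (the ID3/C4.5 choice). The function names are ours; the scorer can be passed as \(\mathcal{H}\) to any implementation of Figure 1, such as the sketch given earlier.

```python
import math


def binary_entropy(p: float) -> float:
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def entropy_score(S, i: int) -> float:
    """Definition 2 with G = binary entropy: the reduction in label
    impurity obtained by splitting the dataset S on feature i."""
    n = len(S)
    left = [y for x, y in S if x[i] == 0]
    right = [y for x, y in S if x[i] == 1]
    g_all = binary_entropy(sum(y for _, y in S) / n)
    g_left = binary_entropy(sum(left) / len(left)) if left else 0.0
    g_right = binary_entropy(sum(right) / len(right)) if right else 0.0
    return g_all - (len(left) / n) * g_left - (len(right) / n) * g_right
```

For example, a call such as `top_k_tree(entropy_score, S, h=5, k=4)` would run Top-4 with a depth budget of 5 on a dataset `S` of (feature tuple, label) pairs.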
Our experiments use binary entropy as the impurity measure, but our theoretical results apply to Top-\(k\) instantiated with _any_ impurity-based heuristic.
### Basic theoretical properties of the Top-\(k\) algorithm
**Running time.** The key behavioral aspect in which Top-\(k\) differs from greedy algorithms is that it is less greedy when trying to determine which coordinate to query. This naturally increases the running time of Top-\(k\), but that increase is fairly mild. More concretely, suppose Top-\(k\) is run on a dataset \(S\) with \(n\) points. We can then easily derive the following bound on the running time of Top-\(k\), where \(\mathcal{H}(S,i)\) is assumed to take \(O(n)\) time to evaluate (as it does for all impurity-based heuristics).
**Claim 3.1**.: _The running time of Top-\(k(\mathcal{H},S,h)\) is \(O((2k)^{h}\cdot nd)\)._
Proof.: Let \(T_{h}\) be the number of recursive calls made by Top-\(k(\mathcal{H},S,h)\). Then, we have the simple recurrence relation \(T_{h}=2kT_{h-1}\), where \(T_{0}=1\). Solving this recurrence gives \(T_{h}=(2k)^{h}\). Each recursive call takes \(O(nd)\) time, where the bottleneck is scoring each of the \(d\) features.
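Unrolling the recurrence makes the bound explicit:
\[T_{h}=2k\,T_{h-1}=(2k)^{2}\,T_{h-2}=\cdots=(2k)^{h}\,T_{0}=(2k)^{h},\]
and multiplying by the \(O(nd)\) work per recursive call gives \(O((2k)^{h}\cdot nd)\).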
We note that any decision tree algorithm, including fast greedy algorithms such as ID3, C4.5, and CART, has runtime that scales exponentially with the depth \(h\). The size of a depth-\(h\) tree can
be \(2^{h}\), and this is of course a lower bound on the runtime as the algorithm needs to output such a tree. In contrast with greedy algorithms (for which \(k=1\)), Top-\(k\) incurs an additional \(k^{h}\) cost in running time. As mentioned earlier, in practice, we are primarily concerned with fitting small decision trees (e.g., \(h=5\)) to the data, as this allows for explainable predictions. In this setting, the additional \(k^{h}\) cost (for small constant \(k\)) is inexpensive, as confirmed by our experiments.
**The search space of Top-\(k\).** We state and prove a simple claim that Top-\(k\) returns the _best_ tree within its search space.
**Definition 3** (Search space of Top-\(k\)).: _Given a sample \(S\) and integers \(h,k\), we use \(\mathcal{T}_{k,h,S}\) to refer to all trees in the search space of Top-\(k\). Specifically, if \(h=0\), this contains all trees with a height of zero (the constant \(0\) and constant \(1\) trees). For \(h\geq 1\), and \(\mathcal{I}\subseteq[d]\) being the \(k\) coordinates with maximal score, this contains all trees with a root of \(x_{i}\), left subtree in \(\mathcal{T}_{k,h-1,S_{x_{i}=0}}\) and right subtree in \(\mathcal{T}_{k,h-1,S_{x_{i}=1}}\) for some \(i\in\mathcal{I}\)._
**Lemma 3.2** (Top-\(k\) chooses the most accurate tree in its search space).: _For any sample \(S\) and integers \(h,k\), let \(T\) be the output of Top-\(k\) with a depth budget of \(h\) on \(S\). Then_
\[\Pr_{\mathbf{x},\mathbf{y}\sim S}[T(\mathbf{x})=\mathbf{y}]=\max_{T^{\prime}\in\mathcal{T}_{k,h,S}}\left(\Pr_{\mathbf{x},\mathbf{y}\sim S}[T^{\prime}(\mathbf{x})=\mathbf{y}]\right).\]
We refer the reader to Appendix A for the proof of this lemma.
## 4 Theoretical bounds on the power of choices
We refer the reader to Appendix B for most of the setup and notation. For now, we briefly mention a small amount of notation relevant to this section: we use **bold font** (e.g. \(\mathbf{x}\)) to denote random variables. We also use bold font to indicate _stochastic functions_ which output a random variable. For example,
\[\mathbf{f}(x)\coloneqq\begin{cases}x&\text{with probability }\frac{1}{2}\\ -x&\text{with probability }\frac{1}{2}\end{cases}\]
is the stochastic function that returns either the identity or its negation with equal probability. To define the data distributions of Theorems 2 and 3, we will give a distribution over the domain, \(X\) and the stochastic function that provides the label given an element of the domain.
**Intuition for the proof of the greediness hierarchy theorem.** To construct a distribution which Top-\(k\) fits poorly and Top-\((k+1)\) fits well, we will partition features into two groups: one group consisting of features with medium correlation to the labels and another group consisting of features with high correlation when taken all together but low correlation otherwise. Since the correlation of features in the former group is larger than that of the latter group unless all features from the latter group are considered, both algorithms will prioritize features from the former group. However, if the groups are sized correctly, then Top-\((k+1)\) will consider splitting on all features from the latter group, whereas Top-\(k\) will not. As a result, Top-\((k+1)\) will output a decision tree with higher accuracy.
### Proof of Theorem 2
For each depth budget \(h\) and search branching factor \(K\), we will define a hard distribution \(\mathcal{D}_{h,K}\) that is learnable to high accuracy by Top-\(K\) with a depth of \(h\), but not by Top-\(k\) with a depth of \(h^{\prime}\) for any \(h^{\prime}<h+K-k\). This distribution will be over \(\{0,1\}^{d}\times\{0,1\}\), where \(d=h+K-1\). The marginal distribution over \(\{0,1\}^{d}\) is uniform, and the distribution over \(\{0,1\}\) conditioned on a setting of the \(d\) features is given by the stochastic function \(\boldsymbol{f}_{h,K}(x)\). All of the results of this section (Theorems 2 and 3) hold when the feature scoring function is _any_ impurity-based heuristic.
**Description of \(\boldsymbol{f}_{h,K}(x)\).** Partition \(x\) into two sets of variables, \(x^{(1)}\) of size \(h\) and \(x^{(2)}\) of size \(K-1\). Let \(\boldsymbol{f}_{h,K}(x)\) be the randomized function defined as follows:
\[\boldsymbol{f}_{h,K}(x)=\begin{cases}\mathrm{Par}_{h}(x^{(1)})&\text{with probability $1-\varepsilon$}\\ x_{i}^{(2)}\sim\mathrm{Unif}[x^{(2)}]&\text{with probability $\varepsilon$},\end{cases}\]
where \(\mathrm{Unif}[x^{(2)}]\) denotes the uniform distribution on \(x^{(2)}\). \(\mathrm{Par}_{h}(x^{(1)})\) is the parity function, whose formal definition can be found in Appendix B.
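To make the hard distribution concrete, here is a small sampler for \(\mathcal{D}_{h,K}\). It assumes \(\mathrm{Par}_{h}\) is the XOR (sum modulo two) of the \(h\) coordinates of \(x^{(1)}\); the formal definition lives in Appendix B and is not reproduced here.

```python
import random


def sample_D_hK(h: int, K: int, eps: float):
    """Draw one labelled example (x, y) from the hard distribution D_{h,K}
    of Theorem 2 over {0,1}^d x {0,1}, with d = h + K - 1."""
    d = h + K - 1
    x = tuple(random.randint(0, 1) for _ in range(d))
    x1, x2 = x[:h], x[h:]                 # the partition x = (x^(1), x^(2))
    if random.random() < 1 - eps:
        y = sum(x1) % 2                   # Par_h(x^(1)), assuming parity = XOR
    else:
        y = random.choice(x2)             # a uniformly random coordinate of x^(2)
    return x, y
```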
The proof of Theorem 2 is divided into two parts. First, we prove that when the data distribution is \(\mathcal{D}_{h,K}\), Top-\(K\) succeeds in building a high accuracy tree with a depth budget of \(h\). Then, we show that Top-\(k\) fails and builds a tree with low accuracy, even given a depth budget of \(h+(K-k-1)\).
**Lemma 4.1** (Top-\(K\) succeeds).: _The accuracy of Top-\(K\) with a depth of \(h\) on \(\mathcal{D}_{h,K}\) is at least \(1-\varepsilon\)._
**Lemma 4.2** (Top-\(k\) fails).: _The accuracy of Top-\(k\) with a depth of \(h^{\prime}\) on \(\mathcal{D}_{h,K}\) is at most \((1/2+\varepsilon)\) for any \(h^{\prime}<h+K-k\)._
Proofs of both these lemmas are deferred to Appendix B. Theorem 2 then follows directly from these two lemmas.
### Proof of Theorem 3
In this section, we give an overview of the proof of Theorem 3. Some of the proofs are deferred to Appendix B.2.
Before proving Theorem 3, we formalize the concept of monotonicity. For simplicity, we assume the domain is the Boolean cube, \(\{0,1\}^{d}\), and use the partial ordering \(x\preceq x^{\prime}\) iff \(x_{i}\leq x_{i}^{\prime}\) for each \(i\in[d]\); however, the below definition easily extends to the domain being any partially ordered set.
**Definition 4** (Monotone).: _A stochastic function, \(\boldsymbol{f}:\{0,1\}^{d}\to\{0,1\}\), is monotone if, for any \(x,x^{\prime}\in\{0,1\}^{d}\) where \(x\preceq x^{\prime}\), \(\mathds{E}[\boldsymbol{f}(x)]\leq\mathds{E}[\boldsymbol{f}(x^{\prime})]\). A data distribution, \(\mathcal{D}\) over \(\{0,1\}^{d}\times\{0,1\}\) is said to be monotone if the corresponding stochastic function, \(\boldsymbol{f}(x)\) returning \((\boldsymbol{y}\mid\boldsymbol{x}=x)\) where \((\boldsymbol{x},\boldsymbol{y})\sim\mathcal{D}\), is monotone._
To construct the data distribution of Theorem 3, we will combine monotone functions, Majority and Tribes, commonly used in the analysis of Boolean functions due to their extremal properties. See Appendix B.2 for their definitions and useful properties. Let \(d=h+K-1\), and the distribution over the domain be uniform over \(\{0,1\}^{d}\). Given some \(x\in\{0,1\}^{d}\), we use \(x^{(1)}\) to refer to the first \(h\) coordinates of \(x\) and \(x^{(2)}\) the other \(K-1\) coordinates. This data distribution is labeled by the stochastic function \(\boldsymbol{f}\) given below.
\[\boldsymbol{f}(x)\coloneqq\begin{cases}\mathrm{Tribes}_{h}(x^{(1)})&\text{with probability $1-\varepsilon$}\\ \mathrm{Maj}_{K-1}(x^{(2)})&\text{with probability $\varepsilon$}.\end{cases}\]
Clearly \(\mathbf{f}\) is monotone as it is the mixture of two monotone functions. Throughout this subsection, we'll use \(\mathcal{D}_{h,K}\) to refer to the data distribution over \(\{0,1\}^{d}\times\{0,1\}\) where, to sample \((\mathbf{x},\mathbf{y})\sim\mathcal{D}\), we first draw \(\mathbf{x}\sim\{0,1\}^{d}\) uniformly and then \(\mathbf{y}\) from \(\mathbf{f}(\mathbf{x})\). The proof of Theorem 3 is a direct consequence of the following two lemmas, both of which we prove in Appendix B.2.
**Lemma 4.3** (Top-\(K\) succeeds).: _On the data distribution \(\mathcal{D}_{h,K}\), Top-\(K\) with a depth budget of \(h\) achieves at least \(1-\varepsilon\) accuracy._
**Lemma 4.4** (Top-\(k\) fails).: _On the data distribution \(\mathcal{D}_{h,K}\), Top-\(k\) with a depth budget of \(h\) achieves at most \(\frac{1}{2}+\varepsilon\) accuracy._
## 5 Experiments
**Setup for experiments.** At all places, the Top-1 tree that we compare to is that given by scikit-learn [20], which, according to their documentation\({}^{2}\), is an optimized version of CART. We run experiments on a variety of datasets from the UCI Machine Learning Repository [1] (numerical as well as categorical features) having a size in the thousands and having \(\approx 50-300\) features after binarization. There were \(\approx 100\) datasets meeting these criteria, and we took a random subset of \(20\) such datasets. We binarize all the datasets - for categorical datasets, we convert every categorical feature that can take on (say) \(\ell\) values into \(\ell\) binary features. For numerical datasets, we sort and compute thresholds for each numerical attribute, so that the total number of binary features is \(\approx 100\). A detailed description of the datasets is given in Appendix C.
Footnote 2: [https://scikit-learn.org/stable/modules/tree.html/#tree-algorithms-id3-c4-5-c5-0-and-cart](https://scikit-learn.org/stable/modules/tree.html/#tree-algorithms-id3-c4-5-c5-0-and-cart)
We build decision trees corresponding to binary entropy as the impurity measure \(\mathcal{H}\). In order to leverage existing engineering optimizations from state-of-the-art optimal decision tree implementations, we implement the Top-\(k\) algorithm given in Figure 1 via simple modifications to the PyDL8.5 [1, 2] codebase\({}^{3}\). Details about this are provided in Appendix D. Our implementation of the Top-\(k\) algorithm and other technical details for the experiments are available at [https://github.com/SullivanC19/pydl8.5-topk](https://github.com/SullivanC19/pydl8.5-topk).
Footnote 3: [https://github.com/aia-uclouvain/pydl8.5](https://github.com/aia-uclouvain/pydl8.5)
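For intuition, the following is a simplified recursive sketch of Top-\(k\) (not the optimized PyDL8.5-based implementation used in our experiments): features are scored by entropy-based purity gain, the \(k\) highest-scoring candidate splits are each expanded, and the training accuracy of the best tree found is returned. Names are illustrative, and the sketch omits returning the tree itself.

```python
import numpy as np

def entropy(y):
    # Binary entropy of the labels in y.
    p = y.mean()
    return 0.0 if p in (0.0, 1.0) else -(p*np.log2(p) + (1-p)*np.log2(1-p))

def purity_gain(X, y, j):
    # Entropy reduction from splitting on binary feature j.
    mask = X[:, j] == 1
    if mask.all() or (~mask).all():
        return 0.0
    w = mask.mean()
    return entropy(y) - (w*entropy(y[mask]) + (1-w)*entropy(y[~mask]))

def topk_accuracy(X, y, depth, k):
    # Training accuracy of the best depth-limited tree found by Top-k.
    if len(y) == 0:
        return 1.0                                  # vacuous empty branch
    if depth == 0:
        return max(y.mean(), 1.0 - y.mean())        # best constant leaf
    scores = [purity_gain(X, y, j) for j in range(X.shape[1])]
    best = 0.0
    for j in np.argsort(scores)[::-1][:k]:          # k best-scoring features
        mask = X[:, j] == 1
        w = mask.mean()
        best = max(best, w * topk_accuracy(X[mask], y[mask], depth-1, k)
                   + (1-w) * topk_accuracy(X[~mask], y[~mask], depth-1, k))
    return best
```

Setting \(k=1\) recovers the standard greedy recursion, while \(k=d\) corresponds to a fully optimal search over the depth budget.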
### Key experimental findings
**Small increments of \(k\) yield significant accuracy gains.** Since the search space of Top-\(k\) is a superset of that of Top-1 for any \(k>1\), the training accuracy of Top-\(k\) is guaranteed to be larger. The primary objective in this experiment is to show that Top-\(k\) can outperform Top-1 in terms of test accuracy as well. Figure 2 shows the results for Top-1 versus Top-\(k\) for \(k=2,3,4,8,12,16,d\). Each plot is a different dataset, where on the x-axis, we plot the depth of the learned decision tree, and on the y-axis, we plot the test accuracy. Note that \(k=d\) corresponds to the DL8.5 optimal decision tree. We can clearly observe that the test accuracy increases as \(k\) increases--in some cases, the gain is \(>5\%\) (absolute). Furthermore, for (smaller) datasets like nursery, for which we were able to run \(k=d\), the accuracy of Top-8/16 is already very close to that of the optimal tree.
Lastly, since Top-\(k\) invests more computation towards fitting a better tree on the training set, its training time is naturally longer than Top-1. However, Figure 6 in Appendix E, which plots the training time, shows that the slowdown is mild.
**Top-\(k\) scales much better than optimal decision tree algorithms.** Optimal decision tree algorithms suffer from poor runtime scaling. We empirically demonstrate that, in comparison, Top-\(k\) has significantly better scaling in training time. Our experiments are identical to those in Figures 14 and 15 in the GOSDT paper [10], where two notions of scalability are considered. In the first experiment, we fix the number of samples and gradually increase the number of features used to train the decision tree. In the second experiment, we include all the features, but gradually increase the number of training samples. The dataset we use is the FICO [13] dataset, which has a total of 1000 samples with 1407 binary features. We plot the training time (in seconds) versus the number of features/samples for optimal decision tree algorithms (MurTree, GOSDT) and Top-\(k\) in Figure 3. We do this for depth \(=4,5,6\) (for GOSDT, the regularization coefficient \(\lambda\) is set to \(2^{-\text{depth}}\)). We observe that the training time for both MurTree and GOSDT increases dramatically compared to Top-\(k\), in both experiments. In particular, for depth \(=5\), both MurTree and GOSDT were unable to build a tree on 300 features within the time limit of 10 minutes, while Top-16 completed execution even with all 1407 features. Similarly, in the latter experiment, GOSDT/MurTree were unable to build a depth-5 tree on 150 samples within the time limit, while Top-16 comfortably finished execution even on 1000 samples. These experiments demonstrate the scalability issues with optimal tree algorithms. Coupled with the accuracy gains seen in the previous experiment, Top-\(k\) can thus be seen as achieving a more favorable tradeoff between training time and accuracy.
We note, however, that various optimizations have been proposed to allow optimal decision tree algorithms to scale to larger datasets. For example, a more recent version of GOSDT has integrated
Figure 2: Test accuracy comparison between Top-\(k\) for various values of \(k\). We can see that Top-\((k+1)\) generally obtains higher accuracy than Top-\(k\), and in some cases (e.g., nursery), Top-8/16’s accuracy is even comparable to the optimal tree (Top-\(d\)). Missing points in the plots correspond to settings that did not terminate within a sufficiently large time limit. All plots are averaged over 10 random train-test splits (except avila and ml-prove that have pre-specified splits) with confidence intervals plotted for 2 standard deviations.
a guessing strategy using reference ensembles which guides the binning of continuous features, tree size, and search [MZA\({}^{+}\)]. Many of these optimizations are generally applicable across optimal tree algorithms and could be combined with Top-\(k\) for further improvement in performance.
**Increasing \(k\) beyond a point does not improve test accuracy.** In our experiments above, we ran Top-\(k\) only up to \(k=16\): in Figure 4, we show that increasing \(k\) to very large values, which increases runtime, often does not improve test accuracy, and in some cases, may even _hurt_ due to overfitting. For 3 datasets - car, hayes-roth and tic-tac-toe - we plot train and test accuracy as a function of \(k\). Naturally, the train accuracy monotonically increases with \(k\) in each plot. However, for both car and hayes-roth, we can observe that the test accuracy first increases and then plateaus. Interestingly, for tic-tac-toe, the test accuracy first increases and then _decreases_ as we increase \(k\). These experiments demonstrate that selecting too large of a \(k\), as optimal decision tree algorithms
Figure 4: Test accuracy plateaus for large \(k\). All runs averaged over 10 random train-test splits with maximum depth fixed to 3.
Figure 3: Training time comparison between Top-\(k\) and optimal tree algorithms. As the number of features/samples increases, both GOSDT and MurTree scale poorly compared to Top-\(k\), and beyond a threshold, do not complete execution within the time limit.
do, is a waste of computational resources and can even hurt test accuracy via overfitting.
## 6 Conclusion
We have shown how popular and empirically successful greedy decision tree learning algorithms can be improved with the power of choices: our generalization, Top-\(k\), considers the \(k\) best features as candidate splits instead of just the single best one. As our theoretical and empirical results demonstrate, this simple generalization is powerful and enables significant accuracy gains while preserving the efficiency and scalability of standard greedy algorithms. Indeed, we find it surprising that such a simple generalization has not been considered before.
There is much more to be explored and understood, both theoretically and empirically; we list here a few concrete directions that we find particularly exciting and promising. First, we suspect that power of choices affords more advantages over greedy algorithms than just accuracy gains. For example, an avenue for future work is to show that the trees grown by Top-\(k\) are more robust to noise. Second, are there principled approaches to the automatic selection of the greediness parameter \(k\)? Can the optimal choice be inferred from a few examples or learned over time? This opens up the possibility of new connections to machine-learned advice and algorithms with predictions [20], an area that has seen a surge of interest in recent years. Finally, as mentioned in the introduction, standard greedy decision tree algorithms are at the very heart of modern tree-based ensemble methods such as XGBoost and random forests. A natural next step is to combine these algorithms with Top-\(k\) and further extend the power of choices to these settings.
## Acknowledgements
We thank the NeurIPS reviewers and AC for their detailed and helpful feedback.
Guy and Li-Yang are supported by NSF awards 1942123, 2211237, 2224246 and a Google Research Scholar award. Jane is supported by NSF Graduate Research Fellowship under Grant No. 2141064, NSF Awards CCF-2006664, DMS-2022448, and Microsoft. Mo is supported by a Stanford Interdisciplinary Graduate Fellowship and a Stanford Data Science Scholarship. Chirag is supported by Moses Charikar and Greg Valiant's Simons Investigator Awards.
|
2307.15855 | Recent neutrino oscillation result with the IceCube experiment | The IceCube South Pole Neutrino Observatory is a Cherenkov detector
instrumented in a cubic kilometer of ice at the South Pole. IceCube's primary
scientific goal is the detection of TeV neutrino emissions from astrophysical
sources. At the lower center of the IceCube array, there is a subdetector
called DeepCore, which has a denser configuration that makes it possible to
lower the energy threshold of IceCube and observe GeV-scale neutrinos, opening
the window to atmospheric neutrino oscillations studies. Advances in physics
sensitivity have recently been achieved by employing Convolutional Neural
Networks to reconstruct neutrino interactions in the DeepCore detector. In this
contribution, the recent IceCube result from the atmospheric muon neutrino
disappearance analysis using the CNN-reconstructed neutrino sample is presented
and compared to the existing worldwide measurements. | Shiqi Yu, Jessie Micallef | 2023-07-29T01:12:26Z | http://arxiv.org/abs/2307.15855v1 | # Recent neutrino oscillation result with the IceCube experiment
###### Abstract
The IceCube South Pole Neutrino Observatory is a Cherenkov detector instrumented in a cubic kilometer of ice at the South Pole. IceCube's primary scientific goal is the detection of TeV neutrino emissions from astrophysical sources. At the lower center of the IceCube array, there is a subdetector called DeepCore, which has a denser configuration that makes it possible to lower the energy threshold of IceCube and observe GeV-scale neutrinos, opening the window to atmospheric neutrino oscillations studies. Advances in physics sensitivity have recently been achieved by employing Convolutional Neural Networks to reconstruct neutrino interactions in the DeepCore detector. In this contribution, the recent IceCube result from the atmospheric muon neutrino disappearance analysis using the CNN-reconstructed neutrino sample is presented and compared to the existing worldwide measurements.
**Corresponding authors:** Shiqi Yu\({}^{1*}\), Jessie Micallef\({}^{2,3}\)
\({}^{1}\) _Michigan State University_
\({}^{2}\) _Massachusetts Institute of Technology_
\({}^{3}\) _Tufts University_
\({}^{*}\) Presenter
The 38th International Cosmic Ray Conference (ICRC2023)
26 July - 3 August, 2023
Nagoya, Japan
## 1 Introduction
Neutrinos are generated and detected in three flavors, \(\nu_{e,\mu,\tau}\), via weak interactions, while they propagate in their mass eigenstates \(\nu_{1,2,3}\). Due to their non-zero masses, the flavor observed when a neutrino is detected may differ from its flavor at creation. This phenomenon is called neutrino oscillation. Neutrino oscillations have been observed and studied by many experiments [1, 2, 3, 4, 5, 6]. The probability of being created in one flavor and subsequently detected in another is described by a unitary matrix, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [7]. This unitary matrix can be parameterized by three mixing angles (\(\theta_{12}\), \(\theta_{13}\), and \(\theta_{23}\)) and one CP-violating phase \(\delta_{CP}\); the oscillations also depend on the squared mass differences, \(\Delta m_{ij}^{2}\equiv m_{i}^{2}-m_{j}^{2}\), between the three neutrino masses \(m_{i}\), where \(i,j=1,2,3\). In this study, we measure the values of \(\Delta m_{32}^{2}\) and \(\theta_{23}\) via the \(\nu_{\mu}\) disappearance channel by studying atmospheric muon neutrinos with the IceCube Neutrino Observatory. For the other oscillation parameters, which are already well measured, we adopt the values reported in [8] (\(\Delta m_{21}^{2}\approx 7.4\times 10^{-5}\)eV\({}^{2}/\)c\({}^{4}\), \(\theta_{13}\approx 9^{\circ}\), and \(\theta_{12}\approx 34^{\circ}\)).
The spectrum of cosmic rays at Earth follows a power law with an isotropic distribution. This generates an atmospheric neutrino flux that also follows a power law at all zenith angles. In this analysis, we make use of the neutrino energy, E, and the distance from the neutrino's generation point to the detector, L, which can be parametrized as a function of the zenith angle, \(\theta_{zenith}\).
Atmospheric muon neutrinos are produced in hadronic processes when cosmic rays interact with matter in the atmosphere. These interactions occur throughout the atmosphere and across all energy ranges, providing a rich muon neutrino sample with broad ranges of neutrino energy (E) and travel distance (L). Existing long-baseline experiments have fixed baselines and narrowly peaked neutrino beam energies optimized for neutrino oscillation studies, while IceCube can exploit the highest values of L and E in its oscillation analysis. In the effective approximation of two-flavor oscillations, the \(\nu_{\mu}\) survival probability reads:
\[P(\nu_{\mu}\rightarrow\nu_{\mu})\approx 1-\sin^{2}(2\theta_{23})\sin^{2} \frac{1.27\Delta m_{32}^{2}L}{E}, \tag{1}\]
which depends on the distance traveled, L, and the neutrino energy, E. The mixing angle \(\theta_{23}\) and mass splitting \(\Delta m_{32}^{2}\) are the parameters to be measured and are plotted against L (represented by the arrival angle, \(\theta_{zenith}\)) and E in Figure 1. The value of \(\Delta m_{32}^{2}\) affects the frequency of the oscillation stripes (see Figure 1) and, hence, their locations. The brightness of the stripes is affected by the value of \(\theta_{23}\), which corresponds to the amplitude in Equation 1. The sensitivity to these two oscillation parameters arises mainly from the neutrino sample arriving through the Earth (\(\cos\theta_{zenith}\lesssim 0\)) with energy between 5 and 100 GeV. Additionally, the first oscillation "dip", for example, near 30 GeV in Figure 1 with the underlying assumptions on the oscillation parameters, gives us a strong sensitivity to the \(\Delta m_{32}^{2}\) value.
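As a rough illustration of Equation 1, the snippet below evaluates the two-flavor survival probability in the usual convention (\(\Delta m^{2}\) in eV\({}^{2}/\)c\({}^{4}\), L in km, E in GeV). The chord-length approximation for L(\(\theta_{zenith}\)) and the parameter values are simplifications for illustration, not the analysis code.

```python
import numpy as np

R_EARTH_KM = 6371.0  # neglects detector depth and production height

def pmu_survival(E_GeV, cos_zenith, sin2_2theta23=0.99, dm2_32=2.4e-3):
    """Two-flavor nu_mu survival probability of Equation 1."""
    L_km = -2.0 * R_EARTH_KM * cos_zenith  # chord length for cos(zenith) < 0
    return 1.0 - sin2_2theta23 * np.sin(1.27 * dm2_32 * L_km / E_GeV) ** 2

# A core-crossing neutrino near the first oscillation dip:
print(pmu_survival(25.0, -1.0))
```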
## 2 DeepCore Detector
The IceCube detector (see Figure 2) comprises 5,160 digital optical modules (DOMs) instrumenting a volume of over one km\({}^{3}\) of South Pole glacial ice deep below the surface. Each DOM contains a photomultiplier tube (PMT) and the associated electronics to detect and read out photons
as electronic signals. These DOMs can detect Cherenkov photons produced by the relativistic charged particles propagating through the ice, produced initially by neutrino interactions within the detector, and convert these signals into digitized waveforms. The arrival photons' charge and time information can be extracted from the digitized waveforms and used as inputs to a Convolutional Neural Network (CNN) [10, 11, 12] for reconstructing neutrino interactions. The DeepCore sub-detector, located in the bottom center of the IceCube array (see left panel of Figure 2), and the surrounding IceCube strings of the main array (eight red-filled and 19 orange-circled green dots in the right panel of Figure 2) offer exceptional capabilities to reconstruct events in the sub-100 GeV energy range. The DeepCore detector has a denser instrumented volume of ice (\(\sim\)10\({}^{7}\) m\({}^{3}\)) with higher quantum efficiency DOMs compared with those of the main array, which grants us the
Figure 1: Distribution of \(\nu_{\mu}\) survival probability with color representing the value of probability at given values of \(\cos(\theta_{\rm zenith})\) and E with oscillation parameters from the previous result [9].
Figure 2: IceCube neutrino observatory’s main in-ice array and DeepCore sub-detector (left) and surface layout of strings (right) where red-filled DeepCore strings and orange-circled IceCube main strings are used in the CNN reconstruction.
ability to study neutrinos with observed energy between 5 and 100 GeV arriving through the Earth (with L\(\sim 1.3\times 10^{4}\) km for those passing through the Earth core).
This analysis uses data taken between 2012 and 2021, corresponding to a livetime of 9.3 years. The simulation and calibration techniques are the same as in the previous result [9], whereas new machine-learning (ML) reconstruction techniques are developed and employed, and background-like neutrino candidates help to better constrain systematic uncertainties.
## 3 Reconstruction and Event Selection
We developed and applied the convolutional neural networks (CNNs) focused on the DeepCore sub-detector to reconstruct the sub-100 GeV neutrino sample, which contributes the most to the sensitivity of this study. With the help of CNN reconstructions, we select a final neutrino-rich sample with contamination from atmospheric muons well below 1% of the selected sample.
The training of CNNs is optimized separately for neutrino energy [10], arrival direction (\(\theta_{\rm zenith}\)) [11], interaction vertex position, particle identification (PID), and atmospheric muon classification [12]. We keep the neutrino candidates with their starting vertex close to DeepCore to ensure better reconstruction resolution. Energy and zenith cuts are applied to keep neutrinos with reconstructed energy between 5 and 100 GeV arriving from below the horizon, which is the region, as described in Section 1 and shown in Figure 1, that gives us the best sensitivity to oscillation parameter measurements. The CNN-reconstructed PID classifier (as shown in Figure 3) selects the signal-like candidates, i.e., \(\nu_{\mu}\) charged current (CC) interactions, over the background-like neutrino interactions (all remaining types) of this analysis. Signal-like events have a track-like topology in the detector because of their outgoing primary muons, while background-like events look like scattered cascades due to their electromagnetic and hadronic showers. A boosted decision tree is employed to serve as the atmospheric muon classifier, which helps to keep the final sample neutrino-dominated with the rate of atmospheric muon background events well within 0.6% of the sample.
The CNNs are trained independently using differently optimized samples to achieve optimized reconstruction performances on all variables. The neutrino energy and interaction vertex position
Figure 3: Stacked distributions of CNN-reconstructed PID with color representing different Monte Carlo (MC) components, dashed lines indicate boundaries between cascade- (left), mixed- (middle), and signal-like (right) events.
are trained on a \(\nu_{\mu}\) CC sample with a uniform energy spectrum between 1 and 300 GeV, with a tail extending to 500 GeV. The zenith angle, \(\theta_{\rm zenith}\), is trained using \(\nu_{\mu}\) CC events starting and ending near DeepCore, generated with a uniform true \(\theta_{\rm zenith}\) distribution. The PID identifier is trained on a sample with an equal number of track-like and cascade-like neutrino events. The atmospheric muon classifier is trained on a balanced sample of track-like and cascade-like neutrino interactions and atmospheric muons. All the DOMs on the 8 DeepCore and surrounding 19 IceCube strings, as shown in Figure 2, are incorporated into the CNN via two separate input layers due to their different spatial densities. While the CNNs achieve performance similar to the state-of-the-art likelihood-based reconstructions [13], their most considerable improvement is in processing speed (approximately 3,000 times faster), which is a significant advantage considering the large statistics of the full MC production of atmospheric neutrino datasets used in these analyses.
## 4 Analysis
We bin the selected neutrino sample using 3D binning: reconstructed energy, \(\cos(\theta_{\rm zenith})\), and PID (see Figure 3). The high-PID bin (PID \(\geq 0.55\)) has the highest purity of signal-like events, while cascade-like events dominate the low-PID bin (PID \(<0.25\)). The ten logarithmic energy bins and eight linear \(\cos(\theta_{\rm zenith})\) bins help to reveal the oscillation pattern in the low energy up-going region while not pushing beyond the limitation of reconstruction resolution. The binned analysis sample can be found in Figure 4.
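A minimal numpy sketch of this 3D binning is given below. The bin edges are illustrative values chosen to match the stated counts (10 logarithmic energy bins between 5 and 100 GeV, 8 linear \(\cos(\theta_{\rm zenith})\) bins below the horizon, and PID boundaries at 0.25 and 0.55), not the exact analysis configuration.

```python
import numpy as np

energy_edges = np.logspace(np.log10(5.0), np.log10(100.0), 11)  # 10 log bins
coszen_edges = np.linspace(-1.0, 0.0, 9)                        # 8 linear bins
pid_edges = np.array([0.0, 0.25, 0.55, 1.0])        # cascade / mixed / track

def bin_events(reco_energy, reco_coszen, pid):
    """Histogram reconstructed events into a (10, 8, 3) analysis binning."""
    sample = np.column_stack([reco_energy, reco_coszen, pid])
    counts, _ = np.histogramdd(sample,
                               bins=[energy_edges, coszen_edges, pid_edges])
    return counts
```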
The treatment of systematic uncertainties follows a procedure similar to the previous analysis [9]. In this analysis, the list of free parameters is decided by re-evaluating their impacts on the recovery of the true underlying physics parameters. The fitted values of the systematic parameters are compared to their nominal values and prior ranges in Figure 5. We employ neutrino interaction cross-section uncertainties from GENIE 2.12.8 [14], except for the deep-inelastic scattering (DIS) parameter, which is interpolated between GENIE and CSMS [15] to cover a higher energy range; uncertainties of atmospheric-flux hadronic production are parameterized following the "Barr" parameters ("Atm. flux Y/W/I/H/G") [16]; "Atm. flux \(\Delta\gamma\)" represents the uncertainty on the cosmic-ray spectral shape; "\(N_{\rm brf}\)" accounts for the difference in ice properties between the birefringent polycrystalline microstructure ice-model [17] and that of the nominal MC (SPICE
Figure 4: Selected analysis sample in bins of neutrino energy, zenith angle, and PID (cascade-, mixed-, and track-like samples from left to right). Blank bins are not used in the analysis due to their low MC statistics.
3.2.1 [9]); single-DOM light efficiency, parameters affecting photon propagation in glacial ice ("ice absorption" and "ice scattering"), and refrozen ice in drilling holes ("Hole ice, p\({}_{0(1)}\)") have been introduced in the previous analysis [9]; and "\(N_{\nu}\)" ("\(N_{\mu}\)") describes the uncertainty on the normalization of neutrino (muon) flux.
## 5 Result and Conclusion
After the final selections, the low-energy atmospheric neutrino dataset taken between 2012-2021 contains 150,257 events. We achieve a good data/MC agreement and a distinctive signature of muon neutrino disappearance in the track-like bin, as shown in Figure 6.
Figure 7 shows the 90% confidence level (C.L.) contours of \(\sin^{2}(\theta_{23})\) and \(\Delta m^{2}_{32}\), assuming neutrino masses are in the normal ordering (\(m_{3}>m_{2}>m_{1}\)). This result is consistent with all the
Figure 5: Fitted systematic uncertainty parameters pulled from nominal values compared to ranges of priors. Detailed descriptions and references of individual parameters are in the main text.
Figure 6: Data (black) and stacked MC comparisons of L/E projections with top panels showing events in cascade- (left), mixed- (middle), and track-like (right) bins with (solid) and without (dashed) muon neutrino disappearance applied, and bottom panels showing ratios of data/MC (black) and MC ratios of without/with oscillations (dashed orange).
previous accelerator and atmospheric neutrino oscillation measurements, as shown by the strong overlap in the 90% C.L. contours. Given the competitive sensitivity to the current world-leading measurements, improvements in the precision of global fits to neutrino oscillation parameters are expected once this result is incorporated. Since this analysis is sensitive to a higher energy range compared to other oscillations experiments and the detector technology is unique, it carries a distinct set of systematic uncertainties. The observed consistency is thus a strong validation of the standard three massive neutrino oscillation model.
As shown in Figure 8, the reported \(1\sigma\) uncertainties on the values of \(\sin^{2}(\theta_{23})\) and \(\Delta m^{2}_{32}\) from different experiments are largely in agreement. The \(\Delta m^{2}_{32}\) measurement from this analysis has a narrower uncertainty than the existing measurements. This primarily benefits from the first oscillation "dip" in our 2D oscillation pattern, as discussed in Section 1 and shown in Figure 1, whose location contributes most to the sensitivity of the \(\Delta m^{2}_{32}\) constraint. It also benefits from the large statistics of the selected neutrino-rich final sample, improved detector calibration and MC models, and the new ML-based reconstruction techniques.
There is a lot of room for future improvements in muon neutrino disappearance measurement
Figure 8: One \(\sigma\) uncertainties (using Wilks’ theorem and assuming normal mass ordering) on \(\sin^{2}(\theta_{23})\) (left) and \(\Delta m^{2}_{32}\) (right) of this result (black) compared with the existing measurements [3, 4, 5, 20, 6].
Figure 7: 90% C.L. contours (using Wilks’ theorem[18] and assuming normal mass ordering) and best-fit parameters (cross) of \(\sin^{2}(\theta_{23})\) and \(\Delta m^{2}_{32}\) compared to contours of other experiments [3, 4, 5, 19].
using low-energy atmospheric data with IceCube DeepCore. The near-future IceCube Upgrade [21] will help to further improve our sensitivity to muon neutrino disappearance by improving detector calibration and event resolution. Improved MC models are underway, which better describe the properties of glacial ice and the composition of atmospheric fluxes and can potentially improve future analyses. There are also ongoing analyses that benefit from the CNN reconstructions and selections, such as measurements of non-standard neutrino interactions and searches for the neutrino mass ordering. The CNN method could also be adapted and applied to the IceCube Upgrade, further improving the precision of neutrino oscillation measurements.
|
2307.04780 | Comparison of Point Cloud and Image-based Models for Calorimeter Fast
Simulation | Score based generative models are a new class of generative models that have
been shown to accurately generate high dimensional calorimeter datasets. Recent
advances in generative models have used images with 3D voxels to represent and
model complex calorimeter showers. Point clouds, however, are likely a more
natural representation of calorimeter showers, particularly in calorimeters
with high granularity. Point clouds preserve all of the information of the
original simulation, more naturally deal with sparse datasets, and can be
implemented with more compact models and data files. In this work, two
state-of-the-art score based models are trained on the same set of calorimeter
simulation and directly compared. | Fernando Torales Acosta, Vinicius Mikuni, Benjamin Nachman, Miguel Arratia, Bishnu Karki, Ryan Milton, Piyush Karande, Aaron Angerami | 2023-07-10T08:20:45Z | http://arxiv.org/abs/2307.04780v2 | # Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation
###### Abstract
Score based generative models are a new class of generative models that have been shown to accurately generate high dimensional calorimeter datasets. Recent advances in generative models have used images with 3D voxels to represent and model complex calorimeter showers. Point clouds, however, are likely a more natural representation of calorimeter showers, particularly in calorimeters with high granularity. Point clouds preserve all of the information of the original simulation, more naturally deal with sparse datasets, and can be implemented with more compact models and data files. In this work, two state-of-the-art score based models are trained on the same set of calorimeter simulation and directly compared.
###### Contents
* I Introduction
* II Deep Learning Models
* III Detector and Data Descriptions
* III.1 Calorimeter Simulation
* III.2 Datasets
* IV Results
* V Conclusion and Outlook
* Code Availability
## I Introduction
Detector simulations are essential tools for data analysis by connecting particle and nuclear physics predictions to measurable quantities. The most precise detector simulations are computationally expensive. This is especially true for calorimeters, which are designed to stop most particles and thus require modeling interactions from the highest accessible energies down to the lowest ones. Well-established experiments typically have bespoke fast simulations that capture the salient aspects of the precise simulations (usually based on Geant [1; 2; 3]).
outputs that respect permutation invariance. With a lag compared to image-based approaches, point cloud generative models for particle/nuclear physics applications have seen rapid development in recent years [37; 38; 39; 40; 41; 42]. However, until recently, these models had never been applied to calorimeter simulations.
The first (and until now, only) publication describing point cloud generative models applied to calorimeters is Ref. [35], which proposed generating Geant 'hits' (deposits of energy) prior to their discretization into cells. This innovative idea enables the separation of material interactions from readout geometry. However, the number of hits vastly exceeds the number of non-zero cells, which makes this task difficult. In this paper, we explore point cloud generative models applied directly to cell-level information. In other words, we take calorimeter images and compare state-of-the-art generative models that represent the same inputs as either images or (zero-suppressed) point clouds. As a case study, the two representations are compared using simulations of a high-granularity hadronic calorimeter, similar to the design planned for the ePIC detector at the future Electron-Ion Collider [43; 44; 45].
This paper is organized as follows. Section II describes the DL models used for the comparison. Both the image-based and point-cloud representations are generated with diffusion models in order to make the comparison as direct as possible. The simulation of the calorimeter dataset is found in Sec. III. Discussion of the advantages and disadvantages of both representations, as well as numerical results, are presented in Sec. IV. The paper ends with conclusions and outlook in Sec. V.
## II Deep learning models
Generative models for detector simulation aim to precisely emulate physics-based models, like those based on Geant, but using far less time than the full simulation. With \(\mathcal{O}(100)\) detector components, neural network architectures solely based on fully connected layers can efficiently produce high fidelity samples, resulting in surrogate models thousands of times faster than the standard simulation routines [18; 19; 20; 27]. For higher detector granularity (\(\mathcal{O}(1\)k) - \(\mathcal{O}(10\)k)), the use of data symmetries becomes crucial to achieve precision. These can be directly included in the model design through dedicated neural network architectures or included in the data pre-processing [26]. For generative models such as normalizing flows, introducing flexible network architectures is often not trivial as the model invertibility and tractable Jacobian of the transformation places a strong restriction on the model design. A second difficulty is to achieve a stable training routine of the surrogate model. At finer granularities, neural network models tend to become larger to accommodate the data complexity, often resulting in unstable training schedules. This issue becomes more prominent in generative models such as variational autoencoders, where the latent space can vary rapidly, leading to an unstable response of the decoder network, or GANs, where the adversarial training requires careful tuning of the model hyperparameters to achieve a stable training.
Diffusion models are a class of generative neural networks that allow for stable training paired with high flexibility in the model design. Data is slowly perturbed over time using a time parameter \(t\in\mathbb{R}\) that determines the perturbation level. The task of the neural network is to approximate the gradients of the log probability of the data, or the score function \(\nabla_{\mathbf{x}}p(\mathbf{x})\in\mathbb{R}^{D}\), based on data observations \(\mathbf{x}\in\mathbb{R}^{D}\) in the \(D\)-dimensional space. This can be approximated by a denoising score-matching strategy [46]. In the implementation used in this paper, data observations \(\mathbf{x}\sim p_{\text{data}}(\mathbf{x})\) are perturbed using the kernel \(\mathbf{x}_{t}\sim q(\mathbf{x}_{t}|\mathbf{x})=\mathcal{N}(\mathbf{x}_{t}; \alpha_{t}\mathbf{x},\sigma_{t}^{2}\mathbf{I})\), with time-dependent parameters \(\alpha\) and \(\sigma\) determining the strength of the perturbation to be applied. In the variance-preserving setting of diffusion processes, \(\sigma_{t}^{2}=1-\alpha_{t}^{2}\). For the time-dependence, a cosine schedule is used such that \(\alpha_{t}=\cos(0.5\pi t)\). The loss function to be minimized is implemented using a _velocity_ parameterization:
\[\mathcal{L}_{\theta}=\mathbb{E}_{\epsilon,t}\left\|\mathbf{v}_{t}-\hat{ \mathbf{v}}_{t,\theta}\right\|^{2}, \tag{1}\]
where the time-dependent network output with trainable parameters \(\theta\), \(\hat{\mathbf{v}}_{t,\theta}\), is compared with the velocity of the perturbed data at time \(t\), \(\mathbf{v}_{t}\equiv\alpha_{t}\epsilon-\sigma_{t}\mathbf{x}\), with \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The score function is then identified as
\[\nabla_{x}\log\hat{p}_{\theta}(\mathbf{x}_{t})=-\mathbf{x}_{t}-\frac{\alpha_{ t}}{\sigma_{t}}\hat{\mathbf{v}}_{t,\theta}(\mathbf{x}_{t}). \tag{2}\]
The data generation from the trained diffusion models is implemented using the DDIM sampler proposed in Ref. [47] that can be interpreted as an integration rule [48] with update rule specified by:
\[\mathbf{x}_{s}=\alpha_{s}\hat{\mathbf{x}}_{\theta}(\mathbf{x}_{t})+\sigma_{s} \frac{\mathbf{x}_{t}-\alpha_{t}\hat{\mathbf{x}}_{\theta}(\mathbf{x}_{t})}{ \sigma_{t}}. \tag{3}\]
For a fair comparison, all diffusion models are trained using the same score-matching strategy and fixed number of 512 time steps during sampling.
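A compact sketch of this setup is shown below: the cosine schedule, the velocity-to-data conversion implied by Eq. (2), and the DDIM update of Eq. (3) applied over 512 steps. Here `v_model` stands in for either trained network, and the conditioning inputs are omitted for brevity; this is an illustration of the sampler, not the released code.

```python
import numpy as np

def alpha_sigma(t):
    # Variance-preserving cosine schedule: alpha_t = cos(0.5 * pi * t).
    a = np.cos(0.5 * np.pi * t)
    return a, np.sqrt(1.0 - a**2)

def ddim_step(x_t, t, s, v_model):
    """One DDIM update from time t to an earlier time s (Eq. 3)."""
    a_t, sig_t = alpha_sigma(t)
    a_s, sig_s = alpha_sigma(s)
    v_hat = v_model(x_t, t)
    x0_hat = a_t * x_t - sig_t * v_hat                 # data estimate
    return a_s * x0_hat + sig_s * (x_t - a_t * x0_hat) / sig_t

def sample(v_model, shape, steps=512, rng=np.random.default_rng(0)):
    x = rng.standard_normal(shape)                     # pure noise at t = 1
    ts = np.linspace(1.0, 0.0, steps + 1)
    for t, s in zip(ts[:-1], ts[1:]):
        x = ddim_step(x, t, s, v_model)
    return x
```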
The fast point cloud diffusion model (FPCM) follows [41], where a permutation equivariant estimation of the score function is obtained by combining a DeepSets [49] architecture with attention layers [50]. For the point cloud simulation, two models are defined: one that learns the number of non-empty cells, conditioned on the initial energy of the incoming particle, and one that learns the score function of the normalized point cloud, conditioned on the energy of the particle to be simulated and the number of hits to be generated. This model is trained on Dataset 1, described in Sec. III.2.
The model trained on the image dataset (CaloScore) is adapted from [34] with a few modifications. Compared
to the original implementation, the calorimeter simulation task is now broken down into two diffusion models: one that learns only the energy deposits in each layer of the calorimeter, conditioned on the initial energy of the particle to be simulated, and one model that learns to generate normalized voxels per layer, conditioned on the energy deposition in each layer and the initial energy of the particle to be simulated. Additionally, the original U-Net [51] model is combined with attention layers. These changes increase the model expressiveness and the generation fidelity. This model is trained on Dataset 2, described in Sec. III.2.
## III Detector and Data Descriptions
### Calorimeter Simulation
The DD4HEP framework [52] is used to run Geant simulations of a high-granularity iron-scintillator calorimeter (based on the CALICE-style design [53]), which has dimensions similar to those of the forward hadronic calorimeter in the future ePIC detector (LFHCAL [44]) at the EIC. Specifically, the sampling structure comprises 0.3 cm scintillator tiles sandwiched between 2.0 cm thick steel plates. It consists of a total of 55 layers. The transverse area of the scintillator is set to 10 cm\(\times\)10 cm, somewhat larger than in Ref. [44]. It adopts a non-projective geometry with tower elements arranged in parallel to the \(z\) axis and has its front face at \(z\)=3.8 m.
1.7 million events of single \(\pi^{+}\) particles incident on the center of the calorimeter are simulated. The incident momentum, \(P_{\mathrm{Gen.}}\), was generated uniformly in \(\log_{10}\) space in the range \(1.0<P_{\mathrm{Gen.}}<125\) GeV/\(c\). In order to hit the center of the calorimeter, the pions were generated with a polar angle of \(\theta_{\mathrm{Gen.}}=17^{\circ}\). Because the detector is symmetric about \(\phi\), the particles are generated in the range \(0^{\circ}<\phi_{\mathrm{Gen.}}<360^{\circ}\). An energy threshold of 0.3 MeV is used to select hits for further analysis.
### Datasets
Dataset 1 is the point cloud representation of the Geant showers, while Dataset 2 represents the same showers using the image representation. Both Dataset 1 and Dataset 2 used in training share the same parent Geant simulation, such that the fast point cloud diffusion model and the image model are trained on different representations of the same set of calorimeter showers.
Dataset 1 is created by taking the Geant simulation and converting it to a format based on JetNet data [54], which stores information on jets and their constituents in a zero-suppressed point cloud representation. The Geant data is stored in files containing two datasets, _clusters_ and _cells_. The cluster dataset contains the \(P_{\mathrm{Gen}}\) of the incident pion, as well as the number of hits in the calorimeter. The cell dataset contains a fixed number of 200 cells per event. Empty cells, or cells with deposited energy below the threshold, are masked, with all values set to 0.0, and ignored during training.
The \(x\), \(y\), and \(z\) distributions of the Geant simulation are initially discrete, resulting from the digitization step of the simulation, with values equal to the centers of the cells in each dimension. The point cloud model struggles to learn extremely sharp features, as the score function is not well-defined for discrete inputs. To circumvent this, a uniform smearing within a cell width is applied to the cells along each dimension to obtain continuous distributions for the final point cloud dataset. This maintains the same distributions at histogram level when binning according to the cell width, but yields a point cloud dataset with smooth \(x\), \(y\), and \(z\) distributions. Without this smearing, the distributions in \(x\), \(y\), and \(z\) resemble a series of delta functions that the point cloud model struggles to learn. The point cloud model is trained on this smeared point cloud representation of the Geant simulation.
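A one-function sketch of this smearing step (illustrative, assuming arrays of cell-center coordinates and a known cell width):

```python
import numpy as np

def smear(cell_centers, cell_width, rng=np.random.default_rng(0)):
    """Uniformly smear discrete cell-center coordinates within one cell width.

    Histograms binned at the cell width are unchanged, but the resulting
    x, y, z distributions become continuous, so the score is well-defined.
    """
    half = 0.5 * cell_width
    return cell_centers + rng.uniform(-half, half, size=cell_centers.shape)
```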
Dataset 2 is created by converting the point cloud dataset into an image format. Images at the original granularity would be too large for the generative model. The calorimeter cells were therefore clustered into groups of 5 along each axis of the detector to create voxels, where \(5\times 5\times 5\) cells = 1 voxel. The energies of the cells making up each voxel were summed and assigned to that voxel's total energy. The final image format consists of \(11\times 11\times 11\) voxels. A hit in the voxelized dataset, as referenced in Section IV, is defined as any voxel with energy deposition above threshold.
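The voxelization can be sketched as follows (illustrative names, assuming integer cell indices in \([0,55)\) per axis):

```python
import numpy as np

def voxelize(cell_idx, cell_e, group=5, n_cells=55):
    """Sum cell energies into (n_cells // group)^3 voxels of 5x5x5 cells each.

    cell_idx: (N, 3) integer cell indices; cell_e: (N,) deposited energies.
    Returns an 11 x 11 x 11 image for the default arguments.
    """
    n_vox = n_cells // group
    image = np.zeros((n_vox, n_vox, n_vox))
    vox = cell_idx // group
    np.add.at(image, (vox[:, 0], vox[:, 1], vox[:, 2]), cell_e)
    return image
```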
For the final comparison, generated samples from the point cloud model are voxelized using the same method as for Dataset 2. All comparisons are made in this image format, at the same resolution of \(11\times 11\times 11\) voxels per image.
Images representing the full resolution of the calorimeter with \(55\times 55\times 55\) voxels were not used, as this would result in unmanageably large datasets (see Table 1), and would represent the largest calorimeter image training ever done. The point cloud model was trained on the full resolution because point clouds naturally represent the calorimeter at full granularity. Training the point cloud model on this more natural representation is in line with the goal of this work to investigate advantages/disadvantages of two representations of the calorimeter data. It is also for this reason that the generated point cloud distributions are shown separately, while the direct comparisons between models are done in the image representation. Investigating possible advantages of a point-cloud model trained directly on the voxelized dataset is left to future work.
## IV Results
All generated samples, along with Geant, are converted to the same image format at the same resolution of \(11\times 11\times 11\) voxels per event for a fair comparison. A variety of distributions are used to evaluate the quality of the generated images. After comparing calorimeter images generated by both models, the point cloud representation of Geant is compared to the generated samples of the point-cloud model to provide additional insight into the preceding image-based comparison. For all comparisons, the Earth mover's distance (EMD) [55], also known as the 1-Wasserstein distance [56], between generated distributions and Geant distributions is calculated. The EMD score is a distance-like measure of the dissimilarity between two distributions. It roughly represents the minimum amount of work needed to transform one distribution into another. While this is not the only possible metric, it is a standard and widely used statistic that was also the main distance deployed in [34], where an image-based model was compared to a Wasserstein-GAN. All EMD scores in Figures 2, 3, and 4 are calculated on the final voxelized distributions.
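For the one-dimensional observables compared below, the EMD can be computed directly, e.g. with SciPy (a sketch; function and variable names are illustrative):

```python
from scipy.stats import wasserstein_distance

def emd_to_geant(generated, geant):
    """1-Wasserstein (earth mover's) distance between two 1D samples,
    e.g. the total deposited energy per shower for a model vs. Geant."""
    return wasserstein_distance(generated, geant)
```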
Figure 1 shows a qualitative assessment of the generative models using the 2-dimensional distribution of the average energy deposition in three layers. All voxels with an expected energy deposition above 0 are populated in both the image and point cloud based models, with very few additional hits. The calorimeter showers have diverse shapes, as well as different overall voxel distributions, due to the variation of \(\phi_{\text{Gen.}}\). The qualitative similarities in each image in Fig. 1 indicate that both models reproduce the various showers from the training dataset well. Each image contains a ring due to \(\theta_{\text{Gen.}}\) being fixed while varying \(\phi_{\text{Gen.}}\).
Table 1 shows the model size, size of each dataset, and time to generate 100k calorimeter showers. The disk size and sample time under the point cloud model are for showers in the point cloud representation. The AUC is obtained from a classifier trained to distinguish the samples of both models only in the voxelized image format. Both models have very good AUC, reasonably close to 0.5, with the image model having the lower AUC. The point cloud model is smaller by a factor of 4 compared to the image based model, and samples events 3 times faster. Lastly, the point cloud dataset requires over 100 times less disk space than the image format at full granularity.
Figure 2 compares the total energy deposited in the calorimeter and total number of calorimeter hits, where a hit is defined as any voxel with energy above threshold. The EMD is also calculated between Geant and the different generative models.
Both the image-based diffusion model and the point-cloud based diffusion model are in good agreement with Geant at small deposited energies, deviating by no more than 10%. At the highest deposited energies, however, both diffusion models begin to fall away from Geant, with the point-cloud model generating less energy, and the image based model generating slightly more energy, than Geant. These trends begin at about 10 GeV, with the point-cloud model deviating slightly earlier. The point-cloud model also shows a slightly higher EMD score than the image based model. Events in the region where the deviations are largest, past 20 GeV of deposited energy, are rare, and statistical fluctuations begin to dominate the Geant distributions there.
The number of hits shows a similar trend, though with larger deviations. At a small number of hits, both models show good agreement with Geant, with deviations slightly above 10%. At 15 or more hits, both models begin to deviate well past 10%, with the point cloud model oversampling the number of hits, and the image based model generating fewer hits than Geant.
Figures 3 and 4 show the average deposited energy in the \(x\), \(y\), and \(z\) coordinates. Both models struggle in the first and last layers in the \(x\) and \(y\) coordinates, but show good agreement in the middle layers. While the image-based model shows larger deviations in the first and last layers of the calorimeter compared to the point-cloud model, it has an overall lower EMD in both distributions. The two-pronged feature of these distributions is a result of generating the pions at a fixed polar angle and varying \(\phi\). It should be noted that there are little to no hits in the first and last \(x\) and \(y\) layers of the calorimeter, so even a very small deviation from Geant will result in a large deviation percentage (bottom panels of Fig. 3 and 4). Similarly, as there are fewer hits towards the back of the detector, deviations increase slightly for the very last layers. However, the \(z\)-distributions show both models in very good agreement with the original Geant predictions, a possible effect of the \(z\)-distribution of hits being less dependent on the generated \(\theta\) and \(\phi\) ranges.
All three distributions show that the point cloud samples are systematically lower than the original Geant distributions. This indicates the point cloud model would benefit from learning the energy per layer directly, as is done in the image model described in Sec. II. This difference likely explains why this small bias is observed in the point cloud model, but not in the image model, and is an avenue for improving the point cloud model.
Following [26], a classifier was trained to distinguish between generated showers and Geant showers. The classifier consists of two fully connected layers of size 256 using the ReLU activation function. The classifier is trained only on vectors of the voxelized images of each dataset. The area under the receiver operating characteristic curve (AUC) for the image model was 0.673. The AUC for the point-cloud model was 0.726. Generally, being closer to 0.5, where the classifier is maximally confused, is the target. However, the AUCs obtained by both models are very promising, as achieving an AUC even slightly below 1.0 is non-trivial.
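A minimal sketch of such a classifier is shown below. Keras is an assumption on our part, as are the optimizer and training settings; the text only specifies two fully connected layers of size 256 with ReLU, trained on the flattened voxelized images.

```python
import tensorflow as tf
from sklearn.metrics import roc_auc_score

def build_classifier(n_voxels=11 * 11 * 11):
    # Two dense layers of size 256 with ReLU on flattened voxel images,
    # with a sigmoid output separating Geant (1) from generated (0) showers.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_voxels,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# model = build_classifier()
# model.compile(optimizer="adam", loss="binary_crossentropy")  # assumed settings
# model.fit(x_train, y_train, epochs=10, batch_size=256)
# auc = roc_auc_score(y_test, model.predict(x_test))
```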
A key advantage of the point cloud model is that the distributions at the sub-voxel level can be shown. The point cloud model already simulates the data at the original granularity of the calorimeter, and voxelization is only necessary for the image representation. The original output of the point cloud model is compared to the continuous (or smeared) Geant distributions. Figure 5 shows the number of hits in the point cloud representation of the calorimeter showers. In the point-cloud representation, a hit is defined as any _cell_ that has an energy deposited above threshold.
The point-cloud model reproduces the total number of cell hits well, much better than the voxel hit distribution shown in Fig. 2. This may indicate that, while the point cloud model is overall similar to Geant in both representations,
| Model | # Parameters | Disk Size (Full) | Sample Time | AUC |
|---|---|---|---|---|
| Image | 2,572,161 | 1016 MB (62 GB) | 8036.19 s | 0.673 |
| Point Cloud | 620,678 | 509 MB | 2631.41 s | 0.726 |
Table 1: Comparison of model size, size of data representation on disk, generation time, and AUC of the same classifier trained to distinguish between the model and the original Geant showers. All comparisons are done for 100k calorimeter showers. All results in the image row were obtained with the scaled-down \(11\times 11\times 11\) voxel images; the disk size of the image dataset at full granularity is shown in parentheses.
Figure 1: The 2-dimensional distribution of the mean deposited energy in the 1st, 5th, and 10th voxelized layer of the calorimeter. The first column is the original Geant simulation. The second column is the fast point-cloud based diffusion model (FPCM), and the 3rd column is the image-based model (CaloScore).
Figure 3: Comparison of the average deposited energies in the \(x\) (left), \(y\) (center), and \(z\) (right) coordinates. The dashed red lines in the bottom panel of each figure represent the 10% deviation interval of the generated samples from the original Geant simulation.
Figure 2: Sum of all voxel energies of the image representation of the FPCM, shown in orange, and of the image based model, shown in grey-blue. The parent Geant distributions are shown as a dotted black line in the top panels. The dashed red lines in the bottom panel of each figure represent the 10% deviation interval of the generated samples from the original Geant simulation. The earth mover's distance (EMD) between each distribution and the Geant distribution is also shown.
small deviations in point cloud distributions can be summed into larger deviations during the voxelization process, where 125 individual cells are combined into a single voxel. However, there is a large symmetry group under which mismodelings in the bigger space may not affect the modeling in the coarser space, so further investigation is needed. The very good agreement with Geant in the number of cell hits and the degrading agreement in the number of voxel hits indicate that the first diffusion model of the point cloud model architecture is performing well, while the second model, responsible for sampling the cell distributions, would likely benefit from additional tuning.
Similar conclusions can be derived from Fig. 6, which shows the generated point samples at the full detector granularity in good agreement with Geant. Fig. 6 shows the average \(x\), \(y\), and \(z\) coordinate distributions, as well as the cell \(\log_{10}\)E distribution in the point representation. Again, there are larger relative deviations in the first and last layers in the \(x\), \(y\), and \(z\) coordinates, where there are very few hits, just as in the image representation. However, there is very good agreement with the Geant simulation in layers containing a reasonable number of hits.
## V Conclusion and Outlook
In this paper, we make the first direct comparison between two score based generative models using either images or point clouds as representations of the same training data. We use Geant calorimeter simulations of a high-granularity hadronic calorimeter. Both models perform well for most distributions, with very similar AUCs, but the image-based diffusion model invariably has a lower EMD in each comparison to Geant.
Overall, the performance of the point-cloud diffusion model is very close to the image model. This is despite the point cloud model being disadvantaged in this work in a few important ways.
First, the calorimeter showers from the FPCM are closest to Geant in the point cloud representation at the full calorimeter granularity, as shown in Figs. 5 and 6, but they are later voxelized for comparison. This may compound mismodeling during the voxelization; however, further investigation is needed.
Second, the point cloud model is adapted from a model architecture initially designed for jet data from the JetNet datasets. While the high-level structure of the datasets are very similar, the data itself are quite different. For example, the first diffusion model making up the point cloud model was initially much larger, as predicting the jet multiplicity is in general a more difficult problem than the number of non-empty cells in a calorimeter shower. Reducing the size of the first diffusion model of the point cloud model architecture had no impact on performance while speeding up training. The
Figure 4: Comparison of the average deposited energy in the \(z\) coordinate. The dashed red lines in the bottom panel of each figure represent the 10% deviation interval of the generated samples from the original Geant simulation.
Figure 5: The total number of hits in the point cloud representation of calorimeter showers, at full granularity. The dashed red lines in the bottom panel of each figure represent the 10% deviation interval of the generated samples from the original Geant simulation.
Figure 6: Comparison of the average cell \(x\) (top left), \(y\) (top right), \(z\) (bottom left) and \(\log_{10}\)E (bottom right) distributions in the point cloud datasets. Each distribution is binned according to the cell-width to show the full granularity of the detector. The dashed red lines in the bottom panel of each figure represent the 10% deviation interval of the generated samples from the original Geant simulation.
second diffusion model making up the point cloud model architecture that is responsible for sampling the cell \(x\), \(y\), \(z\), and \(E\) was directly adapted from [41]. Further tuning of the point cloud model, particularly the cell model, can likely close the small remaining gap in performance. The image model, in contrast, is based on CaloScore, which was tuned specifically for calorimeter showers.
Lastly, the image-based model uses the energy deposition in each layer in addition to the generated particle momentum to condition the second diffusion model making up its architecture. The second diffusion model making up the point cloud model is solely conditioned on the generated particle momentum. This might explain why the point cloud model has systematically lower mean energy distributions (see Fig. 3 and 4) compared to both Geant and the image based model.
These potential sources of improvement in the point cloud model should not detract from its already very reasonable performance, deviating from Geant by more than 10% only in the sparsest of layers, where the image-based model also struggles. At the same time, the point cloud model offers several advantages over the image model.
First, there is the sheer size of the data: saved to HDF5 files with the same zlib compression, the point cloud data are a factor of 100 smaller than the image-based dataset at full granularity, with no voxelization. As calorimeters continue to increase in granularity, this difference will only grow.
Second, information is lost during the voxelization process: cell hits with the same \(x\), \(y\), \(z\) coordinates but different energies are summed over in the image representation. This is true even if images are produced at the full granularity of the calorimeter, where hits within single cells are summed over. This means that voxelized datasets cannot naturally be reverted back to a point cloud representation.
Additionally, as was shown in this work, the generated point clouds can be voxelized afterwards, or converted into other representations that better fit specific use cases.
This work establishes a benchmark for future research on generative models, offering valuable insights into the challenges of modeling hadronic showers in highly granular calorimeters using image-based techniques, while also exploring the potential of point-cloud methods. The current advantages of point clouds, in combination with improvements to close the remaining performance gap described earlier, will likely make point cloud based models a clear choice for highly granular calorimeters. This work should serve as a reference for studies utilizing future calorimeters based on the CALICE design, including those intended for use in CMS at the LHC and ePIC at the EIC.
## Code availability
The code used to produce the point cloud results shown in this document is available at [https://github.com/ftoralesacosta/GSM_for_EIC_Calo](https://github.com/ftoralesacosta/GSM_for_EIC_Calo). The code for the image-based model and comparisons of images is available at [https://github.com/ViniciusMikuni/Calo4EIC](https://github.com/ViniciusMikuni/Calo4EIC). Example Geant4 datasets and generated samples are available at [https://zenodo.org/record/8128598](https://zenodo.org/record/8128598).
###### Acknowledgements.
We acknowledge support from DOE grant award number DE-SC0022355. This research used resources from the LLNL institutional Computing Grand Challenge program and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAP0021099. M.A. acknowledges support through DOE Contract No. DE-AC05-06OR23177 under which Jefferson Science Associates, LLC operates the Thomas Jefferson National Accelerator Facility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
|
2303.06460 | GeoCamera: Telling Stories in Geographic Visualizations with Camera
Movements | In geographic data videos, camera movements are frequently used and combined
to present information from multiple perspectives. However, creating and
editing camera movements requires significant time and professional skills.
This work aims to lower the barrier of crafting diverse camera movements for
geographic data videos. First, we analyze a corpus of 66 geographic data videos
and derive a design space of camera movements with a dimension for geospatial
targets and one for narrative purposes. Based on the design space, we propose a
set of adaptive camera shots and further develop an interactive tool called
GeoCamera. This interactive tool allows users to flexibly design camera
movements for geographic visualizations. We verify the expressiveness of our
tool through case studies and evaluate its usability with a user study. The
participants find that the tool facilitates the design of camera movements. | Wenchao Li, Zhan Wang, Yun Wang, Di Weng, Liwenhan Xie, Siming Chen, Haidong Zhang, Huamin Qu | 2023-03-11T17:20:39Z | http://arxiv.org/abs/2303.06460v3 | # GeoCamera: Telling Stories in Geographic Visualizations with Camera Movements
###### Abstract.
In geographic data videos, camera movements are frequently used and combined to present information from multiple perspectives. However, creating and editing camera movements requires significant time and professional skills. This work aims to lower the barrier of crafting diverse camera movements for geographic data videos. First, we analyze a corpus of 66 geographic data videos and derive a design space of camera movements with a dimension for geospatial targets and one for narrative purposes. Based on the design space, we propose a set of adaptive camera shots and further develop an interactive tool called _GeoCamera_. This interactive tool allows users to flexibly design camera movements for geographic visualizations. We verify the expressiveness of our tool through case studies and evaluate its usability with a user study. The participants find that the tool facilitates the design of camera movements.
Visual storytelling, data video, geographic visualization, authoring tools
## 1. Introduction
Geographic data videos have been prospering for years as an intuitive storytelling medium for geographic data. In practice, geographic data videos increasingly comprise diverse camera movements (e.g., [(44; 77; 78)]) to depict a sequence of geographic insights. The use of appropriate camera movements is essential in the authoring of geographic data videos. First, camera movements help present geographic visualizations from suitable viewing angles. For example, a 3D bar chart placed on a map needs to be read from a lower viewpoint, and the occlusions among the bars can be reduced by animating the camera around the chart. Second, these camera movements help change the narrative focus in the geographic context smoothly, which is beneficial for driving the narration [(19; 67)]. Third, the dynamic camera movements naturally attract attention [(73)] and engage the audience [(3)]. Finally, by changing the moving speed of the camera, certain emotions can be delivered to the audience; for example, pulling the camera very slowly away from objects to suggest their isolation and loneliness [(41)].
However, it remains challenging for people without knowledge of filmmaking to design camera movements in geographic data videos. On the one hand, although a rich palette of camera movements is available [(36)], guidance in choosing the most appropriate one for a particular geographic insight or target under the story contexts is lacking; on the other hand, off-the-shelf toolkits for authoring camera movements fail to strike a balance between granularity
(expressiveness) and agency (complexity in human operations) (Sutton et al., 2017). For instance, kepler.gl (Kalalal et al., 2018), a geospatial analytics platform, supports the animation of the temporal changes in geospatial data points from a fixed point of view. Power Map (Power, 2018), a plugin for Microsoft Excel, enables users to fly between locations with a predefined camera movement on a geographic visualization. To gain more flexibility in editing camera movements, people often resort to complex video-editing software (_e.g._, Adobe After Effects (Bog
been interested in understanding how to create data videos expressively and efficiently. Prior works often carried out a qualitative examination to capture salient characteristics in existing pieces, including content analysis, user studies, or formative interviews. Amini _et al._[3] analyzed 50 data videos and summarized visual representations and attention cues of data video content. Upon the cinematic definition of four major narrative categories (_i.e._, _establisher_, _initial_, _peak_ and _release_), they revealed common narrative structure patterns in data videos. Shi _et al._[62] studied animation in data videos with regard to cinematography and summarized 4 animation techniques and 8 visual narrative strategies from 82 examples. Xu _et al._[80] focused on the openings of data videos and summarized 6 types of cinematic styles out of hundreds of classic films. Amini _et al._[4] conducted crowdsourced studies and found that pictorial or animation representations improved viewers' engagement with data videos.
In addition to the cinematic aspects, recent works turned to narrative methodologies in data videos. Cao _et al._[13] analyzed 70 data videos and presented a taxonomy, including 5 distinct genres, 4 narrative structures, and 6 narrative attributes. Shu _et al._[65] proposed a design space for data-GIFs from 108 examples. They further studied the impact on the understandability of each design dimension through interview and questionnaire studies. Yang _et al._[81] investigated 103 data videos to understand how Freytag's Pyramid, a well-received narrative structure, has been utilized. Similarly, we applied content analysis to understand the cinematic styles and animated transitions in data videos. Our concentration on geographic data videos revealed a fruitful subset that encompasses richer camera effects than general data videos. In addition, unlike previous work that mostly contributed abstract design implications, our empirical findings were directly applied in our authoring tool design.
### Authoring Geographic Data Videos
Owing to the ubiquity of geospatial data, geographic data stories have been a common category in data-driven storytelling [72], where maps are essential to provide the spatial context [14, 32]. Prior works have synthesized plenty of implications for designing a geospatial data story. For instance, Nagel _et al._ demonstrated how staging transitions could effectively explain steps from a high-level view to a fine-grained view through a public exhibition design [56]. Mayr and Windhager [53] suggested how standard spatiotemporal visualization techniques affected narrative cognitive processing. Roth [57] proposed a design space of spatial narratives with three dimensions: narrative elements, genres, and tropes. Furthermore, Latif _et al._[42] showed the textual narrative sequence and its relationship with visual counterparts in geographic data stories.
However, we observe a barrier to crafting a geographic data video that tells a story in a comprehensive and palatable manner. On the one hand, general-purpose visualization authoring tools provide limited support for geographic data [19]. Template-based or automatic tools (_e.g._, DataClip [5], Flourish Studio [25], and AutoClip [61]) ignored a rich palette of visual representations for geospatial data, such as 3D globe visualizations. According to our corpus analysis, tools that feature higher expressibility (_e.g._, Ellipsis [58], DataAnimator [71], and Animated Vega-Lite [84]) do not consider camera design, which is common in geographic data videos. Some tools are tailored for specific scenarios (_e.g._, [18, 49, 64, 69]), yet none applies to geospatial data. On the other hand, off-the-shelf visualization software (_e.g._, Tableau [48], ArcGIS [6], and Mapbox [52]) or libraries (_e.g._, deck.gl [75], kepler.gl [76], and BigQuery [29]) are flexible for analytical tasks on geographic data, but they hardly address the need for storytelling or raise high barriers. Video makers may need to screen-record their operations and edit through general video tools or write external scripts programmatically. GeoTime [24] was one of the earliest works that integrated storytelling into a data exploration tool. With several primitive features, it helped capture the analysts' insights and support later communication. The GAV toolkit [51] facilitated geographic storytelling within an interactive web context. Most relevant to our focus on geographic data videos, Power Map [55] automatically generated map transitions among consequent slides. Our work summarizes representative camera movements in geographic data videos into a design space. We further develop an authoring tool based on the design space that supports inexperienced users in integrating various camera effects into their data videos by selecting their narrative purposes.
### Camera Effects in Data-driven Storytelling
Camera effects originate from cinematography, with typical examples such as trucking, tracking, zooming, rolling, and tilting [11, 36]. With the pressing need to navigate audiences in the 3D space, camera control and motion planning have been intensively studied in fields related to computer graphics [21], including terrain visualization [60], volume visualization [33, 82], game engines [30], robotics [37], urban scene reconstruction [46, 47, 83], and virtual cinematography [31, 79]. Prior research has validated camera effects as an important construct of data stories, which remain effective for narration guidance [68], aesthetic enjoyment [62], and emotion delivery [40]. Segel and Heer [59] studied narrative visualizations and decomposed general visual narrative tactics into visual structuring, highlighting, and transition guidance. They regarded camera motions as a strategy that offers transition guidance, and they found that camera zoom contributed to highlighting. Amini _et al._[5] summarized nine major attention cues in data videos, including camera angle and zoom, that helped engage the audience. Stolper _et al._[66] studied web-based data stories and identified linking separated story elements through animation as an emerging and recurring technique. Although their corpus was based more on interactive webpages, our focus on data videos shares similarities in the continuous transition between the sequences of data stories. Most relevant to our interest in geographic data stories, Cheng _et al._[19] examined the interplay of camera effects and narrations. They concluded that map-based data clips extensively applied camera animations to steer audiences' focus, especially for insights into locations and differences and background information.
From the authoring perspective, Thompson _et al._[70] listed the camera as a type of graphics object in the design space of animated data graphics. Alteration of the camera's configuration, such as its position or projection properties, results in a view change, such as panning, zooming, and rotating. Tang _et al._[67] proposed a taxonomy of narrative transitions that classified camera motions as
one of the five transition types, with subtypes including pedestal, truck, tilt, pan, dolly, zoom, and rack focus. However, their design spaces failed to capture the relationship between narratives and camera configurations. In this work, we attempt to bridge the gap between narratives and camera configurations with empirical correlations. We recommend camera configurations suitable for the users' narrative goals, which alleviates users' burden in tweaking relevant parameters. Though prior works in other areas have contributed various methods for creating camera effects easily, such as optimizing camera trajectories (Srivastava et al., 2017; Wang et al., 2018), controlling camera motions (Srivastava et al., 2017; Wang et al., 2018), and selecting viewpoints (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018), our work differs from them in that we are concerned not with the low-level details of the camera configuration but with high-level cinematic types, because we focus on the coherence among given geographic data insights.
## 3. Characterizing the design space
In this section, we explain how we identified the design patterns of the camera movements in geographic data videos. Our methodology involves several steps: collecting a corpus of high-quality geographic data videos from online sources, analyzing the corpus to develop an initial design space, and validating the design space with two professional drone photographers. Next, we present our derived design space at the end of this section.
### Data collection
Based on the usage scenario and design considerations, we investigate the design of camera movements for geographic data videos based on real-world examples. To identify design principles, we first survey how existing hand-designed geo-stories immerse audiences in different insights into geographic visualizations. Previous research (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) collected many data videos from various online resources to explore the common design patterns and performance styles of this form of digital storytelling. These corpora cover a wide range of high-quality data videos. Using these corpora, we filtered 66 videos with map-based visualizations. Although these videos may employ camera movements in map-related and map-unrelated clips, we only focus on the map-related clips in these videos. We split these clips according to the type of camera effects and obtained 805 camera movements on the map in total. These examples are not comprehensive, because one data video tends to reuse the same camera movements to tell similar stories, and most of the data videos are only concerned with two-dimensional maps. However, we aim to derive versatile design patterns of camera movements to guide our authoring tool's implementation rather than covering all camera techniques for geographic videos.
### Analysis and Validation
Our target audiences are general users without any experience in crafting camera movements for videos, having only an initial vision for each story segment of the final video. For example, videos typically introduce the brief background at the beginning of the story and then delve further into details. In other words, users have their own narrative purposes for different data clips.
To identify an appropriate taxonomy of narrative purposes in geographic data videos, we conducted a literature review covering narrative visualization (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018), data graphics design (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), and cinematic storytelling (Wang et al., 2018; Wang et al., 2018). After the literature review, we collected a set of important narrative purposes for which camera movements can help describe key contents in a story. Taking this premise into consideration, three researchers coded each segment to analyze camera movements from the following aspects: (1) the type of movements (including details of specific parameters, _e.g._, speed) all cameras used; (2) geographic objects that camera movements describe; and (3) the narrative purpose of the segment based on the narration context. We iterated on this coding process until each segment of the camera movements could be coded consistently. Through several iterations, we identified four _geospatial targets_ that camera movements mostly focus on and six _narrative purposes_ served by different camera movements to characterize our design space. We also summarized nine basic _camera shots_ commonly used in geographic data videos to serve different _narrative purposes_ and _geospatial targets_.
After initially extracting common patterns in geographic data videos, we invited two professional drone pilots outside the data visualization domain to validate the utility of our design space and refine it. The newly introduced examples also demonstrated generalizability. Both of them have been working in aerial filming for 3 to 4 years. We first let the two drone pilots present their video work and introduced our design space to them. Then, they were asked to use the design space to analyze their work. We observed how they applied the design space and found that all of their presented work created by camera drones could be described along our three dimensions (_i.e._, narrative purposes, geospatial targets, and camera shots). Subsequently, we conducted an open-ended discussion with each expert to gain their feedback and suggestions for creating new geographic videos using our design space. Based on their advice, we modified our design space by merging related narrative purposes, for example, merging _providing context_ and _changing viewpoint_ into _supplementing information_. Redundant or uncommonly used camera shots were also removed to keep the items in each dimension succinct.
The validation resulted in four geospatial targets and five narrative purposes, which are described in subsection 3.3.
### Design Space
In this section, we describe a design space of geographic data videos. The design space consists of three dimensions: narrative purposes, geospatial targets, and camera shots. For different narrative purposes and geospatial targets, we can recommend several usable camera effects of basic camera types based on the investigation of existing geographic data videos. Figure 2 shows the distribution of camera shots and the corresponding narrative purposes with geospatial targets, as well as the distribution of the number of camera movements in one data video.
#### 3.3.1. Narrative Purposes
The _narrative purpose_ explains _why_ a specific camera movement is used when crafting a geographic data video. We summarized this taxonomy from the existing literature and refined it with expert feedback. Notably, the same camera movement can serve multiple narrative purposes, depending on its graphic parameters and relation to geospatial targets. For example, a zoom-out shot is usually used for revealing the surroundings of the focal
object, whereas such a shot at a slower speed commonly serves to increase the dynamic of the current scene.
**Emphasizing a Target** is a crucial criterion in narratology for what makes a narrative a narrative. Placing the most important parts at the center of the story is required (Krishnan et al., 2017). In geographic data videos, it is used to highlight geospatial targets specifically by making them occupy the largest percentage of the scene. Emphasizing a target is the most frequently used purpose in geographic data videos. For example, if designers want to talk about or focus on partial visualizations, then they will select an engaging way to attract audiences to those parts, such as zooming in to enlarge the important parts, or presenting such visualizations with a relatively longer duration. When emphasizing a target, video authors sometimes adopt more combinations of different camera shots to enhance engagement compared with other narrative purposes.
**Overviewing Multiple Targets** is similar to emphasizing a target. The key difference is that this purpose is aimed at observing multiple targets. Although zoom is still a useful camera shot to present multiple targets as a whole, it sometimes causes overlap issues with a huge amount of data and thus visual confusion. Therefore, video authors prefer to pan across multiple targets individually at a finer granularity.
**Making a Comparison** uses one thing to explain or justify the main topic through similarities (_i.e._, analogy) or dissimilarities (_i.e._, contrast) in narratology (Krishnan et al., 2017). It is also regarded as a common task for complex objects in data visualization (Krishnan et al., 2017). Similarly, in geographic data videos, this narrative purpose only serves multiple geospatial targets. Although both overviewing multiple targets and making comparisons can be used to present multiple objects, we can differentiate them by the adjacent scenes. A typical process for narrating a geographic visualization is that designers first overview the map to introduce the context. Then, the map is zoomed in to emphasize an important region. Later, the map is zoomed out to compare similar data attributes between the highlighted part and the other regions.
**Supplementing Information** refers to circumstances that provide context information, such as adding additional descriptions and changing the viewing angle. Success in data visualization begins with building the context needed for communication (Krishnan et al., 2017). In addition to presenting data insights, video creators sometimes need to add information that cannot be encoded in geographic visualizations to establish the context of the story. For example, if the author wants to deliver the information that Italy is a member of the European Union, they might add the icon of the European Union logo next to Italy on the map. In most cases, such context information is inserted into blank regions of the scene without information in the visualizations. Hence, a pan shot is the most used technique for this narrative purpose to save space for newly added information.
**Increasing Dynamics** refers to circumstances when no target is selected, and camera effects are used to give energy to the scene and create an atmosphere. Satisfying this narrative purpose can be very simple but greatly increase the continuity of data videos. For example, zooming in/out the map subtly or just randomly moving the map. Such scenes always happen at the beginning of the story or in a new section to introduce the relevant story and bring out the focus of the next story.
#### 3.3.2. Geospatial Targets
The dimension of _geospatial targets_ describes _what_ type of visual object is commonly presented with the camera movements. Visual objects can be presented in single mode or in groups on the map. In total, we define four types of _geospatial targets_. To describe a single object, we summarize three types of targets (_i.e._, location, region, and path) based on the geometric characterization of objects in geographic visualization. In most scenes that include many geospatial targets, the design usually aims to explore the relationship among these geospatial targets regardless of their respective types. Based on this finding, we group multiple targets (_i.e._, locations, regions, and paths) into a different type (_i.e._, multiple targets). Notably, the geospatial target can be **None** for a specific camera movement (_i.e._, _Increasing Dynamics_).
**Location** has a point-like geometry on the map. This type of target is the smallest one in the scene. Many camera shots can be used to present such a single location (_e.g._, a building or a bridge). For example, a zoom-in shot is a commonly used technique to change the narration topic from the last scene to the next single target.
**Region** has area-like geometry on the map (_e.g._, a country or a state). A region has an additional area attribute compared with a location. Sometimes, the region can ignore its area attribute; thus, region and location can be interchanged in terms of visual appearance and camera movements, and we can design similar camera movements for display. For example, if New York City is just marked to indicate where some events happened, then we can only zoom in on this
Figure 2. Statistical results of our coding on 66 geographic data videos and 805 camera movements: (A) camera shots and their corresponding narrative purposes of different geospatial targets, and (B) the distribution of camera movement number in a data video.
visualization to emphasize its importance. However, if we want to visualize the city's boundaries, then a pan shot, which is seldom used for presenting a single-location target, is a much more recommended technique.
**Path** has a line-like geometry on the map and is specific to the geographic domain (_e.g._, a river or a border). A path can show the connections among multiple locations. Thus, specific camera movements are used to overview and follow the course of the whole path.
**Multiple Targets** aims to describe a set of geospatial targets regardless of their respective types. When the designer presents many geospatial targets in a scene, they use the camera to construct an overview of, or build a relationship (such as a comparison) between, the targets, which is related to the quantity rather than the type of targets. To be more specific, the scene should be designed to cover all the geospatial targets of this set. For example, if a set of locations and another set of paths have a comparable geospatial distribution of visualizations (_e.g._, they are both within the borders of China), then it is highly likely that the two sets of targets will be presented on the map with a similar scale and similar camera movements. Limited camera movements can serve multiple targets, such as panning among adjacent targets for comparison or zooming out to expand the scale of the map to overview all related targets.
#### 3.3.3. Camera Shots
We summarized 9 types of camera movements regarding the camera's positioning, orientation, focal point, and moving path. Some types are commonly used in real-world videos, and others are expanded from cinematography in films. Our goal is to give inspiration for the design of camera movements. Notably, one camera shot can serve many narrative purposes and geospatial targets with different parameter settings, such as the camera's moving speed.
**Static:** A static shot does not move the camera, hence resulting in an unchanged scene. We still identify this type as a camera shot because the technique is frequently used in videos. A static scene can avoid audiences' confusion due to too many animations and direct their attention to the main content of the scene.
**Push In**: A push-in shot alters the positioning of the camera to become closer to an object. It is probably the most commonly used camera movement. It can reduce the map's scale to a specific geospatial target to draw the audience's attention toward the target. In particular, a push-in shot at a slow speed can increase the duration of displaying the target and leave audiences to anticipate what might happen ahead.
**Pull Out**: As opposed to push-in shots, a pull-out shot moves the camera further away from an object. In this way, the map's scale enlarges to cover the adjacent objects and surrounding environment to provide a brief context of the target.
**Pan**: A pan shot refers to the camera moving from one place to another. In this way, the pan shot can change the focus of the map and thus change the targets in the scene. Hence, it is usually used to change the topic of the story and emphasize the targets in the panned scene. Panning multiple objects one by one can build an overview of all these objects. This camera movement is identified as a whip pan when panning the camera at a quick speed. Different from panning at a normal speed, the whip pan ignores the information in the panning process. Therefore, it is handy for transitions that express the meaning of moving a large distance or time elapsing.
**Tilt**: Similar to pan movements, a tilt shot moves the camera vertically upward or downward. The tilt shot, as an unveiling technique, is helpful either to reveal from top to bottom or the reverse. For example, it can be used to trace a bar's height from its bottom to the top to emphasize the data values encoded by the bar.
**Camera Roll**: A camera roll rotates the camera on its long axis while it stays pointed at the same object. The rolling camera can emphasize the object from different perspectives.
**Arc**: An arc shot moves the camera around the same object in an arcing orbit. It is typically used to add dynamics to a static object for emphasis because of its longer duration.
**Tracking**: A tracking shot describes any shot that moves alongside a subject for a period of time. It can be used to simply follow a path and thus display detailed information, such as nearby cities and the surrounding transportation.
## 4. GeoCamera Overview
### Design Considerations
The authoring tool is designed to simplify the process of crafting geographic data videos for general users. We summarize a set of high-level design considerations for an authoring tool that empowers average users to craft the camera movements in geographic data videos quickly, based on a literature survey and our design space study. We assume that the video makers have attained sufficient insights into their geographic data, and their task is to organize these insights into a coherent story.
**(C1) Facilitate easy camera authoring with narrative purposes.** Average users may not be familiar enough with diverse camera shots to create a compelling geographic data video. To assist them in the authoring process, narrative purposes, such as emphasis and comparison, should be defined to empower users to design camera movements at the semantic level. For example, when a user tries to compare two regions, they simply select the comparison purpose and zoom into the regions they want to compare. Narrative purposes abstract away the details of configuring camera shots and offer users an easy and intuitive way to convey geographic insights with cameras.
**(C2) Suggest appropriate camera parameters adaptively.** Different from general data videos, videos in geographic visualization are more concerned about the performance stage (_i.e._, the proportion of objects in the scene and the viewing perspective (Shen et al., 2018)). Suppose the designer wants to show the highest bar in a bar visualization on the map. In that case, the scene will contain the whole bar visualization as the indispensable context (_e.g._, (Yin et al., 2018)). Non-experts easily get confused about editing obscure parameters of camera movements to choose appropriate performance stages for the selected geospatial targets. We should recommend adaptive graphics parameters for camera movements on the basis of geospatial targets themselves and the related geographic environments on the map.
**(C3) Depict the narrative timeline of a geographic data video.** In geographic data videos, designers always focus on exploring the spatial relationship and presenting spatial context. General video authoring tools (_e.g._, Adobe After Effects (Abb et al., 2018)) typically support
keyframe-based specifications for authoring the animation. However, traditional keyframes cannot guide users about the spatial context in geographic visualizations. We aim to preserve every spatial context that users intend to narrate when highlighting geographic visualizations. We propose a hierarchical timeline, including time, spatial context, and camera movements. Such a timeline allows users to recognize the current narration sequence and edit the camera and its duration based on the spatial context.
### Video Modeling and System Workflow
As illustrated in Figure 3, a geographic data video can be defined as a series of scenes. Each scene comprises one or more camera designs combined by using layouts, such as side-by-side and picture-in-picture. For each camera design, a camera shot or a combination of camera shots is chosen to present one or multiple geospatial target(s) with one of the narrative purposes.
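As a minimal sketch of this hierarchy (the class and field names below are our own, chosen for illustration; GeoCamera itself is a web-based tool and does not expose this exact API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeospatialTarget:
    kind: str            # "location" | "region" | "path" | "multiple"
    coordinates: list    # lon/lat point(s) defining the target

@dataclass
class CameraShot:
    shot_type: str           # e.g., "push_in", "pan", "arc", "tilt"
    narrative_purpose: str   # e.g., "emphasizing_a_target"
    duration_s: float = 3.0

@dataclass
class CameraDesign:
    shots: List[CameraShot]                    # one shot or a combination
    target: Optional[GeospatialTarget] = None  # None for "increasing dynamics"

@dataclass
class Scene:
    designs: List[CameraDesign]  # combined via layouts such as side-by-side

@dataclass
class GeoDataVideo:
    scenes: List[Scene] = field(default_factory=list)
```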
Given the geographic visualization, GeoCamera first needs to generate a list of camera movements for specific geospatial targets. Users first select the geospatial targets to be presented in the scene and then choose a set of camera movements for each target. GeoCamera records the selections and visualizes them in a location-camera hierarchy timeline. GeoCamera supports flexible interactions for individually editing the semi-automatically generated camera effects in the data video, through both the graphics parameters and the timeline. Lastly, GeoCamera assembles the list of camera movements into a data video.
### Creating Camera Movements
#### 4.3.1. The main previewer of interactive geographic visualizations
GeoCamera provides a canvas (Figure 1A) to visualize the geographic visualization and preview camera movements on the map individually and sequentially. In the beginning, this canvas only contains an interactive map without any encoded data as an initial hint of geographic visualizations. After users import their geographic data and select a geographic visualization type (_e.g._, hexagon), the tool will draw the selected visualization on the map with predefined visual attributes (_e.g._, color). Users can adjust the perspective and scale of the map (_Basic Control_) and select their preferred geospatial targets (_Target Selection_).
**Basic control.** Consistent with most map authoring tools (_e.g._, deck.gl [(75)]), we use an interactive map through the whole authoring process. Users can zoom the map by scrolling. They can also pan and rotate the map by clicking and dragging the cursor. These primary interaction handlers allow users to adjust the map to a favorite state in a non-programming way.
**Target selection.** With basic controls, users can opt for a single object by clicking it on the interactive visualization. The location of the object will be recorded based on the underlying geographic map. The user can also select a specific region or multiple targets by using a lasso selection (Figure 1A). Besides, we found users would probably alter the map mistakenly with imperceptible mouse operations, thereby making it challenging for them to select targets under the same viewpoint. Therefore, we designed an error-tolerance mechanism to ensure that the viewpoint stays in the same state as it was before. Users can save the current viewpoint by clicking the "snapshot" button and then go back to the previously saved map state by clicking the "reset" button (Figure 1A).
#### 4.3.2. Library of camera shots
Based on the previous investigation of real-world videos, we designed a library of camera shot templates (Figure 1B). To facilitate the camera effect authoring for users, we create different catalogs for various narrative purposes defined in the design space. Each catalog includes several camera shots collected and summarized from the examples. Note that different camera movements that belong to the same type can serve different narration purposes with various parameters. Additionally, we expand the current categories inspired by films [(74; 36)] to enrich camera shots, such as combined camera movements (_e.g._, _Arc_ with _Tilt_ shot). For the same camera shot serving different narrative purposes, the system provides different configuration settings for users (Figure 4) to achieve their narrative goals.
Based on the observation from the example videos that a central object can be served by different camera movements, GeoCamera enables users to craft camera movements one by one for specific geospatial targets. After users determine their targeted objects, they can select a camera shot to describe these targets for storytelling.
For example, if a user wants to build an overview of multiple objects in the geographic visualization, then they need to select a camera shot under the narrative purpose of "Overviewing multiple targets" in the library. A default camera shot will be added to the timeline after the user selects the target in the geographic visualization and the narrative purpose. The default camera shot is determined on the basis of the frequency statistics of the collected real-world examples. As shown in Figure 2A, the most frequent camera shot will be chosen as the default when the narrative purpose and geospatial target are confirmed. The default camera movement for the targets will be automatically created on the timeline (Figure 1D) after a few clicks and will be documented in the camera list (Figure 1C). The user can change the camera shot type for the camera movements by clicking the buttons on the camera shot list. By repeating this process, users can build their whole narration with a sequence of camera movements and their corresponding geospatial targets.
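A frequency-driven default of this kind can be sketched as a simple lookup; the entries below are illustrative stand-ins rather than the actual statistics from Figure 2A.

```python
from typing import Optional

# Hypothetical default-shot lookup keyed by (narrative purpose, target kind).
# The real defaults in GeoCamera come from the frequency statistics of the
# 805 coded camera movements (Figure 2A); these entries are illustrative.
DEFAULT_SHOT = {
    ("emphasizing_a_target", "location"): "push_in",
    ("overviewing_multiple_targets", "multiple"): "pan",
    ("making_a_comparison", "multiple"): "pull_out",
    ("increasing_dynamics", None): "push_in",  # slow, long-duration variant
}

def default_shot(purpose: str, target_kind: Optional[str]) -> str:
    # Fall back to a static shot if the combination was never coded.
    return DEFAULT_SHOT.get((purpose, target_kind), "static")
```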
#### 4.3.3. Adaptive parameter setting for camera movements
The camera movement in our system depends on the narrative purpose, the camera shot, and the geospatial target selected. It is achieved by interpolating from the initial state to the final state of the camera. However, configuring the states of a camera is a challenging and time-consuming task for a lay person. Thus, we introduce an automation service that adaptively sets up camera movement parameters for the chosen geospatial target(s) after the user determines the camera shots under a specific narrative purpose. This step spares the user from setting a camera's states manually, which largely simplifies camera movement creation to only a few clicks.
The initial camera state of a camera movement can be set by considering the current viewpoint or the final camera state of the last movement. Meanwhile, the final state of the camera is dynamically adjusted based on three aspects of the selected geospatial target(s): the centroid, the bounding box, and the related data of the selection. The centroid is generally used to define the final location and the focus point of a camera. Moreover, the bounding box and the data of the selected target(s) decide the altitude of the camera.
If the user selects a location target, then the centroid is its geographic coordinates. For a region target selected with the lasso selection tool in the UI, we calculate the centroid of the polygon. In particular, for a path target or multiple targets, we compute the bounding box first and then obtain the centroid based on the bounding box.
After deciding the location and focal point of a camera, we need to achieve an appropriate viewport for the target. The core idea is to ensure that the viewport covers the target(s) the user wants to observe. For instance, for a push-in shot, the final state of the camera movement will be set to vertically focus on the target or the centroid of the bounding box. At the same time, the altitude of the camera after the push-in movement will be set to a value that leaves a margin of at least 10% on each side. Similarly, we heuristically define a set of rules for the eight camera shots to adjust the viewport of the camera dynamically based on different geospatial targets.
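The bounding-box fitting can be sketched as follows; this is a minimal illustration assuming a Web Mercator map (as in deck.gl) and a single margin rule, not GeoCamera's full heuristic rule set.

```python
import math

def fit_viewport(bbox, viewport_px=(800, 600), margin=0.10, tile_size=512):
    """Choose a camera center and zoom so a target's bounding box fits the
    viewport with at least a `margin` fraction of padding per side.

    bbox: (min_lon, min_lat, max_lon, max_lat)
    """
    min_lon, min_lat, max_lon, max_lat = bbox
    center = ((min_lon + max_lon) / 2, (min_lat + max_lat) / 2)

    def merc_y(lat):
        s = math.sin(math.radians(lat))
        return math.log((1 + s) / (1 - s)) / 2

    # Fractions of the world map that the bounding box spans at zoom 0.
    frac_x = max((max_lon - min_lon) / 360, 1e-9)
    frac_y = max(abs(merc_y(max_lat) - merc_y(min_lat)) / (2 * math.pi), 1e-9)

    usable_w = viewport_px[0] * (1 - 2 * margin)
    usable_h = viewport_px[1] * (1 - 2 * margin)
    zoom_x = math.log2(usable_w / (tile_size * frac_x))
    zoom_y = math.log2(usable_h / (tile_size * frac_y))
    return {"center": center, "zoom": min(zoom_x, zoom_y)}

# e.g., fit the bounding box of the eastern United States
state = fit_viewport((-90.0, 25.0, -67.0, 47.5))
```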
#### 4.3.4. Location-camera hierarchy timeline
In addition to playing a single camera movement, GeoCamera also supports playing all camera movements sequentially. In the timeline panel (Figure 1D), we visualize two timelines: a general timeline and a location-camera hierarchy timeline. In general, we use a simple slider to show the playing process of all cameras in the camera list. After users click the triangle button above the general timeline, the previewer will play the camera on the geographic visualization, and the general timeline will visualize the current playing position.
However, the general timeline only shows the temporal information of the entire camera sequence. We design a location-camera hierarchy timeline to expose the temporal information of each camera movement and each geospatial target. We draw a timeline for every camera in the camera list, with a range encoding the duration of the camera movement. To help users relate geospatial targets to camera movements, we aggregate cameras with the same geospatial targets and add a location-level timeline to show how long the video focuses on the same targets. All the location-level timelines are arranged chronologically; therefore, camera-level timelines are in the same order. Users can drag marks on camera-level timelines to change the cameras' duration. When creating a story, having a break before starting a new topic is common, but continuous sentences are used within a topic. Based on this consideration from the narrative aspect, we identify a continuity rule between two consecutive timelines when editing duration: an interval without cameras between two consecutive location-level timelines is possible, but an interval between two consecutive camera-level timelines under the same location layer is not allowed. We also design a semantic overview of all timelines. Location-level timelines are named after their related locations. If we cannot obtain the exact name from the imported data, the tool shows the longitude and latitude of the location. Camera-level timelines are named after their corresponding camera movements and narrative purposes.
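The continuity rule can be sketched as a small validity check over the hierarchical timeline; the data structures below are assumed for illustration and are not GeoCamera's source.

```python
def timeline_is_valid(locations):
    """Check the continuity rule described above: within one location-level
    timeline, consecutive camera-level segments must be contiguous; gaps are
    allowed only between different location-level timelines.

    locations: list of location-level timelines, each a list of
               (start_s, end_s) camera segments sorted by start time.
    """
    for segments in locations:
        for (_, end_prev), (start_next, _) in zip(segments, segments[1:]):
            if start_next != end_prev:  # gap inside one location layer
                return False
    return True

# Valid: the 8 s-10 s gap falls between two location-level timelines.
assert timeline_is_valid([[(0, 3), (3, 8)], [(10, 12), (12, 15)]])
```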
GeoCamera smooths transitions with linear interpolation among camera movements. When users craft the camera movements, they adjust the duration of movements at a single level. For the final data videos, we set a rule to avoid temporal contradictions among camera movements for different locations. However, an interval can appear between the ending state of the last camera movement and the initial state of the next movement. Considering the videos' coherence, we provide a special camera movement called "linear interpolation" to fill in the gaps in time. Different from other camera movements, this movement focuses on no geospatial target. Its performance in the scene is the same as "fly to," an automatic map
Figure 4. Different suggested options for a _Push In_ shot that serve different narrative purposes: (A) _Emphasizing a Target_ and (B) _Increasing the Dynamic_. Different intensities of the movement influence the final effects of the camera movement.
Figure 3. The video model in GeoCamera. A geographic data video is a series of scenes. Each scene comprises one or multiple camera designs. Each camera design includes one or two camera shots. Each camera shot serves a geospatial target.
animation that is commonly used in map visualization tools (_e.g._, Mapbox (Salaman et al., 2017)).
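A bare-bones sketch of such an interpolated filler movement over the map-camera parameters named above (the dict-based state and frame count are illustrative assumptions):

```python
def lerp_camera(state_a, state_b, t):
    """Linearly interpolate two camera states (t in [0, 1]). States hold the
    map-camera parameters mentioned above (longitude, latitude, zoom, pitch,
    bearing); a real implementation would additionally wrap the bearing
    across 360 degrees instead of interpolating it naively."""
    return {k: (1 - t) * state_a[k] + t * state_b[k] for k in state_a}

start = {"longitude": -0.13, "latitude": 51.51, "zoom": 5, "pitch": 0, "bearing": 0}
end = {"longitude": 2.35, "latitude": 48.86, "zoom": 9, "pitch": 45, "bearing": 30}
frames = [lerp_camera(start, end, i / 59) for i in range(60)]  # 60 frames
```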
### Configure Visual Effects
#### 4.4.1. Camera configuration with single previewers
Although GeoCamera recommends adaptive camera movements, users may not be satisfied with the results. To support flexible refinement, GeoCamera enables users to edit every camera movement's graphic parameters in the camera list (Figure 1C). At the start, all camera parameters are folded up. If users want to edit a camera movement, they click its menu to expand its parameterization panel. These parameters are professionally related to the map, such as zoom, pitch, and bearing. Users who are not familiar with the geographic domain can easily get stuck in understanding these parameters. Therefore, GeoCamera provides a single-previewer approach to help users skip this understanding process when refining a camera movement. By clicking the "edit state" button, GeoCamera displays a small view with two canvases (Figure 5). One canvas shows the state of the geographic visualization in the previewer when the camera starts moving, and another is for the ending. Both canvases have the same basic interaction handlers as the main previewer. Users adjust the states of the two canvases with the mouse until they are satisfied. GeoCamera records the adjusted states and resets the parameters of the camera movement based on the recording.
To guide users to a preliminary concept of how the camera movement works, GeoCamera allows them to play each camera movement individually. If users want to watch a camera movement without completing the whole story, they just need to click the small triangle button in the parameterization panel of this camera. Then, in the previewer, GeoCamera plays the change in the geographic visualization using the camera movement from the initial state to the ending state.
#### 4.4.2. Visualization authoring
Our tool aims to provide an easier way to craft camera animations for the presentation of geographic visualizations. This method requires an accomplished geographic visualization as the input. However, our target audiences include non-experts in the geographic data analysis domain, who are unfamiliar with selecting proper visualization types and drawing the selected visualization on the map. With this in mind, GeoCamera wraps graphing functions to support the authoring of several common visualizations from cleaned geographic data. Corresponding to the geospatial targets summarized before, these visualizations include point, three-dimensional rectangular, and line maps. After importing geographic data (_i.e._, every object has its exact latitude and longitude), users select an appropriate geographic visualization according to the data type. Users change the visualization type with the drop-down menu in the Configuration Panel (Figure 1C), and then GeoCamera displays the selected visualization in the Preview Panel (Figure 1A). In this way, users only need to import the geographic data rather than a finished geographic visualization, which lowers the threshold of using GeoCamera.
The output of GeoCamera is a geographic data video that assembles all camera movements. However, a successful data video contains both visual and auditory stimuli (Beng et al., 2017), and camera animation is only one component of the visual design. Although our work focuses on camera movements in geographic storytelling, GeoCamera provides an additional annotation layer for design enhancement. Users can click the "Annotation" button in the Configuration Panel (Figure 1C), and the panel will display all the current textual annotations corresponding to each camera movement, aligned by time sequence. The default annotation of a camera movement is none. The user can choose a camera movement and edit the text, and the tool will add an annotation layer for this camera. We align the time of the added annotation layer with its camera.
## 5. Evaluation
We evaluate GeoCamera through (1) two example cases with a range of camera shots to showcase its expressiveness and (2) a user study to verify its usability.
### Example Cases
To demonstrate the generalizability of GeoCamera in different geographic visualizations, we select a 2D and a 3D geographic data video as our use cases. The geographic visualization in the first example is a combination of a heatmap and a scatter plot, showing
Figure 5. The user can edit a _Tilt_ shot with two single previewers: (A) one is used for editing the initial state of the camera movement, and (B) another is for the final state. Then, the camera movement is interpolated between the two states.
Figure 6. The data visualization about gun violence from 2013 to 2017 in the United States.
more than 140,000 data points of people being killed or injured in America. This visualization supports location-based and region-based geographic data stories, and the data video is created based on them. Another example is a visualization showing COVID-19 cases in Hong Kong. We replicate a designer-crafted real-world data video explaining the case distribution. The video contains a railway network map and scatter plots on the map, and utilizes camera movements (_e.g._, _Tilt_ and _Arc_ shots) for diverse geospatial targets of locations, paths, and regions that are rarely observed in the 2D environment, thereby complementing the types of camera movements in the previous example.
#### 5.1.1. US Gun Violence
In this case, we attempt to use GeoCamera to create a data video to present insights from the data about people killed or injured by guns in the United States from 2013 to 2017. The visualization contains a scatter plot for every single point in the dataset and a heatmap for the clustered points (Figure 6). At the beginning of the video, we provide an overall introduction to the dataset. We apply the frequently used _Push in_ shot in the _Increasing Dynamics_ category (Figure 2) to gradually immerse the audience in the visualization scene. The camera movements for this category are mild, thereby suggesting a long-duration _Push in_ over a short distance. The camera movement is set to last 10 seconds with the annotation showing _"The data are about recorded gun violence incidents in the US between 2014 and 2017."_ Subsequently, we observe that the eastern United States has a relatively higher chance of shooting than the other areas of America. We select the eastern part to present this observation. We again use a _Push in_ shot, but in a different category of _Overviewing Multiple Targets_. This time, the camera pushes in relatively quickly and focuses on the selected region of America within 2 seconds, and then remains for another 5 seconds to show the annotation _"The visualization indicates that eastern America has a higher chance of shooting incidents than the other parts of America."_ Then, we compare the overall shooting rate between eastern America and middle America. We select the middle part in the visualization, and the system automatically suggests a _Pull out_ shot in the timeline after we select the _Making a Comparison_ category to show the two regions together at the same time. The camera movement takes about 2 seconds, and remains for another 2 seconds with the annotation showing _"However, the middle part of America is relatively safe."_ Subsequently, we aim to show another observation that the number of gun-related incidents stands out in California. We replace the default camera shot (_i.e._, _Push In_ shot) with the _Camera Roll_ in the _Emphasizing a Target_ category to roll the camera while maintaining its focus on the center of California. This eight-second camera movement is about _"The number of gun-related incidents evidently stands out in California,"_ and the audience is provided sufficient time to understand the surroundings of the mentioned state. What follows next is the most dangerous city with the highest number of shootings. We further use another _Push In_ shot in the same category to draw attention to Chicago. The camera gradually moves to Chicago for an additional 3 seconds to show that _"Unexpectedly, Chicago is the most dangerous city with the highest number of gun-related violence."_ Finally, we select a _Pull out_ shot in the _Increasing Dynamics_ category to leave the visualization scene in 10 seconds: _"Overall, nearly 226 thousand gun-related cases were recorded in the US, and 60 thousand people were killed."_ The total duration of the final video with the six camera movements is 43 seconds. The time distribution of the corresponding six narrative purposes (_i.e._, _Increasing Dynamics_, _Overviewing Multiple Targets_, _Making a Comparison_, _Emphasizing a Target_, _Emphasizing a Target_, and _Increasing Dynamics_) is shown in the timeline, indicating the story flow of the generated video. More details are presented in the supplementary material.
#### 5.1.2. Reproducing Camera Movements
To demonstrate the expressiveness of the camera movements in our system, we reproduce the camera movements in a data video introducing the COVID-19 status in Hong Kong during February 2022. We use the manual mode, which does not require specifying the narrative purposes for the camera movements, given that the camera effects have already been provided by the sample video. The manual mode is designed for the detailed control of generating specific camera effects from scratch without considering the narrative purposes. The data video contains 32 camera movements covering the _Push in_, _Arc_, _Camera roll_, _Zoom out_, and _Pan_ shots (Figure 7). The set of camera movements forms a three-minute data video. More details about the reproduction results are shown in the supplementary material.
### User Study
We conducted a user study to validate whether nonprofessionals could easily create different camera movements for a data story and to identify any usability issues that could inform improvements to the system.
#### 5.2.1. Participants
We recruited eight participants (two females and six males) with knowledge in geographic visualization, denoted as P1-P8, for this study. They include graduate students studying data visualization and employees of an IT company in roles related to data visualization. None of the participants took part in any activity related to the system design or the preliminary study, and all reported that they had no or limited experience in camera movement design.
#### 5.2.2. Visual Materials and Data
We provided the participants with slides that introduced each category of our design space with
Figure 7. The snapshots of the sample camera movements taken from the real-world geographic data video. The camera shots identified in these camera movements: _Push In_ shot, _Arc shot_, _Pull Out_ shot, and _Camera Roll_ shot.
examples. The slides were used as the teaching material, and the participants were encouraged to browse the slides when creating camera movements with our system.
The data used to create data videos in the user study cover personal injury road accidents in the United Kingdom since 1979. The data are aggregated and visualized with a hexagon-based heatmap (Figure 1(a)). The color and height of a hexagon are determined by the data points it contains. In addition to the data visualization, we also provided the participants with various pre-extracted story insights (_e.g., "London has the most road accidents"_ and _"The road accidents in the Scotland area are very low"_) and their corresponding visualizations to keep the focus of the user study on planning and creating camera movements.
#### 5.2.3. Procedure
The user study contains three sessions, as follows: (1) a tutorial session to familiarize the participants with GeoCamera, (2) a creation session to experience the authoring process, and (3) a post-study evaluation to measure their subjective assessment of the utility of the system.
**Tutorial.** We started the user study with a 15-minute introduction explaining our design space. Subsequently, we provided a 20-minute demonstration of the use of the GeoCamera system, including its functions for setting up camera movements and the related interactions in camera movement editing, with an example dataset. Then, the participants were asked to freely explore every function as well as the interactions of the system and to raise questions whenever necessary. After the participants familiarized themselves with the tool, we introduced in detail the formal dataset for the user study, the encoding scheme of the proposed visualization, and the corresponding story insights for building camera movements with GeoCamera.
**Creation.** After the tutorial, we asked the participants to use GeoCamera to make their own geographic data videos based on the pre-extracted story insights. The participants could browse the slides as a reference and seek guidance for using the system. When finished, each participant shared and explained his or her data video. The creation phase lasted approximately 15-30 minutes.
**Post-study Survey and Interview.** When the exploration and creation of the camera movements were completed, the participants were asked to answer a post-study questionnaire using a 5-point Likert scale (1 for strongly disagree and 5 for strongly agree). The questionnaire intends to assess the usefulness, ease of use, and satisfaction (Wang et al., 2018) of GeoCamera. Finally, we conducted a semi-structured interview to collect qualitative feedback from each participant.
All participants completed the entire study in approximately 75-90 minutes and were compensated with a gift card worth $15 at the end of the interview session.
#### 5.2.4. Results and Findings
All the participants could complete the authoring tasks with minimal guidance. Figure 8 shows the snapshots of an example video generated by a participant with GeoCamera.
Then, we collected the participants' subjective ratings in the form of a 10-question survey and qualitative feedback for GeoCamera from the semi-structured interview. Figure 9 presents the questions and average user ratings. Generally, all participants agreed that GeoCamera is a useful tool for creating expressive geographic data videos with intuitive guidance and is easy to learn. They showed a strong willingness to use GeoCamera to simplify the prototyping process.
**Usability.** All participants agreed that our tool reduces the manual effort in the geographic data video creation process. P4 mentioned that _"It does not require writing code or setting complex parameters."_ Our tool provides a code-free procedure that allows video creators to be more productive. Furthermore, the participants reported the effectiveness of the automatic parameter setting for the camera movements. P1 stated that _"Taking five minutes to generate a one-minute video is very efficient. It can save a lot of time."_ P5 described the tool's benefit for nonprofessionals: _"I feel the tool is efficient for the nonprofessionals given the story to be told."_
Two participants showed their preference for the Timeline Panel. P1 described how the timeline provided a general understanding of the video's structure: _"The timeline can show the distribution of the cameras for telling a story. Thus, I can have a general idea of the video's purpose."_ P5 mentioned, _"The timeline control is easy to use because it follows the design of regular video editing software."_
**Learnability.** Most of our participants agreed that learning and using the system and its integrated functions for camera movements were easy. P1 responded very positively to learning our tool as a novice user: _"I can quickly learn the visual effects of different camera movements after exploring the tool."_ We also heard some descriptions of how our tool satisfies their needs to create camera movements with automatic parameter settings and self-controlled functions. P7 commented that _"The recommended parameters become more useful when the complexity of the camera movement increases."_ P8 also stated that _"The system provides the user with automation and control at the same time."_ Regarding the whole creation process with the tool, two participants (P4 and P8) suggested that our system ensures a smooth flow in the creating and editing operations in GeoCamera: _"The overall (camera movement creation) process is smooth and natural."_
**Expressiveness.** Overall, the participants agreed on the expressiveness of our tool. They indicated that the proposed design space of the camera movements in geographic data videos satisfies their requirements for crafting videos. The participants also suggested that GeoCamera could help discover new camera movements and learn the narrative purpose that the camera movements could serve.
First, all participants suggested that our design space for camera movements in geographic data videos is _"clear and easy to understand."_ P4 stated that _"Novice user may not know where to start to generate different camera effects for geographic data visualization."_ They described how the design space helped them create geographic data videos. P6 noted that our design space _"makes sense"_ and is _"reasonable to describe from the what, why, and how perspectives."_ In particular, the narrative purposes in our tool can provide guidance to obtain a _"more logical"_ video structure. P8 emphasized the usage of narrative purposes: _"The narrative purposes basically satisfy my intent to tell stories through camera shots."_
Second, our participants expressed their satisfaction with the diversity of the camera movements generated by our system. P1 stated that _"The current camera effects are close to the common camera movements, including those in the data videos."_ We observed that some participants attempted different camera shots under a specific narrative purpose prior to the final decision, such as P6: _"I did not know many of the camera shots before. The system organizes
the camera shots by purpose and teaches me how to use them._" P5 mentioned the final performance of the crafted videos and agreed that the final video is _"good enough for general purposes, such as reporting findings and showing insights."_ P8 showed great interest in _"some unrealized camera shots"_ suggested in the left panel. He commented that the camera shots organized for narrative purposes could increase creativity in the video authoring process.
**Flexibility.** Many participants appreciated the flexibility of our tool for editing camera movements. P3 agreed that our tool provides a sufficient degree of freedom to create camera movements: _"The tool with camera module I previously used only provides limited choices of camera and usually fixed path."_ Another participant (P1) also liked the user interface for camera movement editing: _"When editing the initial and final states, the camera movements can be accurately controlled."_ P7 liked the exporting and importing functions in the tool because they _"make the camera movements reusable."_
**Mixed-initiative Authoring.** In the survey, we found that many participants preferred the default mode with adaptive camera movement settings. In the interview session, we also asked the participants about their preference between the default mode with adaptive parameter settings and the manual mode for crafting camera movements. The participants stated that _"the adaptive camera movements are convenient"_ (P2) and that they would love to _"use auto mode first and then fine-tune the details"_ (P4). However, we also discovered that the adaptive parameter setting is not always the first option, especially for proficient users. P7 preferred to edit camera movements manually: _"For simple camera movements, I would use the manual mode."_ As a proficient user, P8 commented that _"When I already know what effect I want to achieve, I use the manual mode more frequently."_
**Future Usage.** In our interview, the participants expressed their strong willingness to use the tool in the future. P1 said, _"This technique can be used to generate highlight replays in e-sports. It greatly reduces efforts to show the exciting moments in a virtual space."_ P4 said, _"After modeling and editing in a 3D scene, the method can simplify the demonstration process."_ P8 has experience in drone photography, and he mentioned that _"Shooting with a drone is not easy. Maybe I could use this tool for planning and previewing before shooting from my drone."_ The participants also provided suggestions for future system improvement in terms of visual design and camera creation. P5 noted that _"The annotation could be better by matching the length of the text and the duration of the camera movements."_ P3 said, _"Combining different camera movements by the dragging and dropping interactions would be helpful."_ P2 and P3 voiced their confusion about the camera's moving trajectory: _"It would be clearer to have an overview of the camera's moving trajectory in the visualization."_
Figure 8. Video snapshots from the user study generated with GeoCamera. Each camera movement contains two snapshots of the initial and the final state.
Figure 9. Ratings for system usability on a 5-point Likert scale (N=8). The middle column shows the detailed questions. The right column displays the averages and standard deviations.
P4 stated that the camera system could be integrated into a real-time monitoring system for _"presenting anomaly information and tracking."_ P7 suggested _"building a fully automatic system from detecting the insights and creating a fast preview."_ P2 inspired us to improve camera movements by _"considering and optimizing the overall speed."_ P3 and P7 suggested making the intermediate process of a camera movement more configurable (_e.g._, changing the path or setting an ease function for the motion) and storing the customized ones in the system. P6 and P7 expected that the system could consider the artistic aspects of the camera effects. P8 suggested, _"Whether we can change the focus of the camera and exit the current scene using blur effects."_ The detailed comments from the user study are listed in the supplementary material.
## 6. Discussion
In this section, we discuss the current limitations of our study and recommend future directions.
**Understanding the best practices in authoring geographic data videos.** We treat our design space as a probe for camera movements in geographic data videos rather than a comprehensive characterization. First, a corpus can never be comprehensive; more instances may expand the sub-categories of each dimension. Second, we only analyzed videos lasting 3-10 minutes. Therefore, the design space may not be valid for long videos or short-form videos, such as GIFs. In addition, the camera movement recommendations in GeoCamera are based on the statistical frequencies of the combinations in the corpus. However, this approach is not always optimal for creating a compelling and persuasive camera movement given a narrative purpose. Further investigation of global optimization for camera movement configurations under different narrative purposes is promising for improving the overall engagement of the generated data videos. Last, when contextualizing existing taxonomies into geographic data videos, we found that not all items fit, and some require adaptations. For instance, not all established camera effects were identified in the corpus, _e.g._, the dolly zoom, and the "Increasing Dynamics" intent does not fit into existing narratology. We anticipate future studies to evaluate and extend the design space. For example, in-depth interviews with practitioners may reveal emerging tactics of camera movements that cover other dimensions to transit or surface geographic data insights. As the design space categorizes general narrative purposes, geospatial targets, and camera shots in geographic data stories, a closer examination of a particular category remains promising.
**Extending GeoCamera to multifaceted authoring scenarios.** While the usability of GeoCamera is recognized by the target users in the user study, its design might have overlooked diversified authoring scenarios (Han et al., 2018; Wang et al., 2019; Wang et al., 2019). First, our design considerations were largely based on a profile of an average user. We strive to reduce the difficulty for amateur video makers in crafting camera movements, which is also beneficial for rapid prototyping when designing formal presentations. However, professionals, such as data journalists, may require more creativity support in different design phases (Kang et al., 2019). For instance, the interoperability of GeoCamera should be improved given that practitioners often iterate between tools to achieve higher expressibility (Han et al., 2018). Second, our assumption on the workflow, _i.e._, to connect a given sequence of story pieces covering data insights, can be overly simplified. Prior research (Kang et al., 2019; Wang et al., 2019) suggested that data storytelling is a much more complicated process, encompassing stages including exploring the dataset and selecting and organizing the findings. Thus, recommending data insights and story structures to alleviate human labor remains promising (Kang et al., 2019).
**Enriching editorial layers for storytelling.** Thus far, the research in authoring tools for data videos is still in its infancy (Han et al., 2018). We initially contributed an approach for average users to author the camera movements in geographic data videos. Notwithstanding, many other design features also constitute a successful geographic data video. We envision future tools to encapsulate a broader set of editorial layers for producing more engaging, persuasive, and compelling results. For instance, visual embellishments enhance the aesthetics and imply the narrative topic (Kang et al., 2019). Animated narratives can better illustrate a concept (Kang et al., 2019) or express certain emotions (Wang et al., 2019; Wang et al., 2019). In terms of the cinematic effect, our current model of geographic data video supports picture-in-picture by splitting the video into halves from the middle. The model can be further extended to enable more flexible scene arrangements. Aligning the video with background music also remains interesting (Wang et al., 2019).
## 7. Conclusion
This paper presents GeoCamera, a geographic data video authoring tool that empowers users to tell appealing geographic stories with tailored camera movements. Based on a design space that summarizes diverse narrative purposes, geospatial targets, and camera shots, GeoCamera facilitates the easy creation of coherent camera movements by allowing users to simply select objects on the map and specify an appropriate narrative purpose. GeoCamera has been evaluated with case and user studies, showing promising expressiveness and usability in helping its users author diverse camera movements effortlessly. In the future, we would like to extend GeoCamera to cover more complex authoring scenarios with exploration and creativity support, while providing enriched editorial layers for more engaging, persuasive, and compelling geographic storytelling.
###### Acknowledgements.
The authors would like to thank the experts and participants for their help in the project, as well as the anonymous reviewers for their valuable comments. This work is partially supported by Hong Kong RGC GRF Grant (No. 16210321), a grant from MSRA, and NSFC (No. 62202105).
|
2305.02588 | Properties of N, $Δ$ Baryons with Screened Potential | N and $\Delta$ baryons hold an important place towards understanding the
quark dynamics inside hadrons. The hypercentral Constituent Quark Model (hCQM)
has been employed in various studies ranging from light to heavy hadrons. In
the present article, screened potential has been used to study light baryon
resonances. The Regge trajectories have been plotted alongwith the details of
slopes and intercepts. The strong decay widths to pion have been calculated for
some channels using the present masses. | C. Menapara, A. K. Rai | 2023-05-04T06:46:13Z | http://arxiv.org/abs/2305.02588v1 | # Properties of N, \(\Delta\) Baryons with Screened Potential
###### Abstract
N and \(\Delta\) baryons hold an important place towards understanding the quark dynamics inside hadrons. The hypercentral Constituent Quark Model (hCQM) has been employed in various studies ranging from light to heavy hadrons. In the present article, a screened potential has been used to study light baryon resonances. The Regge trajectories have been plotted along with the details of the slopes and intercepts. The strong decay widths to a pion have been calculated for some channels using the present masses.
Screened potential, light baryon, Pion decay width
## 1 Introduction
Hadron spectroscopy is an important tool to understand the quark dynamics inside hadrons [1; 2; 3]. Most nuclear phenomena can be explained in terms of the non-relativistic interactions between protons and neutrons, the elementary constituents of the nucleus. Quantum Chromodynamics (QCD), on the other hand, is the theory behind nuclear forces and describes relativistic quarks and gluons as the fundamental degrees of freedom [4]. Confinement serves as QCD's defining characteristic: it appears to prohibit the free existence of isolated elementary quarks and gluons in nature. One of the central issues in physics is understanding how confinement arises. Confinement results from the self-interactions of gluons, which act as the strong force's intermediaries between colored quarks and other gluons. Since the light u and d quarks that make up the nucleons are many times lighter than the proton, the majority of the visible mass in the universe is actually produced by relativistic gluon interactions.
\(N^{*}\), corresponding to P(uud) and N(udd) with \(J=\frac{1}{2}\) and isospin \(I=\frac{1}{2}\), has been under discussion for decades [5]. N. Isgur has nicely highlighted a few reasons why the study of \(N^{*}\) has always been a priority: all the stable matter around us is made up of nucleons, and the nucleon is the simplest system that manifests the non-abelian nature of QCD. However, despite being the lowest-lying state, years of study have shown that hadrons are really complex systems whose properties are not yet fully known.
Over the years, many approaches have been implemented with the intention of understanding the light baryon sector thoroughly. Recently, all light and strange baryons have been studied through the Bethe Ansatz method within a U(7) algebraic framework [6]. An earlier algebraic method was discussed by R. Bijker et al. to study baryon resonances in terms of a string-like model [7]. A. V. Anisovich et al. have reproduced the N and \(\Delta\) spectrum using a multichannel partial wave analysis of pion- and photo-induced reactions [8]. The quark-diquark model using a Gursey Radicati-inspired exchange interaction has been studied [9; 10]. The semi-relativistic constituent quark model, classification numbers describing the baryon mass range [11], the mass formula obtained by Klempt [12], and dynamically chirally improved quarks by BGR [13] are among the various models. Also, a relativistic study with the quark-diquark model has been carried out for all sectors by Faustov et al. [14]. A recent study based on HAL-QCD has focused on decuplet baryons, wherein the interaction potentials are extracted from lattice QCD [15]. Regge phenomenology has also been employed in the study of light and strange baryons using linear curves in the (n, \(M^{2}\)) and (J, \(M^{2}\)) planes [16].
The present study is based on a non-relativistic approach, namely the hypercentral Constituent Quark Model (hCQM). Our earlier works deal with hCQM applied to hadrons from the light to the heavy sector. In the present approach, the screened potential term is accompanied by a spin-dependent term. Section 3 discusses the mass spectra of the N and \(\Delta\) baryons along with the experimental and theoretical background known so far. Section 4 is dedicated to the Regge trajectories in the (J, \(M^{2}\)) and (n, \(M^{2}\)) planes. The strong decay channels with a pion have been studied in Section 5.
## 2 Theoretical Framework
The choice of a hypercentral SU(6)-invariant potential, i.e., a potential whose value is governed exclusively by the hyperradius x, is the foundation of the hypercentral Constituent Quark Model, abbreviated as hCQM. In addition to making the solution of the Schrödinger equation more straightforward, the selection of a hypercentral potential also has some intriguing physical ramifications. Since the hyperradius x depends on all three constituent coordinates at the same time, a hypercentral potential is not just a two-body interaction but can also involve three-body terms [17; 18]. hCQM has been applied to various systems and with a variety of potentials by our team for heavy hadrons [19; 20; 21; 22; 23; 24; 25; 26; 27] and exotics. The linear potential has been employed for all octet and decuplet baryons in our earlier works [28; 29; 30; 31; 32].
Because of the non-abelian nature of QCD, which results in gluon-gluon interactions that can, in turn, produce three-body forces, these terms have the potential to play an important part in the description of hadrons. The space component of the 3q wave function, on the other hand, can be expanded in the hyperspherical harmonics basis, at which point the Schrödinger equation transforms into a set of coupled differential equations. Additionally, it has been shown that the low-lying resonance states can be adequately described by the hyperspherical approximation, that is, the assumption that only the first term in the expansion of the potential in terms of hyperspherical harmonics is retained. Keeping these considerations in mind, a hypercentral potential can be interpreted in one of two ways: either as a standard two-body potential or as a three-body potential, both of which are treated in the hypercentral approximation. First of all, we introduce the hyperspherical coordinates, the hyperradius x and the hyperangle \(\xi\), defined from the Jacobi coordinates as [7],
\[x=\sqrt{\rho^{2}+\lambda^{2}};\ \ \xi=arctan(\frac{\rho}{\lambda}) \tag{1}\]
The model itself suggests that the chosen potential should be hypercentral, i.e., depending only on the hyperradius x; as discussed above, such a potential is not a pure two-body interaction but also contains three-body contributions. Expanding the space part of the 3q wave function in the hyperspherical harmonics basis, the spatial wave function can be expressed as a hyper-radial part times the hyperspherical harmonics,
\[\psi_{space}=\psi(x)Y(\Omega_{\rho},\Omega_{\lambda},\xi) \tag{2}\]
\[L^{2}Y_{[\gamma]}l_{\rho}l_{\lambda}(\Omega_{\rho},\Omega_{\lambda},\xi)=- \gamma(\gamma+4)Y_{[\gamma]}l_{\rho}l_{\lambda}(\Omega_{\rho},\Omega_{\lambda},\xi) \tag{3}\]
where \(\Omega_{\rho}\) and \(\Omega_{\lambda}\) are the angles of the hyperspherical coordinates. \(\vec{L}=\vec{L_{\rho}}+\vec{L_{\lambda}}\) is the total angular momentum, and \(l_{\rho}\) and \(l_{\lambda}\) are the angular momenta associated with the Jacobi coordinates \(\rho\) and \(\lambda\), respectively. \(-\gamma(\gamma+4)\) gives the eigenvalue of \(L^{2}\), where \(\gamma=2n+l_{\rho}+l_{\lambda}\) is the grand angular momentum quantum number, which takes non-negative integer values.
The hyper-radial equation whose solution is \(\psi(x)\) is as follows,
\[\left[\frac{d^{2}}{dx^{2}}+\frac{5}{x}\frac{d}{dx}-\frac{\gamma(\gamma+4)}{x^ {2}}\right]\psi(x)=-2m[E-V_{3q}(x)]\psi(x) \tag{4}\]
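Equation (4) can be solved numerically for any choice of \(V_{3q}(x)\). Substituting \(\psi(x)=x^{-5/2}\phi(x)\) removes the first-derivative term and turns the centrifugal part into \((\gamma+3/2)(\gamma+5/2)/x^{2}\), after which a standard one-dimensional finite-difference eigensolver applies. The following Python sketch illustrates the procedure; the mass parameter, the placeholder linear potential, and the grid are illustrative choices, not the fitted values of this work.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def hyperradial_levels(V, gamma, m, x_max=20.0, n_grid=4000, n_levels=3):
    """Solve eq. (4) by finite differences after psi(x) = x**(-5/2)*phi(x),
    which gives -phi''/(2m) + [V(x) + (g+3/2)(g+5/2)/(2m x^2)] phi = E phi."""
    x = np.linspace(x_max / n_grid, x_max, n_grid)      # avoid x = 0
    h = x[1] - x[0]
    V_eff = V(x) + (gamma + 1.5) * (gamma + 2.5) / (2.0 * m * x**2)
    diag = 1.0 / (m * h**2) + V_eff                     # kinetic + potential
    off = -np.ones(n_grid - 1) / (2.0 * m * h**2)
    E, _ = eigh_tridiagonal(diag, off, select='i',
                            select_range=(0, n_levels - 1))
    return E

# Placeholder parameters in GeV-based natural units (illustrative only):
levels = hyperradial_levels(lambda x: 0.2 * x, gamma=0, m=0.29)
print(levels)    # lowest hyperradial eigenvalues for gamma = 0
```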
Another choice of hypercentral potential is the screened potential of the form described below [33].
\[V^{0}(x)=a\left(\frac{1-e^{-\mu x}}{\mu}\right) \tag{5}\]
Such a potential has been known to show good results using hCQM for heavy quark systems, including mesons and baryons [34]. However, here we have attempted to see such effects in light and strange systems as well; the screening parameter differs between heavy and light systems, and the corresponding results are discussed in this study. Based on a paper by R. Chaturvedi, the screening parameter \(\mu\) has been varied over a range, and \(\mu=0.3\) has been adopted to obtain the spectra for all the systems considered here [35].
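A quick numerical check makes the screening behaviour of Eq. (5) explicit: for \(\mu x\ll 1\) the potential reduces to the linear form \(ax\), while for large \(x\) it saturates at \(a/\mu\), which compresses the higher excitations. Here \(a\) is a placeholder strength, with \(\mu=0.3\) as adopted above.

```python
import numpy as np

a, mu = 0.2, 0.3
x = np.array([0.1, 1.0, 5.0, 20.0])
screened = a * (1 - np.exp(-mu * x)) / mu
linear = a * x
print(np.column_stack([x, screened, linear]))
# Small x: the two nearly coincide; large x: the screened potential
# saturates toward a/mu ~ 0.67, flattening the highly excited spectrum.
```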
This potential form alone cannot account for the splittings of the multiplet levels, so an additional hyperfine splitting term is incorporated. The spin-dependent interaction \(V_{SD}(x)\) consists of three types of interactions, i.e., spin-orbit, spin-spin, and tensor terms [36].
\[V_{SD}(x)=V_{LS}(x)({\bf L}\cdot{\bf S})+V_{SS}(x)\left[S(S+1)-\frac{3}{2}\right]+V_{T}(x)\left[S(S+1)-\frac{3({\bf S}\cdot{\bf x})({\bf S}\cdot{\bf x})}{x^{2}}\right] \tag{6}\]
Let \(V_{V}=\frac{\tau}{x}\) and \(V_{S}=\alpha x\) denote the vector and scalar parts of the potential, respectively.
The spin-orbit term,
\[V_{LS}(x)=\frac{1}{2m_{\rho}m_{\lambda}x}\left(3\frac{dV_{V}}{dx}-\frac{dV_{S}}{ dx}\right) \tag{8}\]
The spin-spin term,
\[V_{SS}(x)=\frac{1}{3m_{\rho}m_{\lambda}}\nabla^{2}V_{V} \tag{9}\]
The tensor term,
\[V_{T}(x)=\frac{1}{6m_{\rho}m_{\lambda}}\left(\frac{d^{2}V_{V}}{dx^{2}}-\frac{1 }{x}\frac{dV_{V}}{dx}\right) \tag{10}\]
**L** and **S** represent the angular momentum and total spin.
\[\overrightarrow{s_{1}}\cdot\overrightarrow{s_{2}}=\frac{1}{2}\left[\vec{S}^ {2}-s_{1}\left(s_{1}+1\right)-s_{2}\left(s_{2}+1\right)\right] \tag{11}\]
\[\vec{L}\cdot\vec{S}=\frac{1}{2}[j(j+1)-l(l+1)-S(S+1)] \tag{12}\]
\[S_{12}=2\left[3\frac{(\vec{S}\cdot\vec{r})^{2}}{r^{2}}-\vec{S}^{2}\right] \tag{13}\]
Here, \(s_{1}\) and \(s_{2}\) denote the spins of the individual quarks in a pair inside the baryon, while L and S denote the total orbital angular momentum and spin quantum numbers for a given state. All these calculations have been carried out in a Mathematica notebook [37].
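For a given state, the numerical factors entering Eqs. (11) and (12) follow directly from the quantum numbers, as the short sketch below shows (Eq. (11) is evaluated for a pair of spin-1/2 quarks):

```python
from fractions import Fraction as F

def s1_dot_s2(S, s1=F(1, 2), s2=F(1, 2)):
    """Eq. (11): <s1.s2> for a quark pair coupled to total spin S."""
    return (S * (S + 1) - s1 * (s1 + 1) - s2 * (s2 + 1)) / 2

def L_dot_S(j, l, S):
    """Eq. (12): <L.S> for total angular momentum j, orbital l, spin S."""
    return (j * (j + 1) - l * (l + 1) - S * (S + 1)) / 2

print(s1_dot_s2(0), s1_dot_s2(1))            # -3/4 and 1/4
for j in (F(1, 2), F(3, 2)):                 # 1P states with S = 1/2
    print(j, L_dot_S(j, 1, F(1, 2)))         # -1 and 1/2
```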
## 3 Results and discussions
The latest update of the Particle Data Group (PDG) lists the most recent and precise ground state masses as \(m_{p}=938.272\) MeV and \(m_{n}=939.565\) MeV [5].
As of today, 28 states are listed by the PDG for \(N^{*}\), compared with only a few a decade ago, as shown in Table 1. The CLAS experiment is focused on studying nucleon structure through KY production [38]. Unlike the ground state, excited state masses for both isospins are categorised under a common \(N^{*}\), as the isospin can be revealed by the relevant decay channel for a given mass [39]. The N(1440) resonance with \(J^{P}=\frac{1}{2}^{+}\), also known as the Roper resonance, is one of the most interesting states among the nucleon resonances [40]. The Roper resonance is lighter than the lowest negative-parity nucleon excitations, i.e., N(1535) with \(J^{P}=\frac{1}{2}^{-}\) [41] and N(1520) with \(J^{P}=\frac{3}{2}^{-}\), which cannot be easily explained if one assumes that the Roper resonance is a radial excitation of the nucleon as a three-quark system. A promising physical interpretation is that the Roper resonance is the first radial excitation of the nucleon but consists of a dressed-quark core augmented by a meson cloud. This is one of the most prominent examples of a baryon spectrum riddle. The present study cannot comment on the nature of the Roper resonance, but the obtained result is well within the mass range.
The ground state parameters lead to values of 938 MeV for P and 948 MeV for N, whereas earlier both masses were 939 MeV, so that the excited states could not be separated out. Table 2 lists the masses obtained using the above phenomenological approach. The S-wave states are within a good range of those of the Particle Data Group (PDG). The next four-star state, N(1440) with \(J^{P}=\frac{1}{2}^{+}\), is the 2S state, and the present result of 1420 MeV is well within the PDG range and consistent with other approaches too. Similarly, the 3S and 4S states are found to be in good agreement with the experimental results.
In the case of the 1P states, the higher spin states have lower masses than their lower-spin partners for a given angular momentum. This has been observed to be intrinsic to the hypercentral Constituent Quark Model (hCQM). The 1P \(\frac{1}{2}^{-}\) state is 30 MeV below 1535 MeV. All the spin states for 1D are experimentally established; the splitting from \(\frac{1}{2}\) to \(\frac{7}{2}\) spans around 80 MeV for the present masses. The very first negative-parity state, N(1520) with \(J^{P}=\frac{3}{2}^{-}\), is reproduced at 1493 MeV. This state lying below its spin partner N(1535) with \(J^{P}=\frac{1}{2}^{-}\) is also consistent with our results, as the model predicts a lower mass for the higher spin state. Here, it is noteworthy that N(1650) with \(J^{P}=\frac{1}{2}^{-}\) does not appear in the present data. N(1720) \(\frac{3}{2}^{+}\), with a four-star label, is 1816 MeV in the present results. The 1D \(\frac{1}{2}^{+}\) state appears higher than the \(\frac{3}{2}^{+}\) and \(\frac{5}{2}^{+}\) states. Also, the 2D states are all found to lie within about 100 MeV of one another.
All the negative-parity 1F states have been observed and assigned three- and four-star status. In the present results, the higher spin state with \(J^{P}=\frac{9}{2}^{-}\) is under-predicted compared to the PDG. The two states which did not appear in the earlier study have been calculated here. The N(2220) 1G \(\frac{9}{2}^{+}\) is obtained as 2433 MeV, which differs considerably from the PDG range. Also, the N(2600), assigned as 1H \(\frac{11}{2}^{-}\), is over-predicted in the current results. Not all models have resolved the hyperfine splitting of the masses, nor is the mass hierarchy maintained in all the results. Recent studies have focused on the N(1895) state and its decay to light hyperons through \(\gamma p\to K^{+}\Lambda\) [42, 43]. Out of the four states in the vicinity from 1875
\begin{table}
\begin{tabular}{c c|c c|c c|c c} \hline \multicolumn{2}{c|}{**** states} & \multicolumn{2}{c|}{*** states} & \multicolumn{2}{c|}{** states} & \multicolumn{2}{c}{* states} \\ \hline N(1440) & \(\frac{1}{2}^{+}\) & N(1700) & \(\frac{3}{2}^{-}\) & N(1860) & \(\frac{5}{2}^{+}\) & N(2040) & \(\frac{3}{2}^{+}\) \\ N(1520) & \(\frac{3}{2}^{-}\) & N(1875) & \(\frac{3}{2}^{-}\) & N(1990) & \(\frac{7}{2}^{+}\) & & \\ N(1535) & \(\frac{1}{2}^{-}\) & N(1880) & \(\frac{1}{2}^{+}\) & N(2000) & \(\frac{5}{2}^{+}\) & & \\ N(1650) & \(\frac{1}{2}^{-}\) & N(2060) & \(\frac{5}{2}^{-}\) & N(2300) & \(\frac{1}{2}^{+}\) & & \\ N(1675) & \(\frac{5}{2}^{-}\) & N(2100) & \(\frac{1}{2}^{+}\) & N(2570) & \(\frac{5}{2}^{-}\) & & \\ N(1680) & \(\frac{5}{2}^{+}\) & N(2120) & \(\frac{3}{2}^{-}\) & N(2700) & \(\frac{13}{2}^{+}\) & & \\ N(1710) & \(\frac{1}{2}^{+}\) & N(2600) & \(\frac{11}{2}^{-}\) & & & & \\ N(1720) & \(\frac{3}{2}^{+}\) & & & & & & \\ N(1895) & \(\frac{1}{2}^{-}\) & & & & & & \\ N(1900) & \(\frac{3}{2}^{+}\) & & & & & & \\ N(2190) & \(\frac{7}{2}^{-}\) & & & & & & \\ N(2220) & \(\frac{9}{2}^{+}\) & & & & & & \\ N(2250) & \(\frac{9}{2}^{-}\) & & & & & & \\ \hline \end{tabular}
\end{table}
Table 1: Experimental Status of all known \(N^{*}\)[5]
\begin{table}
\begin{tabular}{c c c c} \hline State & \(J^{P}\) & \(M_{scr}\) & \(M_{exp}\) \\ \hline
1S & \(\frac{1}{2}^{+}\) & 939 & 938 \\
2S & \(\frac{1}{2}^{+}\) & 1420 & 1440 \\
3S & \(\frac{1}{2}^{+}\) & 1762 & 1710 \\
4S & \(\frac{1}{2}^{+}\) & 2090 & 2040* \\
5S & \(\frac{1}{2}^{+}\) & 2422 & \\ \hline \(1^{2}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 1505 & 1535 \\ \(1^{2}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 1493 & 1520 \\ \(1^{4}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 1512 & 1650 \\ \(1^{4}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 1499 & \\ \(1^{4}P_{5/2}\) & \(\frac{5}{2}^{-}\) & 1482 & 1675 \\ \hline \(2^{2}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 1882 & 1895 \\ \(2^{2}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 1868 & 1875 \\ \(2^{4}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 1890 & \\ \(2^{4}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 1875 & \\ \(2^{4}P_{5/2}\) & \(\frac{5}{2}^{-}\) & 1856 & \\ \hline \(3^{2}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 2286 & \\ \(3^{2}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 2270 & \\ \(3^{4}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 2294 & \\ \(3^{4}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 2278 & \\ \(3^{4}P_{5/2}\) & \(\frac{5}{2}^{-}\) & 2257 & \\ \hline \(4^{2}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 2709 & \\ \(4^{2}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 2693 & \\ \(4^{4}P_{1/2}\) & \(\frac{1}{2}^{-}\) & 2717 & \\ \(4^{4}P_{3/2}\) & \(\frac{3}{2}^{-}\) & 2701 & \\ \(4^{4}P_{5/2}\) & \(\frac{5}{2}^{-}\) & 2679 & \\ \hline \(1^{2}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 1816 & 1720 \\ \(1^{2}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 1792 & 1680 \\ \(1^{4}D_{1/2}\) & \(\frac{1}{2}^{+}\) & 1843 & 1880 \\ \(1^{4}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 1825 & 1900 \\ \(1^{4}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 1801 & 1860 \\ \(1^{4}D_{7/2}\) & \(\frac{7}{2}^{+}\) & 1771 & \\ \hline \end{tabular}
\begin{tabular}{c c c} \hline State & \(J^{P}\) & \(M_{scr}\) & \(M_{exp}\) \\ \hline \(2^{2}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2215 & \\ \(2^{2}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2190 & \\ \(2^{4}D_{1/2}\) & \(\frac{1}{2}^{+}\) & 2243 & \\ \(2^{4}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2224 & \\ \(2^{4}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2199 & \\ \(2^{4}D_{7/2}\) & \(\frac{7}{2}^{+}\) & 2168 & \\ \hline \(3^{2}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2635 & \\ \(3^{2}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2610 & \\ \(3^{4}D_{1/2}\) & \(\frac{1}{2}^{+}\) & 2663 & \\ \(3^{4}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2644 & \\ \(3^{4}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2619 & \\ \(3^{4}D_{7/2}\) & \(\frac{7}{2}^{+}\) & 2588 & \\ \hline \(1^{2}F_{5/2}\) & \(\frac{5}{2}^{-}\) & 2143 & 2060 \\ \(1^{2}F_{7/2}\) & \(\frac{7}{2}^{-}\) & 2107 & 2190 \\ \(1^{4}F_{3/2}\) & \(\frac{3}{2}^{-}\) & 2183 & 2120 \\ \(1^{4}F_{5/2}\) & \(\frac{5}{2}^{-}\) & 2154 & \\ \(1^{4}F_{7/2}\) & \(\frac{7}{2}^{-}\) & 2118 & 2190 \\ \(1^{4}F_{9/2}\) & \(\frac{9}{2}^{-}\) & 2074 & 2250 \\ \hline \(2^{2}F_{5/2}\) & \(\frac{5}{2}^{-}\) & 2561 & 2570 \\ \(2^{2}F_{7/2}\) & \(\frac{7}{2}^{-}\) & 2523 & \\ \(2^{4}F_{3/2}\) & \(\frac{3}{2}^{-}\) & 2602 & \\ \(2^{4}F_{5/2}\) & \(\frac{5}{2}^{-}\) & 2572 & \\ \(2^{4}F_{7/2}\) & \(\frac{7}{2}^{-}\) & 2534 & \\ \(2^{4}F_{9/2}\) & \(\frac{9}{2}^{-}\) & 2489 & \\ \hline \(1^{2}G_{7/2}\) & \(\frac{7}{2}^{+}\) & 2487 & \\ \(1^{2}G_{9/2}\) & \(\frac{9}{2}^{+}\) & 2433 & 2220 \\ \(1^{4}G_{5/2}\) & \(\frac{5}{2}^{+}\) & 2546 & \\ \(1^{4}G_{7/2}\) & \(\frac{7}{2}^{+}\) & 2501 & \\ \(1^{4}G_{9/2}\) & \(\frac{9}{2}^{+}\) & 2447 & \\ \(1^{4}G_{11/2}\) & \(\frac{11}{2}^{+}\) & 2383 & \\ \hline \(1^{4}H_{11/2}\) & \(\frac{11}{2}^{-}\) & 2786 & 2600 \\ \hline \end{tabular}
\end{table}
Table 2: Resonance masses of \(N^{*}\) using Screened potential (in MeV).
to 1900 MeV, the negative-parity states of our results are in accordance with the PDG range, but the positive-parity states lying in the D wave cannot be matched precisely. This is true for other model comparisons as well.
It is noteworthy that for the low-lying states the masses with the screened potential are slightly higher, a trend that falls off for the higher excited states. However, as the experimental masses fall within a range, the predictions of the screened and linear potentials are not very far apart. The notable change comes into the picture with the higher-order correction terms.
The \(\Delta\) baryon has played a prime role in the understanding of the color quantum number. Even today, the \(\Delta\) is an important candidate not only in the field of high energy physics but also in nuclear and astrophysical systems [44]. Over the course of many years, pion-nucleon decays and photoproduction have allowed for the observation of \(\Delta\)s. In the field of astrophysics, \(\Delta\) isobars are investigated under the quark-meson coupling model to determine whether they could possibly be observed. Incorporating the recent additions, 8 four-star, 4 three-star, and many other states with various experimental status have been explored, with values ranging from \(J=\frac{1}{2}\) to \(J=\frac{15}{2}\), and many states still await confirmation of their existence, as listed by the Particle Data Group (PDG).
Also, the \(\Delta\) being the lightest member with an electric quadrupole moment makes it interesting to dig deep into the shape and structure of the baryon [45]. The MicroBooNE collaboration has recently reported the \(\Delta\)(1232) radiative decay through neutrino-induced neutral current interactions [46]. The pole positions for the N and \(\Delta\) resonances have been investigated through the photoproduction of K\(\Sigma\) in a coupled-channel study [47].
Similar to N, the mass spectrum of \(\Delta\) is tabulated in Table 3. The S-wave mass predictions match the experimental data very well. The 1P (1556) state is about 70 MeV lower than 1620 MeV; the 1P \(\frac{3}{2}^{-}\) state, however, is under-predicted by 150 MeV. A few states in 2P show good agreement. The 1D states are under-predicted compared to the PDG. The 1G \(\frac{9}{2}^{+}\) state is in accordance with the PDG value of 2300 MeV within a difference of 75 MeV. The 1H \(\frac{13}{2}^{-}\) state (2750 MeV) is predicted to be 2600 MeV in the present work.
As screening effects have been observed in heavy quark systems, our primary goal here is to check whether similar effects apply to light quark systems. Briefly, we conclude that the screening effect in light systems is notable at higher mass scales, though the suppression is not as strong as in heavy systems. Another important aspect is that the higher-order corrections do not resolve the spin structure in the case of the screened potential.
## 4 Regge Trajectory
One of the helpful tools in spectroscopic research has been the Regge trajectory. Based on the calculated data, the figures plot the total angular momentum J and the principal quantum number n against the square of the resonance mass, \(M^{2}\). Many studies have found that the theoretical and experimental data are consistent with non-intersecting, linearly fitted lines [48]. A tentative spin-parity assignment for a state can thus be predicted using these plots.
\[J=aM^{2}+a_{0} \tag{14a}\] \[n=bM^{2}+b_{0} \tag{14b}\]
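The slopes and intercepts in Eqs. (14a)-(14b) follow from a linear least-squares fit of J (or n) against \(M^{2}\). A minimal sketch, using a few of the present natural-parity \(N^{*}\) masses from Table 2 (in GeV) purely for illustration:

```python
import numpy as np

J = np.array([0.5, 1.5, 2.5, 3.5])           # 1S, 1P 3/2-, 1D 5/2+, 1F 7/2-
M = np.array([0.939, 1.493, 1.792, 2.107])   # GeV, from Table 2
a, a0 = np.polyfit(M**2, J, 1)               # J = a*M^2 + a0
print(f"slope a = {a:.3f} GeV^-2, intercept a0 = {a0:.3f}")
```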
\begin{table}
\begin{tabular}{c c c c} \hline State & \(J^{P}\) & \(M_{scr}\) & \(M_{exp}\) \\ \hline \(2^{4}D_{1/2}\) & \(\frac{1}{2}^{+}\) & 2230 & \\ \(2^{4}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2211 & \\ \(2^{4}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2187 & \\ \(2^{4}D_{7/2}\) & \(\frac{7}{2}^{+}\) & 2156 & \\ \hline \(3^{2}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2566 & \\ \(3^{2}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2543 & \\ \(3^{4}D_{1/2}\) & \(\frac{1}{2}^{+}\) & 2593 & \\ \(3^{4}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2575 & \\ \(3^{4}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2552 & \\ \(3^{4}D_{7/2}\) & \(\frac{7}{2}^{+}\) & 2523 & \\ \hline \(4^{2}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2934 & \\ \(4^{2}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2912 & \\ \(4^{4}D_{1/2}\) & \(\frac{1}{2}^{+}\) & 2959 & \\ \(4^{4}D_{3/2}\) & \(\frac{3}{2}^{+}\) & 2942 & \\ \(4^{4}D_{5/2}\) & \(\frac{5}{2}^{+}\) & 2921 & \\ \(4^{4}D_{7/2}\) & \(\frac{7}{2}^{+}\) & 2893 & \\ \hline \(1^{2}F_{5/2}\) & \(\frac{5}{2}^{-}\) & 2131 & \\ \(1^{2}F_{7/2}\) & \(\frac{7}{2}^{-}\) & 2095 & 2200 \\ \(1^{4}F_{3/2}\) & \(\frac{3}{2}^{-}\) & 2170 & \\ \(1^{4}F_{5/2}\) & \(\frac{5}{2}^{-}\) & 2141 & \\ \(1^{4}F_{7/2}\) & \(\frac{7}{2}^{-}\) & 2106 & \\ \(1^{4}F_{9/2}\) & \(\frac{9}{2}^{-}\) & 2063 & \\ \hline \(1^{2}G_{7/2}\) & \(\frac{7}{2}^{+}\) & 2420 & \\ \(1^{2}G_{9/2}\) & \(\frac{9}{2}^{+}\) & 2375 & 2300 \\ \(1^{4}G_{5/2}\) & \(\frac{5}{2}^{+}\) & 2468 & \\ \(1^{4}G_{7/2}\) & \(\frac{7}{2}^{+}\) & 2431 & 2390 \\ \(1^{4}G_{9/2}\) & \(\frac{9}{2}^{+}\) & 2387 & \\ \(1^{4}G_{11/2}\) & \(\frac{11}{2}^{+}\) & 2335 & 2420 \\ \hline \(1^{2}H_{9/2}\) & \(\frac{9}{2}^{-}\) & 2719 & \\ \(1^{2}H_{11/2}\) & \(\frac{11}{2}^{-}\) & 2657 & \\ \(1^{4}H_{7/2}\) & \(\frac{7}{2}^{-}\) & 2786 & \\ \(1^{4}H_{9/2}\) & \(\frac{9}{2}^{-}\) & 2732 & \\ \(1^{4}H_{11/2}\) & \(\frac{11}{2}^{-}\) & 2670 & \\ \(1^{4}H_{13/2}\) & \(\frac{13}{2}^{-}\) & 2600 & 2750 \\ \hline \end{tabular}
\end{table}
Table 3: Resonance Masses of \(\Delta\) baryon using Screened Potential (in MeV)
Figure 1: (n,\(M^{2}\)) Regge trajectory for \(N\) baryon for screened potential.
Figure 2: (J,\(M^{2}\)) Regge trajectory for N baryon for screened potential.
Figure 4: (n,\(M^{2}\)) Regge trajectory for \(\Delta\) states for screened potential
Figure 3: (J,\(M^{2}\)) Regge trajectory for N baryon for screened potential.
Figure 5: (J,\(M^{2}\)) Regge trajectory for \(\Delta\) states for screened potential
Figure 6: (J,\(M^{2}\)) Regge trajectory for \(\Delta\) states for screened potential
The Regge trajectories are observed to follow the expected linear behaviour. However, not all the lines are equidistant. The slopes and intercepts are given in the figures themselves.
## 5 Decay
In the case of nucleons, including the \(\Delta\), the prominent decay channel has been observed to be a nucleon and a pion (\(N\pi\)), with the specific charge states depending on the charge of the respective parent [49]. In addition to other constants, the transition couplings of vector mesons have been obtained through the work of Riska and colleagues [50]. Lagrangian densities for the transition couplings involving generalized Rarita-Schwinger vector spinors are used to define the coupling constants of the vector-meson transitions to nucleon resonances. The transition coupling constants can be written in terms of the corresponding vector-meson coupling constants to the nucleons by comparing the matrix elements of these Lagrangians to the corresponding matrix elements in the quark model. The latter are calculated using phenomenological boson exchange interaction models and fitted to nucleon-nucleon scattering data, albeit with large uncertainty margins. P- and D-shell and excited S-shell states can be related to the ground state through these expressions, which involve SU(2) Clebsch-Gordan coefficients and orbital matrix elements of the quark wave functions. The latter depend on the Hamiltonian model of the three-quark system. Here we use a straightforward covariant harmonic oscillator model in which the confining interaction is linear and the hyperfine interaction depends on the flavour and spin of the particle. In the work presented here, the constants supplied by the Particle Data Group have been used to determine the decay widths for certain established resonances with the masses calculated above.
For \(\Delta(1232)\) and \(\Delta(1600)\) decaying to \(N\pi\),
\[\Gamma=\frac{1}{3}\frac{f^{2}}{4\pi}\frac{E^{{}^{\prime}}+m_{N}}{m_{\Delta}} \frac{k^{3}}{m_{\pi}^{2}} \tag{15}\]
where \(E^{{}^{\prime}}\) is the energy of the final nucleon and k is the pion momentum.
\[E^{{}^{\prime}}=\frac{m^{*2}-m_{\pi}^{2}+m_{N}^{2}}{2m^{*}} \tag{16}\]
\[k=\frac{\sqrt{[m^{*2}-(m_{N}+m_{\pi})^{2}][m^{*2}-(m_{N}-m_{\pi})^{2}]}}{2m^{*}} \tag{17}\]
Here \(m^{*}\) is the resonance mass calculated using the above model, \(m_{N}\) is the nucleon mass (939 MeV), and \(m_{\pi}\) is the pion mass (139 MeV). For \(N(1535)\), \(N(1650)\), and \(\Delta(1620)\) decaying to \(N\pi\),
\[\Gamma=\frac{f^{2}}{4\pi}\frac{E^{{}^{\prime}}+m_{N}}{m^{*}}\frac{k}{m_{\pi} ^{2}}(m^{*}-m_{N})^{2} \tag{18}\]
For \(N(1520)\), \(N(1700)\) and \(\Delta(1700)\) decaying to \(N\pi\),
\[\Gamma=\frac{1}{3}\frac{f^{2}}{4\pi}\frac{E^{{}^{\prime}}-m_{N}}{m_{\Delta}} \frac{k^{3}}{m_{\pi}^{2}} \tag{19}\]
The values of the decay constant f vary with each decay channel; these values were studied in detail by Riska and Brown. Table 4 shows the decay channels of the N and \(\Delta\) baryons with the respective decay widths obtained for our predicted resonance masses, compared with recent experimental findings as elaborated by Hunt et al. [39]. Our results are in good agreement in a few channels. We have also compared with another partial wave analysis done by Arndt et al. [51].
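As a concrete illustration of Eqs. (15)-(17), the sketch below evaluates \(\Gamma(\Delta(1232)\to N\pi)\). The value of \(f^{2}/4\pi\) used here is only a placeholder of the order tabulated by Riska and Brown [50], not a coupling fitted in this work.

```python
import numpy as np

m_N, m_pi = 0.939, 0.139                       # GeV, as used in the text

def two_body_kinematics(m_star):
    """Eqs. (16)-(17): final nucleon energy E' and pion momentum k."""
    E_prime = (m_star**2 - m_pi**2 + m_N**2) / (2 * m_star)
    k = np.sqrt((m_star**2 - (m_N + m_pi)**2) *
                (m_star**2 - (m_N - m_pi)**2)) / (2 * m_star)
    return E_prime, k

def width_delta_Npi(m_star, f2_over_4pi):
    """Eq. (15) for Delta -> N pi."""
    E_prime, k = two_body_kinematics(m_star)
    return (1 / 3) * f2_over_4pi * (E_prime + m_N) / m_star * k**3 / m_pi**2

# Placeholder coupling of the Riska-Brown order of magnitude:
print(1000 * width_delta_Npi(1.232, f2_over_4pi=0.36), "MeV")  # ~112 MeV
```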
## 6 Conclusion
The present work summarizes the effect of a screened-type potential within the hypercentral Constituent Quark Model (hCQM) for the N and \(\Delta\) baryons. So far, the linear potential has been applied to the light spectrum, whereas the screened potential has provided reasonable results for heavy quark systems. The screening parameter plays a role in determining the spin splitting and the masses of the higher angular momentum states. The obtained masses are comparable with the experimentally known values of different star status. The masses of the higher spin states, for a given L value, decrease in hierarchy. The hyperfine splitting is observed to be smaller with the screened potential than with the linear one.
The Regge trajectories show a linear trend for all natural and unnatural parity points. The strong decay widths for the pion channel are also calculated in the present study. The ongoing and upcoming experimental facilities, namely HADES [52] and PANDA [53, 54, 55, 56, 57, 58], shall provide new insights into the understanding of light and strange baryons.
## Acknowledgment
Ms. Chandni Menapara acknowledges the support for pursuing this work under the DST-INSPIRE Fellowship Scheme.
|
2310.06558 | Confirmation and characterization of neglected WDS systems using Gaia
DR3 and the Virtual Observatory | The aim of this paper is, making use of the Gaia DR3 catalogue and Virtual
Observatory tools, to confirm and characterize 428 binary and multiple stellar
systems classified as neglected (only one observation) in the Washington Double
Star Catalogue (WDS). The components of the stellar systems have the same
parallax and proper motion (within the errors) and are separated by less than
50 000 AU, which minimizes the number of by-chance counterparts. Effective
temperatures calculated using VOSA were used to estimate stellar masses.
Binding energies were calculated for 42 binary systems confirming they are
physical pairs. Also we found 75 pairs with F/G- M spectral types which are
very interesting to improve the determination of the metallicity of the M star
from the higher-mass component. | E. Solano, I. Novalbos, A. J. Ros, M. Cortés-Contreras, C. Rodrigo | 2023-10-10T12:16:42Z | http://arxiv.org/abs/2310.06558v1 | Confirmation and characterization of neglected WDS systems using Gaia DR3 and the Virtual Observatory
###### Abstract
The aim of this paper is, making use of the Gaia DR3 catalogue and Virtual Observatory tools, to confirm and characterize 428 binary and multiple stellar systems classified as neglected (only one observation) in the Washington Double Star Catalogue (WDS). The components of the stellar systems have the same parallax and proper motion (within the errors) and are separated by less than 50 000 AU, which minimizes the number of by-chance counterparts. Effective temperatures calculated using VOSA were used to estimate stellar masses. Binding energies were calculated for 42 binary systems confirming they are physical pairs. Also we found 75 pairs with F/G- M spectral types which are very interesting to improve the determination of the metallicity of the M star from the higher-mass component.
binaries: general, stars: fundamental parameters, astronomical databases: miscellaneous
## 1 Introduction
It is well known that, while they are young, stars are not isolated but grouped in clusters, associations of stars loosely bound by mutual gravitational attraction. Cluster members share physical parameters like age and metallicity as well as kinematic properties (distances, proper motions, and radial velocities). Typically, after a few hundred million years, open clusters become disrupted by close encounters with other clusters and clouds of gas as they orbit the Galactic center. As a remnant of this process, a significant fraction of main-sequence stars (the exact percentage depends on the spectral type) are in binary and multiple systems (Duquennoy and Mayor (1991), Raghavan et al. (2010), Cortes-Contreras et al. (2017)). The fact that the components of wide binary and multiple systems share physical properties and, at the same time, evolve in an independent way due to their large separation makes them excellent test benches for stellar evolutionary models. In this work we aim to increase the number of these systems through the examination of historical data.
The Washington Double Star Catalogue (WDS, Mason, Wycoff, Hartkopf, Douglass, & Worley, 2001) is an all-sky survey, maintained by the US Naval Observatory (USNO), that represents one of the most important databases of binary and multiple stellar systems. At the time of writing, the catalogue contains 154 686 rows1. Among other information, each WDS row includes the WDS name, right ascension and declination of the primary component and the position angle and separation between the primary and secondary components. The WDS catalogue also includes a category, called _neglected_, to flag primaries that have been observed only once, either because the information on coordinates is wrong or simply because they have not been re-observed yet.
Footnote 1: [https://vixier.cds.unistra.fr/viz-bin/VizieR-37-source-B/wds/](https://vixier.cds.unistra.fr/viz-bin/VizieR-37-source-B/wds/)
The Virtual Observatory (VO2) is an international initiative aiming at optimizing the usage of the scientific information hosted in astronomical archives. VO has developed tools and services which enormously facilitate the access and analysis of astronomical data. In particular, Simbad (Wenger et al., 2000),
Vizier (Ochsenbein, Bauer, & Marcout, 2000), TOPCAT (Taylor, 2005), Aladin (Boch & Fernique, 2014; Bonnarel et al., 2000), and VOSA (Bayo et al., 2008) have been intensively used in this paper.
The paper is structured as follows. In Section 2, we describe the methodology used to obtain the sample of objects studied in this paper, together with the results of their visual inspection. Physical parameters are estimated in Section 3. The Virtual Observatory SED Analyzer tool (VOSA3) was used to compute the effective temperatures of our objects as well as to identify unresolved binaries among them. Making use of the Gaia DR3 information on colours and distances, the objects were placed on a colour - absolute magnitude diagram (CMD), allowing us to separate them into main sequence objects, subgiants/giants, or white dwarfs. Also, masses were estimated from effective temperatures and used to calculate binding energies. Finally, in Section 4 we summarize the main results of the paper. A brief description of the Virtual Observatory compliant archive that contains detailed information on the candidates is given in the Appendix.
Footnote 3: [http://svc2.cab.inta-csic.es/theory/vosa/](http://svc2.cab.inta-csic.es/theory/vosa/)
## 2 Sample Selection
The sample of objects analyzed in this paper was obtained after applying a workflow which consists of the following steps:
* Filtering on the number of observations: We kept only those primaries flagged as neglected in the WDS catalogue, that is, with just one observation (N\({}_{obs}\)=1). This search reduced the number of rows from 154 686 to 18 314. Some of these primaries have not been observed for many years (for instance, over two hundred were observed more than a century ago).
* Cross-matching: The 18 314 primary components obtained in the previous step were cross-matched with Gaia DR3. Primary components not having counterparts in Gaia at less than 5 arcsec were rejected. If there is more than one Gaia counterpart at less than 5 arcsec, only the nearest one was considered. The 5 arcsec search radius was adopted as a compromise solution to avoid an unmanageable number of false positives. We also forced the association to be symmetric in the sense that the nearest object to the Gaia counterpart must coincide with our original primary component. After cross-matching the number of rows was reduced to 17 598.
* Filtering on parallaxes and proper motions: We used the Gaia DR3 information on parallaxes and proper motions to keep only primary components with relative errors of less than 10 per cent in PMRA and PMDEC and less than 20 per cent in parallax. The condition in parallax is necessary to have a reliable estimation of distance (Luri et al., 2018). After this filtering, the number of rows reduced to 14 808. For each one of these 14 808 entries, we cross-matched the primary components with Gaia DR3 in a 180 arcmin radius, keeping all the counterparts that also fulfil the conditions in the relative errors in parallax and proper motion previously mentioned. We adopted this large value for the search radius to keep all the physically bound pairs. After this, we obtained 865 974 pairs.
* Comoving pairs: From the 865 974 pairs, we kept only those for which the differences in parallax and in proper motion, both in right ascension and declination, were less than 3 times the corresponding errors. After applying this condition, 2 735 pairs were kept. Each pair is formed by the WDS source (primary component) and a Gaia counterpart (secondary component); a minimal sketch of these astrometric criteria is given after this list.
* Physical separation: According to Jimenez-Esteban, Solano, and Rodrigo (2019), the great majority of chance alignment counterparts occurs at physical separations between components larger than 50 000 AU. Using this number as an upper limit, 603 pairs were left. Physical separations were estimated by using the formula \(s=\rho\times d\), where \(\rho\) is the angular separation between components and \(d\) is the distance to the pair (estimated from parallaxes). Moreover, according to Gaia Collaboration et al. (2021), the minimum angular separation above which a pair can be considered by Gaia as resolved is 180 mas. Therefore, we did not impose any condition on the minimum separation between components, since this limit is much lower than what can be achieved from ground-based observations.
* RUWE. The Gaia Renormalised Unit Weight Error (RUWE) helps to identify objects with problematic astrometric solution (Arenou et al., 2018; Lindegren et al., 2018, 2021). We adopted a conservative value of RUWE < 1.4 to keep stars with good astrometry. We found 429 pairs with RUWE < 1.4 in both the primary and secondary components.
* Radial velocities: If a pair is physically bound, then, the primary and secondary components should present similar radial velocities (RVs) within the errors. However, as stated in Jimenez-Esteban et al. (2019), the radial velocity dispersion in field stars is large and the probability of chance alignment (same values within errors) is not negligible. Therefore, similar RVs cannot be used as a proof to fully confirm that the pair is physically bound but, on
the contrary, it is a good approach to discard pairs with different RV values. The Gaia DR3 catalog provides the RVs of both components for only 13 pairs. One of them, WDS07222-2558, showed clearly discrepant RVs (35.38\(\pm\)1.65 km s\({}^{-1}\) and 0.16\(\pm\)3.91 km s\({}^{-1}\) for the primary and secondary, respectively) and was, thus, discarded.
After this whole process, we ended up with 428 pairs comprising 354 primaries and 372 secondaries (the same primary can be associated with more than one secondary). The sky distribution of the primary components is shown in Fig. 1, while Fig. 2 provides information on the distribution of the separation between the primary and secondary components according to their distance. Detailed information on these pairs can be found at the SVO archive of neglected systems (see Appendix).
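For concreteness, the comoving-pair and separation criteria above can be condensed into a short filter. The sketch below is illustrative only: the column names are hypothetical, and combining the two measurement errors in quadrature for the 3\(\sigma\) test is our assumption, as the exact combination is not specified above.

```python
import numpy as np

def is_comoving_pair(p, s, max_sep_au=50_000.0, max_ruwe=1.4):
    """p, s: dicts for the primary/secondary with Gaia DR3 astrometry.
    Expected (hypothetical) keys: parallax [mas], pmra, pmdec [mas/yr],
    their *_error counterparts, ruwe, and rho_arcsec (angular separation)."""
    def within_3_sigma(a, b, ea, eb):
        # 3-sigma agreement; combining the errors in quadrature is our choice
        return abs(a - b) < 3.0 * np.hypot(ea, eb)

    comoving = all(
        within_3_sigma(p[k], s[k], p[k + "_error"], s[k + "_error"])
        for k in ("parallax", "pmra", "pmdec")
    )
    d_pc = 1000.0 / p["parallax"]        # distance in pc from parallax in mas
    sep_au = p["rho_arcsec"] * d_pc      # s = rho * d (arcsec x pc -> AU)
    return (comoving and sep_au < max_sep_au
            and p["ruwe"] < max_ruwe and s["ruwe"] < max_ruwe)
```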
### Visual inspection
The 428 pairs were visually inspected taking advantage of the scripting capabilities of Aladin, an interactive software sky atlas. For each pair we conducted the following steps:
Footnote 4: [https://aladin.u-strasbg.fr/](https://aladin.u-strasbg.fr/)
* Upload of a 5 arcmin image of the Second Palomar Sky Survey (POSS II; Reid et al., 1991) centered on the position of the primary component. The POSS II image is used as the background source.
* Identification of the primary component using the WDS coordinates.
* Identification of the secondary component using the information on separation and position angle available in the WDS catalogue.
* Upload of the Gaia DR3 sources lying in the region of the sky covered by the POSS II image.
* Graphical representation of the proper motion of the Gaia sources using arrows.
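As a rough, illustrative analogue of the last two steps only, the Gaia query and the proper-motion arrows can be reproduced in Python with astroquery and matplotlib; the central coordinates below are hypothetical, and no POSS II background is loaded.

```python
import matplotlib.pyplot as plt
from astropy.coordinates import SkyCoord
from astroquery.gaia import Gaia

center = SkyCoord("00h24m18s +57d53m00s")   # hypothetical primary position
radius_deg = 5.0 / 60.0                      # 5 arcmin field
job = Gaia.launch_job(f"""
    SELECT ra, dec, pmra, pmdec
    FROM gaiadr3.gaia_source
    WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                       CIRCLE('ICRS', {center.ra.deg}, {center.dec.deg}, {radius_deg}))
""")
stars = job.get_results()

# Proper-motion arrows, roughly equivalent to the Aladin overlay
plt.quiver(stars["ra"], stars["dec"], stars["pmra"], stars["pmdec"], color="blue")
plt.scatter(center.ra.deg, center.dec.deg, marker="+", color="red")
plt.gca().invert_xaxis()                     # RA increases to the left on the sky
plt.xlabel("RA [deg]"); plt.ylabel("Dec [deg]")
plt.show()
```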
An example of the output of the script is given in Fig. 3. During the visual inspection of the 428 pairs the following cases were identified:
* The WDS information on coordinates of the primary component, position angle, and separation is correct, but it was superseded by the astrometric information provided by Gaia due to its superior accuracy (Fig. 3). We found 32 pairs lying in this category.
* The WDS information is wrong. There is no Gaia DR3 source at the separation/position angle given in WDS. The secondary component is found at a different separation and/or position angle (Fig. 4). 189 pairs belong to this category. For 127 pairs, the primary component is flagged in Simbad as a close binary itself. However, in all cases, they are associated with a single entry in Gaia DR3 with RUWE < 1.4, indicating that they are, most likely, single.
* For eight pairs there is a Gaia DR3 source at the expected position of the secondary according to the separation/position angle given in WDS. Nevertheless, the parallax/proper motion of the Gaia DR3 source lying at the expected position of the secondary are different by more than 3\(\sigma\) from the parallax/proper motion of the primary component. The secondary component is, actually, found at a different separation/position angle.
* 35 of our pairs belong to a multiple system according to WDS, that is, each one of the 35 primaries is associated with more than one secondary. For three of the primaries we found that none of the secondaries reported in WDS have similar parallaxes and proper motions and, thus, they are not physically associated. For 29 primaries only one of the secondaries reported in WDS can be considered as a physical pair (Fig. 5), while we confirm that three primaries belong to a multiple system. Finally, we found three primaries, reported as double in WDS, that are actually triple systems according to Gaia parallaxes and proper motions.
* The primary is part of a larger structure like an open cluster or a stellar association (Fig. 6). 161 primaries belong to this category.

The previously listed categories are properly flagged in the archive (see Appendix). Also, the results obtained after the visual inspection will be reported to USNO for ingestion in the WDS catalogue.
## 3 Physical parameters
### Effective temperatures
We used VOSA to estimate effective temperatures for our pairs. VOSA is a tool developed by the Spanish Virtual Observatory designed to build the Spectral Energy Distributions (SEDs) of thousands of objects at a time from a large number of photometric catalogues, ranging from the ultraviolet to the infrared. VOSA compares catalogue photometry with different collections of theoretical models and determines which model best reproduces the observed data following different statistical approaches. Physical parameters can then be estimated for each object from the model that best fits the data.
Using VOSA we queried the GALEX (Bianchi, Shiao, & Thilker, 2017), SDSS DR12 (Alam et al., 2015), APASS DR9 (Henden, Levine, Terrell, & Welch, 2015), Gaia DR3 (Gaia Collaboration et al., 2021), 2MASS-PSC (Skrutskie et al., 2006), and ALLWISE (Wright et al., 2010) catalogues to build the SEDs from the ultraviolet to the infrared. Observational SEDs were then compared to the grid of BT-Settl model atmospheres (Allard, Homeier, & Freytag, 2012). We assumed 1 200 K \(\leq\) \(T_{\rm eff}\)\(\leq\) 12 000 K, 3 \(\leq\) \(\log g\) \(\leq\) 4.5, and solar metallicity.
Extinction can play an important role in shaping the SED, in particular at short wavelengths. If extinction is underestimated, the slope of the SED will appear flattened at short wavelengths and the derived effective temperature will be lower. To account for this effect and to minimize the extinction - effective temperature degeneracy in the SED fitting, we decided to leave extinction as a free parameter in the fitting process, taking values in the range \(0\leq A_{V}\leq 1\) mag, and to keep only objects at distances \(<\) 1000 pc (1 kpc roughly corresponds to an extinction of 1 mag in the optical regime; see, for instance, Fig. 8 in Lallement, Vergely, Babusiaux, and Cox (2022)). Therefore, effective temperatures were estimated only for this subset of objects.
The goodness of fit of the SED in VOSA can be assessed with the vgfb parameter, a pseudo-reduced \(\chi^{2}\) internally used by VOSA that is calculated by forcing \(\sigma(F_{\rm obs})>0.1\times F_{\rm obs}\), where \(\sigma(F_{\rm obs})\) is the error in the observed flux (\(F_{\rm obs}\)). This parameter is useful to avoid the risk of overweighting photometric points with under-estimated photometric errors. Only sources with vgfb \(<\) 15 (which is an indicator of good fit) were kept.
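As a minimal sketch, a vgfb-style statistic can be written as below; the error flooring follows the description above, while the degrees-of-freedom normalization is our assumption (VOSA's internal definition may differ in detail). Only SEDs with vgfb \(<\) 15 would then be kept.

```python
import numpy as np

def vgfb(f_obs, sigma_obs, f_model, n_fit_params=3):
    """Pseudo-reduced chi-square with errors floored at 10% of the flux."""
    f_obs, sigma_obs, f_model = map(np.asarray, (f_obs, sigma_obs, f_model))
    sigma = np.maximum(sigma_obs, 0.1 * f_obs)   # force sigma > 0.1 * F_obs
    dof = max(f_obs.size - n_fit_params, 1)      # our normalization choice
    return float(np.sum(((f_obs - f_model) / sigma) ** 2) / dof)
```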
Figure 1: Sky distribution in Galactic coordinates (Aitoff projection) of the primary components of the 428 pairs identified in Section 2 (filled red circles). A 2MASS coloured image is displayed in the background.
Figure 2: Separation (both physical and angular) vs distance of the 428 pairs identified in Section 2.
VOSA also allows the identification of flux excess in a SED, an excess that could be ascribed to the presence of a disc or the existence of an unresolved companion. This way, we can check whether each of the primaries and secondaries are, themselves, single or double objects. A detailed description of how VOSA manages the flux excess can be found in the VOSA documentation. Out of 354 primaries, VOSA did not find excess for 289 of them (216 at less than 1000 pc), while 34 primaries show flux excess, 24 of them at less than 1 kpc. The remaining 31 primaries showed a bad SED fitting due to, for instance, poor-quality photometry or a lack of enough photometric points. Similarly, for the 372 secondaries, VOSA did not find excess for 256 of them (189 at less than 1000 pc), while 43 secondaries show flux excess (32 at less than 1 kpc). 73 secondaries were discarded because of their poor SED fitting. Physical parameters (effective temperature, luminosity, stellar radius) of the 216 primaries and 189 secondaries at less than 1000 pc and not showing flux excess can be found at the SVO archive of neglected systems (see Appendix). Examples of the VOSA SED fitting are shown in Fig. 7.
Footnote 5: [https://bit.ly/2X#Cv9x](https://bit.ly/2X#Cv9x)
Fig. 8 shows the distribution in effective temperature of the primaries and secondaries classified by VOSA as single stars (i.e., not showing flux excess in their SEDs). The primaries reach the maximum of the distribution at \(\sim 6\,500\) K while the secondaries have it at \(\sim 3\,500\) K, with \(\sim 50\%\) of them in the range \(3\,000\leq T_{\rm eff}\leq 4\,000\) K. Of special interest are the 75 pairs composed of a primary with an F-G spectral type (\(5\,300\) K \(<T_{\rm eff}<7\,300\) K, according to the updated version of Table 4 in Pecaut and Mamajek (2013)) and an M-dwarf secondary (\(T_{\rm eff}<3\,900\) K), as the metallicity of the M dwarf can be inferred from the hotter component.
Footnote 6: [https://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt](https://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt)
### HR diagram
The absolute Gaia magnitude in the \(G\) band was estimated using
\[M_{G}=G+5\log\varpi+5, \tag{1}\]
where \(\varpi\) is the parallax in arcseconds and \(G\) the apparent magnitude. With the absolute magnitude and the BP-RP colour, we built a colour - absolute magnitude diagram (CMD). Fig. 9 shows the position of our pairs in the CMD.
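As a quick numerical check of Eq. (1), note that Gaia parallaxes are reported in mas, not arcsec:

```python
import numpy as np

def abs_g_mag(g_mag, parallax_mas):
    """Eq. (1), converting the Gaia parallax from mas to arcsec."""
    return g_mag + 5.0 * np.log10(parallax_mas / 1000.0) + 5.0

# A star with G = 12.0 at parallax 10 mas (i.e., 100 pc) gives M_G = 7.0
print(abs_g_mag(12.0, 10.0))
```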
Among our pairs we identified a white dwarf (Gaia DR3 150264795265471616) already reported in Gentile Fusillo et al. (2019) and two sources (Gaia DR3 4009115472737321600
Figure 3: Example of a primary (WDS 00243+5753) for which the information provided by the WDS on separation and position angle is correct (the small red arc indicates the expected position of the secondary according to the WDS information), information that is, anyway, superseded by the Gaia parameters due to its superior performance. Blue arrows represent the Gaia DR3 proper motions.
and Gaia DR3 6409528409064496000) lying in the locus occupied by white dwarf - main sequence binaries (Rebassa-Mansergas et al., 2021).
### Binding energies
Stellar masses of objects lying on the main sequence were derived from effective temperatures by interpolating in Table 4 in Pecaut and Mamajek (2013). With these values and the projected physical separations we computed reduced binding energies as in Caballero (2009)
\[U_{g}=-GM_{1}M_{2}/s \tag{2}\]
where G is the gravitational constant, M\({}_{1}\) and M\({}_{2}\) the masses of the primary and the secondary, and s the projected physical separation. Fig. 10 shows the binding energy - total mass distribution for the 42 pairs with mass determinations for both components. We consider a pair to be physically bound when the absolute value of its binding energy exceeds \(10^{33}\) J (Caballero, 2009). All the pairs show binding energies well above this threshold.
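A short sketch of Eq. (2) with astropy units, for a hypothetical pair of two solar-mass stars at a projected separation of 1000 AU, illustrates how comfortably such pairs clear the \(10^{33}\) J threshold:

```python
import astropy.units as u
from astropy.constants import G, M_sun

m1 = m2 = 1.0 * M_sun                  # hypothetical equal-mass pair
s = 1000.0 * u.au                      # projected physical separation
U_g = (-G * m1 * m2 / s).to(u.J)       # Eq. (2); ~ -1.8e36 J
print(U_g)
print("bound" if abs(U_g.value) > 1e33 else "likely chance alignment")
```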
## 4 Conclusions
Starting with the Washington Double Star catalogue (154 686 rows), we selected those primary stars having just one observation and looked for counterparts (secondaries) sharing the same Gaia parallaxes and proper motions (and radial velocities, whenever available) within the errors, and with a RUWE value typical of objects with a good astrometric solution. This returned 428 pairs, which were visually inspected using Aladin to check the WDS information on separation and position angle.
Effective temperatures, luminosities, and radii of both the primaries and the secondaries were estimated using VOSA. In order to minimize the impact of the effective temperature - extinction degeneracy, physical parameters were derived only for objects at less than 1 kpc. VOSA also allows unresolved binaries to be identified through the flux excess in the SED. This way, out of 354 primaries, VOSA classified 289 as single and 34 as unresolved binaries according to their SEDs. Similarly, for the 372 secondaries, VOSA identified 256 as single and 43 as unresolved binaries. We were also able to identify 75 F/G + M dwarf pairs in the subsample for which both the primary and the secondary are single stars. These pairs are very helpful for accurately estimating the metallicity of the M dwarf from the hotter companion.
Figure 4: Example of a primary (WDS 02199+5236) for which the information provided by the WDS on separation and/or position angle is wrong. The secondary is found at a different separation/position angle (at the center of the circle that appears furthest to the right). The expected position of the secondary according to the WDS information is marked by a small red arc at the North-East of the crosshair. Blue arrows represent the Gaia DR3 proper motions.
Using Gaia DR3 parallaxes and magnitudes, we plotted the 428 pairs on a Hertzsprung-Russell diagram (HRD). According to their position in the HRD, we found a white dwarf and two sources lying in the region of the parameter space typically occupied by white dwarf - main sequence binaries. Finally, we computed the binding energies for 42 pairs, finding that all of them are consistent with being gravitationally bound.
Detailed information of the pairs can be found in the Virtual Observatory compliant archive described in the Appendix.
## Acknowledgments
This work has been funded by **MCIN/AEI/10.13039/501100011033** through grant _PID2020-112949GB-I00_ at Centro de Astrobiologia (CSIC-INTA). This research used the Washington Double Star Catalog maintained at the U.S. Naval Observatory. This publication makes use of VOSA, developed under the Spanish Virtual Observatory project. This research has made use of the Aladin sky atlas developed at CDS, Strasbourg Observatory, France. Vizier, Simbad, and TOPCAT have also been widely used in this paper. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Appendix A Virtual Observatory compliant, online catalogue
In order to help the astronomical community make use of our catalogue of neglected WDS objects, we developed an archive system that can be accessed from a webpage or through a Virtual Observatory ConeSearch.
Footnote 7: [http://svocats.cab.inta-csic.es/wds_neglected/](http://svocats.cab.inta-csic.es/wds_neglected/)
Footnote 8: e.g., http://svocats.cab.inta-csic.es/wds_list/cs.php?RA=31.825&DEC=7.905&SR=0.1&VERB=2
The archive system implements a very simple search interface that allows queries by coordinates and radius as well as by other parameters of interest. The user can also select the maximum number of sources to retrieve (with values from 10 to unlimited). The result of the query is an HTML table with all the sources found in the archive fulfilling the search criteria. The result can also be downloaded as a VOTable or a CSV file. Detailed information on the output fields can be obtained by placing the mouse over the question mark located close to the name of
Figure 5: Example of a triple system in WDS (WDS 04241+2413) for which one of the secondaries (small red arc below the crosshair) is not physically bound based on its Gaia proper motion information (blue arrows).
the column. The archive also implements the SAMP (Simple Application Messaging Protocol) Virtual Observatory protocol. SAMP allows Virtual Observatory applications to communicate with each other in a seamless and transparent manner for the user. This way, the results of a query can be easily transferred to other VO applications, such as, for instance, TOPCAT.
Footnote 9: [http://www.ivoa.net/documents/SAMP](http://www.ivoa.net/documents/SAMP)
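As an illustration, the ConeSearch endpoint can also be queried programmatically; the endpoint path and parameter names below follow the (partially reconstructed) example in footnote 8 and should be verified against the service documentation:

```python
import requests
from astropy.io.votable import parse_single_table

resp = requests.get(
    "http://svocats.cab.inta-csic.es/wds_list/cs.php",
    params={"RA": 31.825, "DEC": 7.905, "SR": 0.1, "VERB": 2},
)
resp.raise_for_status()
with open("wds_neglected_cone.xml", "wb") as fh:
    fh.write(resp.content)            # save the VOTable response

table = parse_single_table("wds_neglected_cone.xml").to_table()
print(table.colnames)
```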
The query syntaxes to recover the different subsets identified in this work are the following:
* 32 pairs for which the WDS information has been superseded by the Gaia astrometry: http://svocats.cab.inta-csic.es/wds_list/index.php?action=search&flag=0
* 189 pairs for which the WDS information on separation and/or position angle is wrong: http://svocats.cab.inta-csic.es/wds_list/index.php?action=search&flag=11
* 8 pairs for which there is a Gaia DR3 source at the separation/position angle given in WDS but with a different parallax/proper motion: http://svocats.cab.inta-csic.es/wds_list/index.php?action=search&flag=22
* 3 primaries in WDS for which none of the secondaries are physical pairs according to Gaia DR3 parallaxes and proper motions: http://svocats.cab.inta-csic.es/wds_list/index.php?action=search&flag=551
* 3 primaries, reported as part of double systems in WDS, but that actually belong to triple systems according to Gaia: http://svocats.cab.inta-csic.es/wds_list/index.php?action=search&flag=552
* Physical parameters (effective temperature, luminosity, stellar radius) of the 216 primaries at less than 1000 pc: http://svocats.cab.inta-csic.es/wds_primary
Figure 6: Example of a primary component (WDS 03447+3206, marked with a cross and a red circle) member of a stellar cluster (IC 348).
* Physical parameters (effective temperature, luminosity, stellar radius) of the 189 secondaries at less than 1000 pc: http://svocats.cab.inta-csic.es/wds_secondary
* Effective temperatures of the 75 FG + M pairs: http://svocats.cab.inta-csic.es/wds_fgkm
* Binding energies for the 42 pairs with mass determinations for both components: http://svocats.cab.inta-csic.es/wds_binding/
|
2308.13479 | Prompting a Large Language Model to Generate Diverse Motivational
Messages: A Comparison with Human-Written Messages | Large language models (LLMs) are increasingly capable and prevalent, and can
be used to produce creative content. The quality of content is influenced by
the prompt used, with more specific prompts that incorporate examples generally
producing better results. On from this, it could be seen that using
instructions written for crowdsourcing tasks (that are specific and include
examples to guide workers) could prove effective LLM prompts. To explore this,
we used a previous crowdsourcing pipeline that gave examples to people to help
them generate a collectively diverse corpus of motivational messages. We then
used this same pipeline to generate messages using GPT-4, and compared the
collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the
pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts
using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages
than the two baseline prompts. We also discuss implications from messages
generated by both human writers and LLMs. | Samuel Rhys Cox, Ashraf Abdul, Wei Tsang Ooi | 2023-08-25T16:35:06Z | http://arxiv.org/abs/2308.13479v1 | # Prompting a Large Language Model to Generate Diverse Motivational Messages
###### Abstract.
Large language models (LLMs) are increasingly capable and prevalent, and can be used to produce creative content. The quality of content is influenced by the prompt used, with more specific prompts that incorporate examples generally producing better results. On from this, it could be seen that using instructions written for crowdsourcing tasks (that are specific and include examples to guide workers) could prove effective LLM prompts. To explore this, we used a previous crowdsourcing pipeline that gave examples to people to help them generate a collectively diverse corpus of motivational messages. We then used this same pipeline to generate messages using GPT-4, and compared the collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages than the two baseline prompts. We also discuss implications from messages generated by both human writers and LLMs.
Large Language Models, Crowdsourcing, Prompt Engineering, Creativity
by crowd-workers (taken from Cox et al. (2017)). Here workers were shown one 3 to 5 word phrase to inspire them when writing each message. These phrases were chosen to be semantically diverse in order to create a collectively diverse corpus of messages. We used the same phrases and instructions to prompt GPT-4 to write 250 messages (in the **Phrase-GPT** condition). For comparison, we also prompted with a **Simple-GPT** condition: simply asking GPT-4 to write 250 messages; and a **Diverse-Naive-GPT** condition: requesting GPT-4 to write 250 messages that are diverse from one another.
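As an illustration only, the three GPT-4 prompting conditions can be sketched as follows; the prompt wording and the `call_gpt4` client are placeholders rather than the exact prompts and API calls used in the study:

```python
# Placeholder client: the actual chat-completion API is not specified here.
def call_gpt4(prompt: str) -> str:
    raise NotImplementedError

# Illustrative phrases only (the study used 250 semantically diverse phrases).
phrases = ["lots of fear and intimidation", "military press one rep"]

simple_prompt = "Write 250 short motivational messages encouraging physical activity."  # Simple-GPT
diverse_prompt = simple_prompt + " Make the messages diverse from one another."         # Diverse-Naive-GPT
phrase_prompts = [                                                                      # Phrase-GPT
    f"Write a short motivational message encouraging physical activity, "
    f"inspired by the phrase: '{p}'."
    for p in phrases
]
# messages = [call_gpt4(p) for p in phrase_prompts]
```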
## 3. Results
To calculate the diversity of each set of messages, we calculated the mean pairwise Euclidean distance (Kolmogorov, 1954; Cox et al., 2017) between all messages within each condition (where a higher distance reflects more diversity). From lowest to highest diversity, this gave us: 4.13 for **Simple-GPT**, 4.29 for **Diverse-Naive-GPT**, 5.66 for **Phrase-GPT**, and 6.90 for **Human-Written**. This indicates that such a crowdsourcing pipeline could be used to increase the diversity of content generated by LLMs. While Phrase-GPT did not produce a corpus of messages as diverse as those from human writers, this may be due to differences in message length (with Human-Written averaging 24.0 words, and Phrase-GPT 18.7 words). In addition, Simple-GPT averaged 9.2 and Diverse-Naive-GPT 9.8 words per message (emphasising the impact of including examples when prompting LLMs).
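As a minimal sketch of this metric, assuming each message has already been mapped to a fixed-length embedding vector (the embedding model is not specified in this excerpt and is our assumption):

```python
import numpy as np
from itertools import combinations

# One possible embedding choice (not necessarily the one used here), e.g.:
# embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(messages)
def mean_pairwise_distance(embeddings: np.ndarray) -> float:
    """Mean Euclidean distance over all message pairs; higher = more diverse."""
    dists = [np.linalg.norm(a - b) for a, b in combinations(embeddings, 2)]
    return float(np.mean(dists))
```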
The work involved in producing messages should also be noted. The 250 human-written messages took on average 73 seconds each to be written, while GPT-4 took roughly 6 seconds per message. Additionally, while some human-written messages in (Kolmogorov, 1954) were excluded (such as those using poor levels of English or apparent gibberish), the LLM-written messages seemingly suffered from no such issues. Example Human-Written and Phrase-GPT messages can be found in Table 1 alongside their respective inspirational phrases.
## 4. Discussion and Conclusion
This study has demonstrated the effectiveness of using a crowdsourcing pipeline (Kolmogorov, 1954) to generate more diverse messages compared to two baseline prompts. However, similar to some previous creativity tasks that require more advanced reasoning abilities (Kolmogorov, 1954; Dosov et al., 2015), human writers were more successful than GPT-4. Further investigation could alter LLM parameters such as temperature (default 1.0 on ChatGPT (Krishnan et al., 2017)).
Several additional insights are demonstrated by examples in Table 1. Both human and LLM messages demonstrated the ability to draw metaphors from phrases (Ex.2). GPT-4 may have had difficulty deciding on the relevance of phrases and would generally incorporate phrases, while human writers would act more discerningly (see Ex.4 and Ex.5). This emphasises that LLMs follow the form rather than the meaning of language (Beng et al., 2015), and implies that crowdsourcing pipelines could be atomised further for LLMs (e.g., including an initial step asking the LLM to judge the relevance of a phrase to physical activity).
At times, human-writers would not incorporate phrases (perhaps if they do not have domain knowledge of more esoteric phrases) while the LLM could (Ex.3). However, risk of AI hallucination (e.g., within healthcare (Beng et al., 2015; Beng et al., 2015; Beng et al., 2015; Dosov et al., 2015) such as LLMs misunderstanding medical vocabulary or providing advice that does not follow medical guidelines (Beng et al., 2015; Dosov et al., 2015)) should be noted, and additional measures would be needed to ensure the veracity of output.
Additionally (while one could prompt an LLM to incorporate different conversational styles or sociocultural perspectives), attention is needed to ensure that it does not produce harmful cultural stereotypes (Beng et al., 2015; Dosov et al., 2015). Similarly, human writers incorporate personal experiences into messages (Beng et al., 2015) (Ex.1) that may not necessarily be available to an LLM (if such experiences are poorly represented).
Finally, while we used automated measures to indicate diversity of each experiment condition, further human evaluations for diversity could be conducted (Kolmogorov, 1954; Krishnan et al., 2017; Krishnan et al., 2017), in addition to human evaluations of message efficacy (e.g., motivation (Kolmogorov, 1954; Krishnan et al., 2017)).
## Acknowledgments
This research is part of the programme DesCartes and is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
\begin{table}
\begin{tabular}{c p{42.7pt} p{113.8pt} p{113.8pt}} \hline \hline
 & Inspirational phrase & **Human-Written** message & **Phrase-GPT** message \\ \hline
**Ex.1** & lots of fear and intimidation & If you can’t exercise outside, try listening to music while exercising inside. It can help relax and motivate you at the same time. & Overcome fear and intimidation in your exercise journey by seeking support, educating yourself, and taking small steps towards your goals. \\ \hline
**Ex.2** & homeopathic doses of exercise & Even small amounts of exercise can help you become healthy. You don’t need to dedicate 2 hours a day to become healthier! & Start with small, manageable doses of exercise to build a foundation for a healthier, more active lifestyle. \\ \hline
**Ex.3** & military press one rep & Don’t focus on working out for 30 minutes, an hour, etc. Just focus on your exercises one rep at a time! & Challenge yourself with exercises like the military press, focusing on form and gradually increasing weight for improved strength. \\ \hline
**Ex.4** & rats doing regular running & Regular running and aerobic exercise is great! Never stop working out. & Take inspiration from studies, even those on rats, that show the benefits of regular aerobic exercise for overall health and longevity. \\ \hline
**Ex.5** & vice president of program development & Just 30 minutes of exercise a day can change your life! & Seek guidance from experienced professionals like vice presidents of program development to optimize your fitness routine and approach. \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Example messages written by the crowd and GPT-4 using the same phrases for inspiration. |
2307.04958 | Near-wall model for compressible turbulent boundary layers based on an
inverse velocity transformation | In this work, a near-wall model, which couples the inverse of a recently
developed compressible velocity transformation [Griffin, Fu, & Moin, PNAS,
118:34, 2021] and an algebraic temperature-velocity relation, is developed for
high-speed turbulent boundary layers. As input, the model requires the mean
flow state at one wall-normal height in the inner layer of the boundary layer
and at the boundary-layer edge. As output, the model can predict mean
temperature and velocity profiles across the entire inner layer, as well as the
wall shear stress and heat flux. The model is tested in an a priori sense using
a wide database of direct numerical simulation high-Mach-number turbulent
channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers
in the range of 0.77--11 and semi-local friction Reynolds numbers in the range
of 170--5700). The present model is significantly more accurate than the
classical ordinary differential equation (ODE) model for all cases tested. The
model is deployed as a wall model for large-eddy simulations in channel flows
with bulk Mach numbers in the range of 0.7--4 and friction Reynolds numbers in
the range of 320--1800. When compared to the classical framework, in the a
posteriori sense, the present method greatly improves the predicted heat flux,
wall stress, and temperature and velocity profiles, especially in cases with
strong heat transfer. In addition, the present model solves one ODE instead of
two and has a similar computational cost and implementation complexity as the
commonly used ODE model. | Kevin Patrick Griffin, Lin Fu, Parviz Moin | 2023-07-11T01:21:00Z | http://arxiv.org/abs/2307.04958v1 | Near-wall model for compressible turbulent boundary layers based on an inverse velocity transformation
###### Abstract
In this work, a near-wall model, which couples the inverse of a recently developed compressible velocity transformation (Griffin, Fu, & Moin, _PNAS_, 118:34, 2021) and an algebraic temperature-velocity relation, is developed for high-speed turbulent boundary layers. As input, the model requires the mean flow state at one wall-normal height in the inner layer of the boundary layer and at the boundary-layer edge. As output, the model can predict mean temperature and velocity profiles across the entire inner layer, as well as the wall shear stress and heat flux. The model is tested in an _a priori_ sense using a wide database of direct numerical simulation high-Mach-number turbulent channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers in the range of 0.77-11 and semi-local friction Reynolds numbers in the range of 170-5700). The present model is significantly more accurate than the classical ordinary differential equation (ODE) model for all cases tested. The model is deployed as a wall model for large-eddy simulations in channel flows with bulk Mach numbers in the range of 0.7-4 and friction Reynolds numbers in the range of 320-1800. When compared to the classical framework, in the _a posteriori_ sense, the present method greatly improves the predicted heat flux, wall stress, and temperature and velocity profiles, especially in cases with strong heat transfer. In addition, the present model solves one ODE instead of two and has a similar computational cost and implementation complexity as the commonly used ODE model.
## 1 Introduction
The largest driver of computational cost in numerical simulations of wall-bounded turbulence is typically the numerical resolution in the near-wall region. In scale-resolving simulations, e.g., wall-resolved (WR) large-eddy simulation (LES), high spatial and temporal resolutions are required to accurately simulate the small-scale eddies near walls. Wall models, or approximate boundary conditions, can be employed to reduce the near-wall resolution requirements. The computational cost (the number of grid points multiplied by the number of
time steps) for the simulation of a turbulent boundary layer scales with the Reynolds number as \(Re^{2.7}\) for WRLES and \(Re^{1.1}\) for wall-modeled (WM) LES (Yang & Griffin, 2021). Thus, wall models lead to substantial cost savings for high-Reynolds-number applications. In simulations of the Reynolds-averaged Navier-Stokes (RANS) equations, high spatial resolution is also required to resolve the steep near-wall gradients in the mean flow. Therefore, wall models --typically referred to as wall functions in the RANS context --can also greatly accelerate numerical simulations.
The present work focuses on the paradigm of wall-stress modeling (Larsson _et al._, 2016; Bose & Park, 2018) for LES. These models were derived from RANS analysis of boundary layers and typically invoke a zero-equation RANS model such as the Prandtl mixing length argument (Prandtl, 1925), which models the turbulence length scale as a linear function of the wall-normal distance. An empirical damping function is introduced following Van Driest (1956) to ensure the correct near-wall scaling of the mixing length. RANS models have naturally been widely used as boundary conditions for under-resolved RANS simulations (e.g., Abrahamson & Brower (1988); Lien _et al._ (1998); Goncalves & Houdeville (2001); Parente _et al._ (2011)). In this context, such a model is typically referred to as a wall function. Cabot (1995); Cabot & Moin (2000) showed that the mixing length RANS model is suitable for use as a boundary condition for the LES equations, i.e., for deployment as a wall-stress model. Specifically, they invoke the one-dimensional simplification of the RANS streamwise momentum equation. That is,
\[\frac{\mathrm{d}}{\mathrm{d}y}\left((\overline{\mu}+\overline{\mu}_{t})\frac{ \mathrm{d}\widetilde{U}}{\mathrm{d}y}\right)=0, \tag{1}\]
where \(\overline{\mu}\), \(\overline{\mu}_{t}\), and \(\widetilde{U}\) are the molecular dynamic viscosity, eddy viscosity, and velocity profiles, respectively, and \(y\) is the wall-normal coordinate. \(\overline{(\cdot)}\) denotes the Reynolds average and \(\widetilde{(\cdot)}\) denotes the Favre (density-weighted) average. Throughout this work, the Favre- (density-weighted-) averaged RANS and LES equations are employed. The eddy viscosity is further modeled as
\[\overline{\mu}_{t}=\kappa y\overline{\rho}\sqrt{\tau_{w}/\overline{\rho}}\left(1-\exp(-y^{+}/A^{+})\right)^{2}, \tag{2}\]
where \(\overline{\rho}(y)\) is the density profile. The subscript \((\cdot)_{w}\) denotes quantities evaluated at the wall. \(\tau_{w}=\overline{\mu}_{w}(\mathrm{d}\widetilde{U}/\mathrm{d}y)_{w}\) is the wall shear stress. The superscript \((\cdot)^{+}\) denotes non-dimensionalization by the friction velocity \(u_{\tau}=\sqrt{\tau_{w}/\overline{\rho}_{w}}\), the wall density \(\overline{\rho}_{w}\), and the kinematic wall viscosity \(\overline{\nu}_{w}=\overline{\mu}_{w}/\overline{\rho}_{w}\).
The von Karman constant \(\kappa=0.41\) and the eddy-viscosity damping coefficient \(A^{+}=17\) are adopted following Cabot & Moin (2000).
For an incompressible flow, the density and molecular dynamic viscosity are known constants. In the context of WMLES, the ODE in Eq. (1) is solved with two boundary conditions: 1) the no-slip wall condition and 2) a velocity sample, which is taken from the LES at a wall-normal distance referred to as the matching location. Note that the solution procedure is iterative because the eddy viscosity depends on the wall stress (Eq. (2)). The computed wall stress \(\tau_{w}\) is then applied as a momentum-flux boundary condition for the outer LES solver, which completes the two-way coupling of the wall model (inner) solution and the PDE (outer) simulation.
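A minimal sketch of this incompressible solve is given below. The fixed-point rescaling of \(\tau_{w}\) is one simple choice of iteration; practical implementations often use Newton or bisection iterations instead.

```python
import numpy as np

def wall_stress(u_match, y_match, rho, mu, kappa=0.41, a_plus=17.0,
                n_pts=200, n_iter=100, tol=1e-8):
    """Incompressible solve of Eqs. (1)-(2): return tau_w such that the
    integrated velocity profile reaches u_match at the matching location."""
    y = np.linspace(0.0, y_match, n_pts)
    tau_w = mu * u_match / y_match                   # laminar initial guess
    for _ in range(n_iter):
        u_tau = np.sqrt(tau_w / rho)
        y_plus = y * u_tau * rho / mu
        mu_t = kappa * y * rho * u_tau * (1.0 - np.exp(-y_plus / a_plus)) ** 2
        dudy = tau_w / (mu + mu_t)                   # Eq. (1) integrated once in y
        u_end = np.sum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))
        tau_new = tau_w * u_match / u_end            # simple fixed-point update
        if abs(tau_new - tau_w) <= tol * tau_w:
            return tau_new
        tau_w = tau_new
    return tau_w
```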
For compressible flow, the RANS equation for temperature can similarly be simplified to the one-dimensional form (Larsson _et al._, 2016; Bose & Park, 2018), which results in a
second, coupled ODE for the temperature profile, i.e.,
\[\frac{\mathrm{d}}{\mathrm{d}y}\left((\overline{\mu}+\overline{\mu}_{t})\widetilde {U}\frac{\mathrm{d}\widetilde{U}}{\mathrm{d}y}+C_{p}(\frac{\overline{\mu}}{ \mathrm{Pr}}+\frac{\overline{\mu}_{t}}{\mathrm{Pr}_{t}})\frac{\mathrm{d} \widetilde{T}}{\mathrm{d}y}\right)=0, \tag{3}\]
where \(\widetilde{T}\) is the temperature profile. \(C_{p}\) is the specific heat capacity at constant pressure, \(\mathrm{Pr}\) is the Prandtl number, and \(\mathrm{Pr}_{t}\) is the turbulent Prandtl number, which is assumed to be 0.9 (Larsson _et al._, 2016). The dependence of molecular dynamic viscosity on temperature can be assumed to follow a power law or Sutherland's law. The ideal gas equation of state closes the system and the thin-boundary-layer assumption implies that the pressure is constant across the inner layer.
In WMLES, the temperature ODE in Eq. (3) is solved with two additional boundary conditions: 1) the wall temperature and 2) the temperature at the matching location. Note that the solution procedure is also iterative in that the temperature depends on the velocity solution. The velocity also depends on the temperature through the density and viscosity. Solving two coupled boundary-value problems iteratively introduces a higher degree of nonlinearity compared to the incompressible case and can prove difficult to converge in flows with strong temperature gradients (strong heat transfer), e.g., as was reported in Fu _et al._ (2021). In addition to the numerical difficulties, the accuracy of this wall model degrades substantially in flows with strong heat transfer (as will be demonstrated herein).
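For reference, since the bracketed quantity in Eq. (3) is constant in \(y\), the temperature profile for a guessed wall heat flux \(q_{w}\) can be marched directly from the wall; the sign convention for \(q_{w}\) and the air-like gas properties in this sketch are our assumptions, and an outer iteration (e.g., bisection on \(q_{w}\)) would enforce the temperature boundary condition at the matching location.

```python
import numpy as np

def march_temperature(y, u, mu, mu_t, t_wall, q_w,
                      cp=1005.0, pr=0.7, pr_t=0.9):
    """March the integrated form of Eq. (3) from the wall:
    (mu+mu_t) U dU/dy + cp (mu/pr + mu_t/pr_t) dT/dy = -q_w (constant in y),
    for a guessed wall heat flux q_w."""
    dudy = np.gradient(u, y)
    dtdy = (-q_w - (mu + mu_t) * u * dudy) / (cp * (mu / pr + mu_t / pr_t))
    dt_mid = 0.5 * (dtdy[1:] + dtdy[:-1]) * np.diff(y)   # trapezoidal rule
    return t_wall + np.concatenate(([0.0], np.cumsum(dt_mid)))
```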
Improved results for high-speed wall-bounded turbulent flows over cold walls have been obtained by using the semi-local scaling in the damping function (Yang & Lv, 2018; Fu _et al._, 2022); however, Iyer & Malik (2019) report that for adiabatic walls the classical scaling (consistent with the van Driest transformation) is more accurate. This motivates using a recently developed compressible velocity transformation that is accurate for both diabatic and adiabatic turbulent boundary layers (Griffin _et al._, 2021_c_).
In this work, a wall model for high-speed wall-bounded turbulent flows is developed in section 2. The model is evaluated via _a priori_ testing in section 3 and via _a posteriori_ validation in section 4. Conclusions are drawn in section 5.
## 2 Model development
There are two principal differences between the present model and the classical ODE-based wall model (Eqs. (1-3)): (1) rather than solving an ODE for the compressible velocity profile directly, the incompressible ODE (with constant density and viscosity) is solved, and an inverse compressibility transformation (Griffin _et al._, 2021_c_) is employed; (2) rather than employing a RANS equation for temperature and assuming a constant \(Pr_{t}\), an algebraic temperature-velocity relation is adopted, thus obviating the need to solve a second ODE.
### Inverse compressible velocity transformation
A compressible velocity transformation seeks to map the local mean strain rate of the variable-property compressible flow, \(d\widetilde{U}/dy\), to the non-dimensional mean strain rate of a constant-property incompressible flow at an equivalent Reynolds number. Upon integration, the transformation maps the compressible velocity profile to an incompressible velocity profile. In this way, a successful transformation can collapse profiles with different Mach numbers and thermal boundary conditions to a single incompressible law of the wall. Coupled with the incompressible profile implied by Eq. (1), an inverse velocity transformation can recover the compressible velocity profile.
The total-stress-based compressible velocity transformation of Griffin _et al._ (2021_c_) is used in this work since it is shown to be accurate in a wide range of flows, including boundary
layers with strong heat transfer. This transformation uses the viscous scaling arguments of Trettel & Larsson (2016) and Patel _et al._ (2016) in the near-wall viscous region and uses a modified version of the turbulence equilibrium arguments of Zhang _et al._ (2012) for the logarithmic region. The transformation is an algebraic function that relates the local mean strain rate of the compressible flow, \(d\widetilde{U}/dy\), to the non-dimensional incompressible mean strain-rate, \(S_{t}^{+}\), at the same semi-local friction Reynolds number, \(Re_{\tau}^{*}\), according to the relation
\[S_{t}^{+}=\frac{S_{eq}^{+}}{1+S_{eq}^{+}-S_{TL}^{+}}, \tag{1}\]
where \(S_{eq}^{+}=1/\overline{\mu}^{+}d\widetilde{U}^{+}/dy^{*}\) and \(S_{TL}^{+}=\overline{\mu}^{+}d\widetilde{U}^{+}/dy^{+}\). The superscript \((\cdot)^{*}\) denotes non-dimensionalization by the local density \(\rho(y)\), local molecular dynamic viscosity \(\mu(y)\), and the semi-local friction velocity \(u_{sl}=\sqrt{\tau_{w}/\overline{\rho}(y)}\)(Huang _et al._, 1995; Coleman _et al._, 1995). The semi-local friction Reynolds number is thus defined as \(Re_{\tau}^{*}=\overline{\rho}_{e}u_{sl}\delta/\overline{\mu}_{e}\), where the subscript \((\cdot)_{e}\) denotes quantities evaluated at the boundary layer edge (throughout this work, \(\delta\) denotes the channel half height or the boundary-layer thickness). Note that all variables of the form \(S_{(\cdot)}^{+}\) represent different local non-dimensionalizations of the compressible strain rate, which were designed in prior works with the target of equaling the strain rate implied by the incompressible law of the wall. For example, although \(S_{TL}^{+}\) is equivalent to the viscous stress, it is also a non-dimensionalization of the mean strain rate in a compressible flow. \(S_{TL}^{+}\) will exactly recover the incompressible strain rate of a flow with the equivalent viscous stress as long as the compressible flow also obeys \(\mu^{+}=1\). Additionally, note that the transformation in Eq. (1) assumes a constant stress layer in the buffer region of the boundary layer, where there is a transition between the underlying viscous and equilibrium transformations. Griffin _et al._ (2021) verifies that the deployment of this assumption does not significantly affect the accuracy of the transformation in equilibrium flows, and Bai _et al._ (2022) verifies the same for boundary layers with moderate pressure gradients.
The inverse velocity transformation is readily obtained by algebraically rearranging the transformation to find
\[\frac{\mathrm{d}\widetilde{U}^{+}}{\mathrm{d}y^{*}}=\left(\frac{1}{\overline{ \mu}^{+}S_{t}^{+}}-\frac{1}{\overline{\mu}^{+}}+\sqrt{\overline{\rho}^{+}} \left(1+\frac{1}{2\overline{\rho}^{+}}\frac{\mathrm{d}\overline{\rho}^{+}}{ \mathrm{d}y^{+}}y^{+}-\frac{1}{\overline{\mu}^{+}}\frac{\mathrm{d}\overline{ \mu}^{+}}{\mathrm{d}y^{+}}y^{+}\right)\right)^{-1}. \tag{2}\]
The incompressible mean strain rate \(S_{t}^{+}\) is available algebraically from the constant-property version of Eq. (1), i.e., \(\overline{\rho}=\overline{\rho}_{w}\) and \(\overline{\mu}=\overline{\mu}_{w}\). The incompressible model constants \(\kappa\) and \(B\) are determined using the aforementioned calibration but \(Re_{\tau}^{*}\) is used in place of \(Re_{\tau}\) since the former is invariant under the velocity transformation. Integrating Eq. (2) with variable properties yields the targeted compressible velocity profile; the properties are functions of temperature, which will be discussed next.
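Since Eq. (2) is purely algebraic in the local flow state, its transcription is direct; in the sketch below, the non-dimensional properties and their \(y^{+}\) gradients are assumed to be supplied by the caller at each integration step.

```python
import numpy as np

def inverse_transformed_strain(S_t, mu_plus, rho_plus, drho_dyp, dmu_dyp, y_plus):
    """Sketch of Eq. (2): the compressible strain rate dU^+/dy^* implied by
    the incompressible strain rate S_t^+ and the local non-dimensional
    density and viscosity together with their y^+ gradients."""
    bracket = (1.0 / (mu_plus * S_t)
               - 1.0 / mu_plus
               + np.sqrt(rho_plus) * (1.0
                                      + 0.5 * (drho_dyp / rho_plus) * y_plus
                                      - (dmu_dyp / mu_plus) * y_plus))
    return 1.0 / bracket
```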
### Algebraic temperature-velocity relation
In order to close the velocity equation (Eq. (2)), the temperature profile must be determined. The classical model uses the constant turbulent Prandtl number assumption to develop a coupled ODE for temperature (Eq. (3)). However, the constant Prandtl number assumption has been shown to be less accurate than invoking the Generalized Reynolds Analogy (GRA) (Zhang _et al._, 2014). Thus, the presently proposed wall model leverages the GRA instead.
The analogy between the conservation equations for momentum and energy has led to the derivation of several algebraic relations between mean temperature and velocity. Walz's equation (Walz, 1969) (also known as the modified Crocco-Busemann relation (Crocco, 1932; Busemann, 1931)) is one such relation. It accounts for non-unity \(Pr\) effects via a recovery factor, which is taken as \(r=(\Pr)^{1/3}\). While this relation is accurate in high-speed adiabatic boundary layers, Duan & Martin (2011) observed that the accuracy degrades significantly in boundary layers with wall heat transfer and proposed a semi-empirical correction to the relation. This was subsequently recast in terms of a generalized Reynolds analogy (Zhang _et al._, 2014), thereby introducing the Reynolds analogy factor, \(s\), which is chosen as \(s=1.14\) following convention. The resulting temperature-velocity relation is given as,
\[\widetilde{T}=\widetilde{T}_{w}+s\Pr(\widetilde{T}_{r}-\widetilde{T}_{w}) \frac{\widetilde{U}}{\widetilde{U}_{e}}\left(1-\frac{\widetilde{U}}{\widetilde {U}_{e}}\right)+\left(\frac{\widetilde{U}}{\widetilde{U}_{e}}\right)^{2}\left( \widetilde{T}_{e}-\widetilde{T}_{w}\right), \tag{3}\]
where the subscript \((\cdot)_{e}\) denotes quantities at the boundary-layer edge, the recovery temperature \(\widetilde{T}_{r}=\widetilde{T}_{e}+r\widetilde{U}_{e}^{2}/(2C_{p})\). This relation has been validated across a wide range of channel flows, pipe flows, and boundary layers with and without heat transfer (Zhang _et al._, 2014, 2018; Volpiani _et al._, 2020; Modesti & Pirozzoli, 2019; Fu _et al._, 2021). Specifically, this relation is derived by Zhang _et al._ (2014) through defining the generalized recovery temperature \(\widetilde{T}_{r_{g}}=\widetilde{T}+r_{g}\widetilde{U}^{2}/(2C_{p})\). Then, it is assumed that \(\widetilde{T}_{r_{g}}=\widetilde{T}_{w}+U_{s}\widetilde{U}/C_{p}\), where \(U_{s}\) is a constant velocity scale. Equivalently, the assumption can be reinterpreted that \(\widetilde{T}\) can be approximately represented as a second order Taylor expansion in terms of powers of \(\widetilde{U}\), i.e.,
\[\widetilde{T}=b_{0}+b_{1}\widetilde{U}+b_{2}\widetilde{U}^{2}/2, \tag{4}\]
where the no-slip condition implies \(b_{0}=\widetilde{T}_{w}\), \(b_{1}=(\mathrm{d}\widetilde{T}/\mathrm{d}\widetilde{U})|_{w}\). The algebraic relation of Zhang _et al._ (2014) can be recovered if \(b_{2}\) is specified by evaluating the expression at the boundary-layer edge \(\widetilde{T}_{e}=\widetilde{T}|_{\widetilde{U}_{e}}\) and \(b_{1}\) is determined using the Reynolds analogy. However, in this work, we use the matching data (denoted with subscript \((\cdot)_{m}\)) \(\widetilde{T}_{m}=\widetilde{T}|_{\widetilde{U}_{m}}\) to set \(b_{2}\), such that the exact value at the matching location can be enforced. The final temperature-velocity relation is
\[\widetilde{T}=\widetilde{T}_{w}+s\Pr(\widetilde{T}_{r}-\widetilde{T}_{w}) \frac{\widetilde{U}}{\widetilde{U}_{e}}\left(1-\frac{\widetilde{U}}{ \widetilde{U}_{m}}\right)+\left(\frac{\widetilde{U}}{\widetilde{U}_{m}}\right) ^{2}\left(\widetilde{T}_{m}-\widetilde{T}_{w}\right). \tag{5}\]
Note that one consequence of this relation is that the wall heat flux and wall shear stress are algebraically linked by the Reynolds analogy factor, where the heat flux is defined as \(q_{w}=s\tau_{w}C_{p}(\widetilde{T}_{w}-\widetilde{T}_{r})/\widetilde{U}_{e}\).
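A compact sketch of Eq. (5) and the associated Reynolds-analogy heat flux is given below; the values \(r=\Pr^{1/3}\), \(s=1.14\), and \(\Pr=0.7\) follow the text, while the air-like \(C_{p}\) and the function signatures are assumptions of the sketch.

```python
PR, S_RA = 0.7, 1.14
R_FACTOR = PR ** (1.0 / 3.0)        # recovery factor r = Pr^(1/3)

def temperature_from_velocity(U, T_w, T_m, U_m, T_e, U_e, Cp=1004.5):
    """Algebraic temperature-velocity relation, Eq. (5)."""
    T_r = T_e + R_FACTOR * U_e ** 2 / (2.0 * Cp)   # recovery temperature
    return (T_w
            + S_RA * PR * (T_r - T_w) * (U / U_e) * (1.0 - U / U_m)
            + (U / U_m) ** 2 * (T_m - T_w))

def wall_heat_flux(tau_w, T_w, T_r, U_e, Cp=1004.5):
    """Reynolds-analogy link between the wall heat flux and wall stress."""
    return S_RA * tau_w * Cp * (T_w - T_r) / U_e
```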
### Implementation details
Like the classical model (Eqs. (1-3)), the present model requires a matching temperature, velocity, and density, an equation of state (the ideal gas law is used in this work and the thin-boundary-layer assumption implies the pressure is constant), and a viscosity law (either a power law or Sutherland's law depending on the relevant reference data). In addition, the present model requires as input the velocity and temperature at the boundary-layer edge (computed using the method of Griffin _et al._ (2021)) for deploying the algebraic temperature-velocity relation (Eq. (5)) due to its dependence on the recovery temperature and edge velocity. To solve the nonlinear system, the following approach is used. The incompressible ODE (Eq. (1)) with constant properties is integrated once analytically, rearranged for \(d\widetilde{U}/dy\) and substituted into the inverse velocity transformation (Eq. (2)) as \(S\). This equation (initial value problem with an initial guess for the wall shear stress) is solved via the shooting method, where, at each integration step, a sub-iteration determines
the velocity increment that is consistent with the temperature-velocity relation (Eq. (5)) and the resulting density and viscosity at that location.
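The overall solution procedure can be organized as sketched below. This is a structural outline only: the strain-rate closure is a deliberately simplified stand-in for the full chain of transformation equations above, `props` is a caller-supplied function returning the local density and viscosity from the algebraic temperature relation, and the secant iteration on the wall shear stress plays the role of the shooting step.

```python
import numpy as np

KAPPA, A_PLUS = 0.41, 17.0

def law_of_wall_strain(y_star):
    """Constant-property strain rate from a mixing-length eddy viscosity
    with van Driest-type damping (stand-in for the incompressible ODE)."""
    damping = (1.0 - np.exp(-y_star / A_PLUS)) ** 2
    return 1.0 / (1.0 + KAPPA * y_star * damping)

def shoot(tau_w, y_m, U_m, props, n=4000):
    """March dU/dy from the wall to y_m for a guessed tau_w and return the
    mismatch U(y_m) - U_m. props(U) -> (rho, mu) evaluates the fluid
    properties at the temperature implied by Eq. (5)."""
    y = np.linspace(0.0, y_m, n)
    U = 0.0
    for i in range(1, n):
        rho, mu = props(U)               # property update at this sub-step
        u_sl = np.sqrt(tau_w / rho)      # semi-local friction velocity
        y_star = rho * u_sl * y[i] / mu
        # Simplified closure: the full model evaluates the inverse
        # transformation, Eq. (2), with local property gradients here.
        dUdy = law_of_wall_strain(y_star) * tau_w / mu
        U += dUdy * (y[i] - y[i - 1])
    return U - U_m

def solve_tau_w(y_m, U_m, props, guesses=(0.5, 2.0), tol=1e-10, itmax=50):
    """Secant iteration on tau_w: the shooting step."""
    a, b = guesses
    fa, fb = shoot(a, y_m, U_m, props), shoot(b, y_m, U_m, props)
    for _ in range(itmax):
        c = b - fb * (b - a) / (fb - fa)
        a, fa = b, fb
        b, fb = c, shoot(c, y_m, U_m, props)
        if abs(fb) < tol:
            break
    return b
```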
The implementation of the present model is available at the link provided in the data availability section at the end of this manuscript. This implementation was first developed by Griffin _et al._ (2021) to compute temperature and velocity profiles for estimating grid-point requirements in compressible flows, and this manuscript serves as the first comprehensive documentation and further development of the underlying inverse method for the WMLES approach. Intermediate developments were presented in Griffin _et al._ (2022), and initial results were reported in Griffin _et al._ (2022); Griffin (2022). Kumar & Larsson (2022) used a similar procedure but with a data-driven velocity transformation (Volpiani _et al._, 2020). Chen _et al._ (2023) and Song _et al._ (2023) approximate the mean profiles of channel flows by considering two velocity transformations (Trettel & Larsson, 2016; Griffin _et al._, 2021) and employing the Central Mean Temperature Scaling (Song _et al._, 2022).
## 3 A priori results
The present and classical wall models are first evaluated via _a priori_ analysis. That is, the matching data are taken from DNS at a wall-normal distance of \(y_{m}=0.3\delta\). The wall model estimates the velocity and temperature profiles, as well as the wall shear stress and wall heat flux. The predicted velocity and temperature profiles are shown in Figures 1 and 2 for four channel flows with various Mach and Reynolds number conditions, Figure 3 for two pipe flows at different Reynolds numbers, and Figure 4 for two boundary layers, one with a heated and one with a cooled wall boundary condition. The bulk Mach number is defined as \(M_{b}=U_{b}/\sqrt{\gamma R\overline{T}_{w}}\), where \(\gamma\) is the ratio of specific heats and \(R\) is the gas constant. The bulk Reynolds number is defined as \(Re_{b}=\rho_{b}U_{b}\delta/\overline{\mu}_{w}\), where the bulk density is defined as \(\rho_{b}=\int\!\!\int_{A}\overline{\rho}dA/A\) and the bulk velocity is defined as \(U_{b}=\int\!\!\int_{A}\widetilde{U}dA/A\), where \(A\) is the cross-sectional area of the domain. Reference DNS data are provided by Modesti & Pirozzoli (2019); Trettel & Larsson (2016); Zhang _et al._ (2018); Volpiani _et al._ (2020). For all cases, the profiles predicted by the present model agree with the DNS profiles significantly better than those of the classical model. Note that the velocities are non-dimensionalized by the predicted friction velocity, so the obtained profiles do not necessarily pass through the matching data if the predicted wall stress is inaccurate.
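For a planar channel, the area integrals above reduce to wall-normal quadratures over the half height; a minimal sketch, assuming symmetric half-channel profiles, is:

```python
import numpy as np

def wall_normal_average(f, y, delta):
    """Trapezoidal average of f(y) over the half height delta."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)) / delta

def bulk_conditions(y, U, rho, T_w, mu_w, delta, gamma=1.4, R=287.0):
    """Bulk Mach and Reynolds numbers from half-channel mean profiles;
    for a planar channel the cross-sectional averages reduce to
    wall-normal averages (an assumption of this sketch)."""
    U_b = wall_normal_average(U, y, delta)
    rho_b = wall_normal_average(rho, y, delta)
    return U_b / np.sqrt(gamma * R * T_w), rho_b * U_b * delta / mu_w
```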
Next, the model performance is evaluated with a wide range of DNS data from 48 different simulations. The errors in the modeled wall stress and heat flux predictions are reported for each case with \(y_{m}=0.3\delta\). The relative error in the wall stress prediction \(\epsilon_{\tau_{w}}\) is defined as
\[\epsilon_{\tau_{w}}=\frac{\tau_{w,\mathrm{model}}-\tau_{w,\mathrm{DNS}}}{\tau _{w,\mathrm{DNS}}}\times 100\%. \tag{1}\]
The non-dimensional wall heat flux is defined as \(B_{q}=q_{w}/(C_{p}\widetilde{T}_{w}\overline{\rho}_{w}u_{\tau})\), and the relative error in the wall heat flux is defined as
\[\epsilon_{q_{w}}=\frac{q_{w,\mathrm{model}}-q_{w,\mathrm{DNS}}}{q_{w,\mathrm{ DNS}}}\times 100\%. \tag{2}\]
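These error measures, and the non-dimensional heat flux they are plotted against, are straightforward to evaluate; a small helper with illustrative argument names is:

```python
def relative_error(model, dns):
    """Relative error in percent, as in Eqs. (1) and (2) of this section."""
    return (model - dns) / dns * 100.0

def nondimensional_heat_flux(q_w, Cp, T_w, rho_w, u_tau):
    """B_q = q_w / (Cp * T_w * rho_w * u_tau)."""
    return q_w / (Cp * T_w * rho_w * u_tau)
```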
\(\epsilon_{q_{w}}\) is not reported for adiabatic boundary layer data because it is undefined, and both models predict negligible heat transfer for these data. The data considered include the compressible channel flow simulations of Modesti & Pirozzoli (2016); Trettel & Larsson (2016); Yao & Hussain (2020), the pipe flow simulations of Modesti & Pirozzoli (2019), the adiabatic supersonic and hypersonic boundary layers of Pirozzoli & Bernardini (2011); Zhang _et al._ (2018); Volpiani _et al._ (2018, 2020), and the diabatic supersonic and hypersonic boundary layers of Zhang _et al._ (2018); Volpiani _et al._ (2018, 2020); Fu _et al._ (2019). The
Figure 2: _A priori_ wall-model profiles of velocity (a,c) and temperature (b,d) are plotted versus the wall-normal coordinate. Results are for supersonic channel flows with panels (a) and (b) characterized by \(Re^{*}_{\tau}=590\), \(M_{b}=3.0\), and \(-B_{q}=0.12\) and panels (c) and (d) characterized by \(Re^{*}_{\tau}=200\), \(M_{b}=4.0\), and \(-B_{q}=0.19\).
Figure 1: _A priori_ wall-modeled profiles of velocity (a,c) and temperature (b,d) are plotted versus the wall-normal coordinate. Results are for supersonic channel flows with panels (a) and (b) characterized by \(Re^{*}_{\tau}=410\), \(M_{b}=1.7\), and \(-B_{q}=0.053\) and panels (c) and (d) characterized by \(Re^{*}_{\tau}=590\), \(M_{b}=1.7\), and \(-B_{q}=0.049\).
cases have edge Mach numbers in the range of 0.77-11 and semi-local friction Reynolds numbers in the range of 170-5700. Only the cases with \(Re_{\tau}^{*}>150\) are analyzed because lower Reynolds numbers can exhibit strong Reynolds number effects (Modesti & Pirozzoli, 2016) and are not the target of this study. The error measures are shown in Figure 5. The present model generates significantly less modeling error than the classical model, with the greatest error reduction when the non-dimensional heat transfer is the highest.
To distinguish the effects of Reynolds number and compressibility, we explore the effect of using Reynolds-number-dependent coefficients for the underlying incompressible Law of the Wall. Specifically, rather than letting the von Karman constant \(\kappa\) and the damping coefficient \(A^{+}\) be fixed values of 0.41 and 17, respectively, we recalibrate these values using incompressible reference data at various Reynolds numbers. We employ the DNS data from five incompressible turbulent channel flows (Lee & Moser, 2015) with friction Reynolds numbers \(Re_{\tau}=u_{\tau}\delta/\nu_{w}=\{182,543,1000,1990,5190\}\), and fit the least-squares optimal values of \(\kappa=\{0.400,0.408,0.400,0.391,0.391\}\) and \(A^{+}=\{18.2,17.4,17.0,16.5,16.5\}\). Linear interpolation and constant extrapolation of the optimal values are used to define \(\kappa\) and \(A^{+}\) for all Reynolds numbers. The inverse velocity transformation uses the semi-local wall-normal coordinate \(y^{*}\), so the incompressible data should be interpreted as a function of \(Re_{\tau}^{*}\) rather than \(Re_{\tau}\). _A priori_ analysis is performed as before using compressible DNS data, but with the optimal coefficients selected according to the \(Re_{\tau}^{*}\) observed in the compressible DNS. In Figure 6(a-b), for the case of a turbulent channel flow with \(Re_{\tau}^{*}=190\) and \(M_{b}=1.7\), there is a modest improvement from using the Reynolds-number-dependent coefficients for the incompressible model. This suggests that at low Reynolds numbers, the deviation of DNS data for the incompressible constant-property velocity profile from the nominal law of the wall is on the same order as the deviation of the constant coefficient model and compressible
Figure 3: _A priori_ wall-modeled profiles of velocity (a,c) and temperature (b,d) are plotted versus the wall-normal coordinate. Results are for supersonic pipe flows with panels (a) and (b) characterized by \(Re_{\tau}^{*}=333.5\), \(M_{b}=1.500\), and \(-B_{q}=0.047\) and panels (c) and (d) characterized by \(Re_{\tau}^{*}=668.8\), \(M_{b}=1.500\), and \(-B_{q}=0.044\).
DNS velocity profile. However, there is not a complete collapse of the model with Reynolds-number-dependent coefficients onto the compressible DNS. This is likely attributed to the documented error in the compressible velocity transformation at \(Re_{\tau}^{*}\lesssim 200\) (Griffin _et al._, 2021). In Figure 6(c-d), the case of a turbulent channel flow with \(Re_{\tau}^{*}=590\) and \(M_{b}=1.7\) is considered. The Reynolds number is high enough that the optimal and constant coefficients are similar; thus, the performance of the present model with either set of coefficients is similar. Overall, there is no significant sensitivity to tuning the coefficients, so, for simplicity, we use the constant coefficients of \(\kappa=0.41\) and \(A^{+}=17\) for the remainder of this manuscript.
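The Reynolds-number-dependent calibration is simple to reproduce; the sketch below hard-codes the fitted values quoted above and relies on `numpy.interp`, which performs linear interpolation with constant extrapolation at the ends.

```python
import numpy as np

RE_TAU = np.array([182.0, 543.0, 1000.0, 1990.0, 5190.0])
KAPPA_OPT = np.array([0.400, 0.408, 0.400, 0.391, 0.391])
A_PLUS_OPT = np.array([18.2, 17.4, 17.0, 16.5, 16.5])

def law_of_wall_coefficients(re_tau_star):
    """Least-squares-optimal kappa and A+ versus the semi-local friction
    Reynolds number (linear interpolation, constant extrapolation)."""
    return (np.interp(re_tau_star, RE_TAU, KAPPA_OPT),
            np.interp(re_tau_star, RE_TAU, A_PLUS_OPT))
```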
Two more recently developed compressible wall models are considered. The first is
Figure 4: _A priori_ wall-modeled profiles of velocity (a,c) and temperature (b,d) are plotted versus the wall-normal coordinate. Results are for hypersonic diabatic (isothermal) boundary layers with panels (a) and (b) characterized by \(Re_{\tau}^{*}=5677\), \(M_{e}=11.46\), and \(-B_{q}=0.19\) (cooled wall) and panels (c) and (d) characterized by \(Re_{\tau}^{*}=2328\), \(M_{e}=4.327\), and \(-B_{q}=-0.039\) (heated wall).
Figure 5: _A priori_ modeling errors of the wall shear stress \(\tau_{w}\) (a) and the wall heat flux \(q_{w}\) (b) versus the heat transfer coefficient \(B_{q}\). The model matching data are taken from DNSs of various channel and pipe flows (squares), nearly adiabatic boundary layers (triangles), and diabatic boundary layers (circles).
developed by Yang & Lv (2018); they show that the damping function in the classical model (Eq. (2)) is consistent with the velocity transformation of Van Driest (1951), which has been shown to be less accurate in channel flows than the velocity transformation of Trettel & Larsson (2016). Therefore, Yang & Lv (2018) rewrite the damping function in terms of \(y^{*}\) and show that this makes the model consistent with the Trettel-Larsson transformation. The second additional model considered is proposed by Chen _et al._ (2022), which also uses the semi-local damping function and further replaces the constant turbulent Prandtl number assumption of the classical model with an explicit function of \(y^{*}\). In Figure 7, these two additional wall models are compared with the classical and present wall models. Figures 7(a-d) indicate that all models except the classical one perform well in the channel flows. This behavior is explained by the underlying velocity transformations: the models of Yang & Lv (2018) and Chen _et al._ (2022) use the Trettel-Larsson transformation, and the present model uses the total-stress-based transformation (Griffin _et al._, 2021). Both of these transformations are well established to outperform the van Driest transformation (used by the classical model) in channel flows. In Figures 7(e-f) and 7(g-h), the models are applied to boundary layers with cooled and heated walls, respectively. For both cases, the classical model is the least accurate, likely due to the inaccuracy of the van Driest transformation for boundary layers with strong heat transfer (Griffin _et al._, 2021), as the velocity transformation is the only difference between the classical model and that of Yang & Lv (2018). Also for both cases, the models that use semi-local damping (Yang & Lv, 2018; Chen _et al._, 2022) perform almost identically, suggesting limited sensitivity in these flows to the change in turbulent Prandtl number model proposed by Chen _et al._ (2022). For the heated boundary layer, the present model slightly improves the prediction of the temperature peak and the log slope of the velocity compared to the semi-local damping
Figure 6: _A priori_ wall-modeled profiles of velocity (a,c) and temperature (b,d) are plotted versus the wall-normal coordinate. Results are for supersonic channel flows with panels (a) and (b) characterized by \(Re_{\tau}^{*}=190\), \(M_{b}=1.7\), and \(-B_{q}=0.057\) and panels (c) and (d) characterized by \(Re_{\tau}^{*}=590\), \(M_{b}=1.7\), and \(-B_{q}=0.049\).
models. For the cooled boundary layer, there is a more substantial improvement from the present model for the log slope of the velocity but the temperature profiles are only slightly improved. These improvements of the present model over the semi-local damping models are consistent with the improvements of the total-stress-based transformation over the Trettel-Larsson transformation for boundary layers with strong heat transfer.
## 4 A posteriori WMLES results
In this section, several WMLES simulations are conducted using charLES, a high-fidelity compressible finite-volume code (Bres _et al._, 2018). The numerical method consists of a low-dissipation, approximately entropy-preserving scheme, which utilizes artificial bulk viscosity to capture the solution discontinuities. Additional details about the solver and a summary of validation campaigns are available in Fu _et al._ (2021, 2022).
The WMLESs conducted herein are compressible turbulent channel flows driven with uniform volumetric momentum and energy source terms to achieve the same bulk Mach number \(M_{b}\) and bulk Reynolds number \(Re_{b}\) conditions as the DNS simulations of Trettel & Larsson (2016), as summarized in table 1.
The cases are run on a domain of size \((\pi\times 2\times\pi\sqrt{3}/4)\delta\) with periodic boundary conditions in the streamwise (first) and spanwise (third) dimensions. The mean profiles and fluxes were insensitive to doubling of the streamwise and spanwise domain sizes. Consistent with the DNS simulations, the viscosity is described by \(\mu/\mu_{ref}=(T/T_{ref})^{0.75}\) and \(Pr=0.7\). All cases are initialized from a uniform solution with the target bulk Mach number and Reynolds number, and zero velocity in the wall-normal and spanwise directions. The simulations are allowed to transition from laminar to turbulent states naturally and are run for \(\sim 500\) eddy turnover times \(\delta/u_{\tau}\). To challenge the wall model and isolate the effect of near-wall numerical errors (Kawai & Larsson, 2012), the wall model matching location is placed at \(y_{m}=0.3\delta\) and a coarse grid of 12 points per half channel height is used for all simulations unless otherwise indicated. The computational cost of the present model is similar to that of the classical model. The present model varies between being 7% faster and 32% slower depending on the Reynolds number, matching location, and Mach number. No effort was made to optimize the performance of the present model, so these numbers are just meant to indicate that the approximate cost of the model is similar in the cases tested. In general, modest differences in the cost of a wall model can be efficiently amortized over parallel processors via load balancing that assigns fewer control volumes to processors that contain more boundary faces, but this is not used in the present study.
The velocity and temperature profiles from WMLES are shown in Figure 8 and 9 for turbulent channel flows at four combinations of Reynolds and Mach numbers. In all cases, the present model is significantly more accurate than the classical model for the prediction of velocity and temperature with respect to the reference DNS solutions. For these cases and the others listed in table 1, the errors in the predictions of the wall shear stress and
\begin{table}
\begin{tabular}{l||c|c|c|c|c|c|c|c|c} \hline \(M_{b}\) & 0.6998 & 0.6999 & 1.698 & 1.699 & 1.699 & 2.994 & 2.996 & 2.997 & 3.993 \\ \(Re_{b}\) & 7498 & 11750 & 4495 & 9993 & 15490 & 7486 & 14980 & 23980 & 9979 \\ \(Re_{\tau}\) & 436.6 & 650.9 & 318.6 & 661.6 & 963.6 & 636.4 & 1208 & 1842 & 1010. \\ \(Re_{\tau}^{*}\) & 395.7 & 590.0 & 194.8 & 405.4 & 590.8 & 204.0 & 387.7 & 589.7 & 201.3 \\ \(-100B_{q}\) & 1.061 & 1.009 & 5.668 & 5.273 & 4.942 & 12.92 & 12.15 & 11.50 & 19.04 \\ \end{tabular}
\end{table}
Table 1: Non-dimensional flow parameters for the nine compressible turbulent channel flow cases considered for _a posteriori_ testing within the WMLES framework.
the wall heat flux are shown in Figure 12. The wall model is based on the inversion of the total-stress-based velocity transformation (Griffin _et al._, 2021), which was observed to have the greatest improvement over classical approaches in cases with strong heat transfer. This explains why the errors from the classical wall model grow significantly with the strong heat
Figure 7: _A priori_ wall-modeled profiles of velocity (a,c,e,g) and temperature (b,d,f,h) are plotted versus the wall-normal coordinate for four wall models. Panels (a-d) correspond to the supersonic channel flows presented in Figure 1; panels (e-h) correspond to the hypersonic diabatic boundary layers presented in Figure 4.
transfer, but the errors from the present model are rather small and do not vary with heat flux.
The primary quantities of interest for WMLES are the predictions of the mean profiles and fluxes. The fluctuating parts of LES solutions are not expected to exactly agree with DNS results unless the WMLES is conducted with DNS-like resolution, which is impractical. Nevertheless, the effect of wall models on the fluctuating part of the LES solution is presented for comparison between the present and classical models. Figures 10 and 11 include profiles of the LES resolved turbulent Mach number \(M_{t}=u^{\prime\prime}/\sqrt{\gamma R\widetilde{T}}\) and the LES temperature fluctuations \(T^{\prime\prime}\), where \((\cdot)^{\prime\prime}\) denotes the Favre fluctuation \((\cdot)^{\prime\prime}=(\cdot)-(\tilde{\cdot})\). There is an improvement in the predictions of the fluctuating statistics by the present model compared to those by the classical model. An accurate prediction of second-order statistics is unlikely without an accurate prediction of mean statistics. Thus, the improved second-order statistics of the present model are likely a consequence of its improved mean statistics compared to those of the classical model (see Figures 8 and 9). However, correct prediction of the mean field is not sufficient for the accurate prediction of second-order statistics in LES. In fact, the fluctuations in the LES results are generally over-predicted compared to the DNS data. The over-prediction may be due in part to the wall-blocking effect of the stress-based wall model (Bae _et al._, 2018). Given the coarse resolution of twelve points across the channel half height, numerical errors and subgrid-scale model errors are certainly contributing. The subgrid-scale model has not been adapted for compressibility other than by accounting for variable properties (Moin _et al._, 1991). The turbulent Mach numbers are on the order of 0.3,
Figure 8: Velocity (a,c) and temperature (b,d) profiles from WMLES with the classical (blue) and present (red) wall models. A channel flow with \(M_{b}=1.7\), \(Re_{\tau}^{*}=410\), and \(-B_{q}=0.053\) is shown in panels (a) and (b), and one with \(M_{b}=1.7\), \(Re_{\tau}^{*}=590\), and \(-B_{q}=0.049\) is shown in panels (c) and (d). Within the WMLES framework, the outer solutions are computed by the LES PDE solver, while the inner solutions are computed by the two wall models. These solutions coincide at the LES matching point nearest to \(y_{m}=0.3\delta\), which is indicated with the dashed and dotted lines for the classical and present models, respectively.
which is sufficiently high that modeling for dilatational dissipation is a promising path to further improvements of the fluctuating statistics in the volume of the LES domain. Such research may be pursued independently of the current study focusing on wall modeling and the prediction of mean profiles and fluxes.
### Sensitivity to numerical resolution and the matching location
In WMLES, the wall model exchanges data with the outer LES solver at the matching location. The modeling error in the inner wall modeled equations may grow as the matching distance increases, which motivates placing the matching location near the wall. On the other hand, the matching location should be far enough from the wall in terms of the LES mesh resolution so that the LES solver can resolve the large scales of turbulence at the height of the matching location. Otherwise, numerical errors may contaminate the matching data that is provided as input to the wall model. Kawai & Larsson (2012) demonstrate this trade-off and how LES numerical errors contaminate the wall-modeled solution if the matching distance is on the order of the wall-normal grid resolution. The optimal matching distance will depend on the accuracy of a specific LES solver, but a typical choice is \(y_{m}\geqslant 3\Delta\)(Kawai & Larsson, 2012), where \(\Delta\) is the wall-normal grid spacing near the wall.
To evaluate the convergence and sensitivity of the presently proposed wall model, two types of mesh convergence studies are considered. In the first study, the matching location is held fixed at \(y_{m}=0.3\delta\), which corresponds in semi-local units to \(y_{m}^{*}=186\) and \(y_{m}^{*}=237\) for the present model and classical model cases across all resolutions. For the case of \(M_{b}=3.0\)
Figure 9: Velocity (a,c) and temperature (b,d) profiles from WMLES with the classical (blue) and present (red) wall models. A channel flow with \(M_{b}=3.0\), \(Re_{\pi}^{*}=590\), and \(-B_{q}=0.12\) is shown in panels (a) and (b), and one with \(M_{b}=4.0\), \(Re_{\pi}^{*}=200\), and \(-B_{q}=0.19\) is shown in panels (c) and (d). Within the WMLES framework, the outer solutions are computed by the LES PDE solver, while the inner solutions are computed by the two wall models. These solutions coincide at the LES matching point nearest to \(y_{m}=0.3\delta\), which is indicated with the dashed and dotted lines for the classical and present models, respectively.
and \(Re_{\tau}=1800\), the numerical resolution of the WMLES is varied. In Figure 13, the WMLES solutions are shown for three LES resolutions with 9, 18, and 36 grid points across the channel half-height. The uniform hexagonally close-packed mesh topology with global refinement is employed, resulting in three meshes with \(2.0\times 10^{4}\), \(1.6\times 10^{5}\), and \(1.3\times 10^{6}\) control volumes, respectively (note that the reference DNS uses as many as \(6.4\times 10^{8}\) control volumes). In this study, the LES numerical errors at the matching location are expected to diminish as the resolution is refined, but modeling errors from using the wall model over the domain \(y\in[0,0.3\delta]\) are not expected to change with resolution. For this reason, the classical model shows a large error in the log intercept of the velocity profile that is persistent with refinement and consistent with _a priori_ analysis in Figure 2(a). For the finest resolution with the present model, the grid point nearest to the wall exhibits an error that is persistent with refinement, which is consistent with the observations of (Kawai & Larsson, 2012) and does not affect the accuracy of the simulation since the inner solution is applicable for \(y<y_{m}\). For both the present and classical models, the results are only weakly dependent on the grid resolution. This suggests that the leading source of error for the simulations with the classical wall model is in fact the wall model rather than the numerical or subgrid-scale modeling errors, even on the coarsest simulation with 9 grid points per channel half height.
In the second grid convergence study, the models are tested in the way that WMLES is typically used in practice. That is, the matching distance is moved toward the wall as the grid is refined. In this study, two channel flows with different Reynolds number conditions are considered for three LES resolutions with 12, 24, and 48 grid points across the channel half height. The matching locations are \(y_{m}=0.3\delta\), \(0.15\delta\), and \(0.075\delta\), respectively, which corresponds to \(y_{m}=4\Delta\) for all cases, thus the effect of near-wall LES numerical errors is expected to be minor (Kawai & Larsson, 2012). In Figure 14, the convergence study is
Figure 10: LES turbulent Mach number \(M_{t}\) (a,c) and LES temperature fluctuation \(T^{\prime\prime}\) (b,d) profiles from WMLES with the classical (blue) and present (red) wall models. A channel flow with \(M_{b}=1.7\), \(Re_{\tau}^{*}=410\), and \(-B_{q}=0.053\) is shown in panels (a) and (b), and one with \(M_{b}=1.7\), \(Re_{\tau}^{*}=590\), and \(-B_{q}=0.049\) is shown in panels (c) and (d). The symbols represent the outer solutions computed by the LES PDE solver.
performed for \(M_{b}=3.0\) and \(Re_{\tau}^{*}=590\), and a lower Reynolds number case of \(M_{b}=3.0\) and \(Re_{\tau}^{*}=200\) is shown in Figure 15. In both cases, the accuracy of the present model is relatively high and insensitive to mesh resolution compared to that of the classical model. For the higher Reynolds number test, the matching locations in semi-local units are always in the logarithmic region of the boundary layer. Therefore, the WMLES results are not sensitive to refinement over this range of resolutions. However, for the lower Reynolds number case, the most refined meshes lead to semi-local matching locations \(y_{m}^{*}\) in the buffer region. For the classical model, because the relative error of the modeled \(U^{+}\) versus the DNS \(U^{+}\) is maximal in the region of the buffer layer and early log layer (compare to similar _a priori_ results in
Figure 11: LES turbulent Mach number \(M_{t}\) (a,c) and LES temperature fluctuation \(T^{\prime\prime}\) (b,d) profiles from WMLES with the classical (blue) and present (red) wall models. A channel flow with \(M_{b}=3.0\), \(Re_{\tau}^{*}=590\), and \(-B_{q}=0.12\) is shown in panels (a) and (b), and one with \(M_{b}=4.0\), \(Re_{\tau}^{*}=200\), and \(-B_{q}=0.19\) is shown in panels (c) and (d). The symbols represent the outer solutions computed by the LES PDE solver.
Figure 12: WMLES _a posteriori_ modeling errors for the wall shear stress \(\tau_{w}\) (a) and the wall heat flux \(q_{w}\) (b) versus the non-dimensional heat flux \(B_{q}\). WMLES is conducted using the classical (blue) and present (red) wall models for turbulent channel flows at the nine operating conditions listed in table 1.
Figure 6), the convergence behavior for the classical model is complex in this regime. In other words, as the mesh is refined, although the LES numerical errors are diminishing, the wall modeling errors for the classical model may increase or decrease depending on the matching location since the relative modeling error does not monotonically reduce with wall-normal distance. On the other hand, the outer solution of the present model is relatively accurate irrespective of the matching location because the inner wall-modeled solution agrees well with the DNS solution throughout the viscous sublayer, buffer layer, and log layer (which is consistent with similar a priori results in Figure 6).
## 5 Conclusion
In this work, a wall model is proposed for turbulent wall-bounded flows with heat transfer. The model uses an established ODE description of incompressible flow, transforms that equation to account for compressibility effects, and is closed with an algebraic temperature-velocity relation. The resulting model can accurately estimate the near-wall profiles of temperature and velocity when the matching location is in the inner layer. This model is suitable for deployment as a boundary condition for an outer LES or RANS solver, an inflow generation
Figure 14: _A posteriori_ mesh sensitivity study of a channel flow at \(M_{b}=3.0\) and \(Re^{*}_{\tau}=590\) with matching locations dependent on the grid resolution as \(y_{m}=4\Delta\). The colors indicate semi-local matching distance \(y^{*}_{m}\), which is also indicated with vertical dashed lines. The outer WMLES solutions and the inner wall-modeled velocity profiles are indicated with symbols and dotted curves, respectively, for the present wall model (a) and the classical wall model (b).
Figure 13: _A posteriori_ mesh sensitivity study of a channel flow at \(M_{b}=3.0\) and \(Re^{*}_{\tau}=590\) with the matching location fixed at \(y_{m}=0.3\delta\) for all cases, as indicated by the vertical dashed lines. The colors indicate the numerical resolution \(\Delta\). The outer WMLES solutions and the inner wall-modeled velocity profiles are indicated with symbols and dotted curves, respectively, for the present wall model (a) and the classical wall model (b).
scheme, or the base flow for perturbation methods, possibly with the incompressible model augmented with a wake profile for the outer layer of the boundary layer. The proposed method can only be as accurate as the models on which it is based, namely, the forward velocity transformation and the algebraic temperature-velocity relation. While these models have been widely validated in channel and pipe flows and boundary layers with moderate pressure gradients, further studies in complex flows are warranted, e.g., the developing boundary layers on a blunt body behind a curved shock.
The model is first tested _a priori_ to verify that it can recover the boundary layer velocity and temperature data when provided with matching data from DNS. Numerical results reveal that the model recovers the targeted profiles well, and the predicted wall stress and heat flux are within a few percent of their expected values for a wide database of DNS data for high-Mach-number turbulent channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers in the range of 0.77-11 and semi-local friction Reynolds numbers in the range of 170-5700). The model is also tested _a posteriori_ as a boundary condition for WMLES in turbulent channel flows with bulk Mach numbers \(M_{b}=0.7\)-\(4.0\) and \(Re_{\tau}=320\)-1800. Especially in flows with strong heat transfer, the proposed model is substantially more accurate than the classical ODE-based near-wall model. The superior performance of the present model is due to two key differences with respect to the classical model: 1) the constant turbulent Prandtl number assumption is replaced with a more accurate algebraic temperature-velocity relation and 2) the van Driest velocity transformation is replaced with the total-stress-based velocity transformation (Griffin _et al._, 2021).
## Acknowledgments
Kevin Griffin acknowledges support from the National Defense Science and Engineering Graduate Fellowship, the Stanford Graduate Fellowship, the Stanford Lieberman Fellowship, and the Exascale Computing Project (Grant 17-SC-20-SC), a collaborative effort of two US Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, in support of the nation's exascale computing imperative. Lin Fu acknowledges funding from the Research Grants Council (RGC) of the Government of Hong Kong Special Administrative Region (HKSAR) with RGC/ECS Project (No. 26200222) and from the
Figure 15: _A posteriori_ mesh sensitivity study of a channel flow at \(M_{b}=3.0\) and \(Re_{\tau}^{*}=200\) with matching locations dependent on the grid resolution as \(y_{m}=4\Delta\). The colors indicate semi-local matching distance \(y_{m}^{*}\), which is also indicated with vertical dashed lines. The outer WMLES solutions and the inner wall-modeled velocity profiles are indicated with symbols and dotted curves, respectively, for the present wall model (a) and the classical wall model (b).
Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515011779). Parviz Moin acknowledges support from NASA grant (No. NNX15AU93A). We wish to gratefully acknowledge helpful comments from Sanjeeb T. Bose.
This work was authored in part by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes.
## Declaration of interests
The authors declare that they have no financial or non-financial conflicts of interest.
## Data availability statement
The data that support the findings of this study are available from the corresponding authors upon reasonable request. Matlab code implementing the proposed model will be available in the following public repository after the manuscript is accepted for publication: [https://github.com/kevingriffin1/comp_wm](https://github.com/kevingriffin1/comp_wm)
|
2305.03696 | Hadronic molecules $η_c η_c$ and $χ_{c0}χ_{c0}$ | The fully charmed hadronic scalar molecules $\mathcal{M}_1=\eta_c \eta_c$ and $\mathcal{M}_2=\chi_{c0}\chi_{c0}$ are studied in the context of the QCD sum rule method. The masses $m$, $\widetilde{m}$ and current couplings $f$, $\widetilde{f}$ of these states are calculated using the two-point sum rule approach. The obtained results $m=(6264 \pm 50)~\mathrm{MeV}$ and $\widetilde{m}=(6954 \pm 50)~\mathrm{MeV}$ are employed to determine their decay channels. It is demonstrated that the processes $\mathcal{M}_1\to J/\psi J/\psi$ and $\mathcal{M}_1\to \eta_{c}\eta_{c}$ are kinematically allowed decay modes of $\mathcal{M}_1$. The molecule $\mathcal{M}_2$ decays to $J/\psi J/\psi$, $J/\psi \psi^{\prime}$, $\eta_{c}\eta_{c}$, $\eta_{c}\eta_{c}(2S)$, $\eta_{c}\chi_{c1}(1P)$, and $\chi_{c0}\chi_{c0}$ mesons. The partial widths of all of these processes are evaluated by means of three-point sum rule calculations, which are necessary to extract the strong couplings $g_i$ at the vertices $\mathcal{M}_1 J/\psi J/\psi$, $\mathcal{M}_1\eta_{c}\eta_{c}$, and others. Our estimates for the full widths of the molecules, $\Gamma_{\mathcal{M}_1}=(320 \pm 72)~\mathrm{MeV}$ and $\Gamma_{\mathcal{M}_2}=(138 \pm 18)~\mathrm{MeV}$, as well as their masses, are compared with the parameters of the $X$ resonances discovered by the LHCb-ATLAS-CMS Collaborations in the di-$J/\psi$ and $J/\psi\psi^{\prime}$ invariant mass distributions. We argue that the molecule $\mathcal{M}_1$ can be considered a real candidate for the resonance $X(6200)$. The structure $\mathcal{M}_2$ may be interpreted as $X(6900)$ or, in combination with a scalar tetraquark, as one of its components. | S. S. Agaev, K. Azizi, B. Barsbay, H. Sundu | 2023-05-05T17:16:05Z | http://arxiv.org/abs/2305.03696v2 |

# Hadronic molecules \(\eta_{c}\eta_{c}\) and \(\chi_{c0}\chi_{c0}\)
###### Abstract
The fully charmed hadronic scalar molecules \(\mathcal{M}_{1}=\eta_{c}\eta_{c}\) and \(\mathcal{M}_{2}=\chi_{c0}\chi_{c0}\) are studied in the context of the QCD sum rule method. The masses \(m\), \(\widetilde{m}\) and current couplings \(f\), \(\widetilde{f}\) of these states are calculated using the two-point sum rule approach. The obtained results \(m=(6264\pm 50)\) MeV and \(\widetilde{m}=(6954\pm 50)\) MeV are employed to determine their decay channels. It is demonstrated that the processes \(\mathcal{M}_{1}\to J/\psi J/\psi\) and \(\mathcal{M}_{1}\to\eta_{c}\eta_{c}\) are kinematically allowed decay modes of \(\mathcal{M}_{1}\). The molecule \(\mathcal{M}_{2}\) decays to \(J/\psi J/\psi\), \(J/\psi\psi^{\prime}\), \(\eta_{c}\eta_{c}\), \(\eta_{c}\eta_{c}(2S)\), \(\eta_{c}\chi_{c1}(1P)\), and \(\chi_{c0}\chi_{c0}\) mesons. The partial widths of all of these processes are evaluated by means of three-point sum rule calculations, which are necessary to extract the strong couplings \(g_{i}\) at the vertices \(\mathcal{M}_{1}J/\psi J/\psi\), \(\mathcal{M}_{1}\eta_{c}\eta_{c}\), and others. Our estimates for the full widths of the molecules, \(\Gamma_{\mathcal{M}_{1}}=(320\pm 72)\) MeV and \(\Gamma_{\mathcal{M}_{2}}=(138\pm 18)\) MeV, as well as their masses, are compared with the parameters of the scalar \(X\) resonances discovered by the LHCb-ATLAS-CMS Collaborations in the di-\(J/\psi\) and \(J/\psi\psi^{\prime}\) invariant mass distributions. We argue that the molecule \(\mathcal{M}_{1}\) can be considered a real candidate for the scalar resonance \(X(6200)\). The structure \(\mathcal{M}_{2}\) may be interpreted as the resonance \(X(6900)\) or treated, in conjunction with a scalar tetraquark, as one of its components.
## I Introduction
The discovery of the resonances \(X(6200)\), \(X(6600)\), \(X(6900)\), and \(X(7300)\) in the di-\(J/\psi\) and \(J/\psi\psi^{\prime}\) invariant mass distributions by the LHCb, ATLAS, and CMS Collaborations gave new impetus to investigations of fully charmed and fully beauty four-quark mesons [1; 2; 3]. Heavy exotic mesons composed of two or four \(c\) and \(b\) quarks attracted the interest of researchers already at early stages of multiquark hadron physics. One of the main problems studied in pioneering articles was the stability of such hadrons against strong decays [4; 5; 6; 7]. It was argued that tetraquarks containing a heavy diquark and a light antidiquark may be strong-interaction stable particles, whereas fully heavy structures are unstable against strong decays.
Detailed quantitative explorations led to different conclusions concerning the allowed decay channels of fully charmed or fully beauty exotic mesons. Thus, in accordance with Ref. [8], the scalar and axial-vector tetraquarks \(X_{4c}=cc\overline{c}\overline{c}\) cannot decay to \(J/\psi J/\psi\) mesons, because their masses are less than the di-\(J/\psi\) threshold. Only the mass of the tensor tetraquark \(X_{4c}\) exceeds this limit, so it can be seen in the di-\(J/\psi\) mass distribution. At the same time, all fully beauty structures \(X_{4b}\) are below the \(\Upsilon(1S)\Upsilon(1S)\) threshold and therefore do not transform strongly to these mesons. Tetraquarks \(X_{4c}\) and \(X_{4b}\) with different spin-parities were studied in Ref. [9], in which it was demonstrated that the scalar \(X_{4c}\) decays to \(\eta_{c}\eta_{c}\), \(J/\psi J/\psi\), and \(\eta_{c}\chi_{c1}(1P)\) mesons, whereas \(X_{4b}\) is stable against strong transformations to two bottomonia except for a scalar diquark-antidiquark state \(X_{4b}\) built of pseudoscalar components.
The LHCb results generated new publications aimed at explaining the origin of the observed structures, calculating their masses, and exploring possible decay channels [10; 11; 12; 13; 14; 15]. Thus, the mass of the scalar tetraquark \(X_{4c}\) was estimated at around \(6.44-6.47\) GeV in the framework of the QCD sum rule method [10], and the author interpreted \(X_{4c}\) as a part of a threshold enhancement at \(6.2-6.8\) GeV seen by LHCb in nonresonant di-\(J/\psi\) production. The hadronic molecule \(\chi_{c0}\chi_{c0}\) or/and the diquark-antidiquark state with pseudoscalar constituents were considered as candidates for the resonance \(X(6900)\) in Ref. [11]. Decay channels of the fully heavy tetraquarks to conventional mesons through annihilations of \(Q\overline{Q}\) pairs to gluon(s) were investigated in Refs. [12; 13].
The LHCb data were analyzed in Ref. [14] in the context of a coupled-channel method: It was argued that in the di-\(J/\psi\) system there is a near-threshold state \(X(6200)\) with the spin-parities \(0^{++}\) or \(2^{++}\). The coupled-channel effects may also produce a pole structure, which was identified with the resonance \(X(6900)\) in Ref. [15]. The analysis performed there also allowed the authors to claim the existence of a bound state \(X(6200)\), as well as a broad resonance \(X(6680)\) and a narrow resonance \(X(7200)\).
The discoveries of the ATLAS and CMS experiments intensified analyses of the new heavy \(X\) resonances [16; 17; 18; 19; 20; 21]. In fact, in Ref. [16] the \(X(6200)\) was considered to be the ground-level tetraquark structure with \(J^{\rm PC}=0^{++}\) or \(1^{+-}\), whereas its first radial excitation was assigned as \(X(6600)\). Similar interpretations were extended to the whole family of heavy \(X\) structures in Ref. [17], where the authors suggested considering the resonances \(X(6200)-X(7300)\) as \(1S\), \(1P/2S\), \(1D/2P\), and \(2D/3P/4S\) tetraquark states. Close ideas were proposed in the context of the relativistic quark model as well [18].
It is clear that the wide variety of alternative explanations of the experimental data makes detailed investigations of fully heavy tetraquarks important. In our article [22], we calculated the masses of the scalar diquark-antidiquark states \(X_{4c}\) and \(X_{4b}\) built of axial-vector constituents, and estimated the full width of \(X_{4c}\). Our results for the mass \(m=(6570\pm 55)\) MeV and width \(\Gamma_{4c}=(110\pm 21)\) MeV of the tetraquark \(X_{4c}\) allowed us to consider it as a candidate for the resonance \(X(6600)\). Relying on their decay channels \(X(6600)\to J/\psi J/\psi\) and \(X(7300)\to J/\psi\psi^{\prime}\), we also supposed that \(X(7300)\) may be the \(2S\) excitation of \(X(6600)\): Here, we took into account that \(\psi^{\prime}\) is the \(2S\) excited state of the meson \(J/\psi\). We computed the mass of the fully beauty scalar state \(X_{4b}\) and obtained \(m^{\prime}=(18540\pm 50)\) MeV, which is below the \(\eta_{b}\eta_{b}\) threshold. Hence, \(X_{4b}\) cannot decay to hidden-bottom mesons, i.e., this tetraquark is observable neither in the \(\eta_{b}\eta_{b}\) nor in the \(\Upsilon(1S)\Upsilon(1S)\) mass distribution. The break-up of \(X_{4b}\) to ordinary mesons proceeds through its strong decays to open-bottom mesons, or via weak leptonic and nonleptonic processes.
The scalar diquark-antidiquark states \(T_{4c}\) and \(T_{4b}\) with pseudoscalar components were explored in Ref. [23], in which we computed the spectroscopic parameters of these tetraquarks. We interpreted the tetraquark \(T_{4c}\) with the mass \(m=(6928\pm 50)\) MeV and width \(\widetilde{\Gamma}_{4c}=(112\pm 21)\) MeV as the resonance \(X(6900)\). The mass and width of its beauty counterpart \(T_{4b}\) were found equal to \(m^{\prime}=(18858\pm 50)\) MeV and \(\widetilde{\Gamma}_{4b}=(94\pm 28)\) MeV, respectively.
In the present article, we explore the hadronic molecules \(\mathcal{M}_{1}=\eta_{c}\eta_{c}\) and \(\mathcal{M}_{2}=\chi_{c0}\chi_{c0}\) by computing their masses and widths in order to confront the obtained predictions with both the available experimental data and the results of the diquark-antidiquark model. The masses of these structures are evaluated using the QCD two-point sum rule method. To estimate their widths, we apply the three-point sum rule approach, which is required to extract the strong couplings \(g_{i}\) at vertices such as \(\mathcal{M}_{1}J/\psi J/\psi\) and \(\mathcal{M}_{1}\eta_{c}\eta_{c}\) in the case of \(\mathcal{M}_{1}\).
This paper is structured in the following way: In Section II, we calculate the mass and current coupling of the molecule \(\mathcal{M}_{1}\). We evaluate its full width using the strong processes \(\mathcal{M}_{1}\to J/\psi J/\psi\) and \(\mathcal{M}_{1}\to\eta_{c}\eta_{c}\). In Section III, we analyze in detail the spectroscopic parameters of the molecule \(\mathcal{M}_{2}\). The full width of \(\mathcal{M}_{2}\) is found by considering the decays \(\mathcal{M}_{2}\to J/\psi J/\psi\), \(J/\psi\psi^{\prime}\), \(\eta_{c}\eta_{c}\), \(\eta_{c}\eta_{c}(2S)\), \(\eta_{c}\chi_{c1}(1P)\) and \(\mathcal{M}_{2}\to\chi_{c0}\chi_{c0}\). The last section is reserved for a discussion of the results and concluding remarks. The Appendix contains the explicit expression of the heavy-quark propagator and the various correlation functions employed in the analyses.
## II Mass, current coupling and width of the molecule \(\eta_{c}\eta_{c}\)
In this section, we compute the mass \(m\), current coupling \(f\) and full width \(\Gamma_{\mathcal{M}_{1}}\) of the hadronic molecule \(\mathcal{M}_{1}=\eta_{c}\eta_{c}\) using the QCD sum rule method [24; 25].
### Mass and coupling
To derive the two-point sum rules for the mass \(m\) and current coupling \(f\) of the molecule \(\mathcal{M}_{1}\), we explore the two-point correlation function
\[\Pi(p)=i\int d^{4}xe^{ipx}\langle 0|\mathcal{T}\{J(x)J^{\dagger}(0)\}|0\rangle, \tag{1}\]
where \(\mathcal{T}\) denotes the time-ordered product of two currents, and \(J(x)\) is the interpolating current for the molecule \(\mathcal{M}_{1}\).
The current for \(\mathcal{M}_{1}\) reads
\[J(x)=\overline{c}_{a}(x)i\gamma_{5}c_{a}(x)\overline{c}_{b}(x)i\gamma_{5}c_{b }(x), \tag{2}\]
where \(a\) and \(b\) are color indices. This current describes a hadronic molecule with spin-parities \(J^{\rm PC}=0^{++}\).
The physical side of the sum rule \(\Pi^{\rm Phys}(p)\) can be obtained from Eq. (1) by inserting a complete set of intermediate states with quark content and spin-parities of the molecule \(\mathcal{M}_{1}\), and carrying out integration over \(x\)
\[\Pi^{\rm Phys}(p)=\frac{\langle 0|J|\mathcal{M}_{1}(p)\rangle\langle\mathcal{M}_{ 1}(p)|J^{\dagger}|0\rangle}{m^{2}-p^{2}}+\cdots. \tag{3}\]
In Eq. (3) the ground-state contribution is presented explicitly, whereas higher resonances and continuum terms are denoted by the ellipsis.
The function \(\Pi^{\rm Phys}(p)\) can be rewritten using the matrix element of the molecule \(\mathcal{M}_{1}\)
\[\langle 0|J|\mathcal{M}_{1}(p)\rangle=fm, \tag{4}\]
which leads to the following expression
\[\Pi^{\rm Phys}(p)=\frac{f^{2}m^{2}}{m^{2}-p^{2}}+\cdots. \tag{5}\]
The correlator \(\Pi^{\rm Phys}(p)\) has a Lorentz structure which is proportional to \({\rm I}\). Consequently, the corresponding invariant amplitude \(\Pi^{\rm Phys}(p^{2})\) is equal to the expression on the right-hand side of Eq. (5).
The second component of the sum rule analysis, i.e., the function \(\Pi^{\rm OPE}(p)\), should be calculated in the operator product expansion (OPE) with some accuracy. In terms of the \(c\)-quark propagators \(S_{c}(x)\), the function
\(\Pi^{\rm OPE}(p)\) has the following form
\[\Pi^{\rm OPE}(p)=i\int d^{4}xe^{ipx}\left\{{\rm Tr}\left[\gamma_{5}S_{c}^{ba^{\prime}}(x)\gamma_{5}S_{c}^{a^{\prime}b}(-x)\right]{\rm Tr}\left[\gamma_{5}S_{c}^{ab^{\prime}}(x)\gamma_{5}S_{c}^{b^{\prime}a}(-x)\right]-{\rm Tr}\left[\gamma_{5}S_{c}^{bb^{\prime}}(x)\gamma_{5}S_{c}^{b^{\prime}a}(-x)\gamma_{5}S_{c}^{aa^{\prime}}(x)\gamma_{5}S_{c}^{a^{\prime}b}(-x)\right]-{\rm Tr}\left[\gamma_{5}S_{c}^{ba^{\prime}}(x)\gamma_{5}S_{c}^{a^{\prime}a}(-x)\gamma_{5}S_{c}^{ab^{\prime}}(x)\gamma_{5}S_{c}^{b^{\prime}b}(-x)\right]+{\rm Tr}\left[\gamma_{5}S_{c}^{bb^{\prime}}(x)\gamma_{5}S_{c}^{b^{\prime}b}(-x)\right]{\rm Tr}\left[\gamma_{5}S_{c}^{aa^{\prime}}(x)\gamma_{5}S_{c}^{a^{\prime}a}(-x)\right]\right\}. \tag{6}\]
The propagator \(S_{c}(x)\) contains terms which are linear and quadratic in gluon field strength. As a result, \(\Pi^{\rm OPE}(p)\) does not depend on light quark or mixed quark-gluon vacuum condensates. The explicit expression of \(S_{c}(x)\) can be found in Appendix (see, also Ref. [26]).
The function \(\Pi^{\rm OPE}(p)\) also has a Lorentz structure proportional to \({\rm I}\). We denote the corresponding invariant amplitude as \(\Pi^{\rm OPE}(p^{2})\). To find the sum rule equality, one has to equate the functions \(\Pi^{\rm Phys}(p^{2})\) and \(\Pi^{\rm OPE}(p^{2})\), apply the Borel transformation to suppress contributions of higher resonances and continuum states, and subtract the suppressed terms using the assumption of quark-hadron duality [24; 25]. After these manipulations, the amplitude \(\Pi^{\rm OPE}(p^{2})\) becomes a function of the Borel and continuum subtraction parameters \(M^{2}\) and \(s_{0}\), and will be denoted \(\Pi(M^{2},s_{0})\).
Calculation of \(\Pi(M^{2},s_{0})\) is a next step to derive the sum rules for the mass \(m\) and coupling \(f\). Analyses demonstrate that \(\Pi(M^{2},s_{0})\) has the form
\[\Pi(M^{2},s_{0})=\int_{16m_{c}^{2}}^{s_{0}}ds\rho^{\rm OPE}(s)e^{-s/M^{2}}, \tag{7}\]
where \(\rho^{\rm OPE}(s)\) is the two-point spectral density, found as the imaginary part of the invariant amplitude \(\Pi^{\rm OPE}(p^{2})\). The function \(\rho^{\rm OPE}(s)\) contains a perturbative term \(\rho^{\rm pert.}(s)\) and a dimension-4 nonperturbative contribution \(\sim\langle\alpha_{s}G^{2}/\pi\rangle\). The analytical expression of \(\rho^{\rm OPE}(s)\) is rather cumbersome; therefore, we do not present it here explicitly.
The mass \(m\) and coupling \(f\) can be extracted from the sum rules
\[m^{2}=\frac{\Pi^{\prime}(M^{2},s_{0})}{\Pi(M^{2},s_{0})} \tag{8}\]
and
\[f^{2}=\frac{e^{m^{2}/M^{2}}}{m^{2}}\Pi(M^{2},s_{0}), \tag{9}\]
respectively. In Eq. (8), we introduce the notation \(\Pi^{\prime}(M^{2},s_{0})=d\Pi(M^{2},s_{0})/d(-1/M^{2})\).
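Numerically, Eqs. (7)-(9) are simple weighted moments of the spectral density over the duality window. The sketch below illustrates the extraction; the toy spectral density is a placeholder of our own (the true \(\rho^{\rm OPE}(s)\) is lengthy), so the printed numbers are not the physical results.

```python
import numpy as np

M_C = 1.27                        # c-quark mass, GeV
S_MIN = 16.0 * M_C ** 2           # lower limit of Eq. (7), 16 m_c^2

def rho_toy(s):
    """Placeholder spectral density (the true rho^OPE(s) is cumbersome)."""
    return 1e-6 * s ** 2 * (1.0 - S_MIN / s) ** 3

def trapz(f, s):
    """Trapezoidal quadrature."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))

def borel_moments(M2, s0, n=4000):
    """Pi(M^2, s0) and Pi'(M^2, s0) = dPi/d(-1/M^2), per Eq. (7)."""
    s = np.linspace(S_MIN * (1.0 + 1e-9), s0, n)
    w = rho_toy(s) * np.exp(-s / M2)
    return trapz(w, s), trapz(s * w, s)

def mass_and_coupling(M2, s0):
    """Eqs. (8)-(9): m^2 = Pi'/Pi and f^2 = exp(m^2/M^2) Pi / m^2."""
    pi, pi_prime = borel_moments(M2, s0)
    m2 = pi_prime / pi
    return np.sqrt(m2), np.sqrt(np.exp(m2 / M2) * pi / m2)

# In practice m and f are computed on a grid over the working windows,
# e.g. M2 in [5, 6.5] GeV^2 and s0 in [44, 45] GeV^2, and averaged.
```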
The parameters which enter these sum rules are the gluon vacuum condensate \(\langle\alpha_{s}G^{2}/\pi\rangle\) and the mass of the \(c\) quark. Their numerical values are presented below
\[\langle\frac{\alpha_{s}G^{2}}{\pi}\rangle=(0.012\pm 0.004)~{}{\rm GeV }^{4},\] \[m_{c}=(1.27\pm 0.02)~{}{\rm GeV}. \tag{10}\]
The choice of the working regions for \(M^{2}\) and \(s_{0}\) is another issue in sum rule analyses. These parameters should be determined in such a way that they satisfy the requirement imposed by the pole contribution (PC) and ensure convergence of the operator product expansion. The prevalence of the perturbative contribution over the nonperturbative one, as well as the stability of the extracted physical quantities under variation of these parameters, are also among the important constraints.
Because the only nonperturbative term considered in the present article is \(\sim\langle\alpha_{s}G^{2}/\pi\rangle\), the pole contribution plays a key role in determining the working intervals for \(M^{2}\) and \(s_{0}\). To estimate PC, we use the formula
\[{\rm PC}=\frac{\Pi(M^{2},s_{0})}{\Pi(M^{2},\infty)}, \tag{11}\]
and require fulfillment of the constraint \({\rm PC}\geq 0.5\).
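The same quadrature gives the pole contribution of Eq. (11); a self-contained sketch (again with a toy density standing in for \(\rho^{\rm OPE}(s)\)) is shown below, where the exponential Borel weight makes a large but finite upper limit an adequate proxy for infinity.

```python
import numpy as np

M_C = 1.27
S_MIN = 16.0 * M_C ** 2

def rho_toy(s):
    return 1e-6 * s ** 2 * (1.0 - S_MIN / s) ** 3   # placeholder density

def borel_pi(M2, s0, n=4000):
    s = np.linspace(S_MIN * (1.0 + 1e-9), s0, n)
    w = rho_toy(s) * np.exp(-s / M2)
    return np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(s))

def pole_contribution(M2, s0, s_inf=400.0):
    """Eq. (11): PC = Pi(M^2, s0) / Pi(M^2, infinity)."""
    return borel_pi(M2, s0) / borel_pi(M2, s_inf)

# e.g. scan M2 to locate where pole_contribution(M2, 44.5) drops to 0.5.
```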
The PC is employed to fix the upper limit of the Borel parameter \(M^{2}\). The lower limit of \(M^{2}\), in the case under discussion, is found from the stability of the sum rule results under variation of \(M^{2}\) and from the dominance of the perturbative term. The two values of \(M^{2}\) extracted in this way fix the boundaries of the region where \(M^{2}\) can be varied.
Calculations for the molecule \({\cal M}_{1}\) show that the intervals
\[M^{2}\in[5,6.5]~{}{\rm GeV}^{2},~{}s_{0}\in[44,45]~{}{\rm GeV}^{2}, \tag{12}\]
are suitable regions for the parameters \(M^{2}\) and \(s_{0}\), where they comply with the constraints on the PC and the nonperturbative term. Thus, at \(M^{2}=6.5~{\rm GeV}^{2}\) the pole contribution is \(0.49\), whereas at \(M^{2}=5~{\rm GeV}^{2}\) it becomes equal to \(0.81\). At \(M^{2}=5.5~{\rm GeV}^{2}\), the contribution of the nonperturbative term forms \(\simeq 5\%\) of the correlation function. In Fig. 1, we plot PC as a function of \(M^{2}\) at different \(s_{0}\) to show its changes in the explored
Figure 1: The pole contribution PC as a function of the Borel parameter \(M^{2}\) at different \(s_{0}\). The limit \({\rm PC}=0.5\) is plotted by the horizontal line. The red triangle shows the point, where the mass \(m\) of \({\cal M}_{1}\) has effectively been extracted from the sum rule.
range of \(M^{2}\). It is clear, that the pole contribution overshoots 0.5 for all values of the parameters \(M^{2}\) and \(s_{0}\) from Eq. (12) excluding very small region around of the point \(M^{2}=6.5\) GeV\({}^{2}\).
The mass \(m\) and coupling \(f\) of the molecule \({\cal M}_{1}\) are determined by calculating them at different \(M^{2}\) and \(s_{0}\) from the regions Eq. (12), and averaging the obtained results to find the mean values of these parameters. The final results for \(m\) and \(f\) are
\[m = (6264\pm 50)\ {\rm MeV},\] \[f = (2.12\pm 0.16)\times 10^{-2}\ {\rm GeV}^{4}. \tag{13}\]
The predictions Eq. (13) correspond to the sum rules' results at the point \(M^{2}=5.6\) GeV\({}^{2}\) and \(s_{0}=44.5\) GeV\({}^{2}\), which lies approximately in the middle of the regions Eq. (12). At this point the pole contribution is PC \(\approx 0.68\), which ensures the dominance of the pole term in the results and confirms the ground-state nature of the molecule \({\cal M}_{1}\). The dependence of \(m\) on the parameters \(M^{2}\) and \(s_{0}\) is plotted in Fig. 2.
Our result for \(m\) nicely agrees with the mass of the resonance \(X(6200)\)
\[m^{\rm ATL}=6220\pm 50^{+40}_{-50}\ {\rm MeV}, \tag{14}\]
reported by the ATLAS Collaboration [2]. But for reliable conclusions about the nature of the resonance \(X(6200)\), it is also necessary to estimate the full width of the molecule \({\cal M}_{1}\). Below, we provide the results of the relevant studies.
### Full width
The mass of the hadronic molecule \({\cal M}_{1}\) exceeds the two-meson \(J/\psi J/\psi\) and \(\eta_{c}\eta_{c}\) thresholds 6192 MeV and 5968 MeV, respectively. Hence \(S\)-wave decay channels \({\cal M}_{1}\to J/\psi J/\psi\) and \({\cal M}_{1}\to\eta_{c}\eta_{c}\) are allowed modes of this particle.
#### iii.2.1 Decay \({\cal M}_{1}\to J/\psi J/\psi\)
We start our studies from consideration of the decay \({\cal M}_{1}\to J/\psi J/\psi\). Partial width of this process is determined by the strong coupling \(g_{1}\) at the vertex \({\cal M}_{1}J/\psi J/\psi\). In the framework of the QCD sum rule method \(g_{1}\) can be obtained from analysis of the three-point correlation function
\[\Pi_{\mu\nu}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y }e^{-ipx}\langle 0|{\cal T}\{J_{\mu}^{\psi}(y)\] \[\times J_{\nu}^{\psi}(0)J^{\dagger}(x)\}|0\rangle, \tag{15}\]
where \(J_{\mu}^{\psi}(x)\) is the interpolating current for the vector meson \(J/\psi\)
\[J_{\mu}^{\psi}(x)=\overline{c}_{i}(x)\gamma_{\mu}c_{i}(x), \tag{16}\]
where \(i=1,2,3\) are the color indices. The 4-momentum of the molecule \({\cal M}_{1}\) is \(p\), whereas momenta of the \(J/\psi\) mesons are \(p^{\prime}\) and \(q=p-p^{\prime}\), respectively.
Figure 2: Mass of the hadronic molecule \({\cal M}_{1}\) as a function of the Borel \(M^{2}\) (left), and the continuum threshold \(s_{0}\) parameters (right).
After some calculations, for the physical side of the sum rule, we find
\[\Pi^{\rm Phys}_{\mu\nu}(p,p^{\prime})=g_{1}(q^{2})\frac{fmf_{1}^{2}m_ {1}^{2}}{\left(p^{2}-m^{2}\right)\left(p^{\prime 2}-m_{1}^{2}\right)\left(q^{2}-m_{1 }^{2}\right)}\] \[\times\left[\frac{1}{2}\left(m^{2}-m_{1}^{2}-q^{2}\right)g_{\mu \nu}-q_{\mu}p^{\prime}_{\nu}\right]+\cdots, \tag{17}\]
where \(m_{1}\) and \(f_{1}\) are the mass and decay constant of the \(J/\psi\) meson. To derive Eq. (17), we have isolated the contribution of the ground-state particles from other terms, and made use of the following matrix elements
\[\langle 0|J_{\mu}^{\psi}|J/\psi(p)\rangle=f_{1}m_{1}\varepsilon_{\mu}(p), \tag{18}\]
and
\[\langle J/\psi(p^{\prime})J/\psi(q)|{\cal M}_{1}(p)\rangle=g_{1}( q^{2})\left[q\cdot p^{\prime}\varepsilon^{*}(p^{\prime})\cdot\varepsilon^{*}(q)\right.\] \[\left.-q\cdot\varepsilon^{*}(p^{\prime})p^{\prime}\cdot \varepsilon^{*}(q)\right]. \tag{19}\]
The correlator \(\Pi^{\rm Phys}_{\mu\nu}(p,p^{\prime})\) contains two Lorentz structures that can be used to obtain the sum rule for \(g_{1}(q^{2})\). We choose to work with the term \(\sim g_{\mu\nu}\) and denote the relevant invariant amplitude by \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\). The Borel transformations over \(p^{2}\) and \(p^{\prime 2}\) of the amplitude \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) yield
\[{\cal B}\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})=g_{1}(q^{2}) fmf_{1}^{2}m_{1}^{2}\] \[\times\frac{m^{2}-m_{1}^{2}-q^{2}}{2(q^{2}-m_{1}^{2})}e^{-m^{2}/ M_{1}^{2}}e^{-m_{1}^{2}/M_{2}^{2}}+\cdots. \tag{20}\]
The correlation function \(\Pi_{\mu\nu}(p,p^{\prime})\) calculated in terms of \(c\)-quark propagators reads
\[\Pi^{\rm OPE}_{\mu\nu}(p,p^{\prime})=-2\int d^{4}xd^{4}ye^{ip^{ \prime}y}e^{-ipx}{\rm Tr}\left[\gamma_{\mu}S_{c}^{ib}(y-x)\right.\] \[\left.\times\gamma_{5}S_{c}^{bj}(x)\gamma_{\nu}S_{c}^{ja}(-x) \gamma_{5}S_{c}^{ai}(x-y)\right]. \tag{21}\]
The invariant amplitude \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) which corresponds to the term \(\sim g_{\mu\nu}\) in Eq. (21) forms the QCD side of the sum rule. Having equated the amplitudes \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) and \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) and performed the doubly Borel transforms and continuum subtractions, one can find the sum rule for the form factor \(g_{1}(q^{2})\)
\[g_{1}(q^{2})=\frac{2}{fmf_{1}^{2}m_{1}^{2}}\frac{q^{2}-m_{1}^{2} }{m^{2}-m_{1}^{2}-q^{2}}\] \[\times e^{m^{2}/M_{1}^{2}}e^{m_{1}^{2}/M_{2}^{2}}\Pi({\bf M}^{2}, \mathbf{s}_{0},q^{2}). \tag{22}\]
Here,
\[\Pi({\bf M}^{2},\mathbf{s}_{0},q^{2})=\int_{16m_{c}^{2}}^{s_{0}}ds\int_{4m_{c}^{2}}^{s_{0}^{\prime}}ds^{\prime}\rho(s,s^{\prime},q^{2})\] \[\times e^{-s/M_{1}^{2}}e^{-s^{\prime}/M_{2}^{2}} \tag{23}\]
is the function \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) after the Borel transformations and subtraction procedures. It is expressed through the spectral density \(\rho(s,s^{\prime},q^{2})\), which is calculated as the relevant imaginary part of \(\Pi^{\rm OPE}_{\mu\nu}(p,p^{\prime})\). In Eq. (23) \({\bf M}^{2}=(M_{1}^{2},M_{2}^{2})\) and \(\mathbf{s}_{0}=(s_{0},s_{0}^{\prime})\) are the Borel and continuum threshold parameters, respectively.
The form factor \(g_{1}(q^{2})\) depends on the masses and decay constants of the molecule \({\cal M}_{1}\) and meson \(J/\psi\), which are input parameters in calculations. Their numerical values are collected in Table 1. Additionally, this table contains the parameters of the \(\psi^{\prime}\), \(\eta_{c}\), \(\eta_{c}(2S)\), \(\chi_{c1}(1P)\), and \(\chi_{c0}(1P)\) mesons which are necessary to explore decay modes of the hadronic molecules \({\cal M}_{1}\) and \({\cal M}_{2}\). For the masses of the particles, we utilize information from Ref. [27]. For the decay constant of the meson \(J/\psi\), we employ the experimental value from Ref. [28]. For \(f_{\eta_{c}}\) and \(f_{\eta_{c}(2S)}\), we use the results of QCD lattice simulations [29; 30], whereas for \(f_{\chi_{c1}}\) and \(f_{\chi_{c0}}\) we use the sum rule predictions from Refs. [31; 32].
\begin{table}
\begin{tabular}{|c|c|} \hline \hline Parameters & Values (in MeV) \\ \hline \hline \(m_{1}[m_{J/\psi}]\) & \(3096.900\pm 0.006\) \\ \(f_{1}[f_{J/\psi}]\) & \(409\pm 15\) \\ \(m_{1}^{*}[m_{\psi^{\prime}}]\) & \(3686.10\pm 0.06\) \\ \(f_{1}^{*}[f_{\psi^{\prime}}]\) & \(279\pm 8\) \\ \(m_{2}[m_{\eta_{c}}]\) & \(2983.9\pm 0.4\) \\ \(f_{2}[f_{\eta_{c}}]\) & \(398.1\pm 1.0\) \\ \(m_{2}^{*}[m_{\eta_{c}(2S)}]\) & \(3637.5\pm 1.1\) \\ \(f_{2}^{*}[f_{\eta_{c}(2S)}]\) & \(331\) \\ \(m_{3}[m_{\chi_{c1}}]\) & \(3510.67\pm 0.05\) \\ \(f_{3}[f_{\chi_{c1}}]\) & \(344\pm 27\) \\ \(m_{4}[m_{\chi_{c0}}]\) & \(3414.71\pm 0.30\) \\ \(f_{4}[f_{\chi_{c0}}]\) & \(343\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Masses and decay constants of the various \(\overline{c}c\) mesons which have been used in numerical computations.
Figure 3: The sum rule predictions and fit functions for the strong couplings \(g_{1}(Q^{2})\) (upper line) and \(g_{2}(Q^{2})\) (lower line). The red diamond and green star denote the points \(Q^{2}=-m_{1}^{2}\) and \(Q^{2}=-m_{2}^{2}\), respectively.
To carry out numerical computations, it is also necessary to choose the working regions for the parameters \({\bf M}^{2}\) and \({\bf s}_{0}\). For \(M_{1}^{2}\) and \(s_{0}\), associated with the \({\cal M}_{1}\) channel, we apply the working windows of Eq. (12). The parameters \((M_{2}^{2},\ s_{0}^{\prime})\) for the \(J/\psi\) channel are varied inside the intervals
\[M_{2}^{2}\in[4,5]\ {\rm GeV}^{2},\ s_{0}^{\prime}\in[12,13]\ {\rm GeV}^{2}. \tag{24}\]
It is a fact that the sum rule approach leads to reliable predictions in the deep-Euclidean region \(q^{2}<0\). For our purposes, it is suitable to introduce a new variable \(Q^{2}=-q^{2}\) and denote the obtained function by \(g_{1}(Q^{2})\). The interval of \(Q^{2}\) studied by the sum rule analysis covers the region \(Q^{2}=1-10\ {\rm GeV}^{2}\). The results of the analysis are plotted in Fig. 3.
But the width of the decay \({\cal M}_{1}\to J/\psi J/\psi\) is determined by the form factor \(g_{1}(q^{2})\) at the mass shell \(q^{2}=m_{1}^{2}\), i.e., one has to find \(g_{1}(Q^{2}=-m_{1}^{2})\). To overcome this problem, we use a fit function \({\cal G}_{1}(Q^{2})\) which at momenta \(Q^{2}>0\) gives the same values as the sum rule predictions, but can be extrapolated to the region of \(Q^{2}<0\). In this paper, we employ the functions \({\cal G}_{i}(Q^{2})\)
\[{\cal G}_{i}(Q^{2})={\cal G}_{i}^{0}{\rm exp}\left[c_{i}^{1}\frac{Q^{2}}{m^{2} }+c_{i}^{2}\left(\frac{Q^{2}}{m^{2}}\right)^{2}\right] \tag{25}\]
with parameters \({\cal G}_{i}^{0}\), \(c_{i}^{1}\), and \(c_{i}^{2}\).
Calculations demonstrate that \({\cal G}_{1}^{0}=3.41\ {\rm GeV}^{-1}\), \(c_{1}^{1}=2.18\), and \(c_{1}^{2}=-2.21\) lead to reasonable agreement with the sum rule's data for \(g_{1}(Q^{2})\) depicted in Fig. 3. At the mass shell \(q^{2}=m_{1}^{2}\) the function \({\cal G}_{1}(Q^{2})\) is equal to
\[g_{1}\equiv{\cal G}_{1}(-m_{1}^{2})=(1.75\pm 0.41)\ {\rm GeV}^{-1}. \tag{26}\]
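Because the fit Eq. (25) is fully specified by the triple \(({\cal G}_{i}^{0},c_{i}^{1},c_{i}^{2})\), the on-shell value Eq. (26) is straightforward to cross-check; a minimal Python sketch with the parameters quoted above reads

```python
import math

m = 6.264    # mass of M1 from Eq. (13), GeV
m1 = 3.0969  # J/psi mass from Table 1, GeV

def G(Q2, G0, c1, c2):
    """Extrapolating function of Eq. (25); Q^2 in GeV^2, G0 in GeV^-1."""
    x = Q2 / m**2
    return G0 * math.exp(c1 * x + c2 * x**2)

# Evaluate the fit at the mass shell Q^2 = -m1^2
g1 = G(-m1**2, G0=3.41, c1=2.18, c2=-2.21)
print(f"g1 = G1(-m1^2) = {g1:.2f} GeV^-1")   # reproduces 1.75 of Eq. (26)
```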
The partial width of the process \({\cal M}_{1}\to J/\psi J/\psi\) can be obtained by means of the following formula
\[\Gamma\left[{\cal M}_{1}\to J/\psi J/\psi\right]=g_{1}^{2}\frac{\lambda_{1}}{ 8\pi}\left(\frac{m_{1}^{4}}{m^{2}}+\frac{2\lambda_{1}^{2}}{3}\right), \tag{27}\]
where \(\lambda_{1}=\lambda(m,m_{1},m_{1})\) and
\[\lambda(a,b,c)=\frac{\sqrt{a^{4}+b^{4}+c^{4}-2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2 })}}{2a}. \tag{28}\]
It is easy to find that
\[\Gamma\left[{\cal M}_{1}\to J/\psi J/\psi\right]=(142\pm 47)\ {\rm MeV}. \tag{29}\]
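The arithmetic of Eqs. (27) and (28) is easy to verify numerically; with the central values of \(m\), \(m_{1}\) and \(g_{1}\), the short sketch below reproduces the quoted width.

```python
import math

def lam(a, b, c):
    """The lambda function of Eq. (28); all arguments in GeV."""
    return math.sqrt(a**4 + b**4 + c**4
                     - 2 * (a**2 * b**2 + a**2 * c**2 + b**2 * c**2)) / (2 * a)

m, m1, g1 = 6.264, 3.0969, 1.75   # GeV, GeV, GeV^-1 (central values)

l1 = lam(m, m1, m1)
width = g1**2 * l1 / (8 * math.pi) * (m1**4 / m**2 + 2 * l1**2 / 3)  # Eq. (27)
print(f"Gamma[M1 -> J/psi J/psi] = {1000 * width:.0f} MeV")  # -> 142 MeV
```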
#### iii.2.2 Process \({\cal M}_{1}\to\eta_{c}\eta_{c}\)
The process \({\cal M}_{1}\to\eta_{c}\eta_{c}\) is another decay channel of the hadronic molecule \({\cal M}_{1}\). Investigation of this process proceeds, with some modifications, in accordance with the scheme explained above. The strong coupling \(g_{2}\) that describes the vertex \({\cal M}_{1}\eta_{c}\eta_{c}\) is extracted from the correlation function
\[\Pi(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx} \langle 0|{\cal T}\{J^{\eta_{c}}(y)\] \[\times J^{\eta_{c}}(0)J^{\dagger}(x)\}|0\rangle, \tag{30}\]
where
\[J^{\eta_{c}}(x)=\overline{c}_{i}(x)i\gamma_{5}c_{i}(x), \tag{31}\]
is the interpolating current for the meson \(\eta_{c}\).
The physical side of the sum rule for the form factor \(g_{2}(q^{2})\) is derived by separating the contribution of the ground-state and the effects of the higher states and continuum from each other. Then, the correlation function (30) can be presented in the following form
\[\Pi^{\rm Phys}(p,p^{\prime})=\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(p ^{\prime})\rangle}{{p^{\prime}}^{2}-m_{2}^{2}}\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(q) \rangle}{q^{2}-m_{2}^{2}}\] \[\times\langle\eta_{c}(p^{\prime})\eta_{c}(q)|{\cal M}_{1}(p) \rangle\frac{\langle{\cal M}_{1}(p)|J^{\dagger}|0\rangle}{p^{2}-m^{2}}+\cdots, \tag{32}\]
with \(m_{2}\) being the mass of the \(\eta_{c}\) meson.
We define the vertex composed of a scalar and two pseudoscalar particles by means of the formula
\[\langle\eta_{c}(p^{\prime})\eta_{c}(q)|{\cal M}_{1}(p)\rangle=g_{2}(q^{2})p \cdot p^{\prime}. \tag{33}\]
To rewrite the correlator \(\Pi^{\rm Phys}(p,p^{\prime})\) in terms of physical parameters of particles \({\cal M}_{1}\) and \(\eta_{c}\), we also use the matrix elements Eq. (4) and
\[\langle 0|J^{\eta_{c}}|\eta_{c}\rangle=\frac{f_{2}m_{2}^{2}}{2m_{c}}, \tag{34}\]
where \(f_{2}\) is the decay constant of the \(\eta_{c}\) meson. The correlation function \(\Pi^{\rm Phys}(p,p^{\prime})\) then becomes equal to
\[\Pi^{\rm Phys}(p,p^{\prime})=g_{2}(q^{2})\frac{fmf_{2}^{2}m_{2}^{ 4}}{4m_{c}^{2}\left(p^{2}-m^{2}\right)\left(p^{\prime 2}-m_{2}^{2}\right)}\] \[\times\frac{m^{2}+m_{2}^{2}-q^{2}}{2(q^{2}-m_{2}^{2})}+\cdots. \tag{35}\]
The function \(\Pi^{\rm Phys}(p,p^{\prime})\) has a Lorentz structure which is proportional to I, hence the right-hand side of Eq. (35) is the corresponding invariant amplitude \(\widehat{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\). We also find the function \(\Pi^{\rm OPE}(p,p^{\prime})\)
\[\Pi^{\rm OPE}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\left\{{\rm Tr}\left[\gamma_{5}S_{c}^{ia}(y-x)\right.\right.\] \[\left.\times\gamma_{5}S_{c}^{ai}(x-y)\right]{\rm Tr}\left[\gamma_{5}S_{c}^{jb}(-x)\gamma_{5}S_{c}^{bj}(x)\right]\] \[-{\rm Tr}\left[\gamma_{5}S_{c}^{ia}(y-x)\gamma_{5}S_{c}^{aj}(x)\gamma_{5}S_{c}^{jb}(-x)\right.\] \[\left.\left.\times\gamma_{5}S_{c}^{bi}(x-y)\right]\right\}. \tag{36}\]
The invariant amplitude \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) corresponding to this correlator forms the QCD side of the sum rule, and the sum rule for the form factor \(g_{2}(q^{2})\) takes the form
\[g_{2}(q^{2})=\frac{8m_{c}^{2}}{fmf_{2}^{2}m_{2}^{4}}\frac{q^{2}-m_{2}^{2}}{m^{2}+m_{2}^{2}-q^{2}}\] \[\times e^{m^{2}/M_{1}^{2}}e^{m_{2}^{2}/M_{2}^{2}}\Pi({\bf M}^{2},{\bf s}_{0},q^{2}), \tag{37}\]
where \(\Pi({\bf M}^{2},{\bf s}_{0},q^{2})\) is the doubly Borel-transformed and continuum-subtracted amplitude of this channel, i.e., it has the form Eq. (23) with the corresponding spectral density.
Numerical computations are carried out using the parameters of the meson \(\eta_{c}\) from Table 1. The Borel and continuum subtraction parameters \(M_{1}^{2}\) and \(s_{0}\) in the \(\mathcal{M}_{1}\) channel are chosen as in Eq. (12), whereas for \(M_{2}^{2}\) and \(s_{0}^{\prime}\) that correspond to the \(\eta_{c}\) channel, we employ
\[M_{2}^{2}\in[3.5,4.5]\ \mathrm{GeV}^{2},\ s_{0}^{\prime}\in[11,12]\ \mathrm{GeV}^{2}. \tag{38}\]
The fit function \(\mathcal{G}_{2}(Q^{2})\) has the following parameters: \(\mathcal{G}_{2}^{0}=1.52\ \mathrm{GeV}^{-1}\), \(c_{2}^{1}=2.84\), and \(c_{2}^{2}=-2.73\). For the strong coupling \(g_{2}\), we get
\[g_{2}\equiv\mathcal{G}_{2}(-m_{2}^{2})=(6.9\pm 1.5)\times 10^{-1}\ \mathrm{GeV}^{-1}. \tag{39}\]
The width of the process \(\mathcal{M}_{1}\to\eta_{c}\eta_{c}\) is determined by means of the formula
\[\Gamma\left[\mathcal{M}_{1}\to\eta_{c}\eta_{c}\right]=g_{2}^{2}\frac{m_{2}^{2 }\lambda_{2}}{8\pi}\left(1+\frac{\lambda_{2}^{2}}{m_{2}^{2}}\right), \tag{40}\]
where \(\lambda_{2}=\lambda(m,m_{2},m_{2})\). Finally, we obtain
\[\Gamma\left[\mathcal{M}_{1}\to\eta_{c}\eta_{c}\right]=(178\pm 55)\ \mathrm{MeV}. \tag{41}\]
The parameters of the decays \(\mathcal{M}_{1}\to J/\psi J/\psi\) and \(\mathcal{M}_{1}\to\eta_{c}\eta_{c}\) are shown in Table 2.
Based on these results, it is not difficult to find that
\[\Gamma_{\mathcal{M}_{1}}=(320\pm 72)\ \mathrm{MeV}, \tag{42}\]
which is the full width of the hadronic molecule \(\mathcal{M}_{1}\).
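The same kind of check applies to Eq. (40), after which the two channels can be combined. Adding the errors in quadrature is our assumption about how the uncertainty of Eq. (42) was obtained; the central values reproduce Eqs. (41) and (42) up to rounding of the inputs.

```python
import math

def lam(a, b, c):
    """The lambda function of Eq. (28)."""
    return math.sqrt(a**4 + b**4 + c**4
                     - 2 * (a**2 * b**2 + a**2 * c**2 + b**2 * c**2)) / (2 * a)

m, m2, g2 = 6.264, 2.9839, 0.69   # GeV, GeV, GeV^-1 (central values)

l2 = lam(m, m2, m2)
w_eta = g2**2 * m2**2 * l2 / (8 * math.pi) * (1 + l2**2 / m2**2)   # Eq. (40)
print(f"Gamma[M1 -> eta_c eta_c] = {1000 * w_eta:.0f} MeV")        # ~ 177

total = 142 + 1000 * w_eta          # central values of the two channels
error = math.hypot(47, 55)          # quadrature combination (assumed)
print(f"Gamma_M1 ~ {total:.0f} +- {error:.0f} MeV")                # ~ 319 +- 72
```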
## III Hadronic Molecule \(\chi_{c0}\chi_{c0}\)
This part of the article is devoted to thorough investigations of the molecule \(\mathcal{M}_{2}=\chi_{c0}\chi_{c0}\), which implies calculation of the mass \(\widetilde{m}\) and coupling \(\widetilde{f}\), as well as of the full width \(\Gamma_{\mathcal{M}_{2}}\) of this compound by employing its numerous decay modes.
### Spectroscopic parameters \(\widetilde{m}\) and \(\widetilde{f}\)
In the case of the molecule \(\mathcal{M}_{2}=\chi_{c0}\chi_{c0}\) the two-point correlation function that should be analyzed has the form
\[\widetilde{\Pi}(p)=i\int d^{4}xe^{ipx}\langle 0|\mathcal{T}\{\widetilde{J}(x) \widetilde{J}^{\dagger}(0)\}|0\rangle, \tag{43}\]
where \(\widetilde{J}(x)\) is the interpolating current for the molecule \(\mathcal{M}_{2}\). We treat \(\mathcal{M}_{2}\) as a hadronic state built of the scalar mesons \(\chi_{c0}\chi_{c0}\); therefore, we define the relevant interpolating current as
\[\widetilde{J}(x)=\overline{c}_{a}(x)c_{a}(x)\overline{c}_{b}(x)c_{b}(x). \tag{44}\]
The physical side of the sum rule
\[\widetilde{\Pi}^{\mathrm{Phys}}(p)=\frac{\widetilde{f}^{2}\widetilde{m}^{2}}{ \widetilde{m}^{2}-p^{2}}+\cdots, \tag{45}\]
does not differ from Eq. (5), but \(\widetilde{m}\) and \(\widetilde{f}\) are now the mass and coupling of the molecule \(\mathcal{M}_{2}\) introduced through the matrix element
\[\langle 0|\widetilde{J}|\mathcal{M}_{2}(p)\rangle=\widetilde{f}\widetilde{m}. \tag{46}\]
The amplitude \(\widetilde{\Pi}^{\mathrm{Phys}}(p^{2})\) required for the following analysis is given by the expression in the right-hand side of Eq. (45).
The correlation function \(\widetilde{\Pi}^{\mathrm{OPE}}(p)\) computed using the \(c\)-quark propagators is written down in Eq. (A.3). The sum rules for \(\widetilde{m}\) and \(\widetilde{f}\) are determined by Eqs. (8) and (9) with evident replacements.
The working windows for the Borel and continuum subtraction parameters \(M^{2}\) and \(s_{0}\) are:
\[M^{2}\in[5.5,7]\ \mathrm{GeV}^{2},\ s_{0}\in[54,55]\ \mathrm{GeV}^{2}. \tag{47}\]
At \(M^{2}=5.5\ \mathrm{GeV}^{2}\) and \(7\ \mathrm{GeV}^{2}\) the pole contribution amounts to \(\mathrm{PC}=0.75\) and \(0.48\), respectively. The pole contribution changes within the limits
\[0.75\geq\mathrm{PC}\geq 0.48. \tag{48}\]
Averaged over \(s_{0}\), the pole contribution satisfies \(\mathrm{PC}\geq 0.5\). The dimension-4 term is negative and constitutes \(\simeq 19.9\%\) of the correlator.
The mass and current coupling of \(\mathcal{M}_{2}\) are:
\[\widetilde{m} = (6954\pm 50)\ \mathrm{MeV},\] \[\widetilde{f} = (1.71\pm 0.12)\times 10^{-2}\ \mathrm{GeV}^{4}. \tag{49}\]
These results are obtained as mean values of \(\widetilde{m}\) and \(\widetilde{f}\) averaged over the regions Eq. (47). They effectively correspond to the sum rule predictions at the point \(M^{2}=6.2\ \mathrm{GeV}^{2}\) and \(s_{0}=54.5\ \mathrm{GeV}^{2}\), where \(\mathrm{PC}=0.64\). In Fig. 4, we plot \(\widetilde{m}\) as a function of \(M^{2}\) and \(s_{0}\).
### Decays of \({\cal M}_{2}\)
The prediction for the mass of the molecule \({\cal M}_{2}\) determines its kinematically allowed decay channels. First of all, these are the processes \({\cal M}_{2}\to J/\psi J/\psi\) and \({\cal M}_{2}\to J/\psi\psi^{\prime}\). The mass \(\widetilde{m}\) also satisfies the kinematical restrictions for production of \(\eta_{c}\eta_{c}\) and \(\eta_{c}\eta_{c}(2S)\) pairs. The molecule \({\cal M}_{2}\) can decay to \(\eta_{c}\chi_{c1}(1P)\) and \(\chi_{c0}\chi_{c0}\) mesons as well. The decay \({\cal M}_{2}\to\eta_{c}\chi_{c1}(1P)\) is a \(P\)-wave process, whereas the other ones are \(S\)-wave modes.
### \({\cal M}_{2}\to J/\psi J/\psi\) and \({\cal M}_{2}\to J/\psi\psi^{\prime}\)
The three-point sum rules for the strong form factors \(g_{3}(q^{2})\) and \(g_{3}^{*}(q^{2})\), which describe the interaction of particles at the vertices \({\cal M}_{2}J/\psi J/\psi\) and \({\cal M}_{2}J/\psi\psi^{\prime}\), respectively, can be extracted from studies of the correlation function
\[\widetilde{\Pi}_{\mu\nu}(p,p^{\prime}) = i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J _{\mu}^{\psi}(y) \tag{50}\] \[\times J_{\nu}^{\psi}(0)\widetilde{J}^{\dagger}(x)\}|0\rangle.\]
Firstly, we express \(\widetilde{\Pi}_{\mu\nu}(p,p^{\prime})\) using the physical parameters of particles involved in these decays. The molecule \({\cal M}_{2}\) can decay to \(J/\psi J/\psi\) and \(J/\psi\psi^{\prime}\) mesons, therefore in \(\widetilde{\Pi}_{\mu\nu}(p,p^{\prime})\) we isolate contributions of the particles \(J/\psi\) and \(\psi^{\prime}\) from effects of higher resonances and continuum states. Then, the physical side \(\widetilde{\Pi}_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) of the sum rule is determined by Eq. (A.4). It can be rewritten using the matrix elements of the particles \({\cal M}_{2}\), \(J/\psi\) and \(\psi^{\prime}\)
\[\widetilde{\Pi}_{\mu\nu}^{\rm Phys}(p,p^{\prime})=g_{3}(q^{2})\frac{\widetilde{f}\widetilde{m}f_{1}^{2}m_{1}^{2}}{(p^{2}-\widetilde{m}^{2})\left(p^{\prime 2}-m_{1}^{2}\right)\left(q^{2}-m_{1}^{2}\right)}\] \[\times\left[\frac{1}{2}\left(\widetilde{m}^{2}-m_{1}^{2}-q^{2}\right)g_{\mu\nu}-q_{\mu}p_{\nu}^{\prime}\right]\] \[+g_{3}^{*}(q^{2})\frac{\widetilde{f}\widetilde{m}f_{1}m_{1}f_{1}^{*}m_{1}^{*}}{(p^{2}-\widetilde{m}^{2})\left(p^{\prime 2}-m_{1}^{*2}\right)\left(q^{2}-m_{1}^{2}\right)}\] \[\times\left[\frac{1}{2}\left(\widetilde{m}^{2}-m_{1}^{*2}-q^{2}\right)g_{\mu\nu}-q_{\mu}p_{\nu}^{\prime}\right]+\cdots, \tag{51}\]
where \(m_{1}^{*}\) and \(f_{1}^{*}\) are the mass and decay constant of the meson \(\psi^{\prime}\). In what follows, we use the component of \(\widetilde{\Pi}_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) that is proportional to \(g_{\mu\nu}\), and denote the relevant invariant amplitude by \(\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\).
The second ingredient of the sum rules, i.e., the correlation function \(\widetilde{\Pi}_{\mu\nu}^{\rm OPE}(p,p^{\prime})\), is presented in Eq. (A.5). An amplitude \(\widetilde{\Pi}^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) which corresponds to the term \(\sim g_{\mu\nu}\) in \(\widetilde{\Pi}_{\mu\nu}^{\rm OPE}(p,p^{\prime})\) establishes the QCD side of the sum rules. By equating the amplitudes \(\widetilde{\Pi}^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) and \(\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\), applying the Borel transformations and carrying out continuum subtractions, one can find the sum rules for the form factors \(g_{3}(q^{2})\) and \(g_{3}^{*}(q^{2})\). Let us note that after these manipulations \(\widetilde{\Pi}^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) takes the form Eq. (23) with a new spectral density \(\widetilde{\rho}(s,s^{\prime},q^{2})\).
To find the form factors \(g_{3}(q^{2})\) and \(g_{3}^{*}(q^{2})\), we adopt the following approach. At the first stage, we compute the form factor \(g_{3}(q^{2})\) by choosing in the \(J/\psi\) channel \(4m_{c}^{2}<s_{0}^{\prime}<m_{1}^{*2}\). This means that we exclude the second term in Eq. (51) from the analysis by including it into higher resonances and continuum states. As a result, the physical side of the sum rule contains a contribution coming only from the ground-state particles. The sum rule for the form factor \(g_{3}(q^{2})\) is determined by Eq. (22) after the substitutions \(\Pi({\bf M}^{2},{\bf s}_{0},q^{2})\to\widetilde{\Pi}({\bf M}^{2},{\bf s}_{0},q^{2})\) and \(fm\to\widetilde{f}\widetilde{m}\). At the second stage of computations, we fix \(s_{0}^{*\prime}>m_{1}^{*2}\) and take into account the second term in Eq. (51). Afterwards, using the results obtained for \(g_{3}(q^{2})\), we determine \(g_{3}^{*}(q^{2})\).
In numerical computations the working regions for \(M_{1}^{2}\) and \(s_{0}\) in the \({\cal M}_{2}\) channel are chosen as in Eq. (47). The parameters \((M_{2}^{2},\ s_{0}^{\prime})\) for the \(J/\psi\) channel are varied within the limits given by Eq. (24). The sum rule calculations are carried out in the deep-Euclidean region \(q^{2}=-(1\div 10)\ {\rm GeV}^{2}\). The fit function \({\cal G}_{3}(Q^{2})\) necessary to extrapolate these data to the region of \(q^{2}>0\) has the parameters \({\cal G}_{3}^{0}=0.87\ {\rm GeV}^{-1}\), \(c_{3}^{1}=3.03\), and \(c_{3}^{2}=-3.64\). At the mass shell \(q^{2}=m_{1}^{2}\) this function determines the strong coupling \(g_{3}\)
\[g_{3}\equiv{\cal G}_{3}(-m_{1}^{2})=(4.1\pm 0.8)\times 10^{-1}\ {\rm GeV}^{-1}. \tag{52}\]
Partial width of the process \({\cal M}_{2}\to J/\psi J/\psi\) can be found by means of Eq. (27) after substitutions \(g_{1}\to g_{3}\), \(m^{2}\to\widetilde{m}^{2}\) and \(\lambda_{1}\to\lambda_{3}=\lambda(\widetilde{m},m_{1},m_{1})\). It is not difficult to get
\[\Gamma\left[{\cal M}_{2}\to J/\psi J/\psi\right]=(38\pm 11)\ {\rm MeV}. \tag{53}\]
The process \({\cal M}_{2}\to J/\psi\psi^{\prime}\) can be studied in accordance with a scheme described above. In this phase of the analysis, in the \(\psi^{\prime}\) channel, we employ
\[M_{2}^{2}\in[4,5]\ {\rm GeV}^{2},\ s_{0}^{*\prime}\in[15,16]\ {\rm GeV}^{2}. \tag{54}\]
It is worth noting that \(s_{0}^{*\prime}\) is limited by the mass \(m(3S)=4039\ {\rm MeV}\) of the charmonium \(\psi(3S)\)[27]. For this decay, the extrapolating function \({\cal G}_{3}^{*}(Q^{2})\) has the parameters \({\cal G}_{3}^{0*}=0.68\ {\rm GeV}^{-1}\), \(c_{3}^{1*}=2.90\), and \(c_{3}^{2*}=-3.54\). The strong coupling \(g_{3}^{*}\) is calculated at the mass shell \(q^{2}=m_{1}^{2}\)
\[g_{3}^{*}\equiv{\cal G}_{3}^{*}(-m_{1}^{2})=(3.3\pm 0.7)\times 10^{-1}\ {\rm GeV}^{-1}. \tag{55}\]
The partial width of the decay \({\cal M}_{2}\to J/\psi\psi^{\prime}\) is
\[\Gamma\left[{\cal M}_{2}\to J/\psi\psi^{\prime}\right]=(11\pm 4)\ {\rm MeV}. \tag{56}\]
### \({\cal M}_{2}\to\eta_{c}\eta_{c}\) and \({\cal M}_{2}\to\eta_{c}\eta_{c}(2S)\)
The processes \({\cal M}_{2}\to\eta_{c}\eta_{c}\) and \({\cal M}_{2}\to\eta_{c}\eta_{c}(2S)\) can be investigated in a similar manner. The strong couplings \(g_{4}\) and \(g_{4}^{*}\) that correspond to the vertices \({\cal M}_{2}\eta_{c}\eta_{c}\) and \({\cal M}_{2}\eta_{c}\eta_{c}(2S)\) can be extracted from the correlation function
\[\widetilde{\Pi}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J^{\eta_{c}}(y)\] \[\times J^{\eta_{c}}(0)\widetilde{J}^{\dagger}(x)\}|0\rangle. \tag{57}\]
Separating the ground-state and first excited state contributions from the effects of higher resonances and continuum states, we find that the correlation function \(\widetilde{\Pi}^{\rm Phys}(p,p^{\prime})\) is determined by Eq. (A.6). It can be further simplified using known matrix elements and takes the form
\[\widetilde{\Pi}^{\rm Phys}(p,p^{\prime})=g_{4}(q^{2})\frac{\widetilde{f}\widetilde{m}f_{2}^{2}m_{2}^{4}}{8m_{c}^{2}\left(p^{2}-\widetilde{m}^{2}\right)\left(p^{\prime 2}-m_{2}^{2}\right)}\] \[\times\frac{\widetilde{m}^{2}+m_{2}^{2}-q^{2}}{q^{2}-m_{2}^{2}}+g_{4}^{*}(q^{2})\frac{\widetilde{f}\widetilde{m}f_{2}m_{2}^{2}f_{2}^{*}m_{2}^{*2}}{8m_{c}^{2}\left(p^{2}-\widetilde{m}^{2}\right)\left(p^{\prime 2}-m_{2}^{*2}\right)}\] \[\times\frac{\widetilde{m}^{2}+m_{2}^{*2}-q^{2}}{q^{2}-m_{2}^{2}}+\cdots, \tag{58}\]
where \(m_{2}^{*}\) and \(f_{2}^{*}\) are the mass and decay constant of the \(\eta_{c}(2S)\) meson. The correlation function \(\widetilde{\Pi}^{\rm Phys}(p,p^{\prime})\) has a simple Lorentz structure proportional to I, hence the right-hand side of Eq. (58) is the corresponding invariant amplitude \(\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\).
The QCD side of the sum rule \(\widetilde{\Pi}^{\rm OPE}(p,p^{\prime})\) is given by Eq. (A.7). The sum rule for the strong form factor \(g_{4}(q^{2})\) is determined by Eq. (37) with the replacements \(fm\to\widetilde{f}\widetilde{m}\) and \(\Pi\to\widetilde{\Pi}\), where \(\widetilde{\Pi}({\bf M}^{2},{\bf s}_{0},q^{2})\) corresponds to the correlation function \(\widetilde{\Pi}^{\rm OPE}(p,p^{\prime})\).
Numerical computations are carried out using Eq. (37), parameters of the meson \(\eta_{c}\) from Table 1, and working regions for \({\bf M}^{2}\) and \({\bf s}_{0}\). The Borel and continuum subtraction parameters \(M_{1}^{2}\) and \(s_{0}\) in the \({\cal M}_{2}\) channel are chosen as in Eq. (47), whereas for \(M_{2}^{2}\) and \(s_{0}^{\prime}\) which correspond to the \(\eta_{c}\) channel, we employ Eq. (38).
The fit function \({\cal G}_{4}(Q^{2})\) necessary to determine the coupling \(g_{4}\) has the parameters: \({\cal G}_{4}^{0}=0.48\ {\rm GeV}^{-1}\), \(c_{4}^{1}=3.65\), and \(c_{4}^{2}=-4.24\). For the strong coupling \(g_{4}\), we get
\[g_{4}\equiv{\cal G}_{4}(-m_{2}^{2})=(2.1\pm 0.4)\times 10^{-1}\ {\rm GeV}^{-1}. \tag{59}\]
The width of the process \({\cal M}_{2}\to\eta_{c}\eta_{c}\) is determined by means of the formula Eq. (40) with substitutions \(g_{2}\to g_{4}\), \(\lambda_{2}\to\lambda_{4}=\lambda(\widetilde{m},m_{2},m_{2})\). Our computations yield
\[\Gamma\left[{\cal M}_{2}\to\eta_{c}\eta_{c}\right]=(39\pm 11)\ {\rm MeV}. \tag{60}\]
For the channel \({\cal M}_{2}\to\eta_{c}\eta_{c}(2S)\), we use
\[M_{2}^{2}\in[3.5,4.5]\ {\rm GeV}^{2},\ s_{0}^{*\prime}\in[13,14]\ {\rm GeV}^{2}, \tag{61}\]
and find
\[g_{4}^{*}\equiv{\cal G}_{4}^{*}(-m_{2}^{2})=(1.34\pm 0.26)\times 10^{-1}\ {\rm GeV}^{-1}. \tag{62}\]
The \(g_{4}^{*}\) is evaluated using the fit function \({\cal G}_{4}^{*}(Q^{2})\) with the parameters \({\cal G}_{4}^{0*}=0.32\ {\rm GeV}^{-1}\), \(c_{4}^{1*}=3.64\), and \(c_{4}^{2*}=-4.23\). The width of this decay is equal to
\[\Gamma\left[{\cal M}_{2}\to\eta_{c}\eta_{c}(2S)\right]=(12\pm 4)\ {\rm MeV}. \tag{63}\]
### \({\cal M}_{2}\to\eta_{c}\chi_{c1}(1P)\) and \({\cal M}_{2}\to\chi_{c0}\chi_{c0}\)
Analysis of the \(P\)-wave process \({\cal M}_{2}\to\eta_{c}\chi_{c1}(1P)\) follows the same scheme. The correlator that should be studied in this case is
\[\widetilde{\Pi}_{\mu}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J_{\mu}^{\chi_{c1}}(y)\] \[\times J^{\eta_{c}}(0)\widetilde{J}^{\dagger}(x)\}|0\rangle, \tag{64}\]
where \(J_{\mu}^{\chi_{c1}}(x)\) is the interpolating current for the axial-vector meson \(\chi_{c1}(1P)\)
\[J_{\mu}^{\chi_{c1}}(x)=\overline{c}_{j}(x)\gamma_{5}\gamma_{\mu}c_{j}(x). \tag{65}\]
In terms of the physical parameters of involved particles this correlation function has the form
\[\widetilde{\Pi}_{\mu}^{\rm Phys}(p,p^{\prime}) = g_{5}(q^{2})\frac{\widetilde{f}\widetilde{m}f_{2}m_{2}^{2}f_{3}m_{3}}{2m_{c}\left(p^{2}-\widetilde{m}^{2}\right)\left(p^{\prime 2}-m_{3}^{2}\right)}\] \[\times\frac{1}{q^{2}-m_{2}^{2}}\left[\frac{\widetilde{m}^{2}-m_{3}^{2}-q^{2}}{2m_{3}^{2}}p^{\prime}_{\mu}-q_{\mu}\right]+\cdots. \tag{66}\]
In Eq. (66) \(m_{3}\) and \(f_{3}\) are the mass and decay constant of the meson \(\chi_{c1}(1P)\), respectively. To derive \(\widetilde{\Pi}_{\mu}^{\rm Phys}(p,p^{\prime})\), we have used the matrix elements of the molecule \({\cal M}_{2}\) and meson \(\eta_{c}\), as well as the new matrix elements
\[\langle 0|J_{\mu}^{\chi_{c1}}|\chi_{c1}(p^{\prime})\rangle=f_{3}m_{3}\varepsilon _{\mu}^{*}(p^{\prime}), \tag{67}\]
and
\[\langle\eta_{c}(q)\chi_{c1}(p^{\prime})|{\cal M}_{2}(p)\rangle=g_{5}(q^{2})p \cdot\varepsilon^{*}(p^{\prime}), \tag{68}\]
where \(\varepsilon_{\mu}^{*}(p^{\prime})\) is the polarization vector of \(\chi_{c1}(1P)\).
In terms of \(c\)-quark propagators the correlator \(\widetilde{\Pi}_{\mu}^{\rm OPE}(p,p^{\prime})\) has the form Eq. (A.8). The sum rule for \(g_{5}(q^{2})\) is derived using the amplitudes corresponding to the terms \(\sim p^{\prime}_{\mu}\) in \(\widetilde{\Pi}_{\mu}^{\rm Phys}(p,p^{\prime})\) and \(\widetilde{\Pi}_{\mu}^{\rm OPE}(p,p^{\prime})\).
In the numerical analysis, the parameters \(M_{2}^{2}\) and \(s^{\prime}_{0}\) in the \(\chi_{c1}\) channel are chosen in the following way
\[M_{2}^{2}\in[4,5]\ {\rm GeV}^{2},\ s^{\prime}_{0}\in[13,14]\ {\rm GeV}^{2}. \tag{69}\]
For the parameters of the fit function \({\cal G}_{5}(Q^{2})\), we get \({\cal G}_{5}^{0}=6.02\), \(c_{5}^{1}=3.16\), and \(c_{5}^{2}=-3.88\). Then, the strong coupling \(g_{5}\) is equal to
\[g_{5}\equiv{\cal G}_{5}(-m_{2}^{2})=2.9\pm 0.6. \tag{70}\]
The width of the decay \({\cal M}_{2}\to\eta_{c}\chi_{c1}(1P)\) can be calculated by means of the expression
\[\Gamma\left[{\cal M}_{2}\to\eta_{c}\chi_{c1}(1P)\right]=g_{5}^{2}\frac{\lambda_{5}^{3}}{24\pi m_{3}^{2}}, \tag{71}\]
where \(\lambda_{5}=\lambda(\widetilde{m},m_{3},m_{2})\). It is not difficult to find that
\[\Gamma\left[{\cal M}_{2}\to\eta_{c}\chi_{c1}(1P)\right]=(16\pm 5)\ {\rm MeV}. \tag{72}\]
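The \(P\)-wave formula Eq. (71) admits the same numerical verification; with the central inputs the sketch below yields \(\approx 17\) MeV, consistent with Eq. (72) within the rounding of \(g_{5}\).

```python
import math

def lam(a, b, c):
    """The lambda function of Eq. (28)."""
    return math.sqrt(a**4 + b**4 + c**4
                     - 2 * (a**2 * b**2 + a**2 * c**2 + b**2 * c**2)) / (2 * a)

mt, m2, m3, g5 = 6.954, 2.9839, 3.51067, 2.9   # GeV; g5 is dimensionless

l5 = lam(mt, m3, m2)
width = g5**2 * l5**3 / (24 * math.pi * m3**2)   # Eq. (71), P-wave phase space
print(f"Gamma[M2 -> eta_c chi_c1(1P)] = {1000 * width:.0f} MeV")   # ~ 17 MeV
```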
For studying the decay \({\cal M}_{2}\to\chi_{c0}\chi_{c0}\), we consider the correlation function
\[\widetilde{\Pi}_{\chi_{c0}}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J^{\chi_{c0}}(y)\] \[\times J^{\chi_{c0}}(0)\widetilde{J}^{\dagger}(x)\}|0\rangle, \tag{73}\]
with \(J^{\chi_{c0}}(x)\) being the interpolating current for the scalar meson \(\chi_{c0}\)
\[J^{\chi_{c0}}(x)=\overline{c}_{i}(x)c_{i}(x). \tag{74}\]
The explicit expression of the correlator \(\widetilde{\Pi}_{\chi_{c0}}^{\rm OPE}(p,p^{\prime})\) can be found in Eq. (A.9). The remaining operations are performed in the context of the standard approach. Thus, in numerical computations, the parameters \(M_{2}^{2}\) and \(s^{\prime}_{0}\) in the \(\chi_{c0}\) channel are chosen in the form
\[M_{2}^{2}\in[4,5]\ {\rm GeV}^{2},\ s^{\prime}_{0}\in[14,14.9]\ {\rm GeV}^{2}, \tag{75}\]
where \(s^{\prime}_{0}\) is restricted by the mass of the charmonium \(\chi_{c0}(3860)\). The coupling \(g_{6}\) that corresponds to the vertex \({\cal M}_{2}\chi_{c0}\chi_{c0}\) is extracted at the point \(Q^{2}=-m_{4}^{2}\) using the fit function \({\cal G}_{6}(Q^{2})\) with the parameters \({\cal G}_{6}^{0}=0.63\ {\rm GeV}^{-1}\), \(c_{6}^{1}=2.83\), and \(c_{6}^{2}=-3.03\).
The strong coupling \(g_{6}\) is found equal to
\[g_{6}\equiv{\cal G}_{6}(-m_{4}^{2})=(2.7\pm 0.43)\times 10^{-1}\ {\rm GeV}^{-1}. \tag{76}\]
The partial width of the decay \({\cal M}_{2}\to\chi_{c0}\chi_{c0}\) is calculated by means of the formula
\[\Gamma\left[{\cal M}_{2}\to\chi_{c0}\chi_{c0}\right]=g_{6}^{2}\frac{m_{4}^{2} \lambda_{6}}{8\pi}\left(1+\frac{\lambda_{6}^{2}}{m_{4}^{2}}\right), \tag{77}\]
where \(\lambda_{6}=\lambda(\widetilde{m},m_{4},m_{4})\). Numerical analyses yield
\[\Gamma\left[{\cal M}_{2}\to\chi_{c0}\chi_{c0}\right]=(22\pm 5)\times 10^{-1}\ {\rm MeV}. \tag{78}\]
The partial widths of the six decays considered in this section are collected in Table 2.
Using these results, we estimate the full width of \({\cal M}_{2}\)
\[\Gamma_{{\cal M}_{2}}=(138\pm 18)\ {\rm MeV}. \tag{79}\]
This prediction can be confronted with the data of the experimental groups.
## IV Discussion and concluding notes
In this article, we studied the hadronic molecules \({\cal M}_{1}=\eta_{c}\eta_{c}\) and \({\cal M}_{2}=\chi_{c0}\chi_{c0}\) and calculated their masses and full widths. The masses of these structures were extracted from the QCD two-point sum rules. To evaluate the full widths of \({\cal M}_{1}\) and \({\cal M}_{2}\), we applied the three-point sum rule method. We analyzed two decay channels of the molecule \({\cal M}_{1}\). In the case of the \({\cal M}_{2}\) state, we took into account six kinematically allowed decay modes of this molecule.
Our predictions for the mass \(m=(6264\pm 50)\) MeV and width \(\Gamma_{{\cal M}_{1}}=(320\pm 72)\) MeV of the molecule \({\cal M}_{1}\) are consistent with the data of the ATLAS Collaboration which found for these parameters
\[m^{\rm ATL}=6220\pm 50^{+40}_{-50}\ {\rm MeV},\] \[\Gamma^{\rm ATL}=310\pm 120^{+70}_{-80}\ {\rm MeV}. \tag{80}\]
These results allow us to interpret the lowest resonance \(X(6200)\) with great confidence as the molecule \(\eta_{c}\eta_{c}\).
The \({\cal M}_{2}=\chi_{c0}\chi_{c0}\) state has the mass and width
\[\widetilde{m}=(6954\pm 50)\ {\rm MeV},\ \Gamma_{{\cal M}_{2}}=(138\pm 18)\ {\rm MeV}. \tag{81}\]
The mass \(\widetilde{m}\) of the molecule \({\cal M}_{2}\) within errors of computations agrees with the mass of the resonance \(X(6900)\) measured by the LHCb-ATLAS-CMS Collaborations, though the central value of \(\widetilde{m}\) is slightly above the relevant data. It is convenient to compare \(\widetilde{m}\) and \(\Gamma_{{\cal M}_{2}}\) with the CMS data
\[m^{\rm CMS} = (6927\pm 9\pm 5)\ {\rm MeV},\] \[\Gamma^{\rm CMS} = (122\pm 22\pm 19)\ {\rm MeV}. \tag{82}\]
One sees that the molecule \({\cal M}_{2}\) is a serious candidate for the resonance \(X(6900)\). The \(X(6900)\) was also examined in our paper [23] in the context of the diquark-antidiquark model. The predictions for the mass \(m=(6928\pm 50)\ {\rm MeV}\) and width \(\widetilde{\Gamma}_{4{\rm c}}=(112\pm 21)\ {\rm MeV}\) of the scalar tetraquark \(T_{4{\rm c}}\) built of pseudoscalar constituents are consistent with the CMS data as well. These circumstances make a linear superposition of the structures \({\cal M}_{2}\) and \(T_{4{\rm c}}\) one of the reliable scenarios for the resonance \(X(6900)\).
## Appendix A Heavy-quark propagator \(S_{Q}^{ab}(x)\) and correlation functions
In the present study, for the heavy quark propagator \(S_{Q}^{ab}(x)\) (\(Q=c,\ b\)), we use
\[S_{Q}^{ab}(x)=i\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ikx}\Bigg\{\frac{\delta_{ab}\left(\not{k}+m_{Q}\right)}{k^{2}-m_{Q}^{2}}-\frac{g_{s}G_{ab}^{\alpha\beta}}{4}\frac{\sigma_{\alpha\beta}\left(\not{k}+m_{Q}\right)+\left(\not{k}+m_{Q}\right)\sigma_{\alpha\beta}}{(k^{2}-m_{Q}^{2})^{2}}\] \[+\frac{g_{s}^{2}G^{2}}{12}\delta_{ab}m_{Q}\frac{k^{2}+m_{Q}\not{k}}{(k^{2}-m_{Q}^{2})^{4}}+\cdots\Bigg\}. \tag{A.1}\]
Here, we have used the notations
\[G_{ab}^{\alpha\beta}\equiv G_{A}^{\alpha\beta}\lambda_{ab}^{A}/2,\ \ G^{2}=G_{\alpha\beta}^{A}G_{A}^{\alpha\beta}, \tag{A.2}\]
where \(G_{A}^{\alpha\beta}\) is the gluon field-strength tensor, and \(\lambda^{A}\) are the Gell-Mann matrices. The indices \(A,B,C\) run in the range \(1,2,\ldots 8\).
The correlation function \(\widetilde{\Pi}^{\rm OPE}(p)\) used to calculate the mass and current coupling of the molecule \({\cal M}_{2}\) reads:
\[\widetilde{\Pi}^{\rm OPE}(p)=i\int d^{4}xe^{ipx}\left\{{\rm Tr}\left[S_{c}^{ba^{\prime}}(x)S_{c}^{a^{\prime}b}(-x)\right]{\rm Tr}\left[S_{c}^{ab^{\prime}}(x)S_{c}^{b^{\prime}a}(-x)\right]-{\rm Tr}\left[S_{c}^{bb^{\prime}}(x)S_{c}^{b^{\prime}a}(-x)\right.\right.\] \[\left.\left.\times S_{c}^{aa^{\prime}}(x)S_{c}^{a^{\prime}b}(-x)\right]-{\rm Tr}\left[S_{c}^{ba^{\prime}}(x)S_{c}^{a^{\prime}a}(-x)S_{c}^{ab^{\prime}}(x)S_{c}^{b^{\prime}b}(-x)\right]+{\rm Tr}\left[S_{c}^{bb^{\prime}}(x)S_{c}^{b^{\prime}b}(-x)\right]\right.\] \[\left.\times{\rm Tr}\left[S_{c}^{aa^{\prime}}(x)S_{c}^{a^{\prime}a}(-x)\right]\right\}. \tag{A.3}\]
The correlators \(\widetilde{\Pi}^{\rm Phys}_{\mu\nu}(p,p^{\prime})\) and \(\widetilde{\Pi}^{\rm OPE}_{\mu\nu}(p,p^{\prime})\) necessary to explore the decays \({\cal M}_{2}\to J/\psi J/\psi(\psi^{\prime})\) are:
\[\widetilde{\Pi}^{\rm Phys}_{\mu\nu}(p,p^{\prime})=\frac{\langle 0|J_{\mu}^{\psi}|J/\psi(p^{\prime})\rangle}{p^{\prime 2}-m_{1}^{2}}\frac{\langle 0|J_{\nu}^{\psi}|J/\psi(q)\rangle}{q^{2}-m_{1}^{2}}\langle J/\psi(p^{\prime})J/\psi(q)|{\cal M}_{2}(p)\rangle\frac{\langle{\cal M}_{2}(p)|\widetilde{J}^{\dagger}|0\rangle}{p^{2}-\widetilde{m}^{2}}\] \[+\frac{\langle 0|J_{\mu}^{\psi}|\psi^{\prime}(p^{\prime})\rangle}{p^{\prime 2}-m_{1}^{*2}}\frac{\langle 0|J_{\nu}^{\psi}|J/\psi(q)\rangle}{q^{2}-m_{1}^{2}}\langle\psi^{\prime}(p^{\prime})J/\psi(q)|{\cal M}_{2}(p)\rangle\frac{\langle{\cal M}_{2}(p)|\widetilde{J}^{\dagger}|0\rangle}{p^{2}-\widetilde{m}^{2}}+\cdots, \tag{A.4}\]
and
\[\widetilde{\Pi}^{\rm OPE}_{\mu\nu}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\left\{{\rm Tr}\left[\gamma_{\mu}S_{c}^{ia}(y-x)S_{c}^{ai}(x-y)\right]{\rm Tr}\left[\gamma_{\nu}S_{c}^{jb}(-x)S_{c}^{bj}(x)\right]\right.\] \[\left.-{\rm Tr}\left[\gamma_{\mu}S_{c}^{ia}(y-x)S_{c}^{aj}(x)\gamma_{\nu}S_{c}^{jb}(-x)S_{c}^{bi}(x-y)\right]\right\}. \tag{A.5}\]
The correlation functions \(\widetilde{\Pi}^{\rm Phys}(p,p^{\prime})\) and \(\widetilde{\Pi}^{\rm OPE}(p,p^{\prime})\) used in the analysis of the decays \({\cal M}_{2}\rightarrow\eta_{c}\eta_{c}(\eta_{c}(2S))\)
\[\widetilde{\Pi}^{\rm Phys}(p,p^{\prime})=\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(p^{\prime})\rangle}{p^{\prime 2}-m_{2}^{2}}\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(q)\rangle}{q^{2}-m_{2}^{2}}\langle\eta_{c}(p^{\prime})\eta_{c}(q)|{\cal M}_{2}(p)\rangle\frac{\langle{\cal M}_{2}(p)|\widetilde{J}^{\dagger}|0\rangle}{p^{2}-\widetilde{m}^{2}}\] \[+\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(2S)(p^{\prime})\rangle}{p^{\prime 2}-m_{2}^{*2}}\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(q)\rangle}{q^{2}-m_{2}^{2}}\langle\eta_{c}(2S)(p^{\prime})\eta_{c}(q)|{\cal M}_{2}(p)\rangle\frac{\langle{\cal M}_{2}(p)|\widetilde{J}^{\dagger}|0\rangle}{p^{2}-\widetilde{m}^{2}}+\cdots, \tag{A.6}\]
and
\[\widetilde{\Pi}^{\rm OPE}(p,p^{\prime})=-2\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}{\rm Tr}\left[\gamma_{5}S_{c}^{ia}(y-x)S_{c}^{aj}(x)\gamma_{5}S_{c}^{jb}(-x)S_{c}^{bi}(x-y)\right]. \tag{A.7}\]
The correlation function \(\widetilde{\Pi}^{\rm OPE}_{\mu}(p,p^{\prime})\) for the process \({\cal M}_{2}\rightarrow\eta_{c}\chi_{c1}(1P)\) is given by the formula
\[\widetilde{\Pi}^{\rm OPE}_{\mu}(p,p^{\prime})=-2i^{3}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)S_{c}^{aj}(x)\gamma_{5}S_{c}^{jb}(-x)S_{c}^{bi}(x-y)\right]. \tag{A.8}\]
The function \(\widetilde{\Pi}^{\rm OPE}_{\chi_{c0}}(p,p^{\prime})\) for the decay \({\cal M}_{2}\rightarrow\chi_{c0}\chi_{c0}\) is:
\[\widetilde{\Pi}^{\rm OPE}_{\chi_{c0}}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\left\{{\rm Tr}\left[S_{c}^{ia}(y-x)S_{c}^{ai}(x-y)\right]{\rm Tr}\left[S_{c}^{jb}(-x)S_{c}^{bj}(x)\right]\right.\] \[\left.-{\rm Tr}\left[S_{c}^{ia}(y-x)S_{c}^{aj}(x)S_{c}^{jb}(-x)S_{c}^{bi}(x-y)\right]\right\}. \tag{A.9}\]
|
2302.00338 | A Robust Certificate Management System to Prevent Evil Twin Attacks in
IEEE 802.11 Networks | The evil twin attack is a major security threat to WLANs. An evil twin is a
rogue AP installed by a malicious user to impersonate legitimate APs. It
intends to attract victims in order to intercept their credentials, to steal
their sensitive information, to eavesdrop on their data, etc. In this paper, we
study the security mechanisms of wireless networks and we introduce the
different authentication methods, including 802.1X authentication. We show that
802.1X has improved security through the use of digital certificates but does
not define any practical technique for the user to check the network
certificate. Therefore, it remains vulnerable to the evil twin attack. To
repair this vulnerability, we introduce Robust Certificate Management System
(RCMS) which takes advantage of the digital certificates of 802.1X to protect
the users against rogue APs. RCMS defines a new verification code to allow the
user device to check the network certificate. This practical verification
combined with the reliability of digital certificates provides a perfect
protection against rogue APs. RCMS requires a small software update on the user
terminal and does not need any modification of IEEE 802.11. It has a
significant flexibility since trusting a single AP is enough to trust all the
APs of the extended network. This allows the administrators to extend their
networks easily without the need to update any database of trusted APs on the
user devices. | Yousri Daldoul | 2023-02-01T09:41:45Z | http://arxiv.org/abs/2302.00338v1 | # A Robust Certificate Management System to Prevent Evil Twin Attacks in IEEE 802.11 Networks
###### Abstract
The evil twin attack is a major security threat to WLANs. An evil twin is a rogue AP installed by a malicious user to impersonate legitimate APs. It intends to attract victims in order to intercept their credentials, to steal their sensitive information, to eavesdrop on their data, etc. In this paper, we study the security mechanisms of wireless networks and we introduce the different authentication methods, including 802.1X authentication. We show that 802.1X has improved security through the use of digital certificates but does not define any practical technique for the user to check the network certificate. Therefore, it remains vulnerable to the evil twin attack. To repair this vulnerability, we introduce Robust Certificate Management System (RCMS) which takes advantage of the digital certificates of 802.1X to protect the users against rogue APs. RCMS defines a new verification code to allow the user device to check the network certificate. This practical verification combined with the reliability of digital certificates provides a perfect protection against rogue APs. RCMS requires a small software update on the user terminal and does not need any modification of IEEE 802.11. It has a significant flexibility since trusting a single AP is enough to trust all the APs of the extended network. This allows the administrators to extend their networks easily without the need to update any database of trusted APs on the user devices.
IEEE 802.11 Networks; WLAN Security; 802.1X Authentication; Evil Twin Attack; Certificate Verification
## 1 Introduction
IEEE 802.11 [1] networks are widely used thanks to their high throughput capacity and easy installation. Due to the broadcast nature of these networks, any attacker can eavesdrop on their transmitted data. Therefore, WLANs must provide enough security to protect the user privacy. 802.11i is the principal amendment that intends to improve the security. It defines several protocols and algorithms to provide authentication, integrity and confidentiality services. A WLAN that supports 802.11i is called a Robust Security Network (RSN). Although 802.11i introduces robust mechanisms, an RSN is still vulnerable to several attacks, such as the evil twin attack. The principle of this attack is to install a rogue AP which impersonates a legitimate AP. When a new user wants to join the WLAN, he may confuse the rogue AP with the legitimate one and associate with the rogue AP. This allows the adversary to perform several attacks, such as intercepting the user credentials, stealing sensitive information and eavesdropping on the victim communication.
WLANs are suitable for multiple environments. They can provide public access to open networks in different areas, such as malls, municipalities, libraries and airports. They can also provide private access to authorized users, like students, employees, customers, hotel guests and family members. This is possible thanks to the different supported authentication methods. In fact, 802.11i defines 3 authentication methods: Open System Authentication (OSA), Pre-Shared Key (PSK) and 802.1X. OSA does not require any password and allows any user to join the network. PSK requires the users and the AP to share the same password. 802.1X requires an authentication server (AS) that authenticates the users by means of their credentials (e.g. username and password). Open networks are vulnerable to evil twin attacks since there is no mutual authentication between the user and the AP. PSK allows mutual authentication using the shared password. As long as the password is protected, the connection is secure and the evil twin attack is impossible. PSK is suitable for small WLANs, such as residential networks, where few users are able to share the password securely with each other. It is not convenient for public or large networks since the attacker is able to obtain the password which allows the rogue AP to succeed the mutual authentication with the victims. On the other hand, 802.1X [2] is suitable for large networks since it provides every user with his own credentials. It allows the user to authenticate the network by means of the digital certificate of the AS, while the user credentials allow the AS to authenticate the users. This authentication method is widely used by companies, hotels, shops and universities, such as the largest university network Eduroam [3]. Unfortunately, the evil twin attacks are easy to perform against 802.1X and allow the attacker to steal the user credentials. This is because the victim ignores the AS certificate and can trust any self-signed certificate provided by the rogue AP. Therefore, he may send his credentials in plaintext to the attacker within a TLS tunnel.
Despite the robust security mechanisms of 802.11i, the evil twin attack is easy to perform. As a result, a large number of studies have been carried out to prevent this attack. However, most of them do not provide trustworthy detection since they may trust rogue APs and raise alerts for legitimate APs. In addition, several approaches are not practical since they have extensive requirements (e.g. additional hardware, extensive use of the bandwidth, multiple network interfaces, costly signed certificates, etc.). Besides, we notice that all the reviewed proposals do not provide enough security and are easy to bypass. Therefore, it is necessary to define a practical and reliable approach to efficiently prevent the evil twin attacks.
We believe that a robust solution for the evil twin problem must rely on digital certificates. This is because the rogue AP cannot impersonate the legitimate AP without the private key. However, it is essential to provide the user with a practical and reliable method to verify the AS certificate. This allows the secure association with trusted WLANs and the efficient detection of rogue APs.
In this paper, we define a Robust Certificate Management System (RCMS) to prevent all evil twin attacks in WLANs. Our proposal is suitable for both small and large networks using 802.1X authentication. It runs entirely on the user device and does not require any protocol modification. It allows the user to strongly authenticate the AS using an additional code of a limited length, called the verification code. Upon the first association to an SSID, the user is requested to enter his credentials (e.g. certificate or username/password) and the verification code. Once the AS certificate is checked correctly, the root Certification Authority (CA) of the AS certificate is considered as the trusted CA of the current SSID. Therefore, for any subsequent association to a given SSID, any AS certificate is trusted if its root CA is the trusted CA of the SSID. This allows the network administrators to easily extend their networks and to deploy multiple AS with different certificates issued by the same CA. The user must provide the verification code only if the information stored by RCMS on the user device does not allow trusting the AS (e.g. first association to the SSID or modified public key of the root CA). RCMS efficiently prevents evil twin attacks thanks to the reliable verification of the AS. Besides, our proposal is practical since it only requires slight software updates on the user device.
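The concrete construction of the verification code is specified in Section 5; purely for exposition, the sketch below shows one plausible instantiation in Python, in which the code is a truncated hash of the root CA public key. The hashing scheme, code length, function names and trust-store layout are our illustrative assumptions, not part of RCMS itself.

```python
import hashlib

def verification_code(root_ca_pubkey_der: bytes, length: int = 8) -> str:
    """Illustrative assumption: a short, human-checkable code obtained by
    truncating a SHA-256 digest of the root CA public key. RCMS defines its
    own construction; this is merely one plausible instantiation."""
    return hashlib.sha256(root_ca_pubkey_der).hexdigest()[:length].upper()

def first_association(ssid: str, presented_root_der: bytes,
                      user_code: str, trust_store: dict) -> bool:
    """First association to an SSID: the user types the code published by the
    network administrator; the device compares it with the code derived from
    the certificate chain presented during 802.1X authentication."""
    if verification_code(presented_root_der) != user_code:
        return False  # mismatch: possible evil twin, abort association
    # Pin the full digest of the trusted root CA key for this SSID.
    trust_store[ssid] = hashlib.sha256(presented_root_der).hexdigest()
    return True

def reassociation(ssid: str, presented_root_der: bytes,
                  trust_store: dict) -> bool:
    """Later associations: any AS certificate is accepted if it chains to the
    CA pinned for this SSID, so administrators can add APs and AS freely."""
    pinned = trust_store.get(ssid)
    return pinned == hashlib.sha256(presented_root_der).hexdigest()
```

In this illustration, only the first association requires user interaction; afterwards, the pinned root CA makes the check transparent, which matches the flexibility described above.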
To summarize, we study the evil twin attacks in WLANs and the limitation of existing security mechanisms. Our main contribution is to introduce a new mechanism, called RCMS, in WLANs employing 802.1X authentication. RCMS allows the reliable check of the AS using a new verification code entered by the user. Therefore, it prevents all evil twin attacks and allows the secure association to legitimate APs.
The remainder of this paper is organized as follows. The next Section introduces related work studying the rogue AP detection in WLANs. Then, Section 3 presents the different authentication methods of 802.11i and their limitations against the evil twin attacks. Section 4 presents the threat model. We introduce RCMS in Section 5 and we conclude in Section 6.
## 2 Related Works
Extensive research is carried out to define secure protection mechanisms against evil twin attacks. The existing approaches can be classified into 4 main families: traffic anomaly, location, fingerprint and cryptography based approaches.
### 2.1 Traffic anomaly-based approaches
A large number of approaches are defined for the case of a rogue AP relaying the communication between the victims and the legitimate AP. Since this type of evil twins increases the number of wireless hops and the delays, the authors of [4] choose the inter-packet arrival time as the detection parameter. In [5], the round trip time (RTT) of ICMP packets is used for the evil twin detection. We believe that both methods are not precise as the delays may vary significantly in WLANs due to several factors such as the buffering delays, the used data rates, the number of users and the signal strength. Besides, bridges will be considered as rogue APs since they operate as relays. Another characteristic of evil twins acting as relays is frame forwarding on the wireless channel. This characteristic is considered by the proposal of [6], which continuously monitors the medium to capture and compare the transmitted frames. It classifies APs with frame forwarding as evil twins. Legal AP Finder (LAF) [7] is a similar approach which relies on the frame forwarding behavior of the rogue APs. Instead of comparing all data frames, it only examines the TCP 3-way handshake packets. We note that both [6] and [7] have significant drawbacks and limited accuracy. For example, they are not suitable for encrypted networks because the encryption algorithm makes the forwarded frames different from the original frames. In [8], the rogue AP detection relies on the statistics of the data transmitted by the different APs. An evil twin is identified if it transmits the same amount of data as another AP during the same time interval. A similar proposal is presented in [9] and detects the forwarding behavior by monitoring the arrival time of frames having similar lengths. It requires multiple wireless interfaces (at least two) to scan the different channels, and necessitates a long scan period to detect any forwarding behavior. PrAP-Hunter [10] is a detection mechanism for network administrators. It operates on a dedicated device with two wireless interfaces. The first interface associates with an AP and transmits data, while the second interface interferes with channels 1 to 11 sequentially. If the first interface notices throughput degradation when the second interface is interfering with specific channels, this indicates that the AP is an evil twin forwarding data. It is clear that this proposal suffers from significant drawbacks; not only does it waste the bandwidth, but it also fails to provide trustworthy detection. This is because the throughput of WLANs is variable due to several factors, such as medium sharing, interference with legitimate devices, collisions and channel fading. Therefore, PrAP-Hunter cannot ensure any effective protection against rogue APs.
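To make the timing-based heuristics of [4] and [5] concrete, the sketch below flags an AP whose probe RTTs exceed an assumed single-hop baseline. Both the baseline and the threshold factor are illustrative assumptions; as discussed above, such statistics fluctuate widely in real WLANs, which is precisely why these detectors are unreliable.

```python
import statistics

def looks_like_relay(rtts_ms, baseline_ms=2.0, factor=2.0):
    """Flag an AP as a suspected relay if its median probe RTT exceeds the
    expected single-hop baseline by a given factor. Both parameters are
    illustrative assumptions; real WLAN delays vary with load, data rate and
    signal strength, which is why such tests misclassify bridges and busy APs."""
    return statistics.median(rtts_ms) > factor * baseline_ms

print(looks_like_relay([4.8, 5.1, 6.0, 4.5]))   # True: doubled wireless hop
print(looks_like_relay([1.6, 2.2, 1.9, 2.4]))   # False: single-hop delays
```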
Other approaches consider the case of rogue APs having their own Internet connection. In [11], the detection method relies on the principle that the different APs of a single Extended Service Set (ESS) usually use the same gateway. Therefore, it verifies the gateways of the visible APs belonging to the same SSID, and detects the presence of a rogue AP if different gateways are used. Similarly, Rogue AP Finder (RAF) [12] compares the transmission paths to a given server over the different APs of a particular ESS. It reports the presence of a rogue AP if different paths are used. These methods may work if both the rogue AP and the legitimate AP are visible. Otherwise, the attacker cannot be detected. Besides, the two proposals may detect the presence of an evil twin but cannot distinguish between rogue and legitimate APs.
Several other approaches consider both types of evil twins: relays and those having their own gateways. In [13], the authors combine the gateway check with the frame forwarding verification. BiRe is another detection mechanism defined in [14]. It requires two wireless interfaces associated with two different APs. Every interface sends a TCP SYN packet to a particular server which acknowledges the other interface. The absence of an acknowledgement indicates that one of the two APs is an evil twin. We note that BiRe cannot detect the attack if only one AP is available. Besides, it has excessive requirements which make it impractical. The proposal of [15] intends to alert the network administrator of any existing evil twin. It uses a sniffer that captures and analyses the transmitted frames. It considers that the attacker necessarily sends deauthentication frames to disconnect the victims from legal networks and connect them with the rogue AP. Therefore, an attack is detected if excessive Association Response frames are intercepted. Unfortunately, this proposal is not suitable for many types of evil twins. EvilScout [16] is defined for a very specific case of evil twins, when the rogue AP operates on the same channel as the legitimate AP and impersonates the MAC address of the legitimate AP. In this case, the detection is based on anomalies related to MAC address conflicts.
### 2.2 Location-based approaches
To prevent the evil twin attack, the authors of [17] suggest the connection of the legitimate AP to a display that confirms the user connection to the right network. This proposal requires an additional device for every AP and a line of sight between the users and the display. This solution is expensive and not suitable for large networks since the simultaneous verification of multiple people using a single screen is not practical.
The principle of crowd sensing is used by CRAD in [18]. The crowd is composed of the mobile users connected to a specific ESS. Every user device should profile the different available APs by recording their signal strengths (i.e. RSSI) over time. Then it shares its measurement reports with the other members of the crowd. The ratio of reports containing a significant variation of the RSSI is used as an indicator of an existing rogue AP. However, an attacker can broadcast forged reports to decrease the ratio of reports with RSSI anomalies. A similar approach based on crowd sensing is proposed in [19]. It uses the CSI (Channel State Information) and AoA (Angle of Arrival) to improve the detection accuracy, and detects the attack if the spatial location of the AP changes. Another proposal based on RSSI is defined for residential networks in [20]. It considers that the signal strength is a stable parameter that can be used to detect evil twins. This assumption is not valid due to the user mobility and cannot provide reliable and precise attack detection. The principle of RSSI monitoring is also used in [21] to detect rogue APs based on their location. However, instead of using the crowd collaboration, the authors suggest to install multiple sensors that record the RSSI evolution. These measurements are transmitted to and processed by a remote server to detect any anomaly. This proposal alerts the network administrator if a rogue AP is detected but does not prevent the attack.
In [22], the authors use the principle of "trust by location" that records all the visible APs upon the first association to an AP. For subsequent connections, the AP is trusted if the variation of the neighbor networks does not exceed a given threshold. Otherwise, it is classified as an evil twin. In [23], the detection system starts by classifying all the visible APs as authorized and records their parameters in a white-list. Then, it checks for any suspicious modification of different parameters to report a rogue AP. For example, if a new AP is detected after the initialization step, it is considered as an evil twin. It is clear that this approach is not reliable as it may classify many legitimate APs as illegal and may trust rogue APs.
_2.3 Fingerprint-based approaches_
ETGuard [24] is an administrator-side mechanism which detects rogue APs based on the recorded fingerprints. It runs on a dedicated server and continuously records the beacon frames. Since the fingerprints are calculated from the beacon frames, any attacker is able to spoof these frames and impersonate legitimate APs. This affects the reliability of ETGuard. Multiple approaches use radiometric signature as a unique identifier of each device. In [25], the observation of the clock skew is used as the AP fingerprint. In [26], the authors use the CSI to extract the physical layer information of the transmitter. They consider that the phase errors depend on the device and can be used to create a unique fingerprint of any AP. Another approach [27] extracts the AP fingerprints from the power amplifier and frame distribution of the received data. The mechanism proposed in [28] detects rogue APs based on multiple parameters, namely the clock skew, the used channel, the received signal strength and the beacon transmission duration. These proposals must be initialized with a fingerprint list of authorized devices. Due to this constraint, any network extension or modification requires the update of the fingerprint list of every user. We note that the attacker can obtain a device identical to the used AP. This allows the rogue AP to produce the same fingerprint and to bypass the detector.
_2.4 Cryptography-based approaches_
VOUCH-AP [29] is among a few proposals that use digital certificates to authenticate the legitimate AP and to prevent the attacks. The authors provide each AP with a certificate issued by a trusted Certification Authority (CA). This certificate includes the network SSID and aims to prevent WLAN impersonation. Unfortunately, the SSID is not a unique identifier for WLANs and can be used by different networks simultaneously. Therefore, an attacker can obtain a signed certificate from a trusted CA for any SSID and perform the evil twin attack. As a result, this proposal is not secure enough and incurs additional costs related to the purchase of a signed certificate for every AP.
In [30], the authors show that the use of WPA2-Enterprise (i.e. 802.1X authentication) remains vulnerable to the evil twin attack, which allows the adversary to steal user credentials. This is because the user ignores the AS certificate and cannot check it. Therefore, he may accept any certificate, including that of the attacker. To solve this problem, the authors suggest displaying WPA2-Enterprise networks in a list of pairs (SSID, AS name). As the authors recognize, this solution is not secure since the attacker is able to produce a certificate (either self-signed or signed by a trusted CA) containing the same AS name. In [31], the authors show that the 802.1X authentication used in Eduroam networks does not sufficiently secure WLANs since most users do not check the AS certificate. Therefore, they suggest activating the check of the AS name (i.e. displaying the information of the AS certificate in an interface and asking for the user permission before pursuing the authentication). This is not a reliable approach since self-signed certificates are widely used in WLANs, allowing the attacker to use any AS name. In addition, most users are not aware of the AS name and trust the WLAN based on its SSID. A similar study of Eduroam security [32] shows that the authors were able to access the user credentials of 61% of the tested devices, which accepted to associate with a rogue AP. To prevent the evil twin attacks, Eduroam provides a Configuration Assistant Tool (CAT) [33] that configures the user device with the Eduroam network profile. This solution requires the user to download and execute CAT. Since the use of CAT is not mandatory, most users may ignore it. We note that the created profile does not prevent the association with a rogue AP but informs the user that the network details have changed and requests the user's authorization to pursue the authentication. Therefore, we believe that CAT is neither practical nor reliable.
## 3 Background of WLAN Security
### Network Discovery
In a WLAN, every AP is identified using a unique identifier called BSSID (i.e. the MAC address of the AP). To extend the coverage of a WLAN, the administrator may install multiple APs. The extended network is called Extended Service Set (ESS) and is identified using a string called SSID. To join a WLAN, the user station (STA) follows 3 steps: network scanning (active or passive), authentication and association. During the first step, the STA scans the different channels of the spectrum to find the available networks. Using passive scanning, the STA receives the beacon frames of the visible APs. These frames are broadcasted periodically and contain all the information about the AP, such as SSID, BSSID and the security protocol. They allow the user to select the desired SSID. If multiple APs belonging to the same SSID are visible, the STA selects the AP with the highest signal strength (i.e. RSSI) as it is expected to provide the highest throughput. During the user mobility, the STA may perform a seamless handover from one AP to another within the same ESS. This handover is defined by the IEEE 802.11 standard and does not require the user permission.
### Authentication and Association
The second step after network scanning is user authentication. Current networks support 3 authentication methods: Open System Authentication (OSA), Pre-Shared Key (PSK) and 802.1X. The first method does not use any password and does not provide any authentication. It allows any user to join the network if his MAC address is not black-listed. This method does not use encryption and the network frames are transmitted in the clear. A recent enhancement of open authentication is defined in [34] and allows data encryption in open networks. As there is no way to authenticate the users and the WLAN, an evil twin attack is easily performed against open networks and cannot be detected or prevented.
The second authentication method is PSK. It requires the users and the AP to share the same password. During authentication, both the STA and the AP must prove knowledge of the secret. This ensures mutual authentication between the user and the network. Without the password, a rogue AP cannot authenticate to the STA and cannot establish a connection with the victim. Since the transmitted frames are encrypted in a WLAN protected with PSK, the adversary cannot eavesdrop on the data and cannot perform any attack. We note that PSK is practical in small WLANs, such as residential networks, as long as the few users are able to keep the password confidential. PSK is not suitable for public or large networks since the password is accessible to any user. For example, several restaurants and cafes provide their customers with free connections to WLANs protected by PSK. Typically, they provide them with the password within the receipt. This allows any malicious user to obtain the secret, to impersonate the legitimate AP and to perform the evil twin attack.
The third authentication method is 802.1X. It is widely known as WPA2-Enterprise. It requires an authentication server (AS) which performs the mutual authentication with the users through the intermediary of the AP. The AS uses its certificate to authenticate itself to the user. If the user trusts the AS certificate, he uses his credentials (e.g. certificate or username/password) to authenticate to the server. The AS certificate allows the establishment of a secure tunnel between the user and the AS to perform the user authentication. 802.1X authentication is suitable for large networks since every user has his own credentials and can identify legitimate APs thanks to the AS certificate. It prevents evil twin attacks and guarantees data confidentiality in public networks, even if the same username and password are publicly shared, as long as the users are able to verify the AS certificate. It can also be used in small networks since many commercial APs have an integrated AS and can use self-signed certificates. Unfortunately, a large number of evil twin attacks are successfully mounted against 802.1X, allowing the adversary to steal the user credentials and to eavesdrop on the traffic. This is because most users cannot verify the AS certificate and agree to authenticate with rogue APs providing self-signed certificates. Therefore, they establish a secure tunnel with the attacker, who becomes able to perform multiple attacks.
As aforementioned, OSA is a null authentication protocol. It uses a two-frame exchange. The first frame contains the STA identity (i.e. the MAC address) and requests authentication. The second frame returns the authentication result. If the result is "successful," the STA and the AP are considered mutually authenticated. As depicted in Figure 1, the authentication step
of PSK is either OSA or Simultaneous Authentication of Equals (SAE). SAE intends to make PSK resistant to offline dictionary attacks. It generally uses the elliptic curve cryptography to derive an intermediate key, called Pairwise Master Key (PMK), from the PSK. When OSA is used with PSK, PMK is identical to the pre-shared key (i.e. PMK=PSK). In the case of 802.1X, the authentication step relies on OSA.
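For reference, the mapping from a human-readable passphrase to the PMK in WPA2-PSK is standardized as a PBKDF2 derivation over the SSID; the snippet below is a minimal illustration using only Python's standard library. With SAE, the PMK is instead derived during the handshake itself.

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    # IEEE 802.11i: PSK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    # With OSA + PSK, the PMK equals this value (PMK = PSK)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
```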
The association step is a two-frame transaction sequence following the authentication. It is initiated by the STA and allows the negotiation of the connection parameters. In the case of open authentication, no more steps are required and the user device is successfully associated to the WLAN. But all the frames of open networks are transmitted in the clear. If 802.1X is used, the 802.1X authentication step starts following the association. It allows the STA and the AS to derive the PMK from the TLS master key which is used to establish the TLS tunnel. Then, the AS sends the PMK to the AP. This allows the STA and the AP to share the same key.
If PSK or 802.1X is used, the AP and the STA start the 4-way handshake to derive the Pairwise Transient Key (PTK) from the PMK. This handshake intends to confirm that a live STA holds the PMK and to derive a fresh PTK. The PTK is used to encrypt the transmitted data frames. Figure 1 illustrates the network access steps using the 3 authentication methods.
### 802.1X authentication
802.1X authentication supports different credential types, such as digital certificates, usernames and passwords, secure tokens, and mobile network credentials (i.e. GSM and UMTS secrets). A WLAN employing 802.1X typically consists of user devices, one or multiple APs belonging to the same ESS, and one AS. For large networks, such as Eduroam [3], it is possible to use multiple servers. Figure 2 illustrates a simple WLAN using 802.1X. The most used authentication server is the RADIUS server which uses the RADIUS protocol to communicate with the AP. Therefore, the mutual authentication between the user and the AS is performed using two protocols: EAP over LAN (EAPOL) and RADIUS. EAPOL is introduced by 802.1X and relies on EAP [35]. It defines additional frames to support wired and wireless LANs. EAP is an authentication protocol used between the STA and the AS. The EAP messages are transmitted within 802.11 frames over the wireless medium, and within RADIUS packets between the AP and the AS.
EAP supports multiple authentication methods [36] which can be classified into two principal categories: password-based and TLS-based methods. However, not all of them are compliant with WLANs. In fact, 802.11 requires the use of an EAP method capable of generating the keying material [37]. Therefore, only TLS-based methods are compliant with the RSN requirements. They establish a secure TLS tunnel between the user device and the AS using the server certificate. If the user authenticates using his username and password, a password-based authentication method must be used through the encrypted tunnel and is called inner or tunneled method. This inner method may be EAP or non-EAP method, depending on the used TLS-based EAP method.
The most popular TLS-based EAP methods are:
* EAP-TLS [38]: This method allows the user and the AS to mutually authenticate using certificates. Therefore, both the AS and the user must have certificates. This method is mainly used in large companies where the network administrators take care of configuring the device of every employee individually. We note that EAP-TLS is supported by all devices since it is among the requirements of WPA2.
* EAP-TTLS [39]: This method only requires the server to hold a certificate and is, therefore, more practical than EAP-TLS. It allows the user to authenticate using his username and password through the TLS tunnel. This method supports multiple inner methods, both non-EAP (e.g. PAP, CHAP and MSCHAPv2) and EAP methods (e.g. EAP-MD5 and EAP-MSCHAPv2). Like EAP-TLS, EAP-TTLS is also supported by any device compliant with WPA2.
* PEAP [40]: This method is similar to EAP-TTLS, but only supports EAP methods as inner methods.
On the other hand, the most used inner methods are:
* PAP [41]: This method allows the user authentication using his username and password. These credentials are transmitted in plaintext and are easily accessible if the TLS tunnel is established with the attacker.
* CHAP [42], MS-CHAP [43] and EAP-MD5 [37]: These are one-way authentication methods which allow the server to authenticate the user using challenges.
* MS-CHAPv2 [44] and EAP-MS-CHAPv2 [45]: These methods provide mutual authentication using challenges; both the server and the user must prove their knowledge of the user password.
Figure 3 illustrates an example of 802.1X authentication using EAP-TTLS and EAP-MD5 as the inner method.
Figure 1: Network access steps in IEEE 802.11 WLANs
Figure 2: Network architecture using 802.1X authentication
## 4 Threat Model
### Attack objectives
This section introduces the different evil twin attacks against 802.1X authentication according to the adversary's goals. We distinguish two attack objectives:
* Credential theft
* Data relay (man-in-the-middle)
In the first case, the attacker intends to steal valid credentials in order to access the WLAN as an authorized user. This is a typical attack against several private and paid networks (e.g. university, airport and Internet provider WLANs) where network access is limited to authorized users only. The damage caused by this attack depends on the access rights of the victim and ranges from minor to very harmful, such as bandwidth sharing, paid plan consumption and unauthorized access to personal documents and sensitive information.
The second objective is to relay the victim's data. We note that the attacker can easily provide an Internet connection using different methods, such as mobile, wireless or wired networks. Once the victim is connected to the Internet through the rogue AP, the adversary can perform various passive and active attacks, such as data eavesdropping and user redirection to phishing websites. Although most sensitive websites use https to encrypt and protect the user data, several websites still use http and can be spied on. Since many people use the same username and password to access different accounts, stealing the credentials from unencrypted websites may allow the attacker to gain access to the user accounts of sensitive websites. In addition, phishing websites may succeed in stealing sensitive data (e.g. passwords and credit card information) and in convincing the victim to download malware. We note that some malware is very harmful and allows the attacker to easily spy on and control the victim's device.
Figure 3: 802.1X authentication using EAP-TTLS and EAP-MD5
To steal the user credentials, the attacker does not need to provide an Internet connection or to be in visibility with a legitimate AP. He only needs to install a rogue AP, to capture the required information and to leave. But to relay the user data, the attacker must provide an Internet connection either using his own gateway (mobile network or wired LAN) or using the legitimate WLAN. In the latter case, a legitimate AP must be visible.
### 4.2 Credential theft
As previously mentioned, EAP-TLS allows the user and the AS to mutually authenticate using certificates. This is the most secure EAP method since the user certificate is useless to the adversary without the user's private key which is never transmitted over the network. Therefore, EAP-TLS is perfectly secure against credential theft. For the other TLS-based methods, the credentials are safe if the user associates with the legitimate AP. But if the victim associates with the rogue AP and accepts any certificate, the encrypted tunnel is established with the attacker who can decrypt the tunneled data. In this case, the attacker can obtain the victim credentials using two possible attacks: downgrade and dictionary attacks.
The downgrade attack allows the adversary to negotiate the weakest possible EAP method in order to facilitate access to the credentials. In fact, EAP has several methods which are not necessarily supported by every STA and AS. In a typical scenario, the AS suggests EAP methods from strongest to weakest until a method is accepted by the STA. This allows the selection of the strongest method supported by both parties. In the case of a downgrade attack, the malicious AS suggests the methods from weakest to strongest in order to use the weakest possible one. If EAP-TTLS with PAP is used, the attacker receives the user credentials in plaintext and no further action is required. But if a challenge-response method is used, the attacker performs an offline dictionary attack using the challenge and the received response. This attack succeeds only if the victim's password appears in the dictionary of likely passwords used by the attacker. Since the adversary's purpose is to steal the user credentials, he does not need to complete the authentication step successfully or to provide the victim with an Internet connection. He only needs the credentials in plaintext, or the challenge and the corresponding response.
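To make the offline dictionary attack concrete, the sketch below assumes the CHAP/EAP-MD5 response format MD5(Identifier || secret || challenge); the attacker tests an eavesdropped challenge/response pair against a wordlist.

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # CHAP / EAP-MD5 response: MD5(Identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def offline_dictionary_attack(identifier, challenge, response, wordlist):
    # Succeeds only if the victim's password appears in the wordlist
    for word in wordlist:
        if chap_response(identifier, word.encode(), challenge) == response:
            return word
    return None
```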
### 4.3 Data relay: Man-In-The-Middle (MITM)
EAP-TLS is not secure against MITM; if the victim trusts the rogue AP and accepts its certificate, the adversary accepts the victim's certificate and completes the mutual authentication. In this case, the victim's data will be relayed through the rogue AP. Similarly, the other TLS-based EAP methods do not prevent the victim from accepting the attacker's certificate. Once the certificate is accepted, the success of the mutual authentication depends on the security of the inner method. Hence, if the adversary manages to negotiate an inner method that allows one-way authentication (e.g. PAP, CHAP, EAP-MD5 and EAP-MSCHAP), he completes the authentication step easily.
If the victim refuses all weak inner methods and only accepts a mutual authentication method (e.g. EAP-MSCHAPv2), the attacker must prove knowledge of the password and reply correctly to the challenge. This makes the authentication more difficult, but possible if a legitimate AP is visible to the attacker. In this case, the attacker impersonates the STA to the AS and establishes a second tunnel with the AS. Then, he negotiates the same authentication method. Upon the reception of the AS challenge, the attacker sends this challenge to the victim. Then, he forwards the victim's response and challenge to the AS. Finally, he forwards the AS response to the victim. This allows the attacker to complete the mutual authentication with both the STA and the AS. Subsequently, the victim and the rogue AP derive the same session keys and the victim's data are relayed through the evil twin, as desired by the attacker.
In many environments (e.g. restaurants, cafes, libraries, etc.), the network is available for customers and is protected using the same credentials for all the users. Therefore, the attacker is able to obtain the shared password and to complete the mutual authentication of the inner method without the need to interact with the legitimate AP. In this case, the only protection against the evil twin attack is the verification of the AS certificate.
### 4.4 Summary
To summarize, the evil twin attacks against 802.1X are possible when the victim does not verify the AS certificate and accepts any one presented. If the user is able to check the certificates and associates only with authorized APs, no evil twin attack is possible. Table 1 illustrates the security level of the most used EAP methods and the possible attacks to attain the adversary's objectives. For a given inner method, we consider that this method is selected following the downgrade attack and is the weakest possible method that the attacker can negotiate.
| Main EAP method | Inner method | Credential theft | Data relay: MITM |
|---|---|---|---|
| EAP-TLS | null | Impossible | Easy |
| EAP-TTLS | PAP | Easy | Easy |
| EAP-TTLS | CHAP, MS-CHAP | Possible: offline dictionary attack | Easy |
| EAP-TTLS | MS-CHAPv2 | Possible: offline dictionary attack | Possible: requires legitimate AP visibility |
| PEAP | EAP-MD5, EAP-MSCHAP | Possible: offline dictionary attack | Easy |
| PEAP | EAP-MSCHAPv2 | Possible: offline dictionary attack | Possible: requires legitimate AP visibility |

Table 1: Security level of the most used EAP methods
## 5 Robust Certificate Management System
In this section, we introduce our approach to protect WLANs against any type of evil twin attack. Our solution is called Robust Certificate Management System (RCMS) and is defined for 802.1X authentication. It allows the user device to easily and precisely check the AS certificate. Therefore, RCMS only accepts legitimate certificates and associates with authorized networks. It rejects any authentication with rogue APs thanks to a new code called "**verification code**". This code allows the verification of the server certificate and the authentication with legitimate networks. **RCMS is suitable for all types of credentials**, but we mainly focus on the case of username/password pairs which are widely used with 802.1X. Upon the first association to an SSID, the user must provide 3 values instead of 2: username, password and the verification code. If the code is valid, the network is trusted and is added to the list of trusted networks. For subsequent associations to trusted APs, the code is not requested unless the public key of the root certificate is modified.
To successfully authenticate the servers, RCMS maintains the certificates of the root CA instead of the AS certificates. When the STA receives a new AS certificate, it checks the root certificate and accepts the AS certificate if the root CA is trusted. This allows large networks to use multiple servers with different certificates sharing the same root certificate. In the case of a small network having a single AS and a self-signed certificate, the root certificate is the AS certificate itself. Therefore, **our design is suitable for both small and large networks**. We note that the root CA does not need to be public, as a public CA incurs additional fees without any improved security. It is more practical (i.e. free and more secure) to use a private CA, i.e. a self-signed certificate that is used directly to sign the AS certificates or to sign intermediate CA certificates.
In addition, RCMS associates a single root certificate to an SSID. Therefore, a trusted CA is only trusted for the corresponding SSID. Since certificates may be renewed or updated, RCMS is able to update the stored root certificate seamlessly as long as the public key has not changed. But if the public key of the trusted CA is modified, the user must provide a new code to check the AS certificate. We note that updating the certificate does not require the modification of its public key unless the private key is compromised. Moreover, the root certificate is generally valid for many years (typically 3 to 20 years). Hence, the **verification code is rarely requested after the very first association to a WLAN**.
### Verification code calculation
The verification code is used to check the AS certificate. It can be calculated differently:
1. **The code is derived either from the AS certificate or from the root certificate**: These are two possible options. In both cases, RCMS saves the root certificate as the trusted CA upon the first successful verification. If the code is derived from the AS certificate, the following constraint must be satisfied: the user must authenticate the first time with the AS for which the code is generated. To get rid of this constraint, we can derive the code from the root certificate. This second option is more flexible and allows the first authentication to occur with any AS of the ESS.
2. **The code is calculated either based on the entire certificate or based on the public key**: If the code is calculated based on the entire certificate, a particular authentication failure may occur in the following case: the code is calculated and then the certificate is updated before the first connection of the user. In this case, the authentication fails and the user must obtain a new code. We note that this is a particular case since a certificate is not modified frequently. However, we can avoid this particular case if we calculate the code based on the public key. In fact, the public key does not change during updates and renewals, unless the corresponding private key is compromised.
3. **The code is calculated using either a hash or a keyed-hash function**: It is possible to use a plain hash function to calculate the code. In this case, the code is common to all users and is easily accessible to an adversary. Therefore, the code must be long enough to be resistant to a brute force attack. The second option is to calculate the code using a keyed-hash function with the user password as the key. The code is then not a common value and varies among users. This allows short codes to be more resistant to brute force attacks; since the adversary knows neither the code nor the password, he cannot generate asymmetric keys whose public key has a keyed-hash identical to the verification code.
Although multiple options are possible to calculate the code, we choose the most flexible and secure one. Therefore, **we calculate the code using the keyed-hash of the public key of the root CA**. We suggest the use of HMAC-SHA256, which has an output of 32 bytes. Since this is a very long value for the user, we suggest using the first 6 bytes (i.e. 48 bits) as our code. We believe that this length is long enough to authenticate the root CA and is convenient for the user. In addition, we convert the binary value into base64 and obtain a string of 8 characters, as follows:
\[\text{Code}=\text{base64}(\text{first48}(\text{HMAC-SHA256}(\text{password},\text{CA\_PubKey}))) \tag{1}\]
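A minimal sketch of Equation (1) follows, assuming the root CA public key is available as raw (e.g. DER-encoded) bytes; note that 6 bytes encode to exactly 8 base64 characters.

```python
import base64
import hashlib
import hmac

def verification_code(password: bytes, ca_public_key: bytes) -> str:
    # HMAC-SHA256 keyed with the user password, computed over the root CA public key
    mac = hmac.new(password, ca_public_key, hashlib.sha256).digest()
    # Keep the first 6 bytes (48 bits); base64 of 6 bytes is an 8-character string
    return base64.b64encode(mac[:6]).decode("ascii")
```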
We designed RCMS to accept any AS certificate issued by the trusted root CA. Therefore, the network administrators have two options when the private key of any AS is compromised. The first option is to provide an online revocation list which allows the users to check for revoked certificates. This is practical for large networks with many permanent users, such as Eduroam. We note that the management of the certificate revocation list is beyond the scope of this paper. The second option is to update the public key of the root CA. This forces the users to contact the administrator and to request new codes.
RCMS maintains a list of SSIDs and the corresponding trusted root CA (i.e. list of SSID/CA). This list only contains SSIDs employing 802.1X authentication. In this list, the SSID must be unique but not the root CA. This means that an SSID must have a unique root CA, but a root CA may be associated to multiple SSIDs. This allows the network administrators to use the same root CA with different SSIDs having similar meanings, such as "_University of Monastir 1_" and "_University of Monastir 2_". The list is updated in the following cases:
1. First authentication with an SSID: a new entry is added to the list upon the successful certificate check using the verification code.
2. The root certificate has changed, including the public key: if the user has the right verification code, the existing entry is updated. Otherwise, this is a possible evil twin attack and the authentication must be canceled. If this AP is legitimate, the user must contact the administrator to obtain the new verification code.
3. The root certificate is modified but the public key has not changed: the root certificate is seamlessly updated in the list of SSID/CA and no verification code is requested.
Table 2 depicts an example of the list of SSID/CA. It includes the columns SSID, the public key of the trusted CA, the root certificate fingerprint (which allows the detection of any update in the certificate) and the root certificate path (the storage path of the certificate on the user device). It is possible to include additional columns in this list in order to provide more details, such as the date of first association, the update history, etc. Similar to the operating mode of current user devices, it is necessary to store the user credentials in order to provide seamless authentication to trusted networks. These credentials can be saved either in this list or in a separate encrypted list.
### AS certificate check
The successful check of the verification code means that the root CA is trustworthy. Therefore, the user can accept any certificate issued by this CA. To perform the code verification, the STA must receive the entire certificate chain. This chain includes the AS certificate, the root certificate and any intermediate certificate. It is received during the establishment of the TLS tunnel within the "Certificate" field of the TLS message, as depicted in message 9 of Figure 3. Upon the reception of the certificate chain, RCMS checks the validity of this chain by inspecting the different issuers. Then, RCMS checks if the public key of the root CA exists in the list of SSID/CA and corresponds to the current SSID. If yes, the code verification is not required since the root CA is already trusted. Otherwise, an intermediate code is calculated based on the user password and the public key of the root CA, according to Equation 1. If this intermediate code is identical to the verification code, the root CA is trusted and the list of SSID/CA is updated. Otherwise, the certificate verification fails and the 802.1X authentication is canceled. To summarize, **the AS certificate is accepted only if the certificate chain is valid and the root CA is trusted**.
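Putting the pieces together, the check can be sketched as below, reusing `verification_code` from the previous sketch; `chain_is_valid` and the `public_key_bytes` accessor are hypothetical helpers standing in for a real X.509 library.

```python
def check_as_certificate(ssid, chain, password, user_code, trusted):
    # 'chain' lists the AS certificate first and the root CA last;
    # 'trusted' maps SSID -> public key of the trusted root CA
    if not chain_is_valid(chain):            # hypothetical: verify issuers and signatures
        return False
    root_pub = chain[-1].public_key_bytes    # hypothetical accessor
    if trusted.get(ssid) == root_pub:
        return True                          # root CA already trusted for this SSID
    if verification_code(password, root_pub) == user_code:
        trusted[ssid] = root_pub             # first association: record the trusted root CA
        return True
    return False                             # check failed: cancel the 802.1X authentication
```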
### Operating mode
On the network side, the administrator must configure the network to use 802.1X authentication. For small networks composed of one AP, it is possible to use the internal RADIUS server as the AS. This allows the wireless router to operate as both AP and AS without the need for an additional machine. For an ESS composed of several APs, a single AS is generally enough. If only one AS is used, a self-signed certificate is sufficient. For large networks requiring multiple authentication servers, the administrator should create a private CA with a self-signed certificate to be used as the root CA of the different AS certificates. For very large networks, it is possible to create additional intermediate CAs. Furthermore, the administrator must provide every AS with the entire certificate chain. In the case of Freeradius [46], the certificate chain is typically located in /etc/freeradius/certs.
When a new user arrives, he must contact the network administrator to obtain the connection credentials and the verification code. The credentials can be generated automatically using software and printed as part of a document (e.g. student subscription document, hotel bill, cafe receipt, etc.) or transmitted by phone to the user. It is also possible to allow the user to choose his username/password and to enter them manually into a software interface. In both cases (automatic or manual generation), the software must use the root certificate as input and must generate the verification code from the user password and the public key of the root CA, as explained in Equation (1).
The operating mode of RCMS is illustrated in Figure 4. At the beginning, the STA starts the authentication (OSA or SAE) and the association steps with a given SSID, say SSID1. Then RCMS checks whether this SSID uses 802.1X authentication or not. If not, there are two possible results: 1) SSID1 does not exist in the list of SSID/CA: in this case, 802.1X authentication is not required (as shown in Figure 1) and the user authentication is successful. 2) SSID1 exists in the list of SSID/CA: in this case, the authentication is rejected. In fact, this scenario corresponds to a WLAN that had used 802.1X authentication and now uses another security policy. Even if this scenario is legitimate, it is not typical. However, it may hide an evil twin attack where the attacker impersonates the SSID and offers open access to his WLAN. Once the victim is connected, he is redirected to a fake captive portal [47] requesting the user credentials of the legitimate SSID1. For security purposes, RCMS rejects the authentication with an SSID that replaces the 802.1X authentication with another authentication method. Two additional output results of RCMS are depicted in bold in parallelograms 3 and 4. The third output is "Authentication Canceled" and corresponds to the user canceling the authentication. The last output is "Authentication Success".
| SSID | Public key of root CA | Root certificate fingerprint | Root certificate path |
|---|---|---|---|
| Univ_Monastir | a5f71fe8946c... | 487cd1h8Sc3... | rootca/univ_m.pem |
| Hotel SBM | 37492ne752b... | 3ch5d0e5cd0d... | rootca/hbsbm.pem |
| ... | ... | ... | ... |

Table 2: List of SSIDs and the corresponding trusted CA (SSID/CA)
Figure 4: User Authentication using RCMS
RCMS requires the user input in 5 situations illustrated in 5 grey rectangles numbered from 1 to 5. The first case occurs when the user associates with the WLAN for the first time. Therefore, he must provide his credentials and the verification code. The second case occurs when RCMS notices that the public key of the root CA has changed compared to the stored value. Since RCMS already has the user credentials, it only requests the new code. The third and fourth scenarios occur when the root certificate was validated during a previous authentication but RCMS has no stored credentials. They may occur when the user provides a wrong username but correct password and verification code during the first authentication. This allows RCMS to add the SSID and the root CA to the list of SSID/CA but does not allow the successful authentication and the storage of the credentials. In the third case, RCMS requests the user credentials and the verification code since the public key of the root CA has changed. However, the verification code is not requested in case 4. The fifth case occurs when the credentials are incorrect, have been renewed or have expired.
## 6 Conclusion
In this paper, we studied evil twin attacks and showed that the adversary is able to impersonate legitimate networks easily. We explained that the only reliable way to detect and avoid rogue APs is to use digital certificates. Since 802.1X authentication does not define any practical technique to verify the AS certificates, we defined RCMS to identify legitimate networks and to prevent evil twin attacks. RCMS introduces a new verification code which allows the user device to check the AS certificates. Therefore, our proposal allows an easy and practical verification of the network identity. In addition, RCMS runs entirely on the user device and is perfectly compliant with the IEEE 802.11 standard. Thus, it only requires software updates to protect the user's privacy.
## Funding Declaration
This work did not receive any funding from any organization.
## Conflict of Interest
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Author Contribution
Yousri Daldoul wrote and reviewed the manuscript.
|
2303.03518 | Computer-assisted validation of the existence of periodic orbit in the
Brusselator system | We investigate the Brusselator system with diffusion and Dirichlet boundary
conditions on one dimensional space interval. Our proof demonstrates that, for
certain parameter values, a periodic orbit exists. This proof is
computer-assisted and rooted in the rigorous integration of partial
differential equations. Additionally, we present the evidence of the occurrence
of period-doubling bifurcation. | Jakub Banaśkiewicz, Piotr Kalita, Piotr Zgliczyński | 2023-03-06T22:04:02Z | http://arxiv.org/abs/2303.03518v2 | # Computer-assisted validation of the existence of periodic orbit in the Brusselator system
###### Abstract.
We investigate the Brusselator system with diffusion and Dirichlet boundary conditions on one dimensional space interval. Our proof demonstrates that, for certain parameter values, a periodic orbit exists. This proof is computer-assisted and rooted in the rigorous integration of partial differential equations. Additionally, we present the evidence of the occurrence of period-doubling bifurcation.
The work of all three authors was supported by National Science Center (NCN) of Poland under project No. UMO-2016/22/A/ST1/00077. The research of JB for this publication has been supported by a grant from the Priority Research Area (name of the PRA) under the Strategic Programme Excellence Initiative at Jagiellonian University. The work of PK was also partially supported by NCN of Poland under project No. DEC-2017/25/B/ST1/00302 and by Ministerio de Ciencia e Innovacion of Kingdom of Spain under project No. PID2021-122991NB-C21.
If we drop the diffusion and the dependence on the variable \(x\) in the term \(A\sin(x)\) from the system (1.1), we obtain the following planar ODE
\[\begin{cases}u^{\prime}=-(B+1)u+u^{2}v+A\ \ \text{for}\ \ t\in\mathbb{R},\\ v^{\prime}=Bu-u^{2}v\ \ \text{for}\ \ t\in\mathbb{R}.\end{cases} \tag{1.2}\]
The planarity of the above system implies that the invariant sets consist of fixed points, periodic orbits, and heteroclinic connections between them. In fact, it is known that in (1.2) there can exist an attracting periodic orbit which arises from a Hopf bifurcation [8, Theorem 3]. The analytical results about the Brusselator system of PDEs with diffusion are limited. In the article [21] the existence of the global attractor for the Brusselator system on a 3-dimensional domain is proved. While this global attractor is known to exist, the question about its structure, that is, the understanding of the problem dynamics, remains unanswered. Some partial analytical results about the dynamics are available for Neumann boundary conditions, where the homogeneous steady state is known from the solutions of the corresponding ODE (1.2). In this case, one can linearize the system of PDEs in its vicinity. Such results are available for example in [5] and [18]. The case with Dirichlet conditions, which we consider, appears to be much more challenging. In [3], the stability analysis of the steady state was carried out for Dirichlet non-homogeneous conditions where a nonzero constant-in-space steady state existed. However, this type of analysis is not possible for the problem we are dealing with, as the constant-in-space steady state does not exist (it would have to be equal to zero). Based on numerical observations, the system (1.1) possesses a periodic orbit for some range of parameters \(d_{1},d_{2},A,B\). We anticipate that this periodic orbit arises from a mechanism similar to the one known for the planar ODE (1.2), namely through a Hopf bifurcation. We deal with the apparent impossibility of obtaining purely analytical results on the periodic orbit existence by using computer-assisted techniques. Specifically, we perform a computer-assisted proof of the following theorem.
**Theorem 1.1**.: _For the parameters \(d_{1}=0.2\), \(d_{2}=0.02\), \(A=1\), and \(B=2\), the Brusselator system has a time-periodic orbit._
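As a purely numerical illustration of the planar mechanism behind this result (it plays no role in the proofs), one can integrate (1.2) with an off-the-shelf solver and observe the attracting periodic orbit past the Hopf bifurcation at \(B=1+A^{2}\); the sketch below uses SciPy with the illustrative values \(A=1\), \(B=2.5\).

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B = 1.0, 2.5  # B > 1 + A**2, so the fixed point (A, B/A) is unstable

def brusselator(t, y):
    u, v = y
    return [-(B + 1.0) * u + u * u * v + A, B * u - u * u * v]

# After a transient the trajectory settles onto the attracting limit cycle
sol = solve_ivp(brusselator, (0.0, 100.0), [1.0, 1.0], max_step=0.01)
print(sol.y[:, -1])  # approximately a point on the periodic orbit
```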
Figure 1. The numerical approximation of the time-periodic orbit obtained in Theorem 1.1. Blue and orange surfaces correspond to plots of \(u(t,x)\) and \(v(t,x)\) respectively.
The existence of a periodic orbit for the Brusselator, together with a proof of the Hopf bifurcation, was also obtained recently with computer-assisted techniques in [1]. The approach employed there is based on the Newton-Kantorovich method. The author demonstrates that a specific Newton-type operator has a fixed point, enabling him to establish the existence of a periodic orbit. This class of methods has been successfully applied to many problems governed by PDEs (see for example [2, 4, 16, 17]). The monograph [15] contains a detailed description and an up-to-date overview of these methods together with numerous applications. In our proof of Theorem 1.1 we use a different method. Namely, our approach is based on an algorithm of rigorous forward integration of dissipative systems. This method was developed in the articles [23, 24] and applied there to the Kuramoto-Sivashinsky equation. As in the above works, we express the solution in terms of Fourier series. Specifically, as the solutions \(u(t,x)\) and \(v(t,x)\) satisfy the Dirichlet boundary conditions on \([0,\pi]\), they are represented as the sums of sine Fourier series
\[u(t,x)=\sum_{i=1}^{\infty}u_{i}(t)\sin(ix),\quad v(t,x)=\sum_{i=1}^{\infty}v_{i }(t)\sin(ix). \tag{1.3}\]
The ability to work with all Fourier coefficients facilitates a straightforward and efficient integration of the Brusselator system. It is noteworthy, however, that in Theorem 4.2 we obtain the same periodic orbit as in [1], cross-validating the accuracy of the two distinct methods. Other approaches to representing the solution are also possible. In the work [10], the solution to the Burgers equation was represented in a first-order finite element basis together with estimates on the norms in Sobolev spaces.
The novelty of the present paper is the proof of the existence of a periodic orbit for the Brusselator system using rigorous forward integration techniques. This integration algorithm is, to our knowledge, applied here for the first time to a system of PDEs: in our case, two mutually coupled nonlinear parabolic PDEs with polynomials of order 3 in the nonlinear term. We underline that the rigorous integration scheme which we use is the same as in [23, 24], where it was used for the Kuramoto-Sivashinsky equation with odd-periodic boundary conditions. We show its applicability to a problem with a higher degree of nonlinearity: in our case the two equations of the system are coupled through the cubic term, while the nonlinearity in the Kuramoto-Sivashinsky equation is quadratic. The key concept which makes it possible for the integration scheme to work is the same in our case as in [23, 24]. Namely, the dissipativity of the leading linear operator together with appropriate a priori estimates for the nonlinearity, which is of lower order, allows the linear terms to dominate over the nonlinear ones at appropriately high modes in the Fourier expansion. This allows us to treat the tail of the Fourier expansion uniformly, by controlling a polynomial decay of the coefficients in every time step, cf. Lemma 2.7. Our techniques hold potential for wider applicability. To this end, in Section 5, we establish estimates on the convolution of sine and cosine Fourier series, which can be utilized to calculate nonlinear terms for a general dissipative system with polynomial nonlinearities in one spatial dimension. These estimates are a crucial component of rigorous integration algorithms for such systems. A further innovation is that in our theoretical results, which ensure the validity of the algorithm, we don't require the use of Galerkin projections of the solution, as seen in Lemma 2.6 below. Instead, we work directly with the solution of the PDE system. This simplifies our assumptions and makes the results more accessible to the dissipative PDE community, as shown in (A1)-(A2) and (B1)-(B3) below. Additionally, we examine the limitations of our algorithm for PDEs with nonlinearities that do not meet the compatibility condition on the boundary. We illustrate this issue by using the problem governed by the diffusive logistic equation (2.16) as an example.
In Section 4, we observe that for a sufficiently large parameter \(B\), the system exhibits slow-fast behavior, as expected for the Brusselator system and known in the ODE case. This effect is stronger for higher Fourier modes. To demonstrate this, we establish the existence of periodic orbits for the parameters \(d_{1}=0.2,\ d_{2}=0.02,\ A=1,\ B=2+\frac{i}{10}\) for \(i\in\{0,\ldots,11\}\). Figures 3 and 4 show that some of these orbits exhibit slow-fast behavior.
In [1] Arioli observed a period doubling bifurcation, a phenomenon that cannot occur in the planar ODE (1.2). Thus, the dynamics of (1.1) is expected to be more complicated than that of the planar ODE (1.2). Although we do not rigorously prove the bifurcation, we also show that the minimal period of the found orbits approximately doubles with a small increase in the parameter \(B\), as seen in Theorem 4.3.
Other nontrivial dynamics of the Brusselator system, such as the existence of 2-dimensional attracting tori and chaos, were numerically investigated in [6]. Our numerical observations support the existence of 2-dimensional attracting tori for small diffusion parameters, although a rigorous proof of their existence remains an open problem. There are many avenues for further research on this topic. Numerical simulations indicate that the periodic orbit established in Theorem 1.1 is attracting. However, providing a rigorous computer-assisted proof of this observation is challenging, as it requires a rigorous \(C^{1}\) computation, i.e. the integration of the variational equation for the Brusselator system.
The structure of the article is as follows. In Section 2 we describe the algorithm of rigorous integration for dissipative equations. In Section 3 we describe the computer-assisted proof of Theorem 1.1. In Section 3.2, we address the algorithm for computing the Poincaré map and prove Theorem 3.2, which pertains to the fixed point of this map corresponding to the periodic orbit of the system. In the remaining part of Section 3 we describe the validation of the assumptions of this theorem for the Brusselator system. Section 4 contains numerical and rigorous results for various parameters of the Brusselator system. Finally, in Section 5, we provide results on the algebra of infinite series, which are utilized in the algorithms.
## 2. Algorithm of integration
In this section we present our version of the technique of integration for infinite-dimensional dissipative systems proposed in [23], where it was used for the Kuramoto-Sivashinsky equation. This method relies on the rigorous integration of a differential inclusion and can be used for many dissipative problems in mathematical physics. We discuss it in an abstract setting, but some details will be specified for the Brusselator system. Another approach, based on automatic differentiation, is presented in [20]. We summarize the content of this section. We start, in Section 2.1, with the formulation of the abstract problem for which the algorithm can be applied, and in Section 2.2 we discuss its realization for the Brusselator PDEs. In Section 2.3 we briefly describe the goal of the algorithm and its main steps. The sets of states and their representation are discussed in Section 2.4, and the way to compute the nonlinearities present in the system on those sets is described in Section 2.5. In Section 2.6, we outline the steps for determining the enclosure and provide a justification for its correctness. The algorithm of evolution of sets is discussed in Section 2.7.
### Abstract Problem
Let \(H\) be a real Hilbert space with the scalar product \(\left\langle.,.\right\rangle\) and \(Y\) be a Banach space which embeds continuously and densely in \(H\), that is \(\left\|x\right\|_{H}\leq C\left\|x\right\|_{Y}\) for \(x\in Y.\) We assume that \(\{e_{1},e_{2},\ldots\}\) is an orthogonal basis of \(H\) such that \(e_{i}\in Y\) for every \(i\in\mathbb{N}\).
For a given \(x\in H\) by \(x_{i}\) we will denote the Fourier coefficient \(x_{i}=\frac{\left\langle e_{i},x\right\rangle}{\left\langle e_{i},e_{i} \right\rangle}.\) We consider the following problem
\[\begin{cases}\frac{d}{dt}x(t)=Lx(t)+f(x(t))=F(x(t)),\\ x(0)=x^{0},\end{cases} \tag{2.1}\]
where \(L\) is a diagonal operator such that \(Le_{i}=\lambda_{i}e_{i}\) with \(\lambda_{i}\in\mathbb{R}\setminus\{0\}\), which generates a \(C^{0}\) semigroup \(e^{tL}:Y\to Y\). We have \(e^{Lt}e_{i}=e^{\lambda_{i}t}e_{i}.\) We assume that \(x^{0}\in Y\), and \(f:Y\to Y\) is a continuous mapping. For a given \(x\in Y\), we will use the notation \(f_{i}(x)\) and \(F_{i}(x)\) for \(\frac{\left\langle f(x),e_{i}\right\rangle}{\left\langle e_{i},e_{i}\right\rangle}\) and \(f_{i}(x)+\lambda_{i}x_{i}\), respectively.
The following lemma provides the criteria for local in time existence and uniqueness of mild solutions to problem (2.1).
**Lemma 2.1**.: _Assume that_
* **(A1)** _For some_ \(C_{1}>0\) _there holds_ \(\left\|e^{tL}\right\|_{Y}\leq e^{C_{1}t}\)_._
* **(A2)** _For every_ \(R>0\) _there exists_ \(C(R)>0\) _such that for every_ \(u,v\in Y\) _with_ \(\left\|u\right\|_{Y},\left\|v\right\|_{Y}\leq R\) _there holds_ \(\left\|f(u)-f(v)\right\|_{Y}\leq C(R)\left\|u-v\right\|_{Y}\)_._
_Then for every initial data \(x^{0}\in Y\) there exists the unique local in time solution to problem (2.1) understood in the following sense_
\[x(t)=e^{Lt}x^{0}+\int_{0}^{t}e^{L(t-s)}f(x(s))\,ds, \tag{2.2}\]
_where the equality in \(Y\) is supposed to hold for every \(t\in[0,T]\), where \(T\) may depend on \(x_{0}\)._
Instead of proving Lemma 2.1, we will show a more general result which implies it. Namely, we can replace (A1) and (A2) with more general conditions. The following result generalises Lemma 2.1 to the case when \(f\) is a continuous map from \(Y\) to \(Y^{1}\), where \(Y^{1}\) is a Banach space such that \(Y\) is continuously embedded in \(Y^{1}\), and \(Y^{1}\) is continuously embedded in \(H.\) This more general result is useful, for example, if the nonlinear term in the problem depends not only on the value of the solution but also on the values of its spatial derivatives, which is the case for the Burgers or Kuramoto-Sivashinsky equations.
**Lemma 2.2**.: _Assume that_
* **(B1)** _For some_ \(C_{1},C_{2}>0\) _and every_ \(t\in[0,\infty)\) _there holds_ \(\left\|e^{tL}\right\|_{Y}\leq C_{1}e^{C_{2}t}\)_._
* **(B2)** _For every_ \(R>0\) _there exists_ \(C(R)>0\) _such that for every_ \(u,v\in Y\) _with_ \(\left\|u\right\|_{Y},\left\|v\right\|_{Y}\leq R\) _there holds_ \(\left\|f(u)-f(v)\right\|_{Y^{1}}\leq C(R)\left\|u-v\right\|_{Y}\)_._
* **(B3)** _The semigroup_ \(e^{tL}\) _can be extended to a_ \(C^{0}\) _semigroup on_ \(Y^{1}\)_. There exist constants_ \(C_{3},C_{4}>0\) _and_ \(\gamma\in[0,1)\) _such that for every_ \(z\in Y^{1}\) _and_ \(t\in(0,\infty)\) _there holds_ \(\left\|e^{tL}z\right\|_{Y}\leq C_{4}e^{C_{3}t}\frac{1}{t^{\gamma}}\left\|z \right\|_{Y^{1}}\)_._
_Then, for every initial data \(x^{0}\in Y\), there exists a unique time-local solution to problem (2.1) in the following sense:_
\[x(t)=e^{Lt}x^{0}+\int_{0}^{t}e^{L(t-s)}f(x(s))\,ds,\]
_where the equality holds in \(Y\) for all \(t\in[0,T]\), where \(T\) may depend on \(x^{0}\)._
Proof.: For a given \(x^{0}\in Y\) and \(\delta>0\) consider the set
\[S_{\delta}:=\{y\in C([0,\delta];Y)\colon y(0)=x^{0}\text{ and for every }\ t\in[0,\delta]\text{ we have }\left\|y(t)\right\|_{Y}\leq 1+C_{1}\left\|x^{0} \right\|_{Y}\},\]
and define the mapping \(T:C([0,\delta];Y)\to C([0,\delta];Y)\) by the formula
\[T(y)(t)=e^{Lt}x^{0}+\int_{0}^{t}e^{L(t-s)}f(y(s))ds.\]
The space \(C([0,\delta];Y)\) is equipped with the norm \(\sup_{t\in[0,\delta]}\left\|y(t)\right\|_{Y}\). We have
\[\left\|T(y)(t)\right\|_{Y} \leq C_{1}e^{C_{2}t}\left\|x^{0}\right\|_{Y}+\int_{0}^{t}\left\|e^ {L(t-s)}f(y(s))\right\|_{Y}ds\] \[\leq C_{1}e^{C_{2}t}\left\|x^{0}\right\|_{Y}+C_{4}\int_{0}^{t}e^{ (t-s)C_{3}}\frac{1}{(t-s)^{\gamma}}\left\|f(y(s))\right\|_{Y^{1}}ds\] \[\leq C_{1}e^{C_{2}t}\left\|x^{0}\right\|_{Y}+C_{4}e^{tC_{3}}\int_ {0}^{t}\frac{1}{(t-s)^{\gamma}}ds\,\sup_{s\in[0,t]}\left\|f(y(s))\right\|_{Y^{1}}\] \[\leq C_{1}e^{C_{2}t}\left\|x^{0}\right\|_{Y}+\frac{t^{1-\gamma}} {1-\gamma}C_{4}e^{tC_{3}}\left(C(R)\sup_{s\in[0,t]}\left\|y(s)\right\|_{Y}+ \left\|f(0)\right\|_{Y^{1}}\right),\]
where \(R=1+C_{1}\left\|x^{0}\right\|_{Y}.\) If we pick \(\delta>0\) such that
\[e^{C_{2}\delta}\leq 1+\frac{1}{2C_{1}\left\|x^{0}\right\|_{Y}},\quad\text{and} \quad\delta^{1-\gamma}e^{\delta C_{3}}\leq\frac{1-\gamma}{2C_{4}\left(C(R)R+ \left\|f(0)\right\|_{Y^{1}}\right)}\]
we have that \(T(S_{\delta})\subset S_{\delta}.\) We have also
\[\left\|T(y_{1})(t)-T(y_{2})(t)\right\|_{Y}\leq C_{4}e^{tC_{3}}C(R)\frac{t^{1- \gamma}}{1-\gamma}\sup_{s\in[0,t]}\left\|y_{1}(s)-y_{2}(s)\right\|_{Y}.\]
If we take \(\delta\) such that
\[\delta^{1-\gamma}e^{\delta C_{3}}\leq\frac{1-\gamma}{2C_{4}C(R)},\]
the mapping \(T\) is also a contraction on the set \(S_{\delta}.\) From the Banach fixed point theorem we conclude that \(T\) has a unique fixed point, which is a solution to (2.1).
**Lemma 2.3**.: _Assume \((A1)-(A2)\) or \((B1)-(B3)\). For every \(x^{0}\in Y\) there exist \(t_{max}(x^{0})\in(0,\infty]\) such that for the unique solution \(x:[0,t_{max}(x^{0}))\to Y\) of (2.1) the interval \([0,t_{max}(x^{0}))\) is maximal interval of existence of this solution. Additionally if we consider the set_
\[\Omega:=\left\{(t,x)\in\mathbb{R}^{+}\times Y:t\in[0,t_{max}(x))\right\},\]
_then the function \(\varphi:\Omega\to Y\) given by formula_
\[\varphi(t,x^{0})=e^{Lt}x^{0}+\int_{0}^{t}e^{L(t-s)}f(x(s))\,ds. \tag{2.3}\]
_defines a local semigroup._
**Remark 2.1**.: Let \(x:[0,\tau]\to Y\) satisfy (2.2). Assume \((A1)-(A2)\) or \((B1)-(B3)\). Then for every \(i\in\mathbb{N}\), the Fourier coefficients \(x_{i}\) satisfy the following non-autonomous ODE
\[\frac{dx_{i}}{dt}(t)=\lambda_{i}x_{i}(t)+f_{i}(x(t)),\quad\text{for every $t\in(0,\tau)$.}\]
Proof.: If \(x(t)\) satisfies (2.2) then for every \(i\in\mathbb{N}\) we have
\[x_{i}(t)=e^{t\lambda_{i}}x_{i}^{0}+\int_{0}^{t}e^{(t-s)\lambda_{i}}f_{i}(x(s) )ds. \tag{2.4}\]
Observe that if \((A1)-(A2)\) or \((B1)-(B3)\) hold, then for every \(i\in\mathbb{N}\) the function \(f_{i}:Y\to\mathbb{R}\) is locally Lipschitz. Hence, in the formula (2.4) the function under the integral is continuous. Therefore we can differentiate this formula and obtain the ODE for the \(i\)-th Fourier coefficient.
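The formula (2.4) is precisely what the rigorous integrator exploits: if on a step \([0,h]\) one has an a priori bound \(f_{i}(x(s))\in[f^{-},f^{+}]\), the variation-of-constants formula encloses \(x_{i}(h)\). A non-rigorous floating-point sketch of this one-mode step is given below; an actual implementation performs the same computation in interval arithmetic.

```python
import numpy as np

def coefficient_step(x0, lam, f_lo, f_hi, h):
    """Bounds on x(h) for x' = lam*x + f(t) with f(t) in [f_lo, f_hi] on [0, h].

    For constant c, x(h) = e^{lam h} x0 + c (e^{lam h} - 1)/lam, and the factor
    (e^{lam h} - 1)/lam is positive for every lam != 0, so the bounds are monotone in c.
    """
    g = np.expm1(lam * h) / lam
    base = np.exp(lam * h) * x0
    return base + g * f_lo, base + g * f_hi
```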
### The Brusselator system
We discuss how to represent the Brusselator system in the abstract framework presented in the previous section. We use the notation \(L^{2}\) for \(L^{2}(0,\pi)\) equipped with the norm \(\left\|u\right\|_{L^{2}}=\sqrt{\int_{0}^{\pi}u(x)^{2}\,dx}\) and \(C_{0}\) for \(\{u\in C([0,\pi])\,:\,\,u(0)=u(\pi)=0\}\) equipped with the norm \(\left\|u\right\|_{C_{0}}=\max_{x\in[0,\pi]}\{\left|u(x)\right|\}\), and we consider the following two product spaces: the Hilbert space
\[H=L^{2}\times L^{2}\ \ \text{with the norm}\ \ \left\|(u,v)\right\|_{H}^{2}=\left\|u \right\|_{L^{2}}^{2}+\left\|v\right\|_{L^{2}}^{2}\ \ \text{for}\ \ (u,v)\in H\]
and the Banach space
\[Y=C_{0}\times C_{0}\ \ \text{with the norm}\ \ \left\|(u,v)\right\|_{Y}=\max\{ \left\|u\right\|_{C_{0}},\left\|v\right\|_{C_{0}}\}\ \ \text{for}\ \ (u,v)\in Y.\]
In the space \(H\), the system of functions \(\{e_{k}\}_{k=1}^{\infty}\) defined by
\[e_{2k-1}=(\sin(kx),0)\ \ \ e_{2k}=(0,\sin(kx))\ \ \text{for}\ \ k\in\{1,2, \ldots\}\]
forms an orthogonal basis. For \(u,v\in L^{2}\) we denote by \(u_{k}\) and \(v_{k}\) the \(k\)-th coefficients in the Fourier expansion in the sine series, of \(u\) and \(v\) respectively. We define the operator
\[L:D(L)\subset Y\to Y\ \ \text{as}\ \ L(u,v)=(d_{1}u_{xx}-(B+1)u,d_{2}v_{xx}),\]
where \(D(L)=\{(u,v)\in H_{0}^{1}\times H_{0}^{1}\colon\ L(u,v)\in Y\}\). The operator \(L\) defines a \(C^{0}\) semigroup on \(Y\), denoted by \(e^{tL}\), cf. [9, Proposition 2.6.7 and Theorem 3.1.1]. Observe that \(Le_{2k-1}=-(d_{1}k^{2}+B+1)e_{2k-1}\) and \(Le_{2k}=-d_{2}k^{2}e_{2k}\), so \(\{e_{k}\}_{k=1}^{\infty}\) are the eigenfunctions of \(L.\) We define \(f(u,v)=(u^{2}v+A\sin(x),Bu-u^{2}v).\) We can write the Brusselator system (1.1) as the following abstract problem
\[\begin{cases}\frac{d}{dt}(u(t),v(t))=L(u(t),v(t))+f(u(t),v(t)),\\ (u(0),v(0))=(u^{0},v^{0}).\end{cases} \tag{2.5}\]
We apply Lemma 2.1 to the above system, which gives the following result.
**Theorem 2.4**.: _For every \((u^{0},v^{0})\in Y\) there exists a function \((u,v):[0,t_{max}(u^{0},v^{0}))\to Y\) which is the unique solution to (2.5), satisfying the Duhamel formula_
\[(u(t),v(t))=e^{Lt}(u^{0},v^{0})+\int_{0}^{t}e^{L(t-s)}f(u(s),v(s))ds.\]
Proof.: We demonstrate that for the Brusselator problem (2.5) assumptions (A1) and (A2) hold and we can use Lemma 2.1. Namely
\[\left\|e^{Lt}\right\|_{\mathcal{L}(Y;Y)}\leq 1,\ \text{for every}\ t\in[0,\infty),\]
which follows from the maximum principle for the heat equation. Furthermore
\[\left\|u^{2}v-\bar{u}^{2}\bar{v}\right\|_{C_{0}}\leq\left\|u\right\|_{C_{0}} ^{2}\left\|v-\bar{v}\right\|_{C_{0}}+\left\|u+\bar{u}\right\|_{C_{0}}\left\| \bar{v}\right\|_{C_{0}}\left\|u-\bar{u}\right\|_{C_{0}}\ \ \text{for}\ \ u,v,\bar{u},\bar{v}\in C_{0}.\]
So for every \(R>0\) there exist \(C(R)\) such that for every \(\left\|(u,v)\right\|_{Y},\left\|(\bar{u},\bar{v})\right\|_{Y}\leq R\)
\[\left\|f(u,v)-f(\bar{u},\bar{v})\right\|_{Y}\leq C(R)\left\|(u,v)-(\bar{u}, \bar{v})\right\|_{Y},\]
which concludes the proof.
From Remark 2.1 and the formula for expanding the expression \(u^{2}v\) in terms of sine Fourier series, we obtain the following lemma.
**Lemma 2.5**.: _Let \((u(t),v(t))\) solve the system (1.1). For every \(k\in\mathbb{N}\) there holds_
\[\begin{cases}\frac{d}{dt}u_{k}=-u_{k}(d_{1}k^{2}+1+B)+N(u,v)_{k}+A\delta_{1k}, \\ \frac{d}{dt}v_{k}=-v_{k}d_{2}k^{2}+u_{k}B-N(u,v)_{k},\end{cases} \tag{2.6}\]
_where_
\[N(u,v)_{k}=\frac{1}{4}\sum_{i_{1}+i_{2}+i_{3}=k}u_{|i_{1}|}u_{|i_{2}|}v_{|i_{3}|} \text{sgn}(-i_{1}i_{2}i_{3})\text{ with }i_{1},i_{2},i_{3}\in\mathbb{Z}\setminus\{0\}. \tag{2.7}\]
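The convolution (2.7) is straightforward to implement for finitely supported coefficients. Below is a minimal non-rigorous sketch (plain floating point, no interval arithmetic; the truncation level `n` and the quadrature helper `trapz` are assumptions of the sketch) that evaluates \(N(u,v)_{k}\) and checks it against the sine coefficients of \(u^{2}v\) obtained by numerical quadrature.

```python
# Non-rigorous sanity check of (2.7): N(u,v)_k equals the k-th sine
# coefficient of u^2 v for finite sine series u, v (modes 1..n).
import numpy as np

def N_coeff(k, u, v, n):
    # (2.7): sum over i1+i2+i3=k, nonzero integer indices, |i_j| <= n
    total = 0.0
    nz = [i for i in range(-n, n + 1) if i != 0]
    for i1 in nz:
        for i2 in nz:
            i3 = k - i1 - i2
            if i3 != 0 and abs(i3) <= n:
                total += u[abs(i1)] * u[abs(i2)] * v[abs(i3)] * np.sign(-i1 * i2 * i3)
    return total / 4.0

def trapz(y, x):
    # simple trapezoid rule, to stay independent of numpy version details
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

rng, n = np.random.default_rng(0), 5
u = np.concatenate(([0.0], rng.normal(size=n)))  # u[0] is unused padding
v = np.concatenate(([0.0], rng.normal(size=n)))
x = np.linspace(0.0, np.pi, 4001)
prod = sum(u[m] * np.sin(m * x) for m in range(1, n + 1)) ** 2 \
     * sum(v[m] * np.sin(m * x) for m in range(1, n + 1))
for k in range(1, 5):
    quad = (2.0 / np.pi) * trapz(prod * np.sin(k * x), x)
    print(k, round(quad, 8), round(N_coeff(k, u, v, n), 8))
```

For instance, for \(u=v=\sin(x)\) the formula reproduces \(\sin^{3}(x)=\frac{3}{4}\sin(x)-\frac{1}{4}\sin(3x)\), i.e. \(N_{1}=\frac{3}{4}\) and \(N_{3}=-\frac{1}{4}\).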
Finally, we present the result stating that the space of pairs \((u,v)\) whose even Fourier coefficients vanish is forward-invariant and corresponds to the space of functions that are symmetric with respect to the point \(x=\frac{\pi}{2}\).
**Proposition 2.2**.: _The space \(W:=\{(u,v)\in Y:u_{2i}=v_{2i}=0,\text{ for }i\in\mathbb{N}\}\) is invariant for system (1.1). Specifically, if \((u^{0},v^{0})\in W\) then \(u(t,\frac{\pi}{2}+x)=u(t,\frac{\pi}{2}-x)\) and \(v(t,\frac{\pi}{2}+x)=v(t,\frac{\pi}{2}-x)\) for every \(t\in[0,t_{max}(u^{0},v^{0}))\) and almost every \(x\in[0,\frac{\pi}{2}]\)._
**Remark 2.3**.: In the proof of Theorem 2.4 we verify that assumptions (A1) and (A2) hold for the Brusselator system. If the lower order nonlinearity depends on the derivatives of the unknown, then we need (B1)-(B3). Indeed, consider the Kuramoto-Sivashinsky equation
\[u_{t}=-\nu u_{xxxx}-u_{xx}+(u^{2})_{x}\ \ \text{for}\ \ (x,t)\in(0,\pi)\times(0, \infty),\]
with odd-periodic boundary conditions \(u(0,t)=u(\pi,t)=u_{xx}(0,t)=u_{xx}(\pi,t)=0\) studied in [23, 24]. We assume that the constant \(\nu\) is positive. The system \(\{\sin(kx)\}_{k=1}^{\infty}\) constitutes an orthogonal basis of \(L^{2}(0,\pi)\) consisting of eigenfunctions of the leading linear operator \(Lu=-\nu u_{xxxx}-u_{xx}\) with the eigenvalues \(\lambda_{k}=-\nu k^{4}+k^{2}\). To verify (B1)-(B3) we take \(H=L^{2}(0,\pi)\), \(Y^{1}=\{u\in H^{3}(0,\pi)\,:u(0)=u(\pi)=u_{xx}(0)=u_{xx}(\pi)=0\}\), and \(Y=H^{4}(0,\pi)\cap Y^{1}\). If \(u=\sum_{k=1}^{\infty}u_{k}\sin(kx)\), then
\[\|u\|_{Y^{1}}^{2}=\frac{2}{\pi}\sum_{k=1}^{\infty}k^{6}|u_{k}|^{2},\ \ \|u\|_{Y}^{2}=\frac{2}{\pi}\sum_{k=1}^{\infty}k^{8}|u_{k}|^{2}.\]
The operator \(L\) is diagonal and the evolution of the \(k\)-th mode via the linear semigroup \(e^{tL}\) is given by the formula
\[u_{k}(t)=u_{k}(0)e^{(-\nu k^{4}+k^{2})t}.\]
It is easy to verify that the function \(k\to-\nu k^{4}+k^{2}\) has its maximum equal to \(\frac{1}{4\nu}\) at \(k=\frac{1}{\sqrt{2\nu}}\). This leads to the estimates \(\|e^{tL}\|_{\mathcal{L}(Y^{1};Y^{1})}\leq e^{\frac{t}{4\nu}}\) and \(\|e^{tL}\|_{\mathcal{L}(Y;Y)}\leq e^{\frac{t}{4\nu}}\). Verification of (B3) follows the concept of [19, Lemma 3.1]. Indeed, assuming that \(u^{0}\in Y^{1}\) is the initial data, we obtain
\[\|e^{tL}u^{0}\|_{Y}^{2}=\frac{2}{\pi}\sum_{k=1}^{\infty}k^{6}|u_{k}^{0}|^{2}k^ {2}e^{2(k^{2}-\nu k^{4})t}.\]
A straightforward computation which involves the maximization over \(k\geq 0\) shows that
\[ke^{(k^{2}-\nu k^{4})t}\leq\frac{C}{\sqrt[4]{\nu t}}e^{\frac{t}{\nu}},\]
where \(C\) is independent of \(k,\nu,t\). We deduce that
\[\|e^{tL}u^{0}\|_{Y}\leq\frac{C}{\sqrt[4]{\nu t}}e^{\frac{t}{\nu}}\|u^{0}\|_{Y ^{1}},\]
and (B3) is proved. To get (B2) it is enough that
\[\|2uu_{x}-2vv_{x}\|_{H^{3}}\leq C(R)\|u-v\|_{H^{4}},\]
where \(\|u\|_{H^{4}},\|v\|_{H^{4}}\leq R\), which is straightforward to verify.
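The uniform constant in the maximization bound above can be probed numerically. The following sketch (a plain numpy grid search; the grids for \(k\), \(\nu\), \(t\) are arbitrary choices of this illustration, not part of the argument) prints the largest observed ratio between the two sides.

```python
# Non-rigorous numeric probe of k * exp((k^2 - nu k^4) t)
# against C * (nu t)^{-1/4} * exp(t / nu) over a parameter grid.
import numpy as np

ratios = []
k = np.linspace(0.0, 50.0, 200001)
for nu in [0.02, 0.1, 0.5, 1.0]:
    for t in [1e-4, 1e-2, 1e-1, 1.0]:
        lhs = np.max(k * np.exp((k**2 - nu * k**4) * t))
        rhs = (nu * t) ** (-0.25) * np.exp(t / nu)
        ratios.append(lhs / rhs)
print("largest observed constant C:", max(ratios))
```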
### Overview of algorithm
First of all, since the phase space \(Y\) of our abstract problem (2.1) is infinite dimensional, we need a suitable representation of sets from this space. Once we have such a representation, the key concept of the method is, for a given set \(X(0)\) of initial data and a time-step \(\tau\), to effectively construct another set \(X(\tau)\) such that it is guaranteed that every solution starting from \(X(0)\) at time \(t=0\) belongs to \(X(\tau)\) at time \(t=\tau\). Of course, the sets \(X(0)\) and \(X(\tau)\) must be described in the previously defined representation. In other words, if \(\varphi:\Omega\to Y\) is the local semigroup given by the solutions of (2.1), we need to be able to construct the set \(X(\tau)\) such that \(\varphi(\tau,X(0))\subset X(\tau)\). Furthermore, as we cannot a priori exclude the possibility of blow-up, the algorithm should also ensure that the value of \(\varphi(\tau,x)\) is well-defined for every \(x\in X(0)\). At the same time, the set \(X(\tau)\) should be as small as possible, as we iterate the above procedure to find sets that contain all solutions originating from a given set of initial data at large times, and the chosen representation results in an overestimation at each iteration. Because we need to represent the sets \(X(0)\) and \(X(\tau)\) in the computer memory, which is finite, we represent those sets as finite objects. In the abstract problem (2.1) we assume that our phase space \(Y\) is embedded in a Hilbert space \(H\) with the basis \(\{e_{k}\}_{k=1}^{\infty}.\) We decompose \(H=H_{P}\oplus H_{Q},\) where \(H_{P}=\text{span}\{e_{1},\ldots,e_{n}\}\cong\mathbb{R}^{n}\) and \(H_{Q}\) is the orthogonal complement of \(H_{P}\) in \(H.\) By \(P,Q\) we will denote the orthogonal projections on the spaces \(H_{P}\) and \(H_{Q}\) respectively. We represent the sets \(X(0),X(\tau)\subset Y\) as
\[X(0)=X_{P}(0)+X_{Q}(0)\qquad X(\tau)=X_{P}(\tau)+X_{Q}(\tau),\]
where \(X_{P}(0),X_{P}(\tau)\subset H_{P}\) are sets in a finite dimensional space and \(X_{Q}(0),X_{Q}(\tau)\subset H_{Q}\) are infinite dimensional sets, which need some finite representation. We realize this representation by inequalities which are uniform with respect to the coefficients in the Fourier expansion. Such a decomposition of the set will be called a \(P,Q\) representation. Now, essentially, we divide the algorithm into two parts
1. Find the set \(X([0,\tau]),\) given in the same \(P,Q\) representation as the set \(X(0)\), such that every solution of the considered problem satisfies \(\varphi(t,X(0))\subset X([0,\tau])\) for every \(t\in[0,\tau]\). We will equivalently write \(\varphi([0,\tau],X(0))\subset X([0,\tau])\) and we will call such a set an enclosure. The details of finding this enclosure are given in Section 2.6.
2. Use the obtained enclosure to find the set \(X(\tau).\) To this end, we use the following procedures to find separately the sets \(X_{P}(\tau)\) and \(X_{Q}(\tau).\) * We formulate a differential inclusion in \(\mathbb{R}^{n}\) for the \(P\) component of the solution. In practice this \(P\) component consists of a finite number of Fourier coefficients (with respect to the space variable) of the solutions of the system (2.6). The influence from the omitted variables, i.e. the ones from \(Q\), is estimated from the enclosure obtained in the first step. The differential inclusion is integrated rigorously over the time interval \(\tau\) and for initial values belonging to the set \(X_{P}(0)\). As a result we obtain the bounds for the coordinates of \(X_{P}(\tau)\). There are two possibilities to get these bounds: * Study the evolution of variables belonging to \(P\) separately, coordinate by coordinate, by solving the linear differential inequalities. That is, for every \(i\in\mathbb{N}\) we are estimating the evolution of the Fourier modes from the equation \[\frac{d}{dt}x_{i}(t)=\lambda_{i}x_{i}(t)+f_{i}(x(t)).\] * Solve rigorously the following vector differential inclusion, obtained by considering the Galerkin projection of the problem (2.1) on the space \(H_{P}\), and estimating the influence of the omitted terms in the equation through the multivalued expression \(I\). \[\frac{d}{dt}Px\in PF(Px)+I.\]
The rigorous integration algorithm for finite dimensional vector inclusions such as the one above is described in [12]. We intersect the estimates obtained by the two techniques above in order to obtain sharper bounds.
* Use the a priori estimates coming from the dissipativity of the linear part of the problem to obtain the representation of \(X_{Q}(\tau)\). The influence of the nonlinear terms is estimated from the enclosure found in the first step.
The details of this step of the algorithm are given in Section 2.7.
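As a caricature of this two-step iteration, consider the scalar problem \(x'=\lambda x+c\) with \(\lambda<0\): there the enclosure step is trivial (the "nonlinearity" is the constant \(c\)) and the evolution step is the interval variation-of-constants formula. The following toy sketch (plain floating point, no outward rounding, so it is only an illustration of the set iteration, not rigorous) shows how iterating the set map shrinks the initial set towards the attractor.

```python
import math

lam, c, tau = -2.0, 1.0, 0.1     # scalar stand-in for (2.1): x' = lam*x + c

def evolve(x_lo, x_hi):
    # x(tau) lies in e^{lam*tau} [x_lo, x_hi] + (e^{lam*tau} - 1)/lam * c
    e = math.exp(lam * tau)
    shift = (e - 1.0) / lam * c
    return e * x_lo + shift, e * x_hi + shift

x = (-1.0, 1.0)                   # initial set X(0)
for _ in range(50):
    x = evolve(*x)
print(x)                          # a tiny interval around the equilibrium -c/lam = 0.5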
### Representation of sets
An important role in the algorithm will be played by sequences \(\{V_{i}\}_{i=1}^{\infty}\) of intervals, which we will call infinite interval vectors. For a given infinite interval vector \(V\) we will denote the \(i\)-th interval by \(V_{i}\) and its left and right ends by \(V_{i}^{-},V_{i}^{+}\), respectively. If for every sequence \(\{v_{i}\}_{i=1}^{\infty}\), such that \(v_{i}\in V_{i}\), the series \(\sum_{i=1}^{\infty}e_{i}v_{i}\) converges in \(H\), we will call the set \(\{\sum_{i=1}^{\infty}e_{i}v_{i}\in H:v_{i}\in V_{i}\}\) a representation of the infinite interval vector and we will say that the infinite vector is representable in the space \(H.\) It is possible that an infinite interval vector \(V\) does not represent a subset of \(H\), as it can happen that the series \(\sum_{i=1}^{\infty}e_{i}v_{i}\) where \(v_{i}\in V_{i}\) does not converge in \(H\). Whenever it will not lead to confusion we will use the same notation for infinite interval vectors and their representations.
We define several useful operations on infinite interval vectors. First, for an infinite vector \(V\) we define the quantities
\[V^{-}=\{[V_{i}^{-},V_{i}^{-}]\}_{i=1}^{\infty},\quad V^{+}=\{[V_{i}^{+},V_{i} ^{+}]\}_{i=1}^{\infty}.\]
For a given interval \(I\) we define the multiplication of an infinite interval vector \(V\) by the interval \(I\) as
\[I\,V=V\,I=\{IV_{i}\}_{i=1}^{\infty}.\]
For two infinite interval vectors \(V\) and \(W\) we define their sum and element-wise product as
\[V+W=\{V_{i}+W_{i}\}_{i=1}^{\infty},\quad V*W=\{V_{i}W_{i}\}_{i=1}^{\infty}.\]
We say that vector \(V\) is a subset of \(W\) and denote by \(V\subset W\) if and only if \(V_{i}\subset W_{i}\) for every \(i\in\mathbb{N}\). Additionally we define \(V\subset_{\text{int}}W\) if and only if \(V_{i}\subset\text{int}W_{i}\) for every \(i\in\mathbb{N}.\) We define the convex hull of two infinite intervals vectors as
\[\text{conv}\{V,W\}=\{\text{conv}\{V_{i}\cup W_{i}\}\}_{i=1}^{\infty}.\]
The intersection of two infinite vectors is defined in the following way
\[V\cap W=\{V_{i}\cap W_{i}\}_{i=1}^{\infty}.\]
Note that all above operations make sense for all infinite interval vectors, and not only the representable ones.
In the algorithm we consider such sets \(X=X_{P}+X_{Q}\), for which there exist infinite interval vectors whose representation contains the set \(X.\) Specifically, for the Brusselator system we will work with pairs of infinite interval vectors which are given in the form \((U,V)=\{(U_{i},V_{i})\}_{i\in\mathbb{N}^{+}}\) where \(U_{i}\) and \(V_{i}\) are intervals. Such vectors can be easily re-indexed into the form described previously. We will work with a class of infinite interval vectors with polynomial estimates on the tail. That means that for some \(n\in\mathbb{N}\), \(s\in\mathbb{R}\), and every sequence \(u_{i}\in U_{i}\) and \(v_{i}\in V_{i}\) we have
\[u_{i}\in\frac{[C_{U}^{-},C_{U}^{+}]}{i^{s}},\quad v_{i}\in\frac{[C_{V}^{-},C_ {V}^{+}]}{i^{s}},\quad\text{for }i\geq n, \tag{2.8}\]
where \(C_{U}^{-}\leq C_{U}^{+}\) and \(C_{V}^{-}\leq C_{V}^{+}\) are constants. In this manner, the tail of the Fourier expansion (for \(i\geq n\)) can be represented by specifying the decay rate \(s\) of the coefficients and four additional constants \(C_{U}^{-},C_{U}^{+},C_{V}^{-},C_{V}^{+}\). Lemmas 5.5 and 5.6 are helpful in the implementation of the operations of element-wise multiplication and addition for such a class of infinite interval vectors.
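A minimal sketch of how such tails can be stored and combined is given below (Python, with intervals as `(lo, hi)` tuples and no directed rounding, so this only illustrates the bookkeeping, not rigorous arithmetic). Aligning two tails to the slower decay rate uses the containment \([C^{-},C^{+}]/i^{s_{1}}\subset[\min(C^{-},0),\max(C^{+},0)]/i^{s}\), valid for \(i\geq 1\) and \(s\leq s_{1}\).

```python
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def widen(c, s_from, s_to):
    # enclose [c-,c+]/i^{s_from} in [.,.]/i^{s_to} for all i >= 1, s_to <= s_from
    assert s_to <= s_from
    return (min(c[0], 0.0), max(c[1], 0.0))

def tail_add(t1, t2):
    # a tail is a pair ((C-, C+), s), meaning the coefficients lie in [C-,C+]/i^s
    (c1, s1), (c2, s2) = t1, t2
    s = min(s1, s2)                     # align to the slower decay, then add
    return (iadd(widen(c1, s1, s), widen(c2, s2, s)), s)

def tail_scale(t, interval):
    (c, s) = t
    return (imul(c, interval), s)

print(tail_add(((-1.0, 1.0), 5.0), ((-0.2, 0.3), 7.0)))   # -> ((-1.2, 1.3), 5.0)
print(tail_scale(((-1.0, 1.0), 5.0), (0.0, 0.5)))          # -> ((-0.5, 0.5), 5.0)
```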
Since for the Brusselator we consider a system of two PDEs, the states are the sets \(X=X_{P}+X_{Q}\), which are subsets of \(Y\). Their elements are pairs \((u,v).\) To represent these pairs we use the Fourier basis which consists of the eigenfunctions of the operator \(L,\) i.e. \((\sin(kx),0)\) and \((0,\sin(kx))\) for \(k\in\mathbb{N}^{+}\). The finite dimensional space \(H_{P},\) in which the sets \(X_{P}\) are always contained, is equal to \(\operatorname{span}\{(\sin(x),0),(0,\sin(x)),\ldots,(\sin(kx),0),(0,\sin(kx))\}.\)
The set \(X\) has to be a subset of some representable infinite interval vector \((U,V)\). In the algorithm, this means that \(X_{P}\) is a subset of some cube in \(H_{P}.\) The cubes are the simplest examples of possible representations of the set \(X_{P}.\) More sophisticated parallelepiped-type objects can also be used. They can reduce the overestimation of the integration results for the rigorous ODE solvers and overcome the so-called wrapping effect (for example see [13]). The element \((u,v)\in H_{Q}\) belongs to \(X_{Q}\) if it satisfies (2.8) with \(s>1.\) This guarantees that \(X_{Q}\) is a subset of \(Y\). In that case we can estimate the result of series multiplication by using Lemmas 5.5 and 5.6. Finally, we note that we may restrict our space to the space of functions \(u,v\) with only odd nonzero coefficients. Then the set \(X\) is a subset of \(W:=\{(u,v)\in Y:u_{2i}=v_{2i}=0,\text{ for }i\in\mathbb{N}\}.\) The representation of the set \(X\) is roughly the same, except that we have to enforce that \(u_{i}=v_{i}=0\) if \(i\) is even.
### Computation of nonlinear terms
In the course of the algorithm, for a given set \(X=X_{P}+X_{Q},\) we need to compute a set \(X^{1}\) such that \(f(X)\subset X^{1}.\) This set, represented as \(X^{1}=X^{1}_{P}+X^{1}_{Q}\), constitutes the estimates for \(f(X)\) and hence it should be as small as possible. The set \(X^{1}\) is used in further steps of the algorithm. For the Brusselator problem we have that
\[f(u,v)=(u^{2}v+A\sin(x),Bu-u^{2}v).\]
For functions \((u,v)\in Y\) the components of \(f\) can be represented in the following sine Fourier series with the coefficients \(a_{i},b_{i}\) dependent on \(u,v\)
\[u^{2}v+A\sin(x)=\sum_{i=1}^{\infty}a_{i}\sin(ix),\quad Bu-u^{2}v=\sum_{i=1}^{ \infty}b_{i}\sin(ix).\]
The set \(X^{1}_{P}\) is represented as a cube (or parallelepiped) in \(H_{P}\) and \(X^{1}_{Q}\) is described by the polynomial decay of Fourier coefficients. The first step of finding \(X^{1}\) is estimating the square of \(u\), which is represented in the cosine Fourier series. Having computed the coefficients of \(u^{2}\) we need to find the coefficients of \((u^{2})v.\) Finally, we compute the representation of the sums of the particular terms which appear in the definition of \(f\). All these steps rely on the product and summation lemmas for infinite interval vectors collected in Section 5. We also need to compute the image \(L(X)\), but as \(L\) is a diagonal operator we only need to multiply every given coefficient by the corresponding eigenvalue of \(L\); the result of such a multiplication is also covered by the lemmas of Section 5. Additionally in the algorithm we need the decomposition
\[f(p+q)=f(p)+f_{2}(p,q),\]
where \(p\in H_{P},\)\(q\in H_{Q}\) and \(f_{2}(p,q)=f(p+q)-f(p)\). This decomposition is required for the formulation of the differential inclusion. For the Brusselator system we can write \(f(u_{P}+u_{Q},v_{P}+v_{Q})=f(u_{P},v_{P})+f_{2}(u_{P},v_{P},u_{Q},v_{Q})\) where
\[f_{2}(u_{P},v_{P},u_{Q},v_{Q})=((2u_{P}u_{Q}+u_{Q}^{2})(v_{P}+v_{Q})+u_{P}^{2} v_{Q},Bu_{Q}-(2u_{P}u_{Q}+u_{Q}^{2})(v_{P}+v_{Q})-u_{P}^{2}v_{Q}).\]
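The algebra behind this decomposition can be checked symbolically; the following sketch (using sympy as an illustration, not part of the rigorous code) verifies that \(f(p+q)-f(p)\) equals the claimed expression for \(f_{2}\) componentwise.

```python
import sympy as sp

uP, uQ, vP, vQ, A, B, x = sp.symbols('u_P u_Q v_P v_Q A B x')
u, v = uP + uQ, vP + vQ

f1 = u**2 * v + A * sp.sin(x)                 # first component of f
f2_claim = (2*uP*uQ + uQ**2) * (vP + vQ) + uP**2 * vQ
print(sp.simplify(f1 - (uP**2 * vP + A * sp.sin(x)) - f2_claim))   # -> 0

g1 = B*u - u**2 * v                            # second component of f
g2_claim = B*uQ - (2*uP*uQ + uQ**2) * (vP + vQ) - uP**2 * vQ
print(sp.simplify(g1 - (B*uP - uP**2 * vP) - g2_claim))            # -> 0
```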
### Computation of the enclosure
We start this section with the definition of an enclosure.

**Definition 4**: The set \(X([0,\tau])\) is an enclosure of the set \(X^{0}\subset Y\) for time \(\tau>0\) if \(\varphi(t,X^{0})\subset X([0,\tau])\) for every \(t\in[0,\tau].\)
The following lemma can be used to validate whether, for a given set of initial data \(X^{0}\), the set \(X^{0}+Z\) is an enclosure.
**Lemma 2.6**.: _Assume that (A1) and (A2) hold and let \(\{X_{i}^{0}\}_{i=1}^{\infty}\) be a countable family of intervals \(X_{i}^{0}=[x_{i}^{-},x_{i}^{+}]\) such that the set \(X^{0}:=\{\sum_{i=1}^{\infty}e_{i}x_{i}:x_{i}\in X_{i}^{0}\}\) is bounded in \(Y\). Moreover, define another set \(Z:=\{\sum_{i=1}^{\infty}e_{i}z_{i}:z_{i}\in Z_{i}\},\) also bounded in \(Y\), where \(Z_{i}=[z_{i}^{-},z_{i}^{+}]\) are intervals containing zero. Let \(x^{0}\in X^{0}\) and \(n\in\mathbb{N}\). We assume that for every \(i\in\mathbb{N}\) there holds_
\[[g_{i}^{-},g_{i}^{+}]\cap[h_{i}^{-},h_{i}^{+}]\subset\text{int}\ Z_{i}, \tag{2.9}\]
_where_
\[g_{i}^{-}=\min_{t\in[0,\tau]}\left[(e^{\lambda_{i}t}-1)\left(\frac{f_{i}^{-}} {\lambda_{i}}+x_{i}^{-}\right)\right],\quad g_{i}^{+}=\max_{t\in[0,\tau]} \left[(e^{\lambda_{i}t}-1)\left(\frac{f_{i}^{+}}{\lambda_{i}}+x_{i}^{+}\right) \right], \tag{2.10}\]
\[h_{i}^{-}=\min_{t\in[0,\tau]}t(\lambda_{i}(x_{i}^{-}+z_{i}^{-})+f_{i}^{-}), \quad h_{i}^{+}=\max_{t\in[0,\tau]}t(\lambda_{i}(x_{i}^{+}+z_{i}^{+})+f_{i}^{+}), \tag{2.11}\]
_and \(f_{i}^{-},f_{i}^{+}\in\mathbb{R}\) satisfy_
\[f_{i}^{-}\leq f_{i}(X^{0}+Z)\leq f_{i}^{+}. \tag{2.12}\]
_Then for every \(x^{0}\in X^{0}\) there exists a continuous function \(x:[0,\tau]\to Y\) which is a unique solution to (2.1). Moreover for every \(t\in[0,\tau]\) we have_
1. \(x_{i}(t)\in x_{i}^{0}+[0,t](f_{i}(X^{0}+Z)+\lambda_{i}(X^{0}+Z))\) _for every_ \(i\in\mathbb{N},\)__
2. \(x_{i}^{-}+g_{i}^{-}\leq x_{i}(t)\leq x_{i}^{+}+g_{i}^{+}\) _for every_ \(i\in\mathbb{N},\)__
3. \(x_{i}(t)\in e^{t\lambda_{i}}x_{i}^{0}+\frac{e^{\lambda_{i}t}-1}{\lambda_{i}}[ f_{i}^{-},f_{i}^{+}]\) _for every_ \(i\in\mathbb{N}.\)__
Proof.: We define the operator \(T:C([0,\tau];Y)\to C([0,\tau];Y)\) in the following way
\[T(g)(t)=e^{Lt}x^{0}+\int_{0}^{t}e^{L(t-s)}f(g(s))ds. \tag{2.13}\]
We consider the set
\[S_{\tau}=\{g\in C([0,\tau];Y):g(0)=x^{0}\text{ and for every }t\in[0,\tau]\text{ we have }g(t)\in X^{0}+Z\}.\]
We first prove that for every \(x^{0}\in X^{0}\) the mapping \(T\) maps \(S_{\tau}\) into itself. We observe that \(y_{i}(t)=T_{i}(g)(t)\) is a solution to the non-autonomous ODE
\[\frac{d}{dt}y_{i}(t)=\lambda_{i}y_{i}(t)+f_{i}(g(t)), \tag{2.14}\]
for every \(i\in\mathbb{N}.\) We will prove that \(y_{i}(t)\in[x_{i}^{-}+z_{i}^{-},x_{i}^{+}+z_{i}^{+}],\) for every \(t\in[0,\tau].\) For the sake of contradiction assume that \(\tau_{1}<\tau,\) where \(\tau_{1}:=\sup\{t\in[0,\tau]:y_{i}(s)\in[x_{i}^{-}+z_{i}^{-},x_{i}^{+}+z_{i}^{+}]\text{ for every }s\leq t\}.\) We observe that
\[y_{i}(t)=x_{i}^{0}+\int_{0}^{t}\left(\lambda_{i}y_{i}(s)+f_{i}(g(s))\right)ds,\quad y_{i}(t)=e^{ \lambda_{i}t}x_{i}^{0}+\int_{0}^{t}e^{\lambda_{i}(t-s)}f_{i}(g(s))ds.\]
So for \(t\leq\tau_{1}\) we have
\[y_{i}(t)\leq x_{i}^{0}+\int_{0}^{t}(\lambda_{i}(x_{i}^{+}+z_{i}^{+})+f_{i}^{+})ds \leq x_{i}^{+}+h_{i}^{+}.\]
Similarly for \(t\leq\tau_{1}\) we have \(x_{i}^{-}+h_{i}^{-}\leq y_{i}(t).\) We observe that for \(t\leq\tau_{1}\)
\[y_{i}(t)\leq e^{\lambda_{i}t}x_{i}^{0}+\int_{0}^{t}e^{\lambda_{i}(t-s)}f_{i}^{ +}ds\leq e^{\lambda_{i}t}x_{i}^{+}+\frac{e^{\lambda_{i}t}-1}{\lambda_{i}}f_{i}^ {+}=x_{i}^{+}+g_{i}^{+}.\]
We also see that \(x_{i}^{-}+g_{i}^{-}\leq y_{i}(t).\) This implies that for every \(t\in[0,\tau_{1}]\) we have \(x_{i}^{-}+z_{i}^{-}<y_{i}(t)<x_{i}^{+}+z_{i}^{+}.\) By the continuity of \(y_{i}(t)\) we can find \(\delta>0\) such that \(y_{i}(t)\in[x_{i}^{-}+z_{i}^{-},x_{i}^{+}+z_{i}^{+}]\) for every \(t\leq\tau_{1}+\delta.\) This is a contradiction, so \(\tau=\tau_{1}\). We deduce that \(T(S_{\tau})\subset S_{\tau}.\) To show that \(T\) is
a contraction we equip the space \(C([0,\tau],Y)\) with the norm \(\left\|y\right\|_{\alpha}=\sup_{t\in[0,\tau]}\left\|y(t)\right\|_{Y}e^{-t\alpha}\), where \(\alpha>0\) is an appropriately chosen positive constant. For \(g_{1},g_{2}\in S_{\tau}\) we have
\[\left\|\int_{0}^{t}e^{L(t-s)}(f(g_{2}(s))-f(g_{1}(s)))ds\right\|_{Y} \leq e^{tC_{1}}\int_{0}^{t}e^{s\alpha}e^{-s\alpha}\left\|f(g_{1}(s ))-f(g_{2}(s)))\right\|_{Y}ds\] \[\leq\frac{e^{tC_{1}}e^{t\alpha}}{\alpha}\sup_{s\in[0,t]}\left(e^{ -s\alpha}\left\|f(g_{1}(s))-f(g_{2}(s))\right\|_{Y}\right)\] \[\leq\frac{C(R)e^{tC_{1}}e^{t\alpha}}{\alpha}\sup_{s\in[0,t]}e^{-s \alpha}\left\|g_{1}(s)-g_{2}(s)\right\|_{Y}.\]
If we pick \(\alpha=\sup_{s\in[0,\tau]}\frac{1}{2C(R)e^{sC_{1}}}\) then
\[\left\|T(g_{1})-T(g_{2})\right\|_{\alpha}\leq\sup_{s\in[0,\tau]}\frac{C(R)e^{ sC_{1}}}{\alpha}\left\|g_{1}-g_{2}\right\|_{\alpha}\leq\frac{1}{2}\left\|g_{1}-g_ {2}\right\|_{\alpha}.\]
where \(R>0\) is a bound in the norm \(\left\|\cdot\right\|_{Y}\) of the elements of the set \(X^{0}+Z.\) This shows that \(T\) is a contraction. From the Banach fixed point theorem, we deduce that \(T\) must have a unique fixed point in \(S_{\tau}\). Now, formulae (1)-(3) are straightforward, which completes the proof.
**Remark 2.5**.: The assertion of Lemma 2.6 holds if we replace conditions \((A1)-(A2)\) by their more general counterparts \((B1)-(B3).\)
Proof.: We consider the same operator \(T\) and the set \(S_{\tau}\) as in proof of Lemma 2.6. The argument that \(T(S_{\tau})\subset S_{\tau}\) is the same, only the proof of contractivity changes. We will show that in this more general case we can find \(\alpha>0\) such that \(T\) is a contraction with respect to the norm \(\left\|y\right\|_{\alpha}=\sup_{t\in[0,\tau]}e^{-t\alpha}\left\|y(t)\right\|_{Y}.\) For \(g_{1},g_{2}\in S_{\tau}\) we compute
\[\left\|\int_{0}^{t}e^{L(t-s)}(f(g_{2}(s))-f(g_{1}(s)))ds\right\|_ {Y} \leq C_{3}e^{tC_{4}}\int_{0}^{t}\frac{1}{(t-s)^{\gamma}}e^{s\alpha }e^{-s\alpha}\left\|f(g_{1}(s))-f(g_{2}(s)))\right\|_{Y^{1}}ds\] \[\leq C_{3}e^{\tau C_{4}}C(R)\left\|g_{1}-g_{2}\right\|_{\alpha} \int_{0}^{t}\frac{1}{(t-s)^{\gamma}}e^{\alpha s}ds\] \[=C_{3}e^{\tau C_{4}}C(R)e^{t\alpha}\left\|g_{1}-g_{2}\right\|_{ \alpha}\int_{0}^{t}\frac{1}{s^{\gamma}}e^{-s\alpha}ds.\]
where \(R>0\) is a bound of the norm \(\left\|\cdot\right\|_{Y}\) satisfied by all elements of the set \(X^{0}+Z.\) For every \(\delta\leq t\) we have
\[\int_{0}^{t}\frac{1}{s^{\gamma}}e^{-s\alpha}ds\leq\int_{0}^{\delta}\frac{1}{s ^{\gamma}}ds+\frac{1}{\delta^{\gamma}}\int_{\delta}^{t}e^{-s\alpha}ds\leq \frac{\delta^{1-\gamma}}{1-\gamma}+\frac{1}{\delta^{\gamma}}\frac{1}{\alpha}\]
So, if we pick \(\delta,\alpha\) such that
\[\delta^{1-\gamma}\leq\frac{1-\gamma}{4C_{3}e^{\tau C_{4}}C(R)},\quad\alpha \geq\frac{4C_{3}e^{\tau C_{4}}C(R)}{\delta^{\gamma}},\]
then for every \(t\leq\tau\) we have
\[\left\|\int_{0}^{t}e^{L(t-s)}(f(g_{2}(s))-f(g_{1}(s)))ds\right\|_{Y}\leq\frac{ 1}{2}e^{t\alpha}\left\|g_{1}-g_{2}\right\|_{\alpha}.\]
Hence, \(T\) is a contraction on the set \(S_{\tau}\) equipped with the norm \(\left\|\cdot\right\|_{\alpha}\). The Banach fixed point theorem gives the unique fixed point of the map \(T\), and the proof is complete.
In Lemma 2.6 the sets \(X^{0},Z\) are representable infinite interval vectors. If \(X^{0}\) is not a representable infinite interval vector, we can still find an enclosure, provided we can choose a representable interval vector \(X^{1}\) such that \(X^{0}\subset X^{1}.\) The following algorithm takes as an input the infinite interval vector \(X^{0}\), the infinite interval vector \(Z\) such that \(0\in Z_{i}\) for every \(i\in\mathbb{N}\), and the step
\(\tau>0.\) The algorithm validates if \(X^{0}+Z\) is an enclosure for \(\tau\) by checking assumption (2.9) of Lemma 2.6.
1. Find the infinite vectors \(V^{f}\) and \(V^{L}\) such that \(f(X+Z)\subset V^{f}\) and \(L(X+Z)\subset V^{L}.\)
2. Construct the infinite vector \(V^{\mathrm{ND}}\) such that \([0,\tau](V^{L}+V^{f})\subset V^{\mathrm{ND}}\)
3. Compute the infinite vectors \(V^{E_{1}}\) and \(V^{L^{-1}}\) which satisfy \[e^{[0,\tau]\lambda_{i}}-1\subset(V^{E_{1}})_{i},\qquad\frac{1}{\lambda_{i}} \in(V^{L^{-1}})_{i}\ \ \text{for every}\ \ i\in\{1,\dots\}.\]
4. Compute the infinite vectors \(V^{\mathrm{D1}}\) and \(V^{\mathrm{D2}}\) which satisfy \[V^{E_{1}}*((V^{f})^{+}*V^{L^{-1}}+X^{+})\subset V^{\mathrm{D1}},\quad V^{E_{1 }}*((V^{f})^{-}*V^{L^{-1}}+X^{-})\subset V^{\mathrm{D2}}.\]
5. Find infinite interval vectors \(V^{\mathrm{D}}\) such that \[\mathrm{conv}\{V^{\mathrm{D1}},V^{\mathrm{D2}}\}\subset V^{\mathrm{D}}.\]
6. Compute \(Z^{1}\) such that \(V^{\mathrm{ND}}\cap V^{\mathrm{D}}\subset Z^{1}.\)
7. Check if \(Z^{1}\subset_{\mathrm{int}}Z\) holds.
For every \(i\in\mathbb{N}\) the interval \([h_{i}^{-},h_{i}^{+}]\) from (2.11) is contained in the interval \(V_{i}^{ND}\), where \(V^{ND}\) is the infinite interval vector obtained in step (2). Similarly, for every \(i\in\mathbb{N}\) the interval \([g_{i}^{-},g_{i}^{+}]\) from (2.10) is contained in the interval \(V_{i}^{D}\), where \(V^{D}\) is the infinite interval vector from step (5). If the condition in step (7) is satisfied, we have that \(V_{i}^{ND}\cap V_{i}^{D}\subset\mathrm{int}\,Z_{i}\) for every \(i\in\mathbb{N}\), which implies that the assumptions of Lemma 2.6 hold and the set \(X^{0}+Z\) is an enclosure for our initial data \(X^{0}\) and the time step \(\tau>0\).
If the condition in step (7) does not hold, we can modify the set \(Z\) and repeat the validation procedure. A reasonable guess is to take \(Z=[0,c]Z^{1}\), where \(c>1.\) Another possibility is to decrease the time step.
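For a single mode, the validation-and-inflation loop can be sketched as follows (Python, plain floating point; the grid sampling of the extrema in (2.10)-(2.11) and the frozen bounds \(f^{\pm}\), which in the real algorithm must be recomputed on \(X^{0}+Z\), are simplifications of this illustration).

```python
# Scalar sketch of the enclosure validation: for x' = lam*x + f with
# f bounded in [f_lo, f_hi] on X+Z, build [g-,g+] (2.10) and [h-,h+] (2.11),
# intersect, and check inclusion in int Z; inflate Z and retry on failure.
import math

def enclosure_ok(lam, f_lo, f_hi, x_lo, x_hi, z_lo, z_hi, tau):
    ts = [tau * j / 200.0 for j in range(201)]      # sampling stands in for exact extrema
    g_lo = min((math.exp(lam * t) - 1.0) * (f_lo / lam + x_lo) for t in ts)
    g_hi = max((math.exp(lam * t) - 1.0) * (f_hi / lam + x_hi) for t in ts)
    h_lo = min(t * (lam * (x_lo + z_lo) + f_lo) for t in ts)
    h_hi = max(t * (lam * (x_hi + z_hi) + f_hi) for t in ts)
    lo, hi = max(g_lo, h_lo), min(g_hi, h_hi)       # intersection of the two intervals
    return z_lo < lo and hi < z_hi                  # strict inclusion: int Z_i

lam, f_lo, f_hi, x_lo, x_hi, tau = -4.0, -0.3, 0.3, -0.01, 0.01, 0.05
z = 1e-3
while not enclosure_ok(lam, f_lo, f_hi, x_lo, x_hi, -z, z, tau):
    z *= 2.0                                        # inflate the candidate Z and retry
print("validated with Z_i = [-%g, %g]" % (z, z))
```

The following lemma shows that for a certain class of sets it is always possible to find the enclosure using the above algorithm with a sufficiently small time step \(\tau\).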
**Lemma 2.7**.: _Let \(\{a_{i}\}_{i=1}^{\infty}\) be a given sequence of positive numbers. Assume that_
1. _Conditions (A1)-(A2) or (B1)-(B3) hold._
2. _For some_ \(i_{0}\in\mathbb{N}\) _we have_ \(\lambda_{i}<0\) _for every_ \(i\geq i_{0}.\)__
3. _For every bounded set_ \(A\subset Y\) _such that_ \[\sup_{x\in A}|x_{i}|=O(a_{i}),\] _the following holds_ \[\sup_{x\in A}\left|\frac{f_{i}(x)}{\lambda_{i}}\right|=o(a_{i}).\]
_Let \(\{X_{i}^{0}\}_{i=1}^{\infty}\) and \(\{Z_{i}\}_{i=1}^{\infty}\) be sequences of intervals such that_
1. _The set_ \(X^{0}:=\{\sum_{i=1}^{\infty}e_{i}x_{i}:x_{i}\in X_{i}^{0}\}\) _is bounded in_ \(Y\)_. Intervals_ \(X_{i}^{0}\) _contain zero for every_ \(i\geq i_{0},\) _where_ \(i_{0}\in\mathbb{N}\) _and_ \[\sup_{x_{i}\in X_{i}^{0}}|x_{i}|=O(a_{i}).\]
2. _The set_ \(Z:=\{\sum_{i=1}^{\infty}e_{i}z_{i}:z_{i}\in Z_{i}\}\) _is bounded in_ \(Y\)_. Every interval_ \(Z_{i}\) _contains zero and_ \[z_{i}^{+}=\Theta(a_{i})\quad\text{and}\quad z_{i}^{-}=\Theta(a_{i}).\]
_Under these assumptions there exists \(\tau>0\) such that condition (2.9) of Lemma 2.6 is satisfied, and, in consequence, for every \(x^{0}\in X^{0}\) there exists a continuous function \(x:[0,\tau]\to Y\) which is a unique solution to (2.1) and the following estimates hold for every \(t\in[0,\tau]\)_
1. \(x_{i}(t)\in x_{i}^{0}+[0,t](f_{i}(X^{0}+Z)+\lambda_{i}(X^{0}+Z))\) _for every_ \(i\in\mathbb{N},\)__
2. \(x_{i}^{-}+g_{i}^{-}\leq x_{i}(t)\leq x_{i}^{+}+g_{i}^{+}\) _for every_ \(i\in\mathbb{N},\)__
3. \(x_{i}(t)\in e^{t\lambda_{i}}x_{i}^{0}+\frac{e^{\lambda_{i}t}-1}{\lambda_{i}}[f_{i}^ {-},f_{i}^{+}]\) _for every_ \(i\in\mathbb{N},\)__
_where \(g_{i}^{-},g_{i}^{+}\) are given by (2.10) and \(f_{i}^{-},f_{i}^{+}\) are given by (2.12)._
Proof.: Observe that we can find \(i_{1}\) such that
\[[g_{i}^{-},g_{i}^{+}]\subset\text{int }Z_{i}=(z_{i}^{-},z_{i}^{+}), \tag{2.15}\]
for every \(i\geq i_{1}\) and \(\tau>0.\) Indeed, for sufficiently large \(i\) we have
\[g_{i}^{+}=\max_{t\in[0,\tau]}\left[(e^{\lambda_{i}t}-1)\left(\frac{f_{i}^{+}}{ \lambda_{i}}+x_{i}^{+}\right)\right]\leq\max_{t\in[0,\tau]}\left[(e^{\lambda_ {i}t}-1)\frac{f_{i}^{+}}{\lambda_{i}}\right]\leq\left|\frac{f_{i}^{+}}{\lambda _{i}}\right|<z_{i}^{+}.\]
The first inequality follows from the fact that \(x_{i}^{+}\) is positive and the term \((e^{\lambda_{i}t}-1)\) is negative for sufficiently large \(i\). The second inequality follows from the assumption on \(\lambda_{i}.\) The third one is a consequence of the fact that \(\sup_{x\in X}\left|\frac{f_{i}(x)}{\lambda_{i}}\right|=o(a_{i})\) and \(z_{i}^{+}=\Theta(a_{i}).\) We have shown that for \(i\geq i_{1}\) we have the inclusion (2.15). For the remaining, "low", indexes \(i\), we take \(\tau>0\) such that
\[\tau<\sup_{i<i_{1}}\frac{|z_{i}^{-}|}{\left|(\lambda_{i}(x_{i}^{-}+z_{i}^{-})+ f_{i}^{-})\right|+1}\quad\text{and}\quad\tau<\sup_{i<i_{1}}\frac{|z_{i}^{+}|}{ \left|(\lambda_{i}(x_{i}^{+}+z_{i}^{+})+f_{i}^{+})\right|+1}.\]
For such choice of \(\tau\) we have
\[[h_{i}^{-},h_{i}^{+}]\subset\text{int }Z_{i}\quad\text{for}\quad i<i_{1},\]
which ends the proof.
_Remark 2.6_.: The key assumption of the above lemma is (III). The sequence \(a_{i}\) signifies some decay of the Fourier coefficients of the elements \(x\) of a set \(A\). The decay of the Fourier coefficients of \(\frac{f_{i}(x)}{\lambda_{i}}\) for \(x\in A\) must be essentially faster than \(a_{i}\). Assume for example that \(a_{i}=\frac{1}{i^{s}}\) and that \(f\) is a polynomial. Then, the decay of the coefficients of \(f_{i}(x)\) is the same as that of \(x\), that is also \(\frac{1}{i^{s}}\) (see Section 5). If the leading operator is dissipative, i.e. \(\lambda_{i}\to-\infty\), then, after the division by \(\lambda_{i}\) this decay will be essentially faster than \(a_{i}\). Thus, the dissipativity of \(L\) guarantees that the enclosure can always be found, and a step of the rigorous integration algorithm can be performed. The drawback is that the length of a time-step \(\tau\) can be very small in Lemma 2.7. We expect that such a situation holds for dissipative problems with finite-time blow-up, for instance for the problems governed by the Fujita equation \(u_{t}=u_{xx}+|u|^{p-1}u\), cf. [7].
Using the above lemma we deduce that for the Brusselator system we can always find an enclosure for any set described in Section 2.4 by choosing a sufficiently short time-step.
_Remark 2.7_.: For every \(C>0,\ \varepsilon>0,\ s>1\) there exists \(\tau>0\) such that for every initial data \((u^{0},v^{0})\in Y\) satisfying
\[|u_{k}^{0}|\leq\frac{C}{|k|^{s}}\quad\text{and}\quad|v_{k}^{0}|\leq\frac{C}{| k|^{s}},\]
the solution \((u(t),v(t))\) to (1.1) satisfies
\[|u_{k}(t)|\leq\frac{C+\varepsilon}{|k|^{s}}\quad\text{and}\quad|v_{k}(t)|\leq \frac{C+\varepsilon}{|k|^{s}}\quad\text{for }t\in[0,\tau].\]
One can easily construct an example of a problem without this property. To this end let us consider the logistic model of population growth with diffusion and homogeneous Dirichlet boundary conditions. The corresponding equation together with the initial and boundary conditions are the following
\[\begin{cases}u_{t}=du_{xx}+u-u^{2}\text{ for }(x,t)\in[0,\pi]\times(0,\infty),\\ u(t,x)=0\text{ for }(x,t)\in\{0,\pi\}\times(0,\infty),\\ u(0,x)=u^{0}(x),\end{cases} \tag{2.16}\]
for \(d>0.\) As we impose the Dirichlet boundary condition, we observe from the equation that \(u_{xx}\) is always \(0\) at the boundary. But, in general, the fourth derivative \(u_{xxxx}\) can have nonzero values at the boundary. This implies that the coefficients in the expansion of the solution in the sine trigonometric series cannot have arbitrarily fast polynomial decay.
We can write the equation in the above example in the form (2.1) with \(Lu=du_{xx}+u\) and \(f(u)=-u^{2}.\) For the initial condition \(u_{0}=\sqrt{\frac{\pi}{8}}\sin(x)\) we have that

\[f(u_{0})=-\frac{\pi}{16}\left(1-\cos(2x)\right)=\sum_{\begin{subarray}{c}i=1\\ i\text{ odd}\end{subarray}}^{\infty}\frac{1}{i^{3}-4i}\sin(ix),\]

and the even coefficients vanish.
So using Lemma 2.6 we have a chance to find the time step \(\tau>0\) and \(C>0\) such that the solution satisfies
\[|u_{k}(t)|\leq\frac{C}{k^{s}}\quad\text{for}\quad t\in[0,\tau],\]
only for \(s\in(1,5).\) A similar difficulty would occur if, in the considered Brusselator system, the term \(\sin(x)\) were replaced with the original term from the Brusselator ODE, that is, the constant \(1.\)
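The displayed expansion can be checked symbolically; a small sympy sketch (an illustration, odd modes only, since the even coefficients vanish and \(i^{3}-4i=0\) at \(i=2\)):

```python
import sympy as sp

x = sp.symbols('x')
u0 = sp.sqrt(sp.pi / 8) * sp.sin(x)
for i in [1, 3, 5, 7]:
    # i-th sine coefficient of f(u0) = -u0^2 on (0, pi)
    exact = sp.integrate(-u0**2 * sp.sin(i * x), (x, 0, sp.pi)) * 2 / sp.pi
    claimed = sp.Rational(1, i**3 - 4 * i)
    print(i, sp.simplify(exact - claimed))   # each difference simplifies to 0
```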
### Evolution of sets
We assume that \(X([0,\tau])=X_{P}([0,\tau])+X_{Q}([0,\tau])\) is an enclosure for the initial data \(X=X_{P}+X_{Q}\) and the time step \(\tau>0.\) The following procedure is used to find the set \(X(\tau)\) such that \(\varphi(\tau,X)\subset X(\tau).\)
1. Compute the infinite interval vector \(V\) such that \(f_{2}(X_{P}([0,\tau]),X_{Q}([0,\tau]))\subset V\).
2. Solve the system of differential inclusions \[\frac{d}{dt}Px\in LPx+Pf(Px)+PV,\] (2.17) with initial condition in the set \(X_{P}\). Note that if the initial data belongs to a set \(X_{P}\) which has a more complicated structure than a vector of intervals (for example, it can be a parallelepiped), this step can be realized without enclosing the initial data in an interval vector, which results in sharper estimates. As a result, the solver will generate the set \(X_{P1}\) such that \(Px(\tau)\in X_{P1}\) for every \(x(0)\in X_{P}+X_{Q}.\)
3. Compute the infinite interval vector \(V^{2}\) such that \(f(X([0,\tau]))\subset V^{2}.\)
4. Compute the infinite vectors \(V^{E_{1}},V^{E_{2}}\) and \(V^{L^{-1}}\) which satisfy \[e^{\tau\lambda_{i}}-1\in(V^{E_{1}})_{i},\qquad e^{\tau\lambda_{i}}\in(V^{E_{2} })_{i},\qquad\frac{1}{\lambda_{i}}\in(V^{L^{-1}})_{i}\ \text{ for every }\ i\in\{1,\dots\}.\]
5. Compute \(V^{3}\) such that \(V^{E_{2}}*X+V^{E_{1}}*V^{L^{-1}}*V^{2}\subset V^{3}.\)
6. Return the set \((PV^{3}\cap X_{P1})+QV^{3}.\)
Note that the interval vectors \(V^{E_{1}}\), both in the algorithm for finding the enclosure and in the evolution of the set, are not representable. However, we still need data structures to represent them as infinite vectors and to perform operations such as addition or element-wise multiplication. A particular way of representing such sets and their operations, with polynomial decay or growth of coefficients, is addressed in Section 5.
A detailed description of how to rigorously solve the differential inclusion can be found in [12]. In step (2) we can consider the differential inclusion only on a part of the variables represented explicitly, which can be beneficial for the computational time.
As we are using the infinite interval vectors (2.8), we need to determine the decay rate \(s\) for \(V^{E_{2}}.\) It is possible to impose arbitrarily fast polynomial decay in this term. On the other hand, the vector \(V^{E_{1}}\) can be represented by (2.8) with \(s=0\). Finally, the representation of \(V^{L^{-1}}\) decays with some given \(s_{1}>0\) determined by the decay of the inverses of the eigenvalues of \(L\) (which have to decay to zero, as the considered problem is dissipative). So, \(V^{3}\) can have a higher \(s\) than the initial data \(X\). The maximal increase of the rate \(s\) in \(V^{3}\) is equal to \(s_{1}.\) Theoretically, in every time step it is possible to increase the decay \(s\) by the value of \(s_{1}.\) But this can lead to overestimates on some variables, so it is sometimes beneficial to keep the old \(s.\) In the code we use a heuristic algorithm which estimates an upper bound of the resulting series to decide whether it is worth increasing the exponent.
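The interplay of these decay rates is easy to quantify for a Brusselator-like spectrum. The sketch below (an illustration under the assumption \(\lambda_{i}\leq-d\,i^{2}\) for \(i>n\), with tails \(C/i^{s}\) as in (2.8) and plain floating point) computes a tail for \(V^{3}\) from the tails of \(X\) and \(V^{2}\), keeping the old rate \(s\).

```python
import math

def evolve_tail(C_x, s_x, C_f, s_f, d, n, tau):
    """Tail of V^3 = e^{tau L} X + (e^{tau L} - 1) L^{-1} V^2 on modes i > n."""
    C1, s1 = math.exp(-d * tau * n * n) * C_x, s_x   # |e^{tau lam_i}| <= e^{-d tau n^2}
    C2, s2 = C_f / d, s_f + 2.0                       # |(e^{tau lam_i}-1)/lam_i| <= 1/(d i^2)
    s = min(s1, s2)                                   # align to the slower decay rate
    # C / i^{s_big} <= (C * n^{s - s_big}) / i^s for all i >= n
    return C1 * n ** (s - s1) + C2 * n ** (s - s2), s

print(evolve_tail(C_x=1.0, s_x=5.0, C_f=10.0, s_f=5.0, d=0.02, n=59, tau=0.01))
```

With these (made-up) inputs the returned tail constant is below the input one, i.e. the dissipation contracts the tail; requesting a larger exponent would instead transfer part of the decay into the constant.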
## 3. Computer assisted proof of periodic orbit existence
In this section we describe the computer assisted proof of Theorem 1.1.
### Overview of the proof
The proof of Theorem 1.1 is based on the Schauder fixed point theorem. We check that for a previously prepared Poincare map \(\mathcal{P}\) the image of an appropriately chosen compact initial set \(X^{0}\) is contained in this set, i.e. \(\mathcal{P}(X^{0})\subset X^{0}.\) To validate the inclusion, we apply the previously described integration algorithm and Lemma 3.1 to address the issue of crossing the section. As a preliminary step, we construct a set of initial data \(X^{0}\) and a section \(l\) from the analysis of the results of approximate numerical integration of the Galerkin projection of both the Brusselator equation and its variational equation.
### Poincare map - crossing the section
The problem of finding the periodic orbit for the system (2.1) is reduced to finding the fixed point of the Poincare map. In this section we present the algorithm of rigorous computation of the Poincare map and justify its correctness. While the same algorithm for ODEs can be found in [22, Section 5], and its infinite dimensional version appears in [23, Section 3], we present it for completeness of the exposition. Note that although the results of the present section follow closely the concepts of [22, 23], the results in [23, Lemma 6 and Theorem 8] use the Brouwer fixed point theorem and an argument on passing to the limit in Galerkin projections to get the fixed point, while our Theorem 3.2 uses the Schauder fixed point theorem.
We assume, as in Section 2, that we consider the abstract problem (2.1). The spaces \(H,Y\) are also the same as in Section 2. By \(\varphi\) we denote the local semiflow given by the solutions of (2.1). The following lemma allows us to check if the evolution of an initial set transversally intersects the section, which is the kernel of some affine map.
**Lemma 3.1**.: _Let \(l:Y\to\mathbb{R}\) be a function given by the formula \(l(x)=\sum_{i=1}^{n}\alpha_{i}(x_{i}-\beta_{i})\), where \(\alpha_{1}\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}\in\mathbb{R}.\) Let \(X\) be a bounded set in \(Y.\) We assume that for some \(\tau>0\) the following conditions hold:_
1. _For every_ \(x\in X\) _we have_ \(l(x)<0\) _and_ \(l(\varphi(\tau,x))>0.\)__
2. _For every_ \(x\in\varphi([0,\tau],X)\) _we have_ \(\sum_{i=1}^{n}\alpha_{i}F_{i}(x)>0.\)__
_Then for every \(x\in X\) there exists a unique \(\tau_{l}(x)\in(0,\tau)\) such that \(l(\varphi(\tau_{l}(x),x))=0.\) Moreover the function \(\tau_{l}:X\to(0,\tau)\) is continuous in the norm of the space \(Y.\)_
Proof.: Let \(x\in X.\) We observe that condition (1) and the continuity of the flow imply that there exists \(\tau_{l}(x)\) such that \(l(\varphi(\tau_{l}(x),x))=0.\) By condition (2) we deduce that the function \(g(t)=l(\varphi(t,x))\) is increasing. Indeed we have
\[\frac{dg(t)}{dt}=\sum_{i=1}^{n}\alpha_{i}\frac{d\varphi_{i}(t,x)}{dt}=\sum_{i =1}^{n}\alpha_{i}F_{i}(\varphi(t,x))>0.\]
So \(\tau_{l}(x)\) has to be unique zero for \(g\) on the interval \([0,\tau]\). We prove the continuity of \(\tau_{l}.\) Let \(\varepsilon>0.\) We define \(g(\tau_{l}(x)-\varepsilon)=A_{1}<0\) and \(g(\tau_{l}(x)+\varepsilon)=A_{2}>0.\) From the continuity of the flow and the function \(l\), we can pick \(\delta>0\) such that for every \(y\in B_{Y}(x,\delta)\cap X\) we have
\[\sup_{s\in[0,\tau]}|l(\varphi(s,x))-l(\varphi(s,y))|<\frac{\min\{-A_{1},A_{2} \}}{2}.\]
So for \(y\in B_{Y}(x,\delta)\cap X\) we obtain \(l(\varphi(\tau_{l}(x)-\varepsilon,y))<0\) and \(l(\varphi(\tau_{l}(x)+\varepsilon,y))>0.\) Consequently there exists \(\tau_{l}(y)\in\tau_{l}(x)+(-\varepsilon,\varepsilon)\) such that \(l(\varphi(\tau_{l}(y),y))=0,\) so the function \(\tau_{l}\) is continuous, which concludes the proof.
If the assumptions of Lemma 3.1 are satisfied, then for every \(x\in X\) we have \(l(\varphi(\tau_{l}(x),x))=0.\) Hence we can define the Poincare map \(\mathcal{P}:X\to Y\) by the formula \(\mathcal{P}(x)=\varphi(\tau_{l}(x),x).\) The map \(\mathcal{P}\) is continuous in the norm of the Banach space \(Y\). If \(\varphi(t,X^{0})\subset X\) then we can define the Poincare map \(\mathcal{P}:X^{0}\to Y\) by the formula \(\mathcal{P}(x)=\varphi(\tau_{l}(\varphi(t,x)),\varphi(t,x)).\)
We will briefly describe the algorithm which estimates the image of the set \(X^{0}=X^{0}_{P}+X^{0}_{Q}\) by the Poincare map. We assume that the section \(l(x)\) is given by the same formula as in Lemma 3.1 and does not depend on the values in the space \(H_{Q},\) that is, for every \(x\in Y\) we have \(l(x)=l(Px).\) The algorithm is the following.
1. Set \(t_{\mathrm{prev}}:=0\) and \(t_{\mathrm{curr}}:=0.\)
2. Check if for every \(x\in\varphi(t_{\mathrm{prev}},X^{0})\) there holds \(l(x)<0\) and for every \(x\in\varphi(t_{\mathrm{curr}},X^{0})\) there holds \(l(x)>0.\) (a) If no, then change \(t_{\mathrm{prev}}:=t_{\mathrm{curr}}\) and \(t_{\mathrm{curr}}:=t_{\mathrm{curr}}+\tau,\) where \(\tau>0\) is a given time step. Then go back to step (2). (b) If yes, then try to minimise the difference \(t_{\mathrm{curr}}-t_{\mathrm{prev}}\) for which the condition in (2) holds.
3. Compute the enclosure \(X^{E}\) such that \(\varphi([t_{\mathrm{prev}},t_{\mathrm{curr}}],X^{0})\subset X^{E}.\)
4. Check that for every \(x\in X^{E}\) we have \(\sum_{i=1}^{n}\alpha_{i}F_{i}(x)>0\) and if this condition is satisfied return \(X^{E}.\)
The images \(\varphi(t_{\mathrm{prev}},X^{0})\) and \(\varphi(t_{\mathrm{curr}},X^{0})\) can be estimated from the algorithm of integration described in Section 2. The computation of the evaluations of \(l(\varphi(t_{\mathrm{prev}},X^{0}))\) and \(l(\varphi(t_{\mathrm{curr}},X^{0}))\) depends only on the \(P\) part of the representations of the sets \(\varphi(t_{\mathrm{prev}},X^{0})\) and \(\varphi(t_{\mathrm{curr}},X^{0})\). We refer to [11, Algorithm 1, 3] for further details about the evaluation of \(l.\) Step (b) is not necessary, but it significantly improves the estimates produced by the algorithm. For the minimization of the crossing time we can use a rigorous version of the bisection method or the rigorous Newton method [11, Algorithm 5, Lemma 8].
Steps (2) and (4) of the above algorithm assure that assumptions (1) and (2) of Lemma 3.1 are satisfied. In step (4) we can additionally return an estimation of the set \([B](X^{E}_{P}-[y^{0}])\) where \([B]\) is some interval matrix and \([y^{0}]\) is some interval vector. This allows us to compute the resulting estimates in some system of coordinates defined on the section and to ignore the component which is normal to the section. For more details of the computation of \([B](X^{E}_{P}-y^{0})\) see [11, Algorithm 6].
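For a single point trajectory, the localization of the crossing time in step (2)(b) reduces to a sign-change bisection. The following non-rigorous sketch illustrates this with a toy flow (the exact harmonic-oscillator flow and the section \(l(x)=x_{1}\) are assumptions of the sketch; the rigorous code uses interval bisection or the interval Newton method instead).

```python
import math

def phi(t, x):                         # exact flow of x'' = -x, as a stand-in
    c, s = math.cos(t), math.sin(t)
    return (c * x[0] + s * x[1], -s * x[0] + c * x[1])

def crossing_time(x, t_prev, t_curr, l=lambda y: y[0], iters=60):
    # bisect on t between a time with l < 0 and a time with l > 0
    assert l(phi(t_prev, x)) < 0 < l(phi(t_curr, x))
    for _ in range(iters):
        mid = 0.5 * (t_prev + t_curr)
        if l(phi(mid, x)) < 0:
            t_prev = mid
        else:
            t_curr = mid
    return 0.5 * (t_prev + t_curr)

print(crossing_time((-1.0, 0.0), 1.0, 2.0))   # ~ pi/2
```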
The following result allows us to deduce the existence of a periodic orbit. The assumptions of this theorem can be checked with the use of the previously described algorithm.
**Theorem 3.2**.: _Let \(X^{0}\) be a nonempty, compact, convex subset of \(Y\), such that for every \(x^{0}\in X^{0}\) we have \(l(x^{0})=0\). Assume that for \(t>0\) and \(\tau>0\) the following holds:_
1. _For every_ \(x^{0}\in X^{0}\) _we have_ \(l(\varphi(t,x^{0}))<0\) _and_ \(l(\varphi(t+\tau,x^{0}))>0.\)__
2. _For every_ \(x\in\varphi([t,t+\tau],X^{0}),\) _we have_ \(\sum_{i=1}^{n}\alpha_{i}F_{i}(x)>0.\)__
3. _For every_ \(x\in\varphi([t,t+\tau],X^{0})\) _such that_ \(l(x)=0\) _we have_ \(x\in X^{0}.\)__
_Then there exist \(x^{*}\in\varphi([t,t+\tau],X^{0})\) and \(T\in(t,t+\tau)\) such that \(l(x^{*})=0\) and \(\varphi(T,x^{*})=x^{*}.\)_
Proof.: From Lemma 3.1 we observe that for every \(x\in X^{0}\) we have \(l(\varphi(\tau_{l}(\varphi(t,x)),\varphi(t,x)))=0.\) Hence we can define the map \(\mathcal{P}:X^{0}\to Y\) by the formula \(\mathcal{P}(x^{0})=\varphi(\tau_{l}(\varphi(t,x)),\varphi(t,x)).\) The map \(\mathcal{P}\) is continuous in the norm of the Banach space \(Y\). From assumption (3) we see that \(\mathcal{P}(X^{0})\subset X^{0}.\) As \(X^{0}\) is compact and convex, the Schauder fixed-point theorem ensures the existence of a fixed point \(x^{*}\). For this point we have \(x^{*}\in\mathcal{P}(X^{0})\subset\varphi([t,t+\tau],X^{0})\) which concludes the proof.
**Remark 3.1**.: If in Theorem 3.2 we additionally assume that for some \(t_{1}<t_{2}<t+\tau\):
1. For every \(x\in\varphi([0,t_{1}],X_{0})\) and \(x\in\varphi([t_{2},t+\tau],X_{0})\) we have that \(\sum_{i=1}^{n}\alpha_{i}F_{i}(x)>0\).
2. For every \(t\in[t_{1},t_{2}]\) we have \(\varphi(t,X^{0})\cap X^{0}=\emptyset.\)
Then \(T\) has to be a fundamental period for \(x^{*}\).
### Numerical approximation of periodic orbit
The first step in constructing the initial data and the section which will be used in validating the assumptions of Theorem 3.2 is finding an approximation of the periodic orbit using the Galerkin projection of (2.5). This means that we need to find initial data \(x^{*}=(u^{*},v^{*})\) and a time \(T^{*}\) such that the solution to the system
\[\begin{cases}\frac{d}{dt}P(u(t),v(t))=LP(u(t),v(t))+Pf(P(u(t),v(t))),\\ (u(0),v(0))=(u^{*},v^{*}).\end{cases} \tag{3.1}\]
is close to a periodic solution and \(T^{*}\) is close to its period.
As, for the Brusselator system, we observe that the periodic orbit is numerically attracting, it is enough to find an approximation of the attracting fixed point of some Poincare map. We need to ensure that the section which defines this Poincare map intersects the periodic orbit of (3.1). Additionally, we search for \((u^{*},v^{*})\) for which only the odd Fourier coefficients in the sine series are nonzero. We chose to project the Brusselator system (1.1) on the subspace of \(L^{2}\times L^{2}\) spanned by the functions \(\{(\sin(x),0),(\sin(3x),0),\ldots,(\sin(17x),0)\}\) and \(\{(0,\sin(x)),(0,\sin(3x)),\ldots,(0,\sin(17x))\}\).
With this procedure we have found the following approximations of the fixed point of the Poincare map.
\[\begin{split} u^{*}(x)&=10^{-1}*6.999\sin(x)-10^{- 2}*8.170\sin(3x)-10^{-3}*5.377\sin(5x)+10^{-2}*1.325\sin(7x)+\\ &+10^{-3}*1.050\sin(9x)-10^{-4}*2.585\sin(11x)-10^{-6}*1.764\sin( 13x)+10^{-7}*5.029\sin(15x)\\ &+10^{-8}*2.779\sin(17x),\end{split}\]
and
\[\begin{split} v^{*}(x)&=3.869\sin(x)+1.136\sin(3x) +10^{-1}*1.017\sin(5x)-10^{-3}*9.291\sin(7x)\\ &-10^{-3}*1.297\sin(9x)+10^{-4}*1.960\sin(11x)+10^{-5}*1.993\sin( 13x)-10^{-6}*4.109\sin(15x)\\ &-10^{-7}*3.147\sin(17x).\end{split}\]
The coefficients of \(u^{*}\) and \(v^{*}\) are written up to three decimal places.
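These coefficients can be evaluated directly; a small numpy sketch (the sign of the \(\sin(5x)\) term follows the corrected formula above):

```python
import numpy as np

u_modes = {1: 6.999e-1, 3: -8.170e-2, 5: -5.377e-3, 7: 1.325e-2,
           9: 1.050e-3, 11: -2.585e-4, 13: -1.764e-6, 15: 5.029e-7,
           17: 2.779e-8}
v_modes = {1: 3.869, 3: 1.136, 5: 1.017e-1, 7: -9.291e-3,
           9: -1.297e-3, 11: 1.960e-4, 13: 1.993e-5, 15: -4.109e-6,
           17: -3.147e-7}

x = np.linspace(0.0, np.pi, 501)
u_star = sum(c * np.sin(k * x) for k, c in u_modes.items())
v_star = sum(c * np.sin(k * x) for k, c in v_modes.items())
print(u_star.max(), v_star.max())   # only odd modes: symmetric about x = pi/2
```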
Figure 2. First two modes and initial data for the approximate periodic solution found by nonrigorous computations.
### Defining a Poincare map and constructing an initial set
We need to define a Poincare map and an initial set that will be used to validate Theorem 1.1. The section is a mapping \(l:Y\to\mathbb{R}\) given by the formula \(l(x)=\sum_{i=1}^{n}\alpha_{i}(x_{i}-x_{i}^{*})\), where \(x^{*}\) is the point from the numerical approximation of the periodic orbit. The numbers \(\alpha_{i}\in\mathbb{R}\) are chosen to assure that the transversality condition (2) of Lemma 3.1 holds. A method of choosing the coefficients optimally, so that the time of passing through the section is minimal, is given in [11, Theorem 18]; we use this approach. To describe the initial set \(X^{0}=X^{0}_{P}+X^{0}_{Q}\) we define separately the two sets \(X^{0}_{P}\) and \(X^{0}_{Q}.\) The set \(X^{0}_{P}\) is defined as follows
\[X^{0}_{P}=x^{*}+Ar^{0},\]
where \(x^{*}\) is a (noninterval) vector, \(A\) is a (noninterval) square matrix and \(r^{0}\) is an interval vector. Note that since \(r^{0}\) is an interval vector, \(x^{*}\) is a vector, and \(A\) is a matrix, the set \(X^{0}_{P}\) is bounded and convex.
The first column \(c_{1}\) of the matrix \(A\) is equal to \((\alpha_{1},\ldots,\alpha_{n}).\) The subsequent columns \(c_{2},\ldots,c_{n}\) of this matrix constitute the coordinate system on the section. They are assumed to satisfy the following two conditions
1. For every \(i\in\{2,\ldots,n\}\) we have \(l(c_{i}+x^{*})=0,\)
2. For the linear functional \(\hat{l}(x)=l(x+x^{*})\) we have that \(\operatorname{span}\{c_{2},\ldots,c_{n}\}=P(\ker\hat{l}).\)
The first coefficient of \(r^{0}\) can be set to the interval \([0,0]\) as our initial set should lie on the previously defined section. The next coefficients describe the size of the set on the section and can be set, for example, to intervals \([-\delta,\delta]\) where \(\delta>0\). In the algorithm of the evolution of the set, we will take as the initial data the set \(\overline{X}^{0}_{P}\) which is a superset of \(X^{0}_{P}\) and is defined by \(\overline{X}^{0}_{P}=[x^{*}]+[A]r^{0},\) where \([A]\) is an interval matrix containing \(A\) and \([x^{*}]\) is an interval vector containing \(x^{*}\). The first column of \([A]\) is the interval vector containing \(c_{1}\). The remaining interval columns denoted by \(\{C_{i}\}_{i=2}^{n}\) are constructed in such a way that they are guaranteed to contain vectors which constitute the coordinate system \(\{c_{i}\}_{i=2}^{n}\) on the section, i.e. \(c_{i}\in C_{i}\) for \(i\in\{2,\ldots,n\}\). In the construction of the columns \(C_{2},\ldots,C_{n}\) of the matrix \([A]\) we can use a numerical approximation of the eigenvectors of the Poincare map given by the section \(l\). Additionally, every numerical approximation of these eigenvectors is rigorously projected on the first column in order to assure that condition (1) is satisfied for a certain \(c_{i}\in C_{i}\). We also compute an interval matrix \([B]\) which is a rigorous interval inverse of the interval matrix \([A].\) The existence of this matrix ensures that condition (2) is satisfied. The matrix \([B]\) is also used in further steps. The set \(X^{0}_{Q}\) is defined by infinite interval vectors with the polynomial decay of coefficients as presented in (2.8).
### Computer assisted proof
With the previously defined section \(l\) and the initial data \(X^{0}\) we check that the assumptions of Theorem 3.2 are satisfied. We use the algorithm described in Section 3.2 to estimate the image \(\mathcal{P}(X^{0})\subset X^{1}_{P}+X^{1}_{Q}.\) This algorithm guarantees that assumptions (1) and (2) of Theorem 3.2 are satisfied. The set \(X^{1}_{P}\) is returned in the form
\[X^{1}_{P}=[x^{*}]+[A]q^{0}.\]
To obtain the interval vector \(q^{0}\) we use the algorithm of computing a Poincare map with the matrix \([B]\), which is an interval inverse of the matrix \([A]\), and \([y^{0}]\) equal to \([x^{*}]\). To validate the assumption (3) of Theorem 3.2 it is enough to check that
(C1) we have \(q^{0}_{i}\subset r^{0}_{i}\) for \(i\in\{2,\ldots,n\},\)

(C2) we have \(X^{1}_{Q}\subset X^{0}_{Q}.\)
In the computer assisted proof of Theorem 1.1 we set
\[H_{P}:=\operatorname{span}\bigcup_{\begin{subarray}{c}1\leq k\leq 59\\ k\bmod 2=1\end{subarray}}\{(\sin(kx),0),(0,\sin(kx))\},\]
We define \(H_{Q}\) as the orthogonal complement of \(H_{P}\) in the space
\[H:=\left\{(u,v)\in L^{2}\times L^{2}:\,u(x)=\sum_{k=1}^{\infty}u_{2k-1}\sin((2k-1) x),\,v(x)=\sum_{k=1}^{\infty}v_{2k-1}\sin((2k-1)x)\right\}.\]
Observe that \(\dim(H_{P})=60.\) The set \(X^{0}_{P}\) is defined using the procedure described in Section 3.4 applied to the Brusselator system. This is a bounded, closed and convex set in the finite dimensional space \(H_{P}\). To see that \(q^{0}_{i}\subset r^{0}_{i}\) implies that \(X^{1}_{P}\subset X^{0}_{P}\) note that \(q^{0}\subset r^{0}\) implies that
\[[B](X^{1}_{P}-[x^{*}])=q^{0}\subset r^{0}=A^{-1}(X^{0}_{P}-x^{*}).\]
But, since \(A^{-1}\in[B]\) and \(x^{*}\in[x^{*}]\), the last inclusion implies that
\[A^{-1}(X^{1}_{P}-x^{*})\subset A^{-1}(X^{0}_{P}-x^{*}),\]
which guarantees the required inclusion \(X^{1}_{P}\subset X^{0}_{P}\). The part \(X^{0}_{Q}\) is given as follows
\[X^{0}_{Q}:=\left\{(u,v)\in H^{Q}:\;u_{k}\in\frac{[-1,1]}{|k|^{5}},\;v_{k}\in \frac{[-1,1]}{|k|^{5}}\quad\text{for }k>59\right\}.\]
The set \(X^{0}=X^{0}_{P}+X^{0}_{Q}\) must be compact in \(Y=C_{0}\times C_{0}\). As it is closed in \(Y\), it is sufficient to show that it is bounded in \(H^{1}_{0}\times H^{1}_{0}\). To this end assume that \((u,v)\in X^{0}\). Then
\[(u_{x},v_{x})=\left(\sum_{\begin{subarray}{c}k=1\\ k\text{ mod }2=1\end{subarray}}^{\infty}ku_{k}\cos(kx),\sum_{\begin{subarray}{c }k=1\\ k\text{ mod }2=1\end{subarray}}^{\infty}kv_{k}\cos(kx)\right),\]
and
\[\|(u,v)\|^{2}_{H^{1}_{0}\times H^{1}_{0}}=\|(u_{x},v_{x})\|^{2}_{L ^{2}\times L^{2}}=\frac{\pi}{2}\sum_{\begin{subarray}{c}k=1\\ k\text{ mod }2=1\end{subarray}}^{\infty}k^{2}(|u_{k}|^{2}+|v_{k}|^{2})\] \[\leq\frac{59^{2}\pi}{2}\sum_{\begin{subarray}{c}k=1\\ k\text{ mod }2=1\end{subarray}}^{59}(|u_{k}|^{2}+|v_{k}|^{2})+\frac{\pi}{2}\sum_{ \begin{subarray}{c}k=61\\ k\text{ mod }2=1\end{subarray}}^{\infty}\frac{2}{k^{8}}.\]
Since the last quantity is bounded uniformly with respect to the choice of \((u,v)\in X^{0}\), we get the required compactness of this set in \(Y\). After the computation of the Poincare map we get the set \(X^{1}_{P}+X^{1}_{Q}\) with the following \(Q\) part
\[X^{1}_{Q}:=\left\{(u,v)\in H^{Q}:\;u_{i}\in 10^{-14}\frac{[-3.46474,3.46474]}{|i |^{6.5}},\;v_{i}\in 10^{-13}\frac{[-3.9024,3.9024]}{|i|^{6.5}}\quad\text{for }i>59\right\}.\]
The conditions (C1) and (C2) are satisfied, so the main Theorem 1.1 is validated. The fact that (C1) holds is demonstrated in Tab. 1, where for brevity only the first 10 coordinates are depicted. Due to the extra information on the periodic orbit obtained in the computer assisted proof, we can provide the following extended version of the main theorem.
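Both checks are plain coordinate-wise interval inclusions; a sketch using the data of Tab. 1 and the tail constants above (intervals as `(lo, hi)` tuples; coordinate 1 is excluded from (C1), since the set lies on the section in that direction):

```python
def subset(a, b):
    return b[0] <= a[0] and a[1] <= b[1]

r0 = [(0.0, 0.0)] + [(-1e-5, 1e-5)] * 9
q0 = [(-1.10436e-6, 1.21675e-6), (-6.87377e-6, 6.87371e-6),
      (-1.1776e-6, 1.17766e-6), (-2.8735e-11, -2.36735e-11),
      (-5.40597e-9, 5.43755e-9), (-1.60821e-9, 1.59499e-9),
      (-1.56502e-10, 1.59018e-10), (-4.72352e-11, 6.65989e-11),
      (-5.29417e-11, -1.37926e-11), (1.31355e-12, 2.59156e-11)]
print(all(subset(q, r) for q, r in zip(q0[1:], r0[1:])))   # (C1), coordinates 2..10

# (C2): 3.9024e-13 / i^6.5 <= 1 / i^5 for all i > 59, i.e. 3.9024e-13 <= i^1.5;
# checking the worst case i = 60 suffices, since i^1.5 is increasing.
print(3.9024e-13 <= 60 ** 1.5)
```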
**Theorem 3.3**.: _For parameters \(d_{1}=0.2,\;d_{2}=0.02,\;A=1,\;B=2\) the Brusselator system has a periodic solution \((\bar{u}(t,x),\bar{v}(t,x)),\) with the period \(T\in[7.69666,7.69667].\) The functions \(\bar{u}(t,x),\bar{v}(t,x)\) are symmetric with respect to the point \(x=\frac{\pi}{2}.\) Moreover the following estimates are
true_
\[\sup_{t\in[0,T]}\left\|\bar{u}(t)\right\|_{L^{2}} \leq 1.27261,\ \sup_{t\in[0,T]}\left\|\bar{v}(t)\right\|_{L^{2}}\leq 5.05587,\] \[\sup_{t\in[0,T]}\left\|\bar{u}_{x}(t)\right\|_{L^{2}} \leq 1.35194,\ \sup_{t\in[0,T]}\left\|\bar{v}_{x}(t)\right\|_{L^{2}}\leq 6.75405,\] \[\sup_{t\in[0,T]}\left\|\bar{u}(t)-u^{*}(t)\right\|_{L^{2}} \leq 0.00049664,\ \sup_{t\in[0,T]}\left\|\bar{v}(t)-v^{*}(t)\right\|_{L^{2}}\leq 0.000955005,\] \[\sup_{t\in[0,T]}\left\|\bar{u}_{x}(t)-u^{*}_{x}(t)\right\|_{L^{2}} \leq 0.000546005,\ \sup_{t\in[0,T]}\left\|\bar{v}_{x}(t)-v^{*}_{x}(t)\right\|_{L^{2}} \leq 0.00141263,\]
_where \((u^{*}(t,x),v^{*}(t,x))\) is the solution to the Brusselator system with the initial data \(u^{*}(0,x)=u^{*}(x),\ v^{*}(0,x)=v^{*}(x),\) and the same parameters as above._
## 4. Numerical and rigorous results for other parameter values
In this section we discuss some numerical observations and rigorous results concerning the Brusselator system with various parameter values. We conduct the computer assisted proof of existence of periodic orbit for the parameter values from the set \(\mathcal{A}=\mathcal{A}_{1}\cup\mathcal{A}_{2}\cup\mathcal{A}_{3}\), of parameters \((d_{1},d_{2},A,B)\), where
\[\mathcal{A}_{1} =\left\{\left(0.2,0.02,1,2+\frac{i}{10}\right):i\in\{0,\ldots,11 \}\right\},\] \[\mathcal{A}_{2} =\left\{\left(1,\frac{1}{64},1,2.71\right),\left(1,\frac{1}{64},1,2.83\right),\left(1,\frac{1}{64},1,2.84\right)\right\},\quad\mathcal{A}_{3} =\{(0.02,0.02,1,2)\}.\]
We have rigorously proved the existence of the periodic orbits for all parameters from the set \(\mathcal{A}\). The values in the set \(\mathcal{A}_{1}\) correspond to the slow-fast behavior of the system, the values in the set \(\mathcal{A}_{2}\) correspond to the period-doubling bifurcation and to cross-validation with the results of Arioli [1], and the parameters in \(\mathcal{A}_{3}\) are related to the numerical experiments in which we find an attracting torus.
In the grid set \(\mathcal{A}_{1}\) we fix the parameters \(d_{1},d_{2},A\) and we increase the parameter \(B\) by \(0.1\) starting from its value \(B=2\), which corresponds to the periodic orbit found in our main Theorem 1.1, and ending at the value \(B=3.1\). For the corresponding planar ODE (1.2), it is known [14] that the slow-fast dynamics of the system increases when the parameter \(B\) grows. We observe the same
\begin{table}
\begin{tabular}{|l|l|l|} \hline & \(r_{0}\) & \(q_{0}\) \\ \hline
1. & \([0,0]\) & \(10^{-6}[-1.10436,1.21675]\) \\ \hline
2. & \(10^{-5}[-1\;,\;1]\) & \(10^{-6}[-6.87377,6.87371]\) \\ \hline
3. & \(10^{-5}[-1\;,\;1]\) & \(10^{-6}[-1.1776,1.17766]\) \\ \hline
4. & \(10^{-5}[-1\;,\;1]\) & \(10^{-11}[-2.8735,-2.36735]\) \\ \hline
5. & \(10^{-5}[-1\;,\;1]\) & \(10^{-9}[-5.40597,5.43755]\) \\ \hline
6. & \(10^{-5}[-1\;,\;1]\) & \(10^{-9}[-1.60821,1.59499]\) \\ \hline
7. & \(10^{-5}[-1\;,\;1]\) & \(10^{-10}[-1.56502,1.59018]\) \\ \hline
8. & \(10^{-5}[-1\;,\;1]\) & \(10^{-11}[-4.72352,6.65989]\) \\ \hline
9. & \(10^{-5}[-1\;,\;1]\) & \(10^{-11}[-5.29417,-1.37926]\) \\ \hline
10. & \(10^{-5}[-1\;,\;1]\) & \(10^{-11}[0.131355,2.59156]\) \\ \hline \end{tabular}
\end{table}
Table 1. First 10 coordinates of interval vectors \(r_{0}\) and computed \(q_{0}\) which correspond to the sets \(X_{P}^{0}\) and \(X_{P}^{1}\) in the computer assisted proof of Theorem 1.1 for parameters \(d_{1}=0.2,d_{2}=0.02,A=1,B=2\).
intensification of slow-fast behavior upon the increase of parameter \(B\) also for the Brusselator with diffusion. This is depicted in Fig. 3 and 4. The slow-fast behavior is especially visible in the higher Fourier modes, cf. Fig. 5. We also stress that as the value of \(B\) increases, we require more modes to accurately represent the solution. Therefore, to successfully carry out the computer-assisted proof, we must increase the dimension of the inclusion as the value of \(B\) increases, and, consequently, the computation time lengthens. This is shown in Tab. 2.
The result on the existence of the periodic orbits for parameters in \(\mathcal{A}_{1}\) is contained in the next theorem.
**Theorem 4.1**.: _The Brusselator system has a periodic orbit for \((d_{1},d_{2},A,B)\in\mathcal{A}_{1}\)._
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Value of parameter \(B\) & Number of variables in the inclusion (2.17) & Computation time in seconds \\ \hline
2.1 & 14 & 298 \(s\) \\ \hline
2.3 & 22 & 332 \(s\) \\ \hline
2.5 & 22 & 341 \(s\) \\ \hline
2.7 & 30 & 477 \(s\) \\ \hline
2.9 & 32 & 534 \(s\) \\ \hline
3.1 & 40 & 771 \(s\) \\ \hline \end{tabular}
\end{table}
Table 2. Selected values of parameter \(B\) used in the computer assisted proof of Theorem 4.1, the dimensions of corresponding inclusion (2.17) and the computation times needed to rigorously validate the periodic orbit existence. The computations were single-threaded and realized on the processor Intel(R) Core(TM) i5-4200M CPU @ 2.50GHz.
Figure 3. Plots of projection of numerical approximation of the periodic orbits into first modes of \(u\) and \(v\) (left) and the evolution in time of the first Fourier modes (right) for \(d_{1}=0.2\), \(d_{2}=0.02\), \(B=2.1\) (top) and \(B=2.3\) (bottom), \(A=1\).
Figure 4. Plots of projection of numerical approximation of the periodic orbits into first modes of \(u\) and \(v\) (left) and the evolution in time of the first Fourier modes (right) for \(d_{1}=0.2,\ d_{2}=0.02,\ B=2.5,\ 2.7,\ 2.9,\ 3.1\) (from top to bottom), \(A=1\).
In [1, Theorem 2] Arioli used a computer assisted method and proved the existence of a periodic orbit for parameters \(d_{1}=1,\)\(d_{2}=\frac{1}{64},\)\(B\in[2.6993,2.7419],\)\(A=1.\) We have checked his result for \(B=2.71,\) and we found that both results correspond to each other in terms of the found period of the orbit. Our result, which successfully cross-validates both approaches, is contained in the following theorem.
**Theorem 4.2**.: _For parameters \(d_{1}=1,\;d_{2}=\frac{1}{64},\;A=1,\;B=2.71\) the Brusselator system has a periodic solution \((\bar{u}(t,x),\bar{v}(t,x)),\) with the period \(T\in[10.4549,10.455].\) The functions \(\bar{u}(t,x),\bar{v}(t,x)\) are symmetric with respect to the point \(x=\frac{\pi}{2}.\) Moreover the following estimates are true_
\[\sup_{t\in[0,T]}\left\|\bar{u}(t)\right\|_{L^{2}} \leq 0.600569,\;\sup_{t\in[0,T]}\left\|\bar{v}(t)\right\|_{L^{2}} \leq 5.05587,\] \[\sup_{t\in[0,T]}\left\|\bar{u}_{x}(t)\right\|_{L^{2}} \leq 10.0529,\;\sup_{t\in[0,T]}\left\|\bar{v}_{x}(t)\right\|_{L^{2} }\leq 11.5792,\] \[\sup_{t\in[0,T]}\left\|\bar{u}(t)-u^{*}(t)\right\|_{L^{2}} \leq 10^{-5}*8.69037,\;\sup_{t\in[0,T]}\left\|\bar{v}(t)-v^{*}(t) \right\|_{L^{2}}\leq 0.000295375,\] \[\sup_{t\in[0,T]}\left\|\bar{u}_{x}(t)-u^{*}_{x}(t)\right\|_{L^{2 }} \leq 10^{-5}*8.99234,\;\sup_{t\in[0,T]}\left\|\bar{v}_{x}(t)-v^{*}_{x}(t) \right\|_{L^{2}}\leq 0.000450742,\]
_where \((u^{*}(t,x),v^{*}(t,x))\) is the solution to the Brusselator system with the initial data_
\[u^{*}(0,x) =0.43\sin(x)-0.0231361\sin(3x)-0.00129933\sin(5x)+10^{-5}*7.48643 \sin(7x)\] \[+10^{-6}*7.466799\sin(9x)-10^{-7}*2.998759\sin(11x)-10^{-8}*3.8992 1\sin(13x)\] \[+10^{-9}*1.15918\sin(15x)+10^{-10}*1.98398\sin(17x),\]
\[v^{*}(0,x) =7.85996\sin(x)+1.59666\sin(3x)+0.091348\sin(5x)-0.0041776\sin(7x)\] \[-10^{-4}*4.73146\sin(9x)+10^{-5}*1.66984\sin(11x)+10^{-6}*2.42261 \sin(13x)\] \[-10^{-8}*6.5405\sin(15x)-10^{-8}*1.23104\sin(17x),\]
_and the same parameters as above._
Figure 5. The evolution of fourth mode of \(u\) for \(B=2.1\) (left picture) and \(B=3.1\) (right picture).
Arioli observed a period doubling bifurcation, a phenomenon that cannot occur in the planar ODE (1.2). Thus, the dynamics of (1.1) is expected to be more complicated than that of the planar ODE (1.2). Although we do not rigorously prove the bifurcation, we show that the minimal period of the found orbits approximately doubles with a small increase in the parameter \(B\), as seen in Theorem 4.3. Specifically, as we increase \(B\) and keep the other parameters fixed at \(d_{1}=1,\ d_{2}=\frac{1}{64},\ A=1\), from numerical simulations we observe that the system goes through a period doubling bifurcation. The bifurcation appears to occur between \(B=2.83\) and \(B=2.84\). We have proven the following theorem about periodic orbits for these parameters.
**Theorem 4.3**.: _For \(d_{1}=1,\ d_{2}=\frac{1}{64}\), \(B=2.83,\ A=1\) there exists a periodic orbit with its fundamental period in the interval \([13.2128,13.2130]\). For \(d_{1}=1,\ d_{2}=\frac{1}{64}\), \(B=2.84,\ A=1\) there exists a periodic orbit with its fundamental period in the interval \([27.2436,27.2439]\)._
Note that we only obtain rigorous computer assisted proofs for the parameter values \(B\) before and after the expected bifurcation. We believe that conducting a computer assisted proof of the existence of this bifurcation would be an interesting and challenging problem.
Figure 6. Periodic orbit from Theorem 4.2, i.e. for \(B=2.71\).
Figure 7. Numerically attracting periodic orbit for \(B=2.83\) (left) and \(B=2.84\) (right).
Other nontrivial dynamics of the Brusselator PDE (1.1) was investigated in [6], where numerical evidence for the existence of \(2\)-dimensional attracting tori was presented. We observe, only numerically, the same phenomenon, which exists only in the PDE Brusselator model and not in the corresponding planar ODE (1.2). First, we rigorously establish the existence of the periodic orbit when the diffusion rates \(d_{1}\) and \(d_{2}\) are equal to each other: we have proved it for the parameter values \(d_{1}=0.02,d_{2}=0.02,B=2,A=1\). If we decrease the diffusion rates \(d_{1}\) and \(d_{2}\) (keeping them equal to each other), we numerically observe the emergence of a two dimensional attracting torus, cf. Fig. 8. This is a numerical confirmation of the phenomenon which was first observed in [6]. Note that the same paper also contains numerical evidence for the presence of chaos in the same system of equations. These dynamical phenomena pose a further interesting challenge for computer assisted proofs for the Brusselator system.
## 5. Algebra
This section contains the technical results on the estimates of the convolutions of the sine and cosine Fourier series represented with uniform estimates on the decay of coefficients. Such operations are performed during the calculations in the algorithm as part of the computer-assisted proofs, where we require uniform estimates on the tails of the series. We work with sequences \(\{u_{i}\}_{i=1}^{\infty}\) such that first \(n\) coefficients are given by some numbers or intervals and the remainders of the sequence satisfy \(u_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]}{i^{s}}\) for some \(C_{u}^{-}\leq C_{u}^{+}\) and \(s\in\mathbb{R}\).
The decay of the Fourier coefficients for smooth periodic functions is related with their regularity: if a periodic function \(u:\mathbb{R}\to\mathbb{R}\) is of class \(C^{s}\), then its coefficients must decay as \(\frac{1}{i^{s}}.\) Clearly, the product of two \(C^{s}\) functions also has regularity \(C^{s}\). This is related with our results of this section, which state that if two functions, represented in the sine or cosine Fourier series have some given decay of the Fourier coefficients of the form \(O\left(\frac{1}{i^{s}}\right)\) for \(s>1\) then their product must have the same decay. Moreover we provide the exact estimates for the Fourier coefficients of the product: such estimates are needed in rigorous computations of nonlinear polynomial terms present in the equations.
Figure 8. Two dimensional numerically attracting torus observed for \(d_{1}=0.009,\;d_{2}=0.009,\;B=2.1,\;A=1\).
The following results are basic, so we skip the proofs. We use them several times in the following considerations.
**Proposition 5.1**.: _Let \(s>1.\) The following inequality holds_
\[\sum_{i=1+n}^{\infty}\frac{1}{i^{s}}\leq\frac{n^{1-s}}{s-1}. \tag{5.1}\]
**Proposition 5.2**.: _Let \(s>1\). If \(a,b>0\) then the following inequality holds_
\[(a+b)^{s}\leq 2^{s-1}(a^{s}+b^{s}). \tag{5.2}\]
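Both propositions are elementary comparison bounds, and they are easy to sanity-check numerically. The following Python snippet is a non-rigorous, floating-point illustration with sample inputs chosen by us; the computer-assisted proof itself relies on rigorous interval arithmetic.

```python
# Non-rigorous numerical sanity check of Propositions 5.1 and 5.2.

def tail_sum(n: int, s: float, terms: int = 200_000) -> float:
    """Approximate sum_{i=n+1}^{infty} 1/i^s by a long partial sum."""
    return sum(1.0 / i**s for i in range(n + 1, n + 1 + terms))

n, s = 10, 2.0
assert tail_sum(n, s) <= n**(1 - s) / (s - 1)      # Proposition 5.1

a, b, s = 0.7, 1.9, 3.0
assert (a + b)**s <= 2**(s - 1) * (a**s + b**s)    # Proposition 5.2
print("both bounds hold for the sampled inputs")
```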
The following lemma gives us the estimates on the result of multiplication of \(u\) and \(v\) which are both represented in the sine Fourier series. The first \(n\) coefficients of \(u\) and \(v\) are given explicitly and the coefficients indexed by numbers larger than \(n\) are expressed by the polynomial decay.
**Lemma 5.1**.: _Assume that_
\[u(x)=\sum_{i=1}^{\infty}u_{i}\sin(ix),\quad v(x)=\sum_{i=1}^{\infty}v_{i}\sin( ix). \tag{5.3}\]
_Moreover we assume that for some \(n\in\mathbb{N}\) and \(s>1\) the following estimates hold_
\[u_{i}\in\frac{C_{u}[-1,1]}{i^{s}}\quad\text{and}\quad v_{i}\in\frac{C_{v}[-1, 1]}{i^{s}}\quad\text{for }i>n, \tag{5.4}\]
_where \(C_{v}>0,C_{u}>0\). Then_
\[(uv)(x)=(uv)_{0}+\sum_{k=1}^{\infty}(uv)_{k}\cos(kx), \tag{5.5}\]
_with_
\[(uv)_{0}=\frac{1}{2}\sum_{i=1}^{\infty}u_{i}v_{i},\quad(uv)_{k}=\frac{1}{2}\sum_{i=1}^{\infty}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{\infty}u_{i}v_{i+k}-\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}. \tag{5.6}\]
_and the following estimates hold for the coefficients of the product. For \(k=0\) we have_
\[(uv)_{0}\in\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i}+\frac{C_{u}C_{v}}{2}\frac{n^{1- 2s}}{2s-1}[-1,1], \tag{5.7}\]
_for \(1\leq k\leq 2n\) we have_
\[(uv)_{k}\in\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i+k}+\frac{1}{2}\sum_{i=1}^{n}u_{ i+k}v_{i}-\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}+C_{u}C_{v}\frac{n^{1-2s}}{2s-1 }[-1,1], \tag{5.8}\]
_and for \(k>2n\) we have_
\[(uv)_{k}\in\frac{D[-1,1]}{k^{s}},\quad D=\frac{1}{2}\left(\sum_{i=1}^{n}(C_{u }|v_{i}|+C_{v}|u_{i}|)\left(1+\left(\frac{2n+1}{2n+1-i}\right)^{s}\right)+C_{u }C_{v}(2+2^{s})\frac{n^{1-s}}{s-1}\right). \tag{5.9}\]
The above lemma provides explicit formulas for estimating the coefficients with indexes \(0\) to \(2n\) of the cosine Fourier series. This is motivated by the fact that if the sine expansions of \(u\) and \(v\) are finite and concentrated on the first \(n\) coefficients, then the representation of \(uv\) is also finite and only the first \(2n+1\) coefficients are nonzero. In the formulas in the above lemma, whenever the terms \(u_{i}\) and \(v_{i}\) appear with \(i>n\) in the computation, we substitute them with the intervals \(\frac{C_{u}}{i^{s}}[-1,1]\) and \(\frac{C_{v}}{i^{s}}[-1,1]\), respectively. Additionally, we can use other available estimates of \(u_{i}\). For example, we can use the fact that \(u_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]}{i^{s}}\) (with upper and lower bound different from each other), or that \(u_{i}\) is zero for odd coefficients.
The coefficients for \(k>2n\) of the cosine expansion in the above lemma are given by the uniform polynomial decay with the same rate as the sine series for \(u\) and \(v\).
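On finitely supported series the identity (5.5)-(5.6) can be checked directly. The following Python sketch (illustrative, plain floating point, with coefficients drawn at random by us) evaluates the product of two finite sine series on a grid and compares it with the cosine series assembled from (5.6).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
u = rng.normal(size=n + 1); u[0] = 0.0   # coefficients u_1..u_n; index 0 unused
v = rng.normal(size=n + 1); v[0] = 0.0

K = 2 * n                                # the product has cosine modes 0..2n
U = np.zeros(K + 1); U[: n + 1] = u      # zero-padded copies
V = np.zeros(K + 1); V[: n + 1] = v

# Cosine coefficients of u*v following (5.6).
uv0 = 0.5 * np.sum(U * V)
uvk = [0.5 * sum(U[i + k] * V[i] for i in range(1, K - k + 1))
       + 0.5 * sum(U[i] * V[i + k] for i in range(1, K - k + 1))
       - 0.5 * sum(U[i] * V[k - i] for i in range(1, k))
       for k in range(1, K + 1)]

x = np.linspace(0.0, np.pi, 400)
lhs = (sum(U[i] * np.sin(i * x) for i in range(1, K + 1))
       * sum(V[i] * np.sin(i * x) for i in range(1, K + 1)))
rhs = uv0 + sum(uvk[k - 1] * np.cos(k * x) for k in range(1, K + 1))
assert np.allclose(lhs, rhs)             # the identity (5.5)-(5.6) holds
```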
Proof of Lemma 5.1.: We have
\[uv =\left(\sum_{i_{1}=1}^{\infty}u_{i_{1}}\sin(i_{1}x)\right)\left(\sum_{i_{2}=1}^{\infty}v_{i_{2}}\sin(i_{2}x)\right)=\frac{1}{2}\sum_{i_{1},i_{2}=1}^{\infty}u_{i_{1}}v_{i_{2}}\cos((i_{1}-i_{2})x)-\frac{1}{2}\sum_{i_{1},i_{2}=1}^{\infty}u_{i_{1}}v_{i_{2}}\cos((i_{1}+i_{2})x)\] \[=\frac{1}{2}\sum_{i_{1}=i_{2}}u_{i_{1}}v_{i_{2}}+\frac{1}{2}\sum_{k=1}^{\infty}\left(\sum_{i_{1}-i_{2}=k}u_{i_{1}}v_{i_{2}}+\sum_{i_{1}-i_{2}=-k}u_{i_{1}}v_{i_{2}}-\sum_{i_{1}+i_{2}=k}u_{i_{1}}v_{i_{2}}\right)\cos(kx).\]
We express all coefficients of the resultant cosine series separately, using the formulas
\[(uv)_{0}=\frac{1}{2}\sum_{i=1}^{\infty}u_{i}v_{i},\]
and
\[(uv)_{k}=\frac{1}{2}\sum_{i=1}^{\infty}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{ \infty}u_{i}v_{i+k}-\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i},\]
for \(k\geq 1.\) Observe that for \(i>n\) there holds
\[|u_{i}v_{i}|\leq\frac{C_{v}C_{u}}{i^{2s}}.\]
So, we obtain
\[(uv)_{0}=\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i}+\frac{1}{2}\sum_{i=n+1}^{\infty}u_{i}v_{i}\in\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i}+\frac{C_{u}C_{v}}{2}\sum_{i=n+1}^{\infty}\frac{1}{i^{2s}}[-1,1]\subset\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i}+\frac{C_{u}C_{v}}{2}\frac{n^{1-2s}}{2s-1}[-1,1].\]
Now, observe that for \(i>n\) the following estimates hold
\[|u_{i}v_{i+k}|\leq\frac{C_{v}C_{u}}{i^{2s}},\quad|u_{i+k}v_{i}|\leq\frac{C_{v }C_{u}}{i^{2s}},\]
whence we deduce that for \(1\leq k\leq 2n\) we have
\[(uv)_{k} =\frac{1}{2}\sum_{i=1}^{n}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{n} u_{i}v_{k+i}-\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}+\frac{1}{2}\sum_{i=n+1}^{ \infty}(u_{i+k}v_{i}+u_{i}v_{i+k})\] \[\in\frac{1}{2}\sum_{i=1}^{n}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{n }u_{i}v_{k+i}-\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}+C_{v}C_{u}\frac{n^{1-2s }}{2s-1}[-1,1].\]
Finally, for \(k>2n\) we need to estimate the sums in the expression
\[(uv)_{k} =\frac{1}{2}\Bigg{[}\sum_{i=1}^{n}u_{i+k}v_{i}+\sum_{i=1}^{n}u_{i}v_{i+k}+\sum_{i=n+1}^{\infty}u_{i+k}v_{i}+\sum_{i=n+1}^{\infty}u_{i}v_{i+k}\] \[\qquad-\sum_{i=1}^{n}u_{k-i}v_{i}-\sum_{i=k-n}^{k-1}u_{k-i}v_{i}-\sum_{i=n+1}^{k-n-1}u_{k-i}v_{i}\Bigg{]}.\]
To this end, observe that we have
\[\sum_{i=1}^{n}|u_{i+k}v_{i}|\leq\sum_{i=1}^{n}\frac{C_{u}}{(i+k)^ {s}}|v_{i}|\leq\frac{C_{u}}{k^{s}}\sum_{i=1}^{n}|v_{i}|,\quad\sum_{i=1}^{n}|u_ {i}v_{i+k}|\leq\frac{C_{v}}{k^{s}}\sum_{i=1}^{n}|u_{i}|,\] \[\sum_{i=n+1}^{\infty}|u_{i+k}v_{i}|\leq\sum_{i=n+1}^{\infty} \frac{C_{u}C_{v}}{i^{s}(i+k)^{s}}\leq\frac{C_{u}C_{v}}{k^{s}}\sum_{i=n+1}^{ \infty}\frac{1}{i^{s}}\leq\frac{C_{u}C_{v}}{k^{s}}\frac{n^{1-s}}{s-1},\sum_{i= n+1}^{\infty}|u_{i}v_{i+k}|\leq\frac{C_{u}C_{v}}{k^{s}}\frac{n^{1-s}}{s-1}.\]
We also obtain
\[\sum_{i=1}^{n}|u_{k-i}v_{i}|\leq\frac{C_{u}}{k^{s}}\sum_{i=1}^{n}|v_{i}|\left( \frac{k}{k-i}\right)^{s}=\frac{C_{u}}{k^{s}}\sum_{i=1}^{n}|v_{i}|\left(1+\frac{i }{k-i}\right)^{s}\leq\frac{C_{u}}{k^{s}}\sum_{i=1}^{n}|v_{i}|\left(1+\frac{i}{2 n+1-i}\right)^{s}.\]
The last inequality follows from the fact that the sequence \(\{1+\frac{i}{k-i}\}_{k\geq 2n+1}\) is decreasing with respect to \(k.\) Similarly, we have
\[\sum_{i=k-n}^{k-1}|u_{k-i}v_{i}|=\sum_{i=1}^{n}|u_{i}v_{k-i}|\leq\frac{C_{v}}{k ^{s}}\sum_{i=1}^{n}|u_{i}|\left(1+\frac{i}{2n+1-i}\right)^{s}.\]
The last infinite sum is estimated as follows
\[\sum_{i=n+1}^{k-1-n}|u_{i}v_{k-i}| \leq C_{u}C_{v}\sum_{i=n+1}^{k-1-n}\frac{1}{i^{s}(k-i)^{s}}=\frac{ C_{u}C_{v}}{k^{s}}\sum_{i=n+1}^{k-1-n}\left(\frac{1}{i}+\frac{1}{k-i}\right)^{s}\] \[\leq\frac{2^{s-1}C_{u}C_{v}}{k^{s}}\left(\sum_{i=n+1}^{k-1-n} \frac{1}{i^{s}}+\sum_{i=n+1}^{k-1-n}\frac{1}{(k-i)^{s}}\right)\leq\frac{2^{s-1 }C_{u}C_{v}}{k^{s}}\frac{2n^{-s+1}}{s-1}.\]
Combining the estimates of all seven sums yields directly the assertion of the lemma.
The next lemma is analogous to Lemma 5.1 and gives us the estimate on the result of multiplication of \(u\) and \(v\), which are represented in cosine and sine series, respectively. The first \(n+1\) coefficients of \(u\) and the first \(n\) coefficients of \(v\) are given explicitly and the remaining ones are expressed by the polynomial decay.
**Lemma 5.2**.: _Assume that_
\[u(x)=u_{0}+\sum_{i=1}^{\infty}u_{i}\cos(ix),\quad v(x)=\sum_{i=1}^{\infty}v_{ i}\sin(ix). \tag{5.10}\]
_Moreover, assume that for some \(n\in\mathbb{N}\) and \(s>1\) the following bounds hold_
\[u_{i}\in\frac{C_{u}[-1,1]}{i^{s}}\quad\text{and}\quad v_{i}\in\frac{C_{v}[-1, 1]}{i^{s}}\quad\text{for }i>n, \tag{5.11}\]
_where \(C_{v}>0,C_{u}>0.\) Then_
\[(uv)(x)=\sum_{k=1}^{\infty}(uv)_{k}\sin(kx), \tag{5.12}\]
_with_
\[(uv)_{k}=u_{0}v_{k}+\frac{1}{2}\sum_{i=1}^{\infty}u_{i}v_{i+k}-\frac{1}{2}\sum _{i=1}^{\infty}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}, \tag{5.13}\]
_and the following estimates on the coefficients of the product hold for \(1\leq k\leq 2n\)_
\[(uv)_{k}\in u_{0}v_{k}+\frac{1}{2}\left(\sum_{i=1}^{n}v_{i+k}u_{i}-\sum_{i=1}^ {n}v_{i}u_{k+i}+\sum_{i=1}^{k-1}v_{i}u_{k-i}\right)+C_{u}C_{v}\frac{n^{1-2s}} {2s-1}[-1,1], \tag{5.14}\]
_while for \(k>2n\) we have_
\[(uv)_{k}\in\frac{D[-1,1]}{k^{s}},\quad D=|u_{0}|C_{v}+\frac{1}{2}\left(\sum_{i =1}^{n}(C_{u}|v_{i}|+C_{v}|u_{i}|)\left(1+\left(\frac{2n+1}{2n+1-i}\right)^{s} \right)+C_{u}C_{v}(2+2^{s})\frac{n^{1-s}}{s-1}\right). \tag{5.15}\]
Proof.: The argument is analogous to the proof of Lemma 5.1. Namely, we have the following representation of the product
\[uv =\left(u_{0}+\sum_{i_{1}=1}^{\infty}u_{i_{1}}\cos(i_{1}x)\right)\left(\sum_{i_{2}=1}^{\infty}v_{i_{2}}\sin(i_{2}x)\right)\] \[=\sum_{k=1}^{\infty}u_{0}v_{k}\sin(kx)+\frac{1}{2}\sum_{i_{1},i_{2}=1}^{\infty}u_{i_{1}}v_{i_{2}}\sin((i_{2}-i_{1})x)+\frac{1}{2}\sum_{i_{1},i_{2}=1}^{\infty}u_{i_{1}}v_{i_{2}}\sin((i_{1}+i_{2})x)\] \[=\sum_{k=1}^{\infty}\left(u_{0}v_{k}+\frac{1}{2}\sum_{i_{2}-i_{1}=k}u_{i_{1}}v_{i_{2}}-\frac{1}{2}\sum_{i_{2}-i_{1}=-k}u_{i_{1}}v_{i_{2}}+\frac{1}{2}\sum_{i_{2}+i_{1}=k}u_{i_{1}}v_{i_{2}}\right)\sin(kx).\]
So, for natural \(k\geq 1\) we represent all coefficients using the formulas
\[(uv)_{k}=u_{0}v_{k}+\frac{1}{2}\sum_{i=1}^{\infty}u_{i}v_{i+k}-\frac{1}{2} \sum_{i=1}^{\infty}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}, \tag{5.16}\]
For \(1\leq k\leq 2n\) we write
\[(uv)_{k}=u_{0}v_{k}+\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i+k}-\frac{1}{2}\sum_{i= 1}^{n}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=1}^{k-1}u_{i}v_{k-i}+\frac{1}{2}\sum_{i =n+1}^{\infty}(u_{i}v_{i+k}-u_{i+k}v_{i}).\]
The infinite sums in the above formula are estimated in the same way as in Lemma 5.1. For \(k>2n\) we have
\[(uv)_{k} =u_{0}v_{k}+\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{i+k}-\frac{1}{2} \sum_{i=1}^{n}u_{i+k}v_{i}+\frac{1}{2}\sum_{i=n+1}^{\infty}u_{i}v_{i+k}-\frac{ 1}{2}\sum_{i=n+1}^{\infty}u_{i+k}v_{i}\] \[+\frac{1}{2}\sum_{i=1}^{n}u_{i}v_{k-i}+\frac{1}{2}\sum_{i=k-n}^{k -1}u_{i}v_{k-i}+\frac{1}{2}\sum_{i=n+1}^{k-n-1}u_{i}v_{k-i}.\]
It is easy to see that the first component of the above sum can be estimated in the following way \(|u_{0}v_{k}|\leq\frac{|u_{0}|C_{v}}{k^{s}}.\) The remaining infinite sums are estimated in the same way as in Lemma 5.1.
The following simple lemmas are useful to implement operations on the infinite interval vectors. They can be used when working both with sine and cosine Fourier series.
**Lemma 5.3**.: _Assume that the sequence \(\{u_{i}\}_{i=1}^{\infty}\) satisfies_
\[u_{i}\in[u_{i}^{-},u_{i}^{+}]\quad\text{for}\quad i\leq n\quad\text{and}\quad u _{i}\in\frac{[C_{u}^{-},C_{u}^{+}]}{i^{s}}\quad\text{for}\quad i>n.\]
_If \(k<n\), then_
\[u_{i}\in\frac{[D_{u}^{-},D_{u}^{+}]}{i^{s}}\quad\text{for }i>k,\]
_where,_
\[D_{u}^{-}=\min\{u_{k+1}^{-}(k+1)^{s},\dots,u_{n}^{-}n^{s},C_{u}^{-}\},\quad D_{u}^{+}=\max\{u_{k+1}^{+}(k+1)^{s},\dots,u_{n}^{+}n^{s},C_{u}^{+}\}.\]
**Lemma 5.4**.: _Assume that sequence \(\{u_{i}\}_{i=1}^{\infty}\) satisfies_
\[u_{i}\in[u_{i}^{-},u_{i}^{+}]\quad\text{for}\quad i\leq n\quad\text{and}\quad u_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]}{i^{s}}\quad\text{for}\quad i>n.\]
_Then for \(s_{1}<s\) there holds_
\[u_{i}\in\frac{[D_{u}^{-},D_{u}^{+}]}{i^{s_{1}}}\quad\text{for }i>n,\]
_where,_
\[D_{u}^{-}=\min\left\{0,\frac{C_{u}^{-}}{(n+1)^{s-s_{1}}}\right\},\quad D_{u} ^{+}=\max\left\{0,\frac{C_{u}^{+}}{(n+1)^{s-s_{1}}}\right\}.\]
**Lemma 5.5**.: _Assume that sequences \(\{u_{i}\}_{i=1}^{\infty}\) and \(\{v_{i}\}_{i=1}^{\infty}\) satisfy_
\[u_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]}{i^{s_{1}}},\quad v_{i}\in\frac{[C_{v}^{-},C_ {v}^{+}]}{i^{s_{2}}}\quad\text{for }i>n, \tag{5.17}\]
_with the constants \(s_{1},s_{2}.\) Then_
* _if_ \(s_{1}=s_{2}\) _then for_ \(i>n\) _we have_ \(u_{i}+v_{i}\in\frac{[C_{u}^{-}+C_{v}^{-},C_{u}^{+}+C_{v}^{+}]}{i^{s_{2}}},\)__
* _if_ \(s_{1}<s_{2}\) _then for_ \(i>n\) _we have_ \(u_{i}+v_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]+[C_{v}^{-},C_{v}^{+}]}{i^{s_{1}}},\)__
* _if_ \(s_{1}>s_{2}\) _then for_ \(i>n\) _we have_ \(u_{i}+v_{i}\in\frac{\frac{[0,1]}{(n+1)^{s_{1}-s_{2}}}[C_{u}^{-},C_{u}^{+}]+[C_ {v}^{-},C_{v}^{+}]}{i^{s_{2}}}.\)__
**Lemma 5.6**.: _Assume that sequences \(\{u_{i}\}_{i=1}^{\infty}\) and \(\{v_{i}\}_{i=1}^{\infty}\) satisfy_
\[u_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]}{i^{s_{1}}}\quad\text{and}\quad v_{i}\in \frac{[C_{v}^{-},C_{v}^{+}]}{i^{s_{2}}}\quad\text{for }i>n, \tag{5.18}\]
_with some constants \(s_{1},s_{2}.\) Then for \(i>n\) we have \(u_{i}v_{i}\in\frac{[C_{u}^{-},C_{u}^{+}]\cdot[C_{v}^{-},C_{v}^{+}]}{i^{s_{1}+s_{2}}},\) where the product of intervals is understood in the sense of interval arithmetic._
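Lemmas 5.3-5.6 translate directly into operations on tail enclosures of the form \([C^{-},C^{+}]/i^{s}\). The Python sketch below shows one possible encoding of such a tail together with the addition and multiplication rules of Lemmas 5.5 and 5.6; it works with plain floats and omits outward rounding, so it is illustrative only, whereas the actual proofs require a rigorous interval arithmetic library.

```python
from dataclasses import dataclass

@dataclass
class Tail:
    """Encloses u_i in [lo, hi] / i**s for all i > n."""
    lo: float
    hi: float
    s: float
    n: int

def tail_add(a: Tail, b: Tail) -> Tail:
    # Lemma 5.5: enclose a_i + b_i for i > n, rebasing to the slower decay.
    assert a.n == b.n
    if a.s == b.s:
        return Tail(a.lo + b.lo, a.hi + b.hi, a.s, a.n)
    fast, slow = (a, b) if a.s > b.s else (b, a)
    c = (fast.n + 1.0) ** (slow.s - fast.s)   # factor in (0, 1]
    # [0,1] * c * [lo,hi] contains the rebased fast tail.
    return Tail(slow.lo + min(0.0, c * fast.lo),
                slow.hi + max(0.0, c * fast.hi), slow.s, slow.n)

def tail_mul(a: Tail, b: Tail) -> Tail:
    # Lemma 5.6: enclose a_i * b_i via interval multiplication of the constants.
    assert a.n == b.n
    p = [a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi]
    return Tail(min(p), max(p), a.s + b.s, a.n)
```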
# FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators

Haiping Wang, Yuan Liu, Bing Wang, Yujing Sun, Zhen Dong, Wenping Wang, Bisheng Yang (arXiv:2310.03420)
###### Abstract
Matching cross-modality features between images and point clouds is a fundamental problem for image-to-point cloud registration. However, due to the modality difference between images and points, it is difficult to learn robust and discriminative cross-modality features by existing metric learning methods for feature matching. Instead of applying metric learning on cross-modality data, we propose to unify the modality between images and point clouds by pretrained large-scale models first, and then establish robust correspondence within the same modality. We show that the intermediate features, called diffusion features, extracted by depth-to-image diffusion models are semantically consistent between images and point clouds, which enables the building of coarse but robust cross-modality correspondences. We further extract geometric features on depth maps produced by the monocular depth estimator. By matching such geometric features, we significantly improve the accuracy of the coarse correspondences produced by diffusion features. Extensive experiments demonstrate that without any task-specific training, direct utilization of both features produces accurate image-to-point cloud registration. On three public indoor and outdoor benchmarks, the proposed method averagely achieves a \(20.6\%\) improvement in Inlier Ratio, a \(3.0\times\) higher Inlier Number, and a \(48.6\%\) improvement in Registration Recall than existing state-of-the-arts. The codes and additional results are available at [https://whu-usi3dv.github.io/FreeReg/](https://whu-usi3dv.github.io/FreeReg/).
## 1 Introduction
Image-to-point cloud (I2P) registration requires estimating pixel-to-point correspondences between images and point clouds to estimate the SE(3) pose of the image relative to the point cloud. It is a prerequisite for many tasks such as Simultaneous Localization and Mapping (Zhu et al., 2022), 3D reconstruction (Dong et al., 2020), segmentation (Guo et al., 2020), and visual localization (Sarlin et al., 2023).
To establish pixel-to-point correspondences, we have to match features between images and point clouds. However, it is difficult to learn robust cross-modality features for images and point clouds. Most existing methods (Feng et al., 2019; Wang et al., 2021; Pham et al., 2020; Jiang & Saripalli, 2022; Li et al., 2023) resort to metric learning methods like the contrastive loss, the triplet loss, or the InfoNCE loss to force the alignment between the 2D and 3D features of the same object. However, due to the inherent data disparity that images capture appearance while point clouds represent structure, directly aligning cross-modal data inevitably leads to poor convergence. Consequently, cross-modality metric learning suffers from poor feature robustness (Wang et al., 2021) and limited generalization ability (Li et al., 2023).
In this paper, we propose a novel method, called _FreeReg_, to build robust cross-modality correspondences between images and point clouds with the help of recent large-scale diffusion models (Rombach et al., 2022; Zhang and Agrawala, 2023; Mou et al., 2023) and monocular depth estimators (Bhat et al., 2023; Yin et al., 2023). FreeReg avoids the difficult cross-modality metric learning and does even not require training on the I2P task. As shown in Fig. 1, the key idea is to unify the modality between images and point clouds by these large-scale pretrained models so FreeReg allows robust correspondence estimation within the same modality for cross-modality matching.
In order to convert point clouds to the image modality, a straightforward way is to project points onto an image plane to get a depth map and then convert the depth map to an image by a depth-to-image diffusion model ControlNet (Zhang and Agrawala, 2023). However, as shown in Fig. 2 (I), a depth map may correspond to multiple possible images so that the generated image from the point cloud would have a completely different appearance from the input image, which leads to incorrect matching results even with SoTA image matching methods (Sarlin et al., 2020; DeTone et al., 2018; Sun et al., 2021). To address this problem, we propose to match the semantic features between the generated images and the input image because the generated images show strong semantic consistency with the input image in spite of different appearances. Inspired by recent diffusion-based semantic correspondence estimation methods (Tang et al., 2023; Zhang et al., 2023), we utilize the intermediate feature maps in the depth-to-image ControlNet to match between depth maps and images. As shown in Fig. 2 (II), we visualize the diffusion features of the depth map and the RGB image. Then, we utilize the nearest neighbor (NN) matcher with mutual check (Wang et al., 2022) to establish correspondences between them. We find that such semantic features show strong consistency even though they are extracted on depth maps and images separately, making it possible to build robust cross-modality correspondences. However, the semantic features are related to a large region of the image. Such a large receptive field leads to coarse-grained features and only sparse correspondences in feature matching.
Figure 1: _Left_: FreeReg unifies the modalities of images and point clouds, which enables mono-modality matching to build cross-modality correspondences. _Right_: FreeReg does not require any training on the I2P task and is able to register RGB images to point clouds in both indoor and outdoor scenes, even for challenging cases with small overlaps, large viewpoint changes, and sparse point density.
Figure 2: To unify the modalities of point clouds (PCs) and images, \(\mathbf{I}\): a straightforward way is to generate RGB images from point clouds by depth-to-image diffusion models. However, the generated images usually have large appearance differences from the query images. \(\mathbf{II}\): We find that the intermediate features of diffusion models show strong semantic consistency between RGB images and depth maps, resulting in _sparse but robust_ correspondences. \(\mathbf{III}\): We further convert RGB images to point clouds by a monocular depth estimator and extract geometric features to match between the input and the generated point clouds, yielding _dense but noisy_ correspondences. \(\mathbf{IV}\): We propose to fuse both types of features to build _dense and accurate_ correspondences.
We further improve the accuracy of our cross-modality correspondences with the help of the monocular depth estimators (Bhat et al., 2023). Recent progress in monocular depth estimators enables metric depth estimation on a single-view image. However, directly matching features between the point cloud and the estimated depth maps from the input image leads to poor performance as shown in Fig. 2 (III). The main reason is that the predicted depth maps are plausible but still contain large distortions in comparison with the input point cloud. The distortions prevent us from estimating robust correspondences. Though the global distortions result in noisy matches, the local geometry of the estimated depth maps still provides useful information to accurately localize keypoints and densely estimate fine-grained correspondences. Thus, we combine the local geometric features (Choy et al., 2019) extracted on the estimated depth maps with the semantic features extracted from diffusion models as the cross-modality features, which enable dense and accurate correspondence estimation between images and point clouds, as shown in Fig. 2 (IV).
In summary, FreeReg has the following characteristics. 1) FreeReg combines coarse-grained semantic features from diffusion models and fine-grained geometric features from depth maps for accurate cross-modality feature matching. 2) FreeReg does not require training on the I2P task, which avoids the unstable and notoriously difficult metric learning to align local features of point clouds and images. 3) FreeReg significantly outperforms existing fully-supervised cross-modality registration baselines (Pham et al., 2020; Li et al., 2023). Specifically, on the indoor 3DMatch and ScanNet datasets and the outdoor KITTI-DC dataset, FreeReg roughly achieves over \(20\%\) improvement in Inlier Ratio, a \(3.0\times\) higher Inlier Number, and a \(48.6\%\) improvement in Registration Recall.
## 2 Related work
**Image-to-point cloud registration.** In order to establish correspondences between images and point clouds for pose recovery, most existing methods (Li et al., 2015; Xing et al., 2018; Feng et al., 2019; Lai et al., 2021; Wang et al., 2021; Pham et al., 2020; Liu et al., 2020; Jiang and Saripalli, 2022; Li et al., 2023) rely on metric learning to align local features of images and point clouds (Feng et al., 2019; Pham et al., 2020; Lai et al., 2021; Jiang and Saripalli, 2022), or depth maps (Liu et al., 2020; Wang et al., 2021; Li et al., 2023). However, these methods often require cross-modal registration training data (Pham et al., 2020; Wang et al., 2021; Jiang and Saripalli, 2022; Li et al., 2023) and show limited generalization ability (Pham et al., 2020; Wang et al., 2021; Li et al., 2023) due to the difficulty in the cross-modality metric learning. In contrast, FreeReg does not require task-specific training and finetuning and exhibits strong generalization ability to both indoor and outdoor scenes.
Some other methods directly solve image-to-point cloud registration as an optimization problem (David et al., 2004; Campbell et al., 2019), which regresses poses by progressively aligning keypoints (Li and Lee, 2021; Ren et al., 2022; Campbell et al., 2019), pole structures (Wang et al., 2022b), or semantic boundaries (Liao et al., 2023) of RGB images and depth maps. However, these methods heavily rely on an accurate initial pose (Wang et al., 2021; Liao et al., 2023) to escape from local minima in optimizations. Thus, these methods are mostly constrained to specific application scenarios (Li and Lee, 2021; Ren et al., 2022; Arar et al., 2020). FreeReg does not require such a strictly accurate initialization because FreeReg matches features to build correspondences to handle large pose changes.
**Diffusion feature extraction.** Recently, a category of research (Ho et al., 2020; Song et al., 2020a;b; Karras et al., 2022; Song and Ermon, 2019; Dhariwal and Nichol, 2021; Liu et al., 2023), known as diffusion models, has demonstrated impressive generative capabilities. Based on that, with the advent of classifier-free guidance (Ho and Salimans, 2022) and billions of text-to-image training data (Schuhman et al., 2022), a latent diffusion model, specifically Stable Diffusion (Rombach et al., 2022), has shown remarkable text-to-image generation capabilities. Building upon this, existing methods have demonstrated the exceptional performance of Stable Diffusion internal representations (diffusion features) (Kwon et al., 2022; Tumanyan et al., 2023) in various domains such as segmentation (Amit et al., 2021; Baranchuk et al., 2021; Chen et al., 2022b; Jiang et al., 2018; Tan et al., 2022; Wolleb et al., 2022), detection (Chen et al., 2022a), and depth estimation (Duan et al., 2023; Saxena et al., 2023b;a). These methods only extract diffusion features on RGB images utilizing Stable Diffusion. Our method extracts diffusion features on RGB images and depth maps based on recently finetuned diffusion models, ControlNet (Zhang and Agrawala, 2023) or T2IAdaptor (Mou et al., 2023), which efficiently leverage depth, semantic maps, and sketches to guide Stable Diffusion in image generation.
**Diffusion features for matching.** Some recent works utilize diffusion features for representation learning (Kwon et al., 2022) and semantic matching (Luo et al., 2023; Tang et al., 2023; Hedlin et al., 2023; Zhang et al., 2023) among RGB images capturing objects across instances and categories. In comparison, our method shows the effectiveness of diffusion features in learning cross-modality features for image-to-point cloud registration.
**Monocular depth estimators.** Monocular depth estimation inherently suffers from scale ambiguity (Chen et al., 2016, 2020; Xian et al., 2018, 2020). With more and more monocular depth training data (Guizilini et al., 2023; Antequera et al., 2020; Wilson et al., 2023), recent works (Bhat et al., 2021, 2022; Jun et al., 2022; Li et al., 2022; Yang et al., 2021; Yin et al., 2021, 2019; Yuan et al., 2022; Guizilini et al., 2023; Yin et al., 2023) learn scene priors to regress depth values in real metric space and show impressive results. We employ a SoTA metric depth estimator Zoe-Depth (Bhat et al., 2023) to recover point clouds in the same metrics corresponding to the RGB images.
## 3 Method
Let \(I\in\mathbb{R}^{H\times W\times 3}\) be an RGB image and \(P\in\mathbb{R}^{N\times 3}\) be a point cloud. We first project \(P\) to a depth map \(D\in\mathbb{R}^{H^{\prime}\times W^{\prime}}\) at a camera pose, which is calculated from the depth or LiDAR sensor center and orientation; a minimal sketch of this projection is given below, and more details are provided in the supplementary material. FreeReg aims to match the cross-modality features extracted on \(I\) and \(D\) to establish correspondences and solve the relative pose between them. The pipeline of FreeReg is illustrated in Fig. 3. Specifically, we extract diffusion features (Sec. 3.2) and geometric features (Sec. 3.3) for feature matching and then estimate the I2P transformation from the matching results. We begin with a brief review of diffusion methods, which we utilize to extract cross-modality features.
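The sketch below illustrates this projection under the assumption of a pinhole camera with intrinsic matrix \(K\) and world-to-camera pose \(T\) (both symbols are ours, not defined in the paper); the exact procedure used by FreeReg is described in its supplementary material.

```python
import numpy as np

def project_to_depth(points, K, T, H, W):
    """Render a point cloud into a z-buffer depth map.

    points: (N,3) world coordinates; K: (3,3) intrinsics;
    T: (4,4) world-to-camera pose. Returns (H,W) depth (0 = empty pixel).
    """
    cam = (np.c_[points, np.ones(len(points))] @ T.T)[:, :3]
    cam = cam[cam[:, 2] > 0]                         # keep points in front
    uv = cam @ K.T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    ok = (0 <= u) & (u < W) & (0 <= v) & (v < H)
    depth = np.zeros((H, W))
    # Write far points first so that nearer points overwrite them.
    for x, y, z in sorted(zip(u[ok], v[ok], cam[ok, 2]), key=lambda t: -t[2]):
        depth[y, x] = z
    return depth
```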
### Preliminary: Stable Diffusion and ControlNet
The proposed cross-modality features are based on ControlNet (Zhang and Agrawala, 2023) (CN), so we briefly review the related details of ControlNet in this section. Diffusion models contain a forward process and a reverse process, both of which are Markov chains. The forward process gradually adds noise to the input image in many steps and finally results in pure structure-less noise. The corresponding reverse process denoises step-by-step to gradually recover the structure and generate the image. Stable Diffusion (Rombach et al., 2022) (SD) is a widely-used diffusion model mainly consisting of a UNet which takes noisy RGB images as input and predicts the noise. The original SD only allows text-conditioned image generation. Recent ControlNet (Zhang and Agrawala, 2023), as shown in Fig. 4 (b), adds an additional encoder to process depth maps and utilizes the extracted depth features to guide the reverse process of SD, enabling SD to generate images coherent to the input depth map from a pure Gaussian noise. In FreeReg, we utilize CN and SD to extract cross-modality features for feature matching.
Figure 3: _FreeReg pipeline._ Given a point cloud (PC) and a partially overlapping RGB image, FreeReg extracts diffusion features and geometric features for the point cloud and the image. These two features are fused and matched to establish pixel-to-point correspondences, on which we compute the SE(3) relative pose between the image and the point cloud.
### Diffusion Features on cross-modality data
Directly generating an image from the input depth map suffers from appearance inconsistency with the input image, which results in inaccurate feature matching. Instead of generating an explicit image, we resort to the intermediate feature maps of stable diffusion models for cross-modality feature matching. The overview is shown in Fig. 4.
**RGB diffusion feature.** As shown in Fig. 4(a), we perform the forward process of SD (Rombach et al., 2022) to add noise to the input RGB image, which results in a noisy image at a predefined step \(\hat{t}\). The noisy image is fed to the UNet of the SD and the intermediate feature maps of the UNet decoder are used as the diffusion feature for the input RGB image.
**Depth diffusion feature.** Given the depth maps, we first densify them using traditional erosion and dilation operations (Ku et al., 2018). As shown in Fig. 4 (b), we propose to feed the depth map to a CN (Zhang and Agrawala, 2023) as a condition to guide the reverse process of SD. With such a condition, SD gradually denoises a pure Gaussian noise until a predefined step \(\hat{t}\), and then we use the feature maps in the SD UNet decoder as the depth diffusion features. An alternative way is to directly treat the depth map as an RGB image for diffusion feature extraction, which however leads to poor performance as shown in the supplementary material.
**Layer selection.** The remaining problem is about which layer to be used for feature extraction. Visualization of extracted diffusion features on RGB images and depth maps are given in Fig. 4(c). It can be observed that the features of early upsampling layers with layer index \(l\leq 6\) show strong consistency between RGB and depth data. Features of later upsampling layers with an index larger than 6 show more fine-grained details like textures that no longer exhibit consistency. Therefore, we use features of early layers 0,4,6 as our diffusion features. To reduce the feature dimension on each layer, we apply a Principal Component Analysis (PCA) to reduce the feature dimension to 128. The resulting diffusion features of RGB image \(I\) and depth map \(D\) are \(F_{d}^{I}\) and \(F_{d}^{D}\) respectively, both of which are obtained by concatenating the features from different layers and L2 normalized.
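A minimal sketch of the RGB branch of this procedure is given below, assuming the HuggingFace diffusers API; the model identifier, the hooks on the UNet decoder blocks, and all function names are our illustrative choices and not the released FreeReg code.

```python
import torch
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"        # assumed SD checkpoint
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
sched = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

feats = []                                         # filled by the hooks
for blk in unet.up_blocks:                         # decoder ("up") blocks
    blk.register_forward_hook(lambda m, i, o: feats.append(o))

@torch.no_grad()
def rgb_diffusion_features(image, text_emb, t_hat=150):
    """image: (B,3,H,W) normalized to [-1,1]; text_emb: prompt embeddings."""
    lat = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    t = torch.tensor([t_hat])
    noisy = sched.add_noise(lat, torch.randn_like(lat), t)  # forward process
    feats.clear()
    unet(noisy, t, encoder_hidden_states=text_emb)          # one UNet pass
    # feats now holds per-up-block feature maps; select the early layers,
    # reduce each to 128-d with PCA, L2-normalize, and concatenate (Sec. 3.2).
    return feats
```

The depth branch is analogous, except that the reverse process starts from pure Gaussian noise and a ControlNet injects the depth condition at every step until \(\hat{t}\).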
### Geometric Features on cross-modality data
The above diffusion feature is extracted from a large region on the image, which struggles to capture fine-grained local details and estimates only sparse correspondences as shown in Fig. 5 (b/e). To improve the accuracy of these correspondences, we introduce a so-called geometric feature, leveraging the monocular depth estimator Zoe-Depth (Bhat et al., 2023).
Specifically, we utilize Zoe-Depth to generate per-pixel depth \(D^{Z}\) for the input RGB image \(I\) and recover a point cloud from the generated depth map. Then, we employ a pre-trained point cloud feature extractor FCGF (Choy et al., 2019) to extract per-point features, which serve as the geometric features of their corresponding pixels in the image \(I\). We construct geometric features for pixels of the depth map \(D\) in the same way. As illustrated in Fig. 5 (c/f), solely matching geometric
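A sketch of the image-side construction is shown below; it uses the torch.hub entry point published in the isl-org/ZoeDepth repository (whose exact signature may vary across versions) and standard back-projection with intrinsics \(K\). The FCGF descriptor extraction depends on the FCGF code and MinkowskiEngine and is omitted here.

```python
import numpy as np
import torch
from PIL import Image

# ZoeDepth's published torch.hub entry point (assumed available).
zoe = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True).eval()

def image_to_points(path, K):
    """Back-project one RGB image to a metric point cloud in camera coords."""
    img = Image.open(path).convert("RGB")
    depth = zoe.infer_pil(img)                       # (H,W) metric depth
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    rays = np.linalg.inv(K) @ pix                    # per-pixel viewing rays
    return (rays * depth.reshape(1, -1)).T           # (H*W, 3) points
```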
Figure 4: _Diffusion feature extraction_ on (a) images and (b) depth maps. (c) Visualization of diffusion features.
features produces many outlier correspondences due to large distortion in the single-view depth estimation. However, these geometric features provide local descriptions of the geometry, which are more localized and enable more accurate correspondence in cooperation with the diffusion features.
### Fuse both features for I2P registration
**Fuse features.** In this section, we propose to fuse the two types of features to enable accurate correspondence estimation, as shown in Fig. 5. Note that we uniformly sample a dense grid of keypoints on both the depth map and the image. Then, we extract the above diffusion features and geometric features on the keypoints. Both features are normalized by their L2 norm before the fusion. Specifically, we follow (Zhang et al., 2023) to fuse two kinds of features on each keypoint in \(I\) or \(D\) by
\[F=[wF_{d},(1-w)F_{g}], \tag{1}\]
where \(w\) is a fusion weight, \([\cdot,\cdot]\) means concatenation along the feature dimension, and \(F\) is the resulting FreeReg feature.
**Pixel-to-point correspondences.** Given two sets of fused features \(F^{I}\) on RGB image \(I\) and \(F^{D}\) on depth map \(D\), we conduct nearest neighborhood (NN) matching with a mutual nearest check (Wang et al., 2022) to find a set of putative correspondences. Note that the pixel from the depth map \(D\) in each match corresponds to a 3D point in point cloud \(P\).
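A compact PyTorch sketch of the fusion of Eq. (1) followed by nearest-neighbor matching with a mutual check (function names are ours):

```python
import torch
import torch.nn.functional as F

def fuse(feat_d, feat_g, w=0.5):
    # Eq. (1): concatenate L2-normalized diffusion and geometric features.
    return torch.cat([w * F.normalize(feat_d, dim=1),
                      (1.0 - w) * F.normalize(feat_g, dim=1)], dim=1)

def mutual_nn_matches(feat_img, feat_dep):
    # Nearest-neighbor matching with a mutual nearest check.
    dist = torch.cdist(feat_img, feat_dep)           # (N_img, N_dep)
    nn12 = dist.argmin(dim=1)                        # image -> depth
    nn21 = dist.argmin(dim=0)                        # depth -> image
    idx = torch.arange(len(feat_img))
    keep = nn21[nn12] == idx                         # mutual consistency
    return torch.stack([idx[keep], nn12[keep]], dim=1)
```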
**Image-to-point cloud registration.** To solve the SE(3) pose of the RGB image \(I\) relative to \(P\), a typical approach is to apply the Perspective-n-Point (PnP) algorithm (Lepetit et al., 2009) to the established pixel-to-point correspondences. However, we have already estimated a depth map corresponding to the RGB image using Zoe-Depth (Bhat et al., 2023). Thus, we can convert the pixel-to-point correspondences to 3D point-to-point correspondences, and estimate the SE(3) relative pose using the Kabsch algorithm (Kabsch, 1976). In the supplementary material, we empirically show that using the PnP algorithm leads to a more accurate pose estimation but fails in many cases, while the Kabsch algorithm works in more cases but the estimated transformations exhibit larger errors.
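For reference, here is a minimal NumPy sketch of the Kabsch solver applied to such 3D point-to-point correspondences; in practice it would be embedded in a robust estimation loop, since the putative matches contain outliers.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid alignment minimizing sum ||R p_i + t - q_i||^2.

    P: (N,3) image-side points (pixels lifted by the estimated depth),
    Q: (N,3) matched point-cloud points. Returns rotation R and translation t.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```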
## 4 Experiments
### Experimental protocol
**Datasets.** We evaluate the proposed method on three widely used datasets: (1) The _3DMatch_(Zeng et al., 2017) testset comprises RGB images and point clouds (called _I2P pairs_) from 8 indoor scenes. The point clouds used here are collected by an Asus Xtion depth sensor. We manually exclude the I2P pairs with very small overlaps resulting in 1210 I2P pairs with over \(30\%\) overlaps. (2) The _ScanNet_(Dai et al., 2017) testset consists of 4,660 I2P pairs from 31 indoor scenes with more than
Figure 5: _Visualization of features and estimated correspondences._ (a) Input images and point clouds. (b), (c), and (d) show the visualization of diffusion, geometric, and fused feature maps respectively. (e), (f), and (g) show the pixel-to-point correspondences estimated by the nearest neighbor (NN) matcher using diffusion, geometric, and fused features respectively. Diffusion features estimate reliable but sparse correspondences. Geometric features yield dense matches but with more outliers. Fused features strike a balance between accuracy and preserving fine-grained details, resulting in accurate and dense matches.
30% overlap. To further increase the difficulty, we downsampled the input point clouds using a voxel size of 3cm, which leads to highly sparse point clouds. (3) The _Kitti-DC_(Uhrig et al., 2017) testset has 342 I2P pairs from 4 selected outdoor scenes. The sparse point clouds come from a 64-line LiDAR scan. The distance between each I2P pair is less than 10 meters.
**Metrics**. Following (Choy et al., 2019; Wang et al., 2023;a), we adopt four evaluation metrics: (1) _Feature Matching Recall (FMR)_ is the fraction of I2P pairs with more than \(5\%\) correct estimated correspondences. A correspondence is regarded as correctly matched if its ground truth 3D distance is smaller than \(\tau_{c}\). \(\tau_{c}\) is set to 0.3m for 3DMatch/ScanNet and 3m for Kitti-DC. (2) _Inlier Ratio (IR)_ is the average correct correspondence proportions among all I2P pairs. (3) _Inlier Number (IN)_ is the average number of correct correspondences on each I2P pair. and (4) _Registration Recall (RR)_ is the percentage of correctly-aligned I2P pairs with rotation and translation errors less than \(\tau_{R}\) and \(\tau_{t}\) respectively. (\(\tau_{R}\), \(\tau_{t}\)) is set to (20\({}^{\circ}\), 0.5m) for 3DMatch/ScanNet and (\(10^{\circ}\), 3m) for Kitti-DC. We provide additional results under different threshold conditions in the supplementary material.
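For concreteness, the per-pair core of these metrics can be sketched as follows (a simplified illustration in our own notation; _FMR_ and _RR_ are then aggregated over all I2P pairs):

```python
import numpy as np

def pair_inliers(p_img, p_pcd, T_gt, tau_c=0.3):
    """IR and IN for one I2P pair.

    p_img: (M,3) image-side points of the matches (pixels lifted to 3D),
    p_pcd: (M,3) matched point-cloud points, T_gt: (4,4) ground-truth pose.
    """
    p = (np.c_[p_img, np.ones(len(p_img))] @ T_gt.T)[:, :3]
    inlier = np.linalg.norm(p - p_pcd, axis=1) < tau_c
    return inlier.mean(), inlier.sum()               # IR, IN
```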
**Baselines**. We compare FreeReg with fully supervised registration baselines. The image registration method SuperGlue (SG) (Sarlin et al., 2020) is modified to match RGB images and point clouds. LCD (Pham et al., 2020) learns to construct I2P cross-modality descriptors utilizing metric learning. DeepI2P (Li and Lee, 2021) resolves I2P registration by optimization from an accurate initial pose. We implement a cross-modality feature extraction method I2P-Matr following a concurrent work 2D3D-MATR (Li et al., 2023), whose official codes are not released yet. Meanwhile, we compare FreeReg with P2-Net (Wang et al., 2021) and 2D3D-MATR (Li et al., 2023) under their experimental protocol (Li et al., 2023) in the supplementary material, where FreeReg also achieves the best registration performance. We also adopt a combined baseline (CN+SG) which first utilizes ControlNet (Zhang and Agrawala, 2023) to generate an RGB image from the target point cloud and then conducts SuperGlue (Sarlin et al., 2020) to match the input and the generated image. For our method, we report the results using only the diffusion feature (FreeReg-D), only the geometric feature (FreeReg-G), and the fused feature (FreeReg) for matching. More implementation and experimental details are provided in the supplementary material.
### Results on three benchmarks
The quantitative results of FreeReg and baselines on the three cross-modality registration benchmarks are given in Table 1. Some qualitative results are shown in Fig. 6.
**Correspondence quality** is reflected by _FMR_, _IR_, and _IN_. For LCD and I2P-Matr, utilizing a metric learning method to directly align cross-modality features leads to poor performance. CN+SG suffers from the appearance difference between the generated images and the input images and thus fails to build reliable correspondences. For FreeReg, using solely diffusion features (FreeReg-D) or geometric features (FreeReg-G) can already yield results superior to the baselines. Utilizing both features, FreeReg achieves the best correspondence quality and outperforms baselines by a large margin with \(54.0\%\) in _FMR_, \(20.6\%\) in _IR_, and a \(3.0\times\) higher _IN_. Note that, unlike baseline methods, FreeReg does not even train on the I2P task.
| | _Metric_ | LCD (PnP) | SG (PnP) | DeepI2P (InvCP.) | CN+SG (PnP) | I2P-Matr (PnP) | FreeReg-D (PnP) | FreeReg-G (Kabsch) | FreeReg (PnP) | FreeReg (Kabsch) |
|---|---|---|---|---|---|---|---|---|---|---|
| _3DMatch_ | _FMR_ (%) | 40.1 | 50.3 | / | 64.7 | 90.6 | 91.9 | 90.7 | **94.6** | **94.6** |
| | _IR_ (%) | 35.1 | 11.1 | / | 18.4 | 24.9 | 39.6 | 31.4 | **47.0** | **47.0** |
| | _IN_ (#) | 4.3 | 3.1 | / | 10.9 | 49.0 | 60.8 | 49.4 | **82.8** | **82.8** |
| | _RR_ (%) | / | 1.8 | / | 6.5 | 28.2 | 33.2 | 50.4 | 40.0 | **63.8** |
| _ScanNet_ | _FMR_ (%) | 55.1 | 53.2 | / | 64.1 | 87.0 | 95.3 | 96.4 | 95.5 | **99.5** |
| | _IR_ (%) | 30.7 | 13.4 | / | 18.3 | 14.3 | 45.7 | 40.5 | **56.8** | **56.8** |
| | _IN_ (#) | 5.0 | 4.7 | / | 9.1 | 24.8 | 61.5 | 84.5 | **114.4** | **114.4** |
| | _RR_ (%) | / | 1.2 | / | 5.5 | 8.5 | 42.3 | 69.4 | 57.6 | **78.0** |
| _Kitti-DC_ | _FMR_ (%) | / | 73.4 | / | 94.2 | / | **100.0** | 94.4 | 99.7 | 99.7 |
| | _IR_ (%) | / | 18.1 | / | 34.4 | / | **59.4** | 41.2 | 58.3 | 58.3 |
| | _IN_ (#) | / | 12.6 | / | 51.1 | / | 103.6 | 93.6 | **132.9** | **132.9** |
| | _RR_ (%) | / | 8.2 | 20.9 | 20.4 | / | 68.1 | 43.3 | **70.5** | 67.5 |

Table 1: _Cross-modality registration performance of different methods; the SE(3) solver of each method is given in parentheses. "InvCP." means Inverse Camera Projection (Li and Lee, 2021); "/" denotes unavailable results._
**Registration quality** is indicated by _RR_. Benefiting from the high-quality correspondences, FreeReg significantly outperforms the baseline methods by \(48.6\%\) in _RR_ and FreeReg-D/G by \(22.9\%/16.4\%\) in _RR_. Moreover, FreeReg utilizing Kabsch significantly surpasses PnP on the indoor 3DMatch/ScanNet datasets but is \(3\%\) lower than PnP on the outdoor Kitti-DC. The main reason is that Zoe-Depth performs better on the two indoor datasets with an average 0.27m error but worse on KITTI with an average 3.4m error. In the supplementary material, we further provide more analysis and find that PnP achieves more accurate results while Kabsch provides plausible results in more cases.
### Ablation studies
We conducted comprehensive ablation experiments on FreeReg. More ablation studies on diffusion feature extraction and I2P transformation estimation are provided in the supplementary material.
#### 4.3.1 Ablating diffusion feature extraction
In this section, we evaluate on a validation scene "bundlefusion-office0" (BFO) which is not included in the testset to tune hyperparameters in diffusion feature layer selection and diffusion step \(\hat{t}\) selection. Subsequently, we report their performances on the 3DMatch dataset.
**Diffusion layer selection**. In Table 2 (a-i), we report the sizes of the output feature maps of the upsampling layers in the UNet of Stable Diffusion. The feature map size is divided into three levels, i.e., the small group (\(8\times 11\), layers 0-2), the medium group (\(16\times 22\), layers 3-5), and the large group (\(32\times 44\), layers 6-8). We select the layers with the best registration performance and reasonable matching quality from each level on BFO, specifically layers 0, 4, and 6, to construct our Diffusion Features. Then, in Table 2 (j-m), we ablate the layer selection in constructing diffusion features. It can be seen that concatenating the features of layers 0, 4, and 6 significantly improves the correspondence quality and registration performance. The results on 3DMatch further validate the effectiveness of our choice. More ablation studies on the diffusion layer selection are provided in the supplementary material.
**Diffusion step selection**. In Table 3, we aim to determine the diffusion step \(\hat{t}\). The experimental results demonstrate that the Diffusion Features from \(\hat{t}=150\) achieve the best registration performance on BFO. Results on 3DMatch confirm its effectiveness.
Figure 6: _Visualization of correspondences._ (a) Input RGB images and point clouds for registration. (b) Estimated correspondences from I2P-Matr. (c / d / e) Estimated correspondences from FreeReg-D / FreeReg-G / FreeReg.
#### 4.3.2 Ablating feature fusion weight
We ablate the fusion weight \(w\) used to fuse diffusion and geometric features in Table 4, based on the baseline model FreeReg. It can be seen that FreeReg achieves the best registration performance when \(w\) is set to 0.5. Moreover, we find that relying more on diffusion features, i.e., \(w=0.6\), achieves a result very similar to the default FreeReg, while relying more on geometric features, i.e., \(w=0.4\), causes a sharp performance drop of an \(8.7\%\) lower IR and a \(2.5\%\) lower RR. This demonstrates the robustness of the proposed diffusion features.
### Limitations
The main limitation is that FreeReg requires about 11s and 13G of GPU memory to match a single I2P pair on a 4090 GPU. The reason is that we need to run multiple reverse process steps of ControlNet to denoise the pure noise down to the specific step \(\hat{t}\) used for feature extraction. Meanwhile, though we show the superior performance of using diffusion features for I2P registration, we manually select layers and denoising steps in the diffusion feature extraction, which could be improved by future works that automatically select good features.
## 5 Conclusion
We propose an I2P registration framework called FreeReg. The key idea of FreeReg is the utilization of diffusion models and monocular depth estimators for cross-modality feature extraction. Specifically, we leverage the intermediate representations of diffusion models to construct multi-modal diffusion features that show strong consistency across RGB images and depth maps. We further introduce so-called geometric features to capture distinct local geometric details on RGB images and depth maps. Extensive experiments demonstrate that FreeReg shows strong generalization and robustness in the I2P task. Without any task-specific training, FreeReg achieves a \(20.6\%\) improvement in Inlier Ratio, a \(3.0\times\) higher Inlier Number, and a \(48.6\%\) improvement in Registration Recall on three public indoor and outdoor benchmarks.
| \(w\) | _FMR_ (%) | _IR_ (%) | _IN_ (#) | _RR_ (%) |
|---|---|---|---|---|
| 0.7 | 94.8 | 45.1 | 74.1 | 58.5 |
| 0.6 | **95.3** | **47.1** | 81.7 | 62.3 |
| 0.5 | 94.6 | 47.0 | **82.8** | **63.8** |
| 0.4 | 93.8 | 42.9 | 73.5 | 60.3 |
| 0.3 | 91.8 | 37.5 | 61.9 | 56.5 |

Table 4: Determining the fusion weight used to fuse diffusion and geometric features.
| _ID_ | _Layer_ | _Feature map_ | _FMR_ (%) | _IR_ (%) | _IN_ (#) | _RR_ (%) | _FMR_ (%) | _IR_ (%) | _IN_ (#) | _RR_ (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| (a) | 0 | \(1280\times 8\times 11\) | 88.9 | 42.7 | 18.9 | 14.4 | 89.5 | 39.7 | 17.6 | 16.7 |
| (b) | 1 | \(1280\times 8\times 11\) | 91.5 | 42.1 | 19.1 | 12.4 | 86.9 | 39.7 | 18.1 | 15.8 |
| (c) | 2 | \(1280\times 8\times 11\) | 86.3 | 42.9 | 21.2 | 14.4 | 84.2 | 39.7 | 20.2 | 16.9 |
| (d) | 3 | \(1280\times 16\times 22\) | 87.6 | 42.7 | 45.9 | 23.5 | 88.4 | 41.0 | 47.2 | 23.0 |
| (e) | 4 | \(1280\times 16\times 22\) | 91.5 | 36.0 | 31.7 | 24.2 | 92.1 | 35.3 | 32.9 | 26.0 |
| (f) | 5 | \(1280\times 16\times 22\) | 89.5 | 35.4 | 28.1 | 22.9 | 91.7 | 35.5 | 28.9 | 25.6 |
| (g) | 6 | \(1280\times 32\times 44\) | 92.8 | 31.3 | 45.5 | 30.1 | 89.4 | 31.4 | 51.7 | 28.7 |
| (h) | 7 | \(640\times 32\times 44\) | 90.8 | 19.9 | 34.3 | 14.4 | 85.2 | 19.6 | 35.1 | 22.1 |
| (i) | 8 | \(640\times 32\times 44\) | 88.9 | 17.2 | 28.4 | 9.8 | 82.9 | 16.8 | 27.5 | 17.3 |
| (j) | [0,4] | \(256\times 32\times 44\) | 93.5 | **44.6** | 25.9 | 25.5 | **92.5** | **41.3** | **34.7** | **26.5** |
| (k) | [0,6] | \(256\times 32\times 44\) | 92.8 | 40.2 | 53.9 | 34.0 | 91.4 | 38.5 | **62.2** | 32.9 |
| (l) | [4,6] | \(256\times 32\times 44\) | 91.5 | 36.4 | 45.8 | 32.0 | 91.4 | 35.6 | 56.2 | 30.7 |
| (m) | [0,4,6] | \(384\times 32\times 44\) | **94.8** | 42.3 | **58.2** | **35.9** | 91.9 | 39.6 | 60.8 | **33.2** |

Table 2: _Layer selection in diffusion feature extraction. The left metric block reports results on BFO and the right block on 3DMatch; "Feature map" gives the size of the feature map in the form "channel \(\times\) height \(\times\) width"._
\begin{table}
\begin{tabular}{l c c c c c c c c} \(\hat{t}\) & _FMR(\%)_ & _IR(\%)_ & _IN(\#)_ & _RR(\%)_ & _FMR(\%)_ & _IR(\%)_ & _IN(\#)_ & _RR(\%)_ \\ \hline
300 & 94.1 & 40.0 & 55.8 & 33.3 & 91.7 & 39.4 & 60.4 & 31.4 \\
200 & 92.8 & 41.0 & 58.1 & 35.3 & 91.8 & 39.8 & 61.4 & 31.2 \\
150 & 94.8 & 42.3 & 58.2 & **35.9** & 91.9 & 39.6 & 60.8 & **33.2** \\
100 & 92.8 & 41.3 & 57.3 & 35.3 & 91.8 & 38.8 & 59.3 & 31.6 \\
50 & 92.8 & 40.0 & 54.6 & 32.7 & 92.0 & 38.1 & 57.3 & 30.6 \\ \end{tabular}
\end{table}
Table 3: Determining \(\hat{t}\) in diffusion feature extraction. |
2306.00817 | Dilated Convolution with Learnable Spacings: beyond bilinear
interpolation | Dilated Convolution with Learnable Spacings (DCLS) is a recently proposed
variation of the dilated convolution in which the spacings between the non-zero
elements in the kernel, or equivalently their positions, are learnable.
Non-integer positions are handled via interpolation. Thanks to this trick,
positions have well-defined gradients. The original DCLS used bilinear
interpolation, and thus only considered the four nearest pixels. Yet here we
show that longer range interpolations, and in particular a Gaussian
interpolation, allow improving performance on ImageNet1k classification on two
state-of-the-art convolutional architectures (ConvNeXt and ConvFormer),
without increasing the number of parameters. The method code is based on
PyTorch and is available at
https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch | Ismail Khalfaoui-Hassani, Thomas Pellegrini, Timothée Masquelier | 2023-06-01T15:42:08Z | http://arxiv.org/abs/2306.00817v2 | # Dilated Convolution with Learnable Spacings: beyond bilinear interpolation
###### Abstract
Dilated Convolution with Learnable Spacings (DCLS) is a recently proposed variation of the dilated convolution in which the spacings between the non-zero elements in the kernel, or equivalently their positions, are learnable. Non-integer positions are handled via interpolation. Thanks to this trick, positions have well-defined gradients. The original DCLS used bilinear interpolation, and thus only considered the four nearest pixels. Yet here we show that longer range interpolations, and in particular a Gaussian interpolation, allow improving performance on ImageNet1k classification on two state-of-the-art convolutional architectures (ConvNeXt and ConvFormer), without increasing the number of parameters. The method code is based on PyTorch and is available at github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch.
## 1 Introduction
Dilated Convolution with Learnable Spacings (DCLS) is an innovative convolutional method whose effectiveness in computer vision was recently demonstrated by Khalfaoui-Hassani et al. (2023). In DCLS, the positions of the non-zero elements within the convolutional kernels are learned in a gradient-based manner. The challenge of non-differentiability caused by the integer nature of the positions is addressed through the application of **bilinear** interpolation. By doing so, DCLS enables the construction of a differentiable convolutional kernel.
DCLS is a differentiable method that only constructs the convolutional kernel. To implement the whole convolution, one can utilize either the native convolution provided by PyTorch or a more efficient implementation such as the "depthwise implicit gemm" convolution method proposed by Ding et al. (2022), which is suitable for large kernels.
The primary motivation behind the development of DCLS was to investigate the potential for enhancing the fixed grid structure imposed by standard dilated convolution in an input-independent way. By allowing an arbitrary number of kernel elements, DCLS introduces a free tunable hyper-parameter called the "kernel count". Additionally, the "dilated kernel size" refers to the maximum extent to which the kernel elements are permitted to move within the dilated kernel (Fig. 1c). Both of these parameters can be adjusted to optimize the performance of DCLS. The positions of the kernel elements in DCLS are initially randomized and subsequently allowed to evolve within the limits of the dilated kernel size during the learning process.
The main focus of this paper will be to question the choice of **bilinear** interpolation used by default in DCLS. We tested several interpolations and found in particular that a **Gaussian** interpolation with learnable standard deviations made the approach more effective.
To evaluate the effectiveness of DCLS with Gaussian interpolation, we integrate it as a drop-in replacement for the standard depthwise separable convolution in two state-of-the-art convolutional models: the ConvNeXt-T model Liu et al. (2022) and the ConvFormer-S18 model Yu et al. (2022).
In Section 5, we evaluate the training loss and the classification accuracy of these models on the ImageNet1k dataset Deng et al. (2009). The remainder of this paper will present a detailed analysis of the methods, equations, algorithms and techniques regarding the application of the Gaussian interpolation in DCLS.
## 2 Related work
In the field of convolutional neural networks (CNNs), various approaches have been explored to improve the performance and efficiency of convolutional operations. Gaussian mixture convolutional networks have investigated the fit of input channels with Gaussian mixtures Celarek et al. (2022), while Chen et al. (2023) utilized Gaussian masks in their work. Additionally, continuous kernel convolution was studied in the context of image processing by Kim and Park (2023). Their approach is similar to the linear correlation introduced in Thomas et al. (2019). The interpolation function used in the last two works corresponds to the DCLS-triangle method described in 3.1. Romero et al. have also made notable contributions in learning continuous functions that map the positions to the weights, with the methods introduced in Romero et al. (2022a) and Romero et al. (2022b).
In the work by Jacobsen et al. (2016), the kernel is represented as a weighted sum of basis functions, including centered Gaussian filters and their derivatives. Pintea et al. (2021) extended this approach by incorporating the learning of Gaussian width, effectively optimizing the resolution. Shelhamer et al. (2019) introduced a kernel factorization method where the kernel is expressed as a composition of a standard kernel and a structured Gaussian one. In these last three works the Gaussians are centered on the kernel.
Furthermore, the utilization of bilinear interpolation within deformable convolution modules has already shown its effectiveness. Dai et al. (2017), Qi et al. (2017) and recently Wang et al. (2022) leveraged bilinear interpolation to smoothen the non-differentiable regular-grid offsets in the deformable convolution method. Even more recently, in Kim et al. (2023), a Gaussian attention bias with learnable standard deviations has been successfully used in the positional embedding of the attention module of the ViT model Dosovitskiy et al. (2021) and leads to reasonable gains on ImageNet1k.
## 3 Methods
### From bilinear to Gaussian interpolation
We denote by \(m\in\mathbb{N}^{*}\) the number of kernel elements inside the dilated constructed kernel and we refer to it as the "kernel count". Moreover, we denote respectively by \(s_{x},s_{y}\in\mathbb{N}^{*}\times\mathbb{N}^{*}\), the sizes of the constructed kernel along the x-axis and the y-axis. The latter could be seen as the limits of the dilated kernel, and we refer to them as the "dilated kernel size".
The real numbers \(w\), \(p^{x}\), \(\sigma^{x}\), \(p^{y}\) and \(\sigma^{y}\) respectively stand for the weight, the mean position and standard deviation of that weight along the x-axis (width) and its mean position and standard deviation along the y-axis (height).
The mathematical construction of the 2D-DCLS kernel in Khalfaoui-Hassani et al. (2023) relies on bilinear interpolation and is described as follows :
\[f\colon\mathbb{R}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathcal{M}_{s_{x},s_{y}}(\mathbb{R}),\qquad(w,p^{x},p^{y})\mapsto K \tag{1}\]
where \(\forall i\in\llbracket 1\,..\,s_{x}\rrbracket\), \(\forall j\in\llbracket 1\,..\,s_{y}\rrbracket\):
\[K_{ij}=\left\{\begin{array}{cc}w\left(1-r^{x}\right)\left(1-r^{y}\right)& \text{if }i=\lfloor p^{x}\rfloor,\;j=\lfloor p^{y}\rfloor\\ w\,r^{x}\left(1-r^{y}\right)&\text{if }i=\lfloor p^{x}\rfloor+1,\;j=\lfloor p^{y}\rfloor \\ w\,(1-r^{x})\,r^{y}&\text{if }i=\lfloor p^{x}\rfloor,\;j=\lfloor p^{y}\rfloor+1\\ w\,r^{x}\,r^{y}&\text{if }i=\lfloor p^{x}\rfloor+1,\;j=\lfloor p^{y}\rfloor+1\\ 0&\text{otherwise}\end{array}\right. \tag{2}\]
and where the fractional parts are:
\[r^{x}=\{p^{x}\}=p^{x}-\lfloor p^{x}\rfloor\quad\text{and}\quad r^{y}=\{p^{y} \}=p^{y}-\lfloor p^{y}\rfloor \tag{3}\]
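For concreteness (a worked example of ours, not from the original DCLS paper): take \(w=1\), \(p^{x}=2.3\), \(p^{y}=1.7\), so that \(r^{x}=0.3\) and \(r^{y}=0.7\). Equation 2 then spreads the weight over the four nearest positions as \(K_{2,1}=0.7\cdot 0.3=0.21\), \(K_{3,1}=0.3\cdot 0.3=0.09\), \(K_{2,2}=0.7\cdot 0.7=0.49\) and \(K_{3,2}=0.3\cdot 0.7=0.21\), which sum back to \(w=1\).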
An equivalent way of describing the constructed kernel \(K\) in Equation 2 is:
\[K_{ij}=w\cdot g(p^{x}-i)\cdot g(p^{y}-j) \tag{4}\]
Figure 1: (a) a standard \(3\times 3\) kernel. (b) a standard dilated \(3\times 3\) kernel. (c) a 2D-DCLS kernel using bilinear interpolation with 9 kernel elements and a kernel size of 9. (d) the same kernel as (c) with Gaussian interpolation. The numbers have been rounded in all figures and omitted in (d) for readability.
with
\[g\colon x\mapsto\text{max}(0,\;1-|x|) \tag{5}\]
This expression corresponds to the bilinear interpolation as described in (Dai et al., 2017, eq. 4).
In fact, this last \(g\) function is known as the triangle function (refer to Fig. 2 for a graphic representation), and is widely used in kernel density estimation. From now on, we will note it as
\[\forall x\in\mathbb{R}\qquad\Lambda(x)\stackrel{{\text{def}}}{{=}} \text{max}(0,\;1-|x|) \tag{6}\]
First, we consider a scaling by a parameter \(\sigma\in\mathbb{R}_{+}\) for the triangle function (the bilinear interpolation corresponds to \(\sigma=1\)),
\[\forall x\in\mathbb{R},\quad\forall\sigma\in\mathbb{R}_{+}\quad\Lambda_{ \sigma}(x)\stackrel{{\text{def}}}{{=}}\text{max}(0,\;\sigma-|x|) \tag{7}\]
We found that this scaling parameter \(\sigma\) could be learned by backpropagation and that doing so increases the performance of the DCLS method. As we have different \(\sigma\) parameters for the x- and y-axes in 2D-DCLS, learning the standard deviations costs two additional learnable parameters and two additional FLOPs (multiplied by the number of channels of the kernel and the kernel count). We refer to the DCLS method with triangle function interpolation as the DCLS-triangle method.
Second, we tried a smoother function rather than the piecewise affine triangle function, namely the Gaussian function:
\[\forall x\in\mathbb{R},\;\forall\sigma\in\mathbb{R}^{*},\quad G_{\sigma}(x) \stackrel{{\text{def}}}{{=}}\text{exp}\left(-\frac{x^{2}}{2 \sigma^{2}}\right) \tag{8}\]
We refer to the DCLS method with Gaussian interpolation as the DCLS-Gauss method. In practice, instead of Equations 7 and 8, we respectively use:
\[\forall x\in\mathbb{R},\;\forall\sigma\in\mathbb{R},\;\Lambda_{\sigma_{0}+ \sigma}(x)=\text{max}(0,\;\sigma_{0}+|\sigma|-|x|) \tag{9}\]
\[\forall x\in\mathbb{R},\;\forall\sigma\in\mathbb{R},\;G_{\sigma_{0}+\sigma}(x) =\text{exp}\left(-\frac{1}{2}\frac{x^{2}}{(\sigma_{0}+|\sigma|)^{2}}\right) \tag{10}\]
with \(\sigma_{0}\in\mathbb{R}_{+}^{*}\) a constant that determines the minimum standard deviation that the interpolation could reach. For the triangle interpolation, we take \(\sigma_{0}=1\) in order to have at least 4 adjacent interpolation values (see Figure 1c). And for the Gaussian interpolation, we set \(\sigma_{0}=0.27\). This corresponds to \(99.97\%\) of the integral of the Gaussian belonging to the interval \([-1,1]\), which is very close to the DCLS method with bilinear interpolation.
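To make Equations 9 and 10 concrete, here is a direct PyTorch transcription (the function names are ours); the last line also checks numerically the \(99.97\%\) claim for \(\sigma_{0}=0.27\).

```python
import torch
from math import erf, sqrt

SIGMA0_TRIANGLE = 1.0  # at least 4 adjacent non-zero values, as for bilinear
SIGMA0_GAUSS = 0.27    # ~99.97% of the Gaussian mass lies in [-1, 1]

def triangle_interp(x: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # Eq. 9: Lambda_{sigma0 + |sigma|}(x) = max(0, sigma0 + |sigma| - |x|)
    return torch.clamp(SIGMA0_TRIANGLE + sigma.abs() - x.abs(), min=0.0)

def gauss_interp(x: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # Eq. 10: G_{sigma0 + |sigma|}(x) = exp(-x^2 / (2 (sigma0 + |sigma|)^2))
    s = SIGMA0_GAUSS + sigma.abs()
    return torch.exp(-0.5 * (x / s) ** 2)

# Mass of a zero-mean Gaussian with std 0.27 inside [-1, 1]:
print(erf(1.0 / (0.27 * sqrt(2.0))))  # ~0.9998
```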
Last, to make the sum of the interpolation over the dilated kernel size equal to 1, we divide the interpolations by the following normalization term :
\[A=\epsilon+\sum_{i=1}^{s_{x}}\sum_{j=1}^{s_{y}}\mathcal{I}_{\sigma_{0}+\sigma ^{x}}(p^{x}-i)\cdot\mathcal{I}_{\sigma_{0}+\sigma^{y}}(p^{y}-j) \tag{11}\]
with \(\mathcal{I}\) an interpolation function (\(\Lambda\) or \(G\) in our case) and \(\epsilon=1e-7\) for example, to avoid division by zero.
**Other interpolations** Based on our tests, other functions such as Lorentz, hyper-Gaussians and sinc functions have been tested with no great success. In addition, learning a correlation parameter \(\rho\in[-1,1]\) or equivalently a rotation parameter \(\theta\in[0,2\pi]\) as in the bivariate normal distribution density did not improve performance (maybe because cardinal orientations predominate in natural images).
### The 2D-DCLS-Gauss kernel construction algorithm
In the following, we describe with pseudocode the kernel construction used in 2D-DCLS-Gauss and 2D-DCLS-\(\Lambda\). \(\mathcal{I}\) is the interpolation function (\(\Lambda\) or \(G\) in our case) and \(\epsilon=1e-7\). In practice, \(w\), \(p^{x}\), \(p^{y}\), \(\sigma^{x}\) and \(\sigma^{y}\) are 3-D tensors of size (channels_out, channels_in // groups, K_count), but the algorithm presented here is easily extended to this case by applying it channel-wise.
```
Require: \(w,p^{x},p^{y},\sigma^{x},\sigma^{y}\): vectors of dimension \(m\)
Ensure: \(K\): the constructed kernel, of size \(s_{x}\times s_{y}\)
 1: \(K\gets 0_{s_{x},s_{y}}\)   {zero tensor of size \(s_{x}\times s_{y}\)}
 2: for \(k=0\) to \(m-1\) do
 3:   \(H\gets 0_{s_{x},s_{y}}\)
 4:   \(p_{k}^{x}\gets p_{k}^{x}+s_{x}/2;\quad p_{k}^{y}\gets p_{k}^{y}+s_{y}/2\)
 5:   \(\sigma_{k}^{x}\gets|\sigma_{k}^{x}|+\sigma_{0}^{\mathcal{I}};\quad\sigma_{k}^{y}\gets|\sigma_{k}^{y}|+\sigma_{0}^{\mathcal{I}}\)
 6:   for \(i=0\) to \(s_{x}-1\) do
 7:     for \(j=0\) to \(s_{y}-1\) do
 8:       \(H[i,j]\gets\mathcal{I}_{\sigma_{k}^{x}}(p_{k}^{x}-i)\cdot\mathcal{I}_{\sigma_{k}^{y}}(p_{k}^{y}-j)\)
 9:     end for
10:   end for
11:   \(H\gets H\,/\,\big(\epsilon+\sum_{i=0}^{s_{x}-1}\sum_{j=0}^{s_{y}-1}H[i,j]\big)\)
12:   \(K\gets K+w_{k}\cdot H\)
13: end for
```
**Algorithm 1** 2D-DCLS-interpolation kernel construction
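For readers who prefer runnable code to pseudocode, the following is a minimal, unoptimized PyTorch transcription of Algorithm 1 for the Gaussian case and a single channel (the function name is ours); the official repository linked in the abstract contains the full implementation, so this sketch is purely illustrative. Applying it channel-wise recovers the 3-D tensor case mentioned above.

```python
import torch

SIGMA0_GAUSS = 0.27  # minimum standard deviation, as in Section 3.1

def dcls_gauss_kernel(w, px, py, sigma_x, sigma_y, sx: int, sy: int,
                      eps: float = 1e-7) -> torch.Tensor:
    """Construct one (sx, sy) DCLS-Gauss kernel from m weighted elements.

    w, px, py, sigma_x, sigma_y: 1-D tensors of length m (the kernel count);
    positions are relative to the kernel center, as in Algorithm 1.
    """
    # Shift positions so that (0, 0) is the top-left corner of the kernel.
    px = px + sx / 2.0
    py = py + sy / 2.0
    std_x = sigma_x.abs() + SIGMA0_GAUSS            # (m,)
    std_y = sigma_y.abs() + SIGMA0_GAUSS
    i = torch.arange(sx, dtype=w.dtype)             # (sx,)
    j = torch.arange(sy, dtype=w.dtype)             # (sy,)
    # Separable Gaussian interpolation: one (sx, sy) map per kernel element.
    gx = torch.exp(-0.5 * ((px[:, None] - i) / std_x[:, None]) ** 2)  # (m, sx)
    gy = torch.exp(-0.5 * ((py[:, None] - j) / std_y[:, None]) ** 2)  # (m, sy)
    H = gx[:, :, None] * gy[:, None, :]                               # (m, sx, sy)
    # Normalize each element's map to sum to 1 (Eq. 11), then weight and sum.
    H = H / (eps + H.sum(dim=(1, 2), keepdim=True))
    return (w[:, None, None] * H).sum(dim=0)                          # (sx, sy)

# Example: 9 kernel elements in a 9x9 dilated kernel, random initialization.
m, s = 9, 9
kernel = dcls_gauss_kernel(torch.randn(m), (torch.rand(m) - 0.5) * s,
                           (torch.rand(m) - 0.5) * s,
                           torch.zeros(m), torch.zeros(m), s, s)
print(kernel.shape)  # torch.Size([9, 9])
```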
Figure 2: 1D view of Gaussian and \(\Lambda\) functions with \(\sigma=5\).
## 4 Learning techniques
Having discussed the implementation of the interpolation in the DCLS method, we now shift our focus to the techniques employed to maximize its potential. We retained most of the techniques used in Khalfaoui-Hassani et al. (2023), and suggest new ones for learning standard deviations parameters. In Appendix C, we present the training techniques that have been selected based on consistent empirical evidence, yielding improved training loss and validation accuracy.
## 5 Results
We took two recent state-of-the-art convolutional architectures, ConvNeXt and ConvFormer, and drop-in replaced all the depthwise convolutions by DCLS ones, using the three different interpolations (bilinear, triangle or Gauss). Table 1 reports the results in terms of training loss and test accuracy. However, below, we only analyze the training loss as it is less variable, and thus allows seeing subtle differences between models. If a model has a slightly lower training loss than another one with the same number of parameters, it is likely, though not certain, that it will have a slightly higher test accuracy on average. Yet, proving it would require running many seeds (due to test accuracy variability), and here we have at most three seeds per model.
A first observation is that all the DCLS models perform much better than the baselines while having the same number of parameters. There are also subtle differences between interpolation functions. As Figure 3 shows, triangle and bilinear interpolations perform similarly, but the Gaussian interpolation performs significantly better.
Furthermore, the advantage of the Gaussian interpolation w.r.t. bilinear is not only due to the use of a larger kernel, as a 17x17 Gaussian kernel (5th line in Table 1) still outperforms the bilinear case (2nd line). Finally, the 6th line in Table 1 shows that there is still room for improvement by increasing the kernel count, although this slightly increases the number of trainable parameters w.r.t. the baseline.
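As an aside, the pairwise p-values reported in Figure 3 come from an independent two-sample Student t-test assuming equal variances; with per-seed losses in hand, such a comparison is a few lines of SciPy. The loss values below are placeholders of our own, since the paper only reports per-seed averages.

```python
from scipy import stats

# Placeholder per-seed final training losses (3 seeds each); the paper
# does not list the individual per-seed values.
loss_bilinear = [2.774, 2.776, 2.775]
loss_gaussian = [2.761, 2.763, 2.762]

# Independent two-sample Student t-test assuming equal variances,
# as stated in the Figure 3 caption.
t_stat, p_value = stats.ttest_ind(loss_bilinear, loss_gaussian, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```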
## 6 Conclusion
In conclusion, this study introduces triangle and Gaussian interpolation methods as alternatives to bilinear interpolation in Dilated Convolution with Learnable Spacings (DCLS). Evaluations on state-of-the-art convolutional architectures demonstrate that Gaussian interpolation improves performance on ImageNet1k image classification without increasing the number of parameters. Future work could implement the Whittaker-Shannon interpolation instead of the Gaussian interpolation and search for a dedicated architecture that will make the most of DCLS.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
model @ 224 & ker. size / count & interpolation & \# param. & train loss & Top-5 acc. & Top-1 acc. \\ \hline
ConvNeXt-T & \(7^{2}\) / 49 & & 28.59M & 2.828 & 96.05 & 82.08 \\
ConvNeXt-T & \(17^{2}\) / 34 & Bilinear & 28.59M & 2.775 & 96.11 & 82.44 \\
ConvNeXt-T \(\odot\) & \(23^{2}\) / 26 & Triangle & 28.59M & 2.787 & 96.09 & 82.34 \\
ConvNeXt-T \(\star\) & \(23^{2}\) / 26 & Gaussian & 28.59M & 2.762 & 96.18 & 82.44 \\
ConvNeXt-T & \(17^{2}\) / 26 & Gaussian & 28.59M & 2.773 & 96.17 & 82.40 \\
ConvNeXt-T & \(23^{2}\) / 34 & Gaussian & 28.69M & 2.758 & 96.22 & 82.60 \\ \hline
ConvFormer-S18 & \(7^{2}\) / 49 & & 26.77M & 2.807 & 96.17 & 82.84 \\
ConvFormer-S18 & \(17^{2}\) / 40 & Bilinear & 26.76M & 2.764 & 96.42 & 83.14 \\
ConvFormer-S18 \(\odot\) & \(23^{2}\) / 26 & Triangle & 26.76M & 2.761 & 96.38 & 83.09 \\
ConvFormer-S18 \(\star\) & \(23^{2}\) / 26 & Gaussian & 26.76M & 2.747 & 96.31 & 82.99 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **Classification accuracy and training loss on ImageNet-1K.** For the 17/34 bilinear and the 23/26 Triangle and Gaussian cases, the results have been averaged over 3 distinct seeds (the Triangle and Gaussian averaged configurations are marked \(\odot\) and \(\star\)).
Figure 3: Training loss for ConvNeXt-T and ConvFormer-S18 models with DCLS according to interpolation type (lower is better). The pairwise p-values have been calculated using an independent two-sample Student t-test assuming equal variances.
## Acknowledgments
This work was performed using HPC resources from GENCI-IDRIS (Grant 2021-[AD011013219]). Support from the ANR-3IA Artificial and Natural Intelligence Toulouse Institute is gratefully acknowledged. We would also like to thank the region of Toulouse Occitanie.
|
2309.01354 | Nodal solutions for the double phase problems | We consider a parametric nonautonomous $(p, q)$-equation with unbalanced
growth as follows
\begin{align*}
\left\{ \begin{aligned} &-\Delta_p^\alpha u(z)-\Delta_q u(z)=\lambda \vert
u(z)\vert^{\tau-2}u(z)+f(z, u(z)), \quad \quad \hbox{in }\Omega,\\
&u|_{\partial \Omega}=0, \end{aligned} \right. \end{align*} where $\Omega
\subseteq \mathbb{R}^N$ is a bounded domain with Lipschitz boundary
$\partial\Omega$, $\alpha \in L^{\infty}(\Omega)\backslash \{0\}$, $a(z)\geq 0$
for a.e. $z \in \Omega$, $ 1<\tau< q<p<N$ and $\lambda>0$. In the reaction
there is a parametric concave term and a perturbation $f(z, x)$. Under the
minimal conditions on $f(z, \cdot)$, which essentially restrict its growth near
zero, by employing variational tools, truncation and comparison techniques, as
well as critical groups, we prove that for all small values of the parameter
$\lambda>0$, the problem has at least three nontrivial bounded solutions
(positive, negative, nodal), which are ordered and asymptotically vanish as
$\lambda \rightarrow 0^{+}$. | Chao Ji, Nikolaos S. Papageorgiou | 2023-09-04T04:36:34Z | http://arxiv.org/abs/2309.01354v1 | # Nodal solutions for the double phase problems
###### Abstract
We consider a parametric nonautonomous \((p,q)\)-equation with unbalanced growth as follows
\[\begin{cases}-\Delta_{p}^{\alpha}u(z)-\Delta_{q}u(z)=\lambda|u(z)|^{\tau-2}u(z )+f(z,u(z)),\qquad\text{in }\Omega,\\ u|_{\partial\Omega}=0,\end{cases}\]
where \(\Omega\subseteq\mathbb{R}^{N}\) is a bounded domain with Lipschitz boundary \(\partial\Omega\), \(\alpha\in L^{\infty}(\Omega)\backslash\{0\}\), \(a(z)\geq 0\) for a.e. \(z\in\Omega\), \(1<\tau<q<p<N\) and \(\lambda>0\). In the reaction there is a parametric concave term and a perturbation \(f(z,x)\). Under the minimal conditions on \(f(z,\cdot)\), which essentially restrict its growth near zero, by employing variational tools, truncation and comparison techniques, as well as critical groups, we prove that for all small values of the parameter \(\lambda>0\), the problem has at least three nontrivial bounded solutions (positive, negative, nodal), which are ordered and asymptotically vanish as \(\lambda\to 0^{+}\).
**2010 Mathematics Subject Classification:** 35J60, 58E05.
**Keywords:** Unbalanced growth, Generalized Orlicz spaces, Critical groups, Extremal constant sign solutions, Nodal solutions.
## 1 Introduction
In this paper we are concerned with the following parametric double phase Dirichlet problem
\[\begin{cases}-\Delta_{p}^{\alpha}u(z)-\Delta_{q}u(z)=\lambda|u(z)|^{\tau-2}u(z )+f(z,u(z)),\qquad\text{in }\Omega,\\ u|_{\partial\Omega}=0,\end{cases}\]
where \(\Omega\subseteq\mathbb{R}^{N}\) is a bounded domain with Lipschitz boundary \(\partial\Omega\), \(\alpha\in L^{\infty}(\Omega)\backslash\{0\}\), \(a(z)\geq 0\) for a.e. \(z\in\Omega\), \(1<\tau<q<p<N\) and \(\lambda>0\). We denote the weighted \(r\)-Laplacian differential operator by \(\Delta_{r}^{\alpha}\) and define it as follows
\[\Delta_{r}^{\alpha}u=\operatorname{div}\left(a(z)|\nabla u|^{r-2}\nabla u\right),\]
where \(1<r<\infty\). If \(\alpha\equiv 1\), then we recover the usual \(r\)-Laplacian differential operator. Problem \((P_{\lambda})\) is driven by the sum of two such operators with different exponents, making it a non-homogeneous differential operator. This operator is related to the double phase energy functional, which is defined by
\[u\to\int_{\Omega}\Big{(}\alpha(z)|\nabla u|^{p}+|\nabla u|^{q}\Big{)}dz.\]
It is noteworthy that we do not assume that the weight function \(\alpha(\cdot)\) is bounded away from zero, i.e., we do not require that \(\operatorname*{ess\,inf}_{\Omega}\alpha>0\). As a result, the density function of the above integral functional, denoted as the integrand \(\eta(z,t)=a(z)t^{p}+t^{q}\), exhibits unbalanced growth, which can be characterized by:
\[t^{q}\leq\eta(z,t)\leq c_{0}(1+t^{p}),\text{ for a.e. }z\in\Omega,\text{ all }t\geq 0,\text{ some }c_{0}>0.\]
Such integral functionals were first considered by Marcellini [11, 12] and Zhikov [21, 22] in the context of problems in the calculus of variations (including the Lavrentiev gap phenomenon) and of nonlinear elasticity theory. For problems with unbalanced growth, only local regularity results exist (please see the survey papers [13] due to Marcellini and [15] due to Mingione and Radulescu), and there is no global regularity theory (i.e., regularity up to the boundary). This limitation restricts the use of many powerful technical tools available for problems with balanced growth.
In the present paper, we address these challenges and, under minimal conditions on the perturbation \(f(z,x)\) which essentially restrict \(f(z,\cdot)\) only near zero (a local perturbation), we study nodal and multiple solutions of problem \((P_{\lambda})\).
The main result of this paper is the following.
**Theorem 1.1**.: _If hypotheses \((H_{0}),(H_{1})\) hold (see Section 2), then for all \(\lambda>0\) small, problem \((P_{\lambda})\) possesses at least three nontrivial, bounded solutions with sign information (positive, negative, and nodal/sign-changing). These solutions are ordered and converge to zero in \(L^{\infty}(\Omega)\) as \(\lambda\to 0^{+}\)._
The absence of a global regularity theory makes it challenging to produce nodal solutions for double phase problems; some authors have already done interesting work in this direction. In particular, in [1], Crespo-Blanco and Winkert considered quasilinear elliptic equations driven by the variable exponent double phase operator with superlinear right-hand sides. Under very general assumptions on the nonlinearity, they proved a multiplicity result for such problems and showed the existence of a positive solution, a negative one and a sign-changing solution; by using variational methods, Liu and Dai [9] obtained various existence and multiplicity results for the following double phase problem
\[\begin{cases}-\operatorname{div}\left(|\nabla u|^{p-2}\nabla u+a(x)|\nabla u |^{q-2}\nabla u\right)=f(x,u)&\text{ in }\Omega,\\ u=0&\text{ on }\partial\Omega.\end{cases}\]
In particular, they found a sign-changing ground state solution; then, Papageorgiou and Zhang [20] dealt with a nonlinear unbalanced double-phase problem with a superlinear
reaction and Robin boundary condition. They showed that the problem has three nontrivial solutions, all with sign information (positive, negative and nodal). We notice that, in order to study nodal solutions, all the papers mentioned above applied the Nehari manifold method and its variants. However, these methods require restrictive monotonicity conditions (see hypothesis \((f_{6})\) of [1], hypothesis \((f_{5})\) of [9], hypothesis \(H_{1}\)(iii) of [20]) or differentiability conditions on \(f(z,\cdot)\) (see the differentiability condition in hypothesis \(H_{1}\) of [20]), which we avoid in our study. Instead, in this paper we will employ variational tools, along with truncation and comparison techniques and critical groups, to establish the existence of nodal solutions under minimal hypotheses.
Recently, in [18], Papageorgiou, Vetro and Vetro considered problem \((P_{\lambda})\). They proved that for all parametric values \(\lambda>\lambda^{*}\) the problem has at least three nontrivial solutions, two of which have constant sign and one of which is sign-changing; here the critical parameter \(\lambda^{*}\) is expressed precisely in terms of the spectrum of the \(q\)-Laplacian. Notice that in [18] the perturbation enters with a negative sign and \(\tau=q\), while in this paper the perturbation enters with a positive sign and \(\tau<q\). In [18] one has a perturbation of the \(q\)-eigenvalue problem, while in our case we have a problem with a concave term and a general perturbation. That is why the focus in [18] is on the existence of solutions for large values of \(\lambda\), whereas here we consider the existence of solutions for small values of \(\lambda\). Finally, we would like to mention that, in contrast to [18], here we do not have any asymptotic condition on \(\frac{f(z,x)}{|x|^{p-2}x}\) as \(x\to\pm\infty\). To the best of our knowledge, this is the first work producing nodal solutions for unbalanced growth problems using critical groups. This is rather surprising given the lack of a global regularity theory for such problems.
The paper is organized as follows. In Section 2 we introduce the functional setting and give some preliminaries. Then, in Section 3, we prove existence of constant sign solutions. In Section 4, we produce the nodal solutions. Finally, in the last section, we will give the proof of Theorem 1.1.
## 2 The variational framework and some preliminaries
As a consequence of the unbalanced growth of the function \(\eta(z,\cdot)\), the standard setting of the classical Lebesgue and Sobolev spaces is inadequate. Instead, we need to work with the more suitable generalized Orlicz spaces. For a comprehensive presentation of the theory of these spaces, we refer to the book by Harjulehto and Hästö [7].
Our hypotheses on the weight \(\alpha(\cdot)\) and the exponents \(p,q,\tau\) are as follows:
\[\alpha\in L^{\infty}(\Omega)\backslash\{0\},\ a(z)\geqslant 0\text{ for a.e. }z\in\Omega,\ 1<\tau<q<p<N,\ p<q^{*}=\frac{Nq}{N-q}\text{ and }\frac{p}{q}<1+\frac{1}{N}.\] ( \[H_{0}\] )
**Remark 2.1**.: _The last inequality implies that the exponents \(p,q\) cannot be too far apart. It also implies that \(p<q^{*}\), which in turn leads to the compact embeddings of some relevant spaces (see Proposition 2.1 below). The hypothesis on the weight function \(\alpha(\cdot)\), together with [2, Proposition 2.18], guarantees the validity of the Poincaré inequality on the generalized Sobolev-Orlicz space \(W^{1,\eta}_{0}(\Omega)\), which we will introduce later._
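For the reader's convenience, we record the elementary computation behind this implication: since \(1<q<N\),

\[\frac{p}{q}<1+\frac{1}{N}\ \Longrightarrow\ p<\frac{(N+1)q}{N}\leq\frac{Nq}{N-q}=q^{*},\]

the second inequality being equivalent to \((N+1)(N-q)\leq N^{2}\), that is, to \(N\leq(N+1)q\), which holds since \(q>1\).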
Let \(L^{0}(\Omega)\) be the space of all measurable functions \(u:\Omega\to\mathbb{R}\). As usual, we identify two such functions which differ only on a Lebesgue null subset of \(\Omega\). The generalized Lebesgue-Orlicz space \(L^{\eta}(\Omega)\) is defined by
\[L^{\eta}(\Omega)=\left\{u\in L^{0}(\Omega):\rho_{\eta}(u)=\int_{\Omega}\eta(z,|u |)dz<\infty\right\},\]
where the function \(\rho_{\eta}(\cdot)\) is known as the modular function. This space is equipped with the so-called Luxemburg norm \(\|\cdot\|_{\eta}\), defined by
\[\|u\|_{\eta}=\inf\Big{\{}\lambda>0:\rho_{\eta}\left(\frac{u}{\lambda}\right) \leqslant 1\Big{\}}.\]
With this norm, the space \(L^{\eta}(\Omega)\) becomes a separable, reflexive Banach space, which is also uniformly convex due to the uniform convexity of \(\eta(z,\cdot)\).
Using \(L^{\eta}(\Omega)\), we can define the corresponding generalized Sobolev-Orlicz space \(W^{1,\eta}(\Omega)\) as
\[W^{1,\eta}(\Omega)=\Big{\{}u\in L^{\eta}(\Omega):|\nabla u|\in L^{\eta}( \Omega)\Big{\}},\]
where \(\nabla u\) represents the weak gradient of \(u\). We equip \(W^{1,\eta}(\Omega)\) with the norm \(\|\cdot\|_{1,\eta}\) defined by
\[\|u\|_{1,\eta}=\|u\|_{\eta}+\|\nabla u\|_{\eta},\text{ for any }u\in W^{1,\eta}(\Omega),\]
here \(\|\nabla u\|_{\eta}=\||\nabla u\|\|_{\eta}\). Additionally, we define
\[W^{1,\eta}_{0}(\Omega)=\overline{C^{\infty}_{c}(\Omega)}^{\|\cdot\|_{1,\eta}}.\]
Due to the Poincare inequality being valid on \(W^{1,\eta}_{0}(\Omega)\), we can consider the equivalent norm on \(W^{1,\eta}_{0}(\Omega)\)
\[\|u\|=\|\nabla u\|_{\eta},\text{ for all }u\in W^{1,\eta}_{0}(\Omega).\]
Both \(W^{1,\eta}(\Omega)\) and \(W^{1,\eta}_{0}(\Omega)\) are separable, reflexive and uniformly convex Banach spaces.
We have the following useful embedding results among these spaces.
**Proposition 2.1**.:
1. \(L^{\eta}(\Omega)\hookrightarrow L^{s}(\Omega),W^{1,\eta}_{0}(\Omega) \hookrightarrow W^{1,s}_{0}(\Omega)\) _continuously for all_ \(s\in[1,q]\)_;_
2. \(L^{p}(\Omega)\hookrightarrow L^{\eta}(\Omega)\) _continuously;_
3. \(W^{1,\eta}_{0}(\Omega)\hookrightarrow L^{s}(\Omega)\) _continuously if_ \(s\in[1,q^{*}]\)_, and_ \(W^{1,\eta}_{0}(\Omega)\hookrightarrow L^{s}(\Omega)\) _compactly if_ \(s\in[1,q^{*})\)_, where_ \(q^{*}=\frac{Nq}{N-q}\) _is the critical Sobolev exponent._
Note that the modular function \(\rho_{\eta}:W^{1,\eta}_{0}(\Omega)\to\mathbb{R}^{+}\) is continuous and convex, hence by Mazur's lemma, it is weakly lower semi-continuous. There is a close relation between the modular function \(\rho_{\eta}(\cdot)\) and the norm \(\|\cdot\|\) as follows.
**Proposition 2.2**.:
1. _If_ \(u\neq 0\)_, then_ \(\|u\|=\lambda\Leftrightarrow\rho_{\eta}\left(\frac{\nabla u}{\lambda}\right)=1\)_;_
2. \(\|u\|<1\) _(respectively_ \(=1,>1\)_)_ \(\Leftrightarrow\rho(\nabla u)<1\)_(respectively_ \(=1,>1\)_);_
3. \(\|u\|<1\Rightarrow\|u\|^{p}\leq\rho(\nabla u)\leq\|u\|^{q}\)_;_
4. \(\|u\|>1\Rightarrow\|u\|^{q}\leq\rho(\nabla u)\leq\|u\|^{p}\)_;_
5. \(\|u\|\to\infty(\mbox{{respectively}}\to 0)\Leftrightarrow\rho(\nabla u)\to \infty(\mbox{{respectively}}\to 0)\)_._
Furthermore, we introduce the map \(V:W_{0}^{1,\eta}(\Omega)\to(W_{0}^{1,\eta}(\Omega))^{*}\) defined by
\[\langle V(u),h\rangle=\int_{\Omega}\Big{(}a(z)|\nabla u|^{p-2}+|\nabla u|^{q-2 }\Big{)}(\nabla u,\nabla h)dz,\mbox{ for any }u,h\in W_{0}^{1,\eta}(\Omega).\]
This map has the following important properties.
**Proposition 2.3**.: _The map \(V(\cdot)\) is bounded, continuous, strictly monotone (thus maximal monotone too) and of type \((S)_{+}\), which means that_
\[\mbox{If }u_{n}\rightharpoonup u\mbox{ in }W_{0}^{1,\eta}(\Omega)\mbox{ \ and \ }\limsup_{n\to\infty}\left\langle V\left(u_{n}\right),u_{n}-u\right\rangle\leq 0,\]
_then \(u_{n}\to u\) in \(W_{0}^{1,\eta}(\Omega)\)._
If \(u\in L^{0}(\Omega)\), we define
\[u^{\pm}(z)=\max\{\pm u(z),0\},\mbox{ \ for a.e. }z\in\Omega.\]
Observe that \(u=u^{+}-u^{-}\) and \(|u|=u^{+}+u^{-}\). Additionally, if \(u\in W_{0}^{1,\eta}(\Omega)\), then \(u^{\pm}\in W_{0}^{1,\eta}(\Omega)\). Given \(h_{1},h_{2}\in L^{0}(\Omega)\) with \(h_{1}(z)\leqslant h_{2}(z)\) for a.e. \(z\in\Omega\), we define the order interval \([h_{1},h_{2}]\) as
\[[h_{1},h_{2}]=\left\{u\in W_{0}^{1,\eta}(\Omega):h_{1}(z)\leqslant u(z) \leqslant h_{2}(z),\mbox{ for a.e. }z\in\Omega\right\}.\]
If \(X\) is a Banach space and \(\varphi\in C^{1}(X)\), then
\[K_{\varphi}=\{u\in X:\varphi^{\prime}(u)=0\}\;\left(\mbox{critical points set of }\varphi\right).\]
A set \(C\subseteq W_{0}^{1,\eta}(\Omega)\) is said to be "downward directed" (respectively, "upward directed"), if given \(u_{1},u_{2}\in C\), we can find \(u\in C\) such that \(u\leq u_{1},u\leq u_{2}\) (respectively, if given \(v_{1},v_{2}\in C\), we can find \(v\in C\) such that \(v_{1}\leq v,v_{2}\leq v\)).
As we already mentioned in the introduction, in order to overcome the serious difficulties arising from the absence of a global regularity theory, we will use the critical groups and their properties. So, let us briefly recall some basic definitions and facts from that theory. For the details we refer to Chapter 6 in the book [17].
Let \(X\) be a Banach space and \((Y_{1},Y_{2})\) be a topological pair where \(Y_{2}\subseteq Y_{1}\subseteq X\). For this pair, \(H_{k}\left(Y_{1},Y_{2}\right)\), \(k\in\mathbb{N}_{0}\), denotes the \(k\)th-relative singular homology group with integer coefficients. Given \(\varphi\in C^{1}(X)\) and \(c\in\mathbb{R}\), we define \(\varphi^{c}=\{u\in X:\varphi(u)\leq c\}\). If \(u\in K_{\varphi}\) is isolated and \(c=\varphi(u)\), then the critical groups of \(\varphi(\cdot)\) at \(u\) are given by
\[C_{k}(\varphi,u)=H_{k}\left(\varphi^{c}\cap\mathcal{U},\varphi^{c}\cap \mathcal{U}\backslash\{u\}\right),\;k\in\mathbb{N}_{0},\]
with \(\mathcal{U}\) a neighborhood of \(u\) such that \(K_{\varphi}\cap\varphi^{c}\cap\mathcal{U}=\{u\}\). These critical groups are well-defined and independent of the choice of the isolating neighborhood \(\mathcal{U}\), thanks to the excision property of singular homology.
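For instance (a standard fact, see [17]), if \(u\in K_{\varphi}\) is an isolated local minimizer of \(\varphi\), then \(C_{k}(\varphi,u)=\delta_{k,0}\mathbb{Z}\) for all \(k\in\mathbb{N}_{0}\).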
Moreover, \(\varphi\in C^{1}(X)\) satisfies the \(C\)-condition if it has the following property:
"Every sequence \(\left\{u_{n}\right\}_{n\in\mathbb{N}}\subset X\) such that \(\left\{\varphi\left(u_{n}\right)\right\}_{n\in\mathbb{N}}\subset\mathbb{R}\) is bounded, and \(\left(1+\left\|u_{n}\right\|_{X}\right)\varphi^{\prime}\left(u_{n}\right)\to 0\) in \(X^{*}\), admits a strongly convergent subsequence."
Suppose \(\varphi\in C^{1}(X)\) satisfies the \(C\)-condition and \(\inf\varphi(K_{\varphi})>-\infty\). We can define the critical groups of \(\varphi(\cdot)\) at infinity when \(c<\inf\varphi(K_{\varphi})\), denoted by
\[C_{k}(\varphi,\infty)=H_{k}\left(X,\varphi^{c}\right)\text{ for all }k\in \mathbb{N}_{0}.\]
These critical groups are independent of the level \(c<\inf\varphi(K_{\varphi})\) and are well-defined thanks to the second deformation theorem (see [17, Theorem 5.3.12]).
Suppose that \(K_{\varphi}\) is finite. We introduce the following series in \(t\in\mathbb{R}\),
\[M(t,u)=\sum_{k\in\mathbb{N}_{0}}\operatorname{rank}C_{k}(\varphi,u)t^{k}\text{ with }u\in K_{\varphi},\] \[P(t,\infty)=\sum_{k\in\mathbb{N}_{0}}\operatorname{rank}C_{k}( \varphi,\infty)t^{k}.\]
The "Morse relation" says that
\[\sum_{u\in K_{\varphi}}M(t,u)=P(t,\infty)+(1+t)Q(t) \tag{2.1}\]
with \(Q(t)=\sum_{k\in\mathbb{N}_{0}}\beta_{k}t^{k}\) a formal series in \(t\in\mathbb{R}\) with nonnegative integer coefficients.
To use the properties of critical groups, we require the following notion. We say that \(\varphi:\Omega\times\mathbb{R}\to\mathbb{R}\) is an \(L^{\infty}\)-locally Lipschitz integrand if, for all \(x\in\mathbb{R}\), \(z\to\varphi(z,x)\) is measurable and for every compact set \(K\subseteq\mathbb{R}\), there exists \(\vartheta_{K}\in L^{\infty}(\Omega)\) such that
\[|\varphi(z,x)-\varphi(z,y)|\leq\vartheta_{K}(z)|x-y|,\text{ for a.e. }z\in \Omega,\text{ all }x,y\in K.\]
Clearly, such a function is jointly measurable (see [19, Proposition 2.2.31]). Therefore, if \(u\in L^{0}(\Omega)\), then \(z\to\varphi(z,u(z))\) is measurable.
Suppose \(u\in L^{0}(\Omega)\) has the property that for every compact subset \(\mathcal{K}\subseteq\Omega\), there exists a constant \(C_{\mathcal{K}}\) such that
\[0<C_{\mathcal{K}}\leqslant u(z)\text{ for a.e. }z\in\mathcal{K},\]
then we write \(0\prec u\). Similarly, we write \(v\prec 0\) if \(0\prec-v\).
Let \(\hat{\lambda}_{1}(q)\) denote the principal eigenvalue of \(\left(-\Delta_{q},W_{0}^{1,q}(\Omega)\right)\). We know that \(\hat{\lambda}_{1}(q)>0\) and it is simple and isolated. It has the following variational characterization:
\[\hat{\lambda}_{1}(q)=\inf\left\{\frac{\|\nabla u\|_{q}^{q}}{\|u\|_{q}^{q}}:u \in W_{0}^{1,q}(\Omega),\ u\neq 0\right\}. \tag{2.2}\]
The infimum in (2.2) is realized on the corresponding one-dimensional eigenspace, the elements of which have a fixed sign. In fact, \(\hat{\lambda}_{1}(q)>0\) is the only eigenvalue with eigenfunctions of constant sign, while all other eigenvalues have nodal eigenfunctions. Using these facts, we can easily prove the following result (see [16, Lemma 4.11]).
**Proposition 2.4**.: _If \(\vartheta\in L^{\infty}(\Omega)\) satisfies \(\vartheta(z)\leqslant\hat{\lambda}_{1}(q)\) for a.e. \(z\in\Omega\) and the inequality is strict on a set of positive Lebesgue measure, then there exists \(c_{1}>0\) such that_
\[c_{1}\|\nabla u\|_{q}^{q}\leqslant\|\nabla u\|_{q}^{q}-\int_{\Omega}\vartheta( z)|u|^{q}dz,\text{ for all }u\in W^{1,q}_{0}(\Omega).\]
The hypotheses on the perturbation \(f(z,x)\) are the following:
\((H_{1}):f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}\) is an \(L^{\infty}\)-locally Lipschitz integrand such that \(f(z,0)=0\) for a.e. \(z\in\Omega\) and
1. \(|f(z,x)|\leq\hat{\alpha}(z)(1+|x|^{r-1})\) for a.e. \(z\in\Omega\), all \(x\in\mathbb{R}\) with \(\hat{\alpha}\in L^{\infty}(\Omega)\) and \(p<r<q^{*}\);
2. there exist \(\vartheta\in L^{\infty}(\Omega)\) and \(\delta>0\) such that \[\vartheta(z)\leq\hat{\lambda}_{1}(q)\ \text{ for a.e. }z\in\Omega,\ \vartheta\not \equiv\hat{\lambda}_{1}(q),\] \[\limsup_{x\to 0}\frac{qF(z,x)}{|x|^{q}}\leq\vartheta(z)\text{ uniformly for a.e }z\in\Omega,\] where \(F(z,x)=\int_{0}^{x}f(z,s)ds\) and \(0\leq f(z,x)x\) for a.e. \(z\in\Omega\), all \(|x|\leq\delta\).
**Remark 2.2**.: _The hypotheses on the perturbation \(f(z,x)\) are minimal, requiring a nonuniform nonresonance condition as \(x\to 0\) and a local sign condition._
Now let us consider the following auxiliary double phase Dirichlet problem
\[\begin{cases}-\Delta_{p}^{\alpha}u(z)-\Delta_{q}u(z)=\lambda|u(z)|^{\tau-2}u(z ),\ \text{ in }\Omega,\\ u|_{\partial\Omega}=0,\ 1<\tau<q<p<N,\ \lambda>0.\end{cases}\]
By reasoning similarly to the proof of [10, Proposition 10], we have the following result concerning problem \((A_{\lambda})\).
**Proposition 2.5**.: _If hypotheses \((H_{0})\) hold and \(\lambda>0\), then problem \((A_{\lambda})\) has a unique positive solution \(\bar{u}_{\lambda}\in W^{1,\eta}_{0}(\Omega)\cap L^{\infty}(\Omega)\) with \(0\prec\bar{u}_{\lambda}\). Furthermore, since problem \((A_{\lambda})\) is odd, \(\bar{v}_{\lambda}=-\bar{u}_{\lambda}\) is the unique negative solution of \((A_{\lambda})\)._
## 3 Constant-sign solutions
We define the following sets
\[S_{\lambda}^{+}=\text{set of positive solutions of problem }(P_{\lambda}),\]
\[S_{\lambda}^{-}=\text{set of negative solutions of problem }(P_{\lambda}).\]
Now we will show that both \(S_{\lambda}^{+}\) and \(S_{\lambda}^{-}\) are non-empty for small \(\lambda>0\).
**Proposition 3.1**.: _Assuming hypotheses \((H_{0})\), \((H_{1})\) hold, for all \(\lambda>0\) small, we have_
\[\emptyset\neq S_{\lambda}^{+}\subseteq W^{1,\eta}_{0}(\Omega)\cap L ^{\infty}(\Omega),0\prec u,\text{ for all }u\in S_{\lambda}^{+},\] \[\emptyset\neq S_{\lambda}^{-}\subseteq W^{1,\eta}_{0}(\Omega)\cap L ^{\infty}(\Omega),v\prec 0,\text{ for all }\ v\in S_{\lambda}^{-}.\]
Proof.: Let \(\varphi_{\lambda}^{+}:W_{0}^{1,\eta}(\Omega)\to\mathbb{R}\) be the \(C^{1}\)-functional defined by
\[\varphi_{\lambda}^{+}(u)=\frac{1}{p}\rho_{\alpha}(\nabla u)+\frac{1}{q}\|\nabla u \|_{q}^{q}-\frac{\lambda}{\tau}\|u^{+}\|_{\tau}^{\tau}-\int_{\Omega}F(z,u^{+}) dz,\ \ \text{for all}\ u\in W_{0}^{1,\eta}(\Omega),\]
where \(\rho_{\alpha}(\nabla u)=\int_{\Omega}\alpha(z)|\nabla u|^{p}dz\). From hypotheses \((H_{1})(1)\), \((2)\), for any \(\varepsilon>0\), there exists \(c_{1}=c_{1}(\varepsilon)>0\) such that
\[F(z,x)\leqslant\frac{1}{q}(\vartheta(z)+\varepsilon)|x|^{q}+c_{1}|x|^{r},\ \text{for a.e.}\ z\in\Omega,\ \text{all}\ x\in\mathbb{R}.\]
Then for all \(u\in W_{0}^{1,\eta}(\Omega)\), we have
\[\varphi_{\lambda}^{+}(u)\geq\frac{1}{p}\rho_{a}(\nabla u)+\frac{1}{q}\left(\|\nabla u\|_{q}^{q}-\int_{\Omega}\vartheta(z)|u|^{q}dz-\varepsilon\|u\|_{q}^{q}\right)-\frac{\lambda}{\tau}\|u\|_{\tau}^{\tau}-c_{1}\|u\|_{r}^{r}. \tag{3.1}\]
By Proposition 2.4, we have from (2.2) that
\[\|\nabla u\|_{q}^{q}-\int_{\Omega}\vartheta(z)|u|^{q}dz-\varepsilon\|u\|_{q}^ {q}\geq\left(c_{1}-\frac{\varepsilon}{\hat{\lambda}_{1}(q)}\right)\|\nabla u\| _{q}^{q}.\]
Choosing \(\varepsilon\in\left(0,\hat{\lambda}_{1}(q)c_{1}\right)\), we obtain
\[\|\nabla u\|_{q}^{q}-\int_{\Omega}\vartheta(z)|u|^{q}dz-\varepsilon\|u\|_{q}^{q}\geqslant c_{2}\|\nabla u\|_{q}^{q}\ \text{for some}\ c_{2}>0. \tag{3.2}\]
Returning to (3.1) and using (3.2) and the fact that
\[W_{0}^{1,\eta}(\Omega)\hookrightarrow L^{\tau}(\Omega),\ L^{r}(\Omega)\ \text{continuously},\]
we have
\[\varphi_{\lambda}^{+}(u)\geq c_{3}\rho_{\eta}(\nabla u)-c_{4}\left(\lambda\|u\|^{\tau}+\|u\|^{r}\right),\ \text{for some}\ c_{3},c_{4}>0.\]
If \(\|u\|\leq 1\), from Proposition 2.2, one has
\[\varphi_{\lambda}^{+}(u)\geq\Big(c_{3}-c_{4}\left(\lambda\|u\|^{\tau-p}+\|u\|^{r-p}\right)\Big)\|u\|^{p}. \tag{3.3}\]
We consider the function
\[\gamma_{\lambda}(t)=\lambda t^{\tau-p}+t^{r-p},\ t>0.\]
Since \(1<\tau<q<p\), it is easy to see that
\[\gamma_{\lambda}(t)\to+\infty\ \text{as}\ t\to 0^{+}\ \text{and as}\ \ t\to+\infty.\]
So, there exists \(t_{0}>0\) such that
\[\gamma_{\lambda}\left(t_{0}\right)=\min_{t>0}\gamma_{\lambda}(t)\Rightarrow\gamma_{\lambda}^{\prime}\left(t_{0}\right)=0\] \[\Rightarrow\lambda(p-\tau)t_{0}^{\tau-p-1}=(r-p)t_{0}^{r-p-1}\] \[\Rightarrow t_{0}(\lambda)=\left(\frac{\lambda(p-\tau)}{r-p}\right)^{\frac{1}{r-\tau}}.\]
Evidently \(t_{0}(\lambda)\to 0^{+}\) as \(\lambda\to 0^{+}\) and then, since \(p<r\), we have
\[\gamma_{\lambda}\left(t_{0}(\lambda)\right)\to 0^{+}\text{ as }\ \lambda\to 0^{+}.\]
Hence we can find \(\lambda_{*}>0\) such that
\[t_{0}(\lambda)<1\ \text{ and }\ \gamma_{\lambda}\left(t_{0}(\lambda)\right)< \frac{c_{3}}{c_{4}}\ \text{ for all }\lambda\in\left(0,\lambda_{*}\right).\]
Then from (3.3), we have that
\[\varphi_{\lambda}^{+}(u)\geq m_{\lambda}>0,\text{ for all }\|u\|=t_{0}( \lambda)\text{ and all }\lambda\in\left(0,\lambda_{*}\right). \tag{3.4}\]
Now we introduce the following closed ball in \(W_{0}^{1,\eta}(\Omega)\)
\[\bar{B}_{\lambda}=\left\{u\in W_{0}^{1,\eta}(\Omega):\|u\|\leq t_{0}(\lambda),\lambda\in\left(0,\lambda_{*}\right)\right\}.\]
Since \(W_{0}^{1,\eta}(\Omega)\) is reflexive, by the James and Eberlein-Smulian theorems (see [19]), we have that \(\bar{B}_{\lambda}\) is sequentially weakly compact. Also, using Proposition 2.1, we see that \(\varphi_{\lambda}^{+}(\cdot)\) is sequentially weakly lower semicontinuous. So, by the Weierstrass-Tonelli theorem, there exists \(u_{\lambda}\in W_{0}^{1,\eta}(\Omega)\) such that
\[\varphi_{\lambda}^{+}\left(u_{\lambda}\right)=\inf\left\{\varphi_{\lambda}^{+} (u):u\in\bar{B}_{\lambda}\right\}. \tag{3.5}\]
Let \(u\in C_{0}^{1}(\bar{\Omega})\) with \(u(z)>0\) for all \(z\in\Omega\). Choose \(t\in(0,1)\) small enough such that
\[tu\in\bar{B}_{\lambda}\ \text{ and }0\leq tu(z)\leq\delta\text{ for all }z\in\bar{\Omega},\]
with \(\delta>0\) as postulated by hypothesis \((H_{1})(2)\). Then from the local sign condition (see \((H_{1})(2)\)), recalling that \(t\in(0,1)\) and \(q<p\), we have
\[\varphi_{\lambda}^{+}(tu)\leqslant\frac{t^{q}}{q}\rho_{\eta}(\nabla u)-\frac{ \lambda t^{\tau}}{\tau}\|u\|_{\tau}^{\tau}.\]
Since \(\tau<q\), choosing \(t\in(0,1)\) even smaller if necessary, we have from (3.5) that
\[\varphi_{\lambda}^{+}(tu)<0,\] \[\Rightarrow \varphi_{\lambda}^{+}\left(u_{\lambda}\right)<0=\varphi_{\lambda}^{+}(0),\] \[\Rightarrow u_{\lambda}\neq 0. \tag{3.6}\]
From (3.4) and (3.6) it follows that
\[0<\|u_{\lambda}\|<t_{0}(\lambda).\]
Then from (3.5), we have for all \(h\in W_{0}^{1,\eta}(\Omega)\) and \(\lambda\in\left(0,\lambda_{*}\right)\) that
\[\left\langle\left(\varphi_{\lambda}^{+}\right)^{\prime}\left(u_{\lambda} \right),h\right\rangle=0\Rightarrow\left\langle V\left(u_{\lambda}\right),h \right\rangle=\int_{\Omega}\lambda\left(u_{\lambda}^{+}\right)^{\tau-1}hdz+ \int_{\Omega}f\left(z,u_{\lambda}^{+}\right)hdz. \tag{3.7}\]
In (3.7), choosing the test function \(h=-u_{\lambda}^{-}\in W_{0}^{1,\eta}(\Omega)\) and using Proposition 2.2, we obtain
\[\rho_{\eta}\left(\nabla u_{\lambda}^{-}\right)=0\Rightarrow u_{\lambda}\geqslant 0,\ u_{\lambda}\neq 0.\]
Then from (3.7) we see that
\[u_{\lambda}\in S_{\lambda}^{+},\ \text{for all}\ \lambda\in\left(0,\lambda_{*} \right).\]
[5, Theorem 3.1] implies that
\[u_{\lambda}\in W_{0}^{1,\eta}(\Omega)\cap L^{\infty}(\Omega).\]
Let \(\rho=\left\|u_{\lambda}\right\|_{\infty}\). Hypotheses \((H_{1})\) imply that we can find \(\hat{\xi}_{\rho}>0\) such that
\[f\left(z,x\right)+\hat{\xi}_{\rho}x^{p-1}\geq 0,\ \text{for a.e.}\ z\in\Omega,\ \text{and all}\ 0\leq x\leq\rho.\]
Therefore we have
\[-\Delta_{p}^{\alpha}u_{\lambda}-\Delta_{q}u_{\lambda}+\hat{\xi}_{\rho}u_{\lambda}^{p-1}\geq 0\ \ \text{in}\ \,\Omega.\]
Then using [18, Proposition 2.4] we obtain
\[0\prec u_{\lambda}.\]
Therefore we have proved that for \(\lambda\in(0,\lambda_{*})\)
\[\emptyset\neq S_{\lambda}^{+}\subseteq W_{0}^{1,\eta}(\Omega)\cap L^{\infty}(\Omega)\ \text{and}\ 0\prec u\ \text{for all}\ u\in S_{\lambda}^{+}.\]
Similarly we can show the nonemptiness of the set \(S_{\lambda}^{-}\). In this case we work with the \(C^{1}\)-functional \(\varphi_{\lambda}^{-}:W_{0}^{1,\eta}(\Omega)\to\mathbb{R}\) defined by
\[\varphi_{\lambda}^{-}(u)=\frac{1}{p}\rho_{a}(\nabla u)+\frac{1}{q}\|\nabla u\|_{q}^{q}-\frac{\lambda}{\tau}\|u^{-}\|_{\tau}^{\tau}-\int_{\Omega}F(z,-u^{-})dz,\ \ \text{for all}\ u\in W_{0}^{1,\eta}(\Omega).\]
The proof of the previous proposition leads to the next result, which will be used to show that the nodal solutions we will produce vanish asymptotically as \(\lambda\to 0^{+}\).
**Proposition 3.2**.: _If hypotheses \((H_{0}),(H_{1})\) hold and \(\lambda\in(0,\lambda_{*})\), then there exist \(u_{\lambda}\in S_{\lambda}^{+}\) and \(v_{\lambda}\in S_{\lambda}^{-}\) such that_
\[u_{\lambda},v_{\lambda}\to 0,\ \text{in}\ W_{0}^{1,\eta}(\Omega)\cap L^{ \infty}(\Omega)\ \text{as}\ \lambda\to 0^{+}.\]
Proof.: From the proof of Proposition 3.1, we know that we can find \(u_{\lambda}\in S_{\lambda}^{+}\) such that
\[\|u_{\lambda}\|<t_{0}(\lambda).\]
Since \(t_{0}(\cdot)\) is increasing and \(t_{0}(\lambda)\to 0\) as \(\lambda\to 0^{+}\), from [5, Theorem 3.1] (see also [14, Theorem 3.1]), we infer that
\[\{u_{\lambda}\}_{\lambda\in(0,\lambda_{*})}\subseteq L^{\infty}(\Omega)\ \text{is bounded and}\ \|u_{\lambda}\|_{\infty}\leq O(\lambda),\ \lambda\in(0,\lambda_{*})\,,\]
where \(O(\lambda)\to 0\) as \(\lambda\to 0^{+}\). Therefore we have
\[u_{\lambda}\to 0\ \ \text{in}\ \ W_{0}^{1,\eta}(\Omega)\cap L^{\infty}(\Omega)\ \ \text{as}\ \ \lambda\to 0^{+}.\]
Similarly, we have solutions \(v_{\lambda}\in S_{\lambda}^{-}\) for all \(\lambda\in(0,\lambda_{*})\) such that
\[v_{\lambda}\to 0\ \ \text{in}\ \ W_{0}^{1,\eta}(\Omega)\cap L^{\infty}(\Omega)\ \ \text{as}\ \ \lambda\to 0^{+}.\]
Next we will show that for \(\lambda\in(0,\lambda_{*})\), problem \((P_{\lambda})\) has extremal constant-sign solutions (that is, a largest positive solution and a smallest negative solution). We will use them to generate a nodal (sign-changing) solution.
So, we have the following result.
**Proposition 3.3**.: _If hypotheses \((\mathrm{H}_{0}),(\mathrm{H}_{1})\) hold and \(\lambda\in(0,\lambda_{*})\), then there exist \(u_{\lambda}^{*}\in S_{\lambda}^{+}\) and \(v_{\lambda}^{*}\in S_{\lambda}^{-}\) such that_
\[u_{\lambda}^{*}\leqslant u,\ \text{ for all }u\in S_{\lambda}^{+},\] \[v\leqslant v_{\lambda}^{*},\ \text{ for all }v\in S_{\lambda}^{-}.\]
Proof.: As in [3], we can show that
\[S_{\lambda}^{+}\text{ is downward directed.}\]
Consequently, by using [8, Theorem 5.109], we can obtain a decreasing sequence \(\{u_{n}\}_{n\in\mathbb{N}}\subseteq S_{\lambda}^{+}\) such that
\[\inf S_{\lambda}^{+}=\inf_{n\in\mathbb{N}}u_{n}.\]
We have
\[\langle V(u_{n}),h\rangle=\int_{\Omega}\lambda u_{n}^{\tau-1}hdz+\int_{\Omega }f\left(z,u_{n}\right)hdz,\text{ for all }h\in W_{0}^{1,\eta}(\Omega),\text{ all }n\in \mathbb{N}, \tag{3.8}\]
and
\[0\leq u_{n}\leq u_{1},\ \text{ for all }n\in\mathbb{N}. \tag{3.9}\]
Based on hypotheses \((H_{1})\), we infer from (3.9) and the fact that \(u_{1}\in L^{\infty}(\Omega)\) that
\[\left|\lambda u_{n}(z)^{\tau-1}+f\left(z,u_{n}(z)\right)\right| \leq c_{5}\left(u_{n}(z)^{\tau-1}+u_{n}(z)^{q-1}+u_{n}(z)^{r-1}\right)\] \[\leq c_{5}\left(u_{1}(z)^{\tau-1}+u_{1}(z)^{q-1}+u_{1}(z)^{r-1}\right)\] \[\leq c_{6},\text{ for a.e. }z\in\Omega,\]
where \(c_{5}\) and \(c_{6}\) are positive constants. Then using Moser's iteration technique, as in [6, Proposition 1.3], we have
\[\|u_{n}\|_{\infty}\leqslant c_{7}O\left(\|f\left(\cdot,u_{n}(\cdot)\right)\|_{m}\right),\text{ for some }c_{7}>0,\text{ with }m>N,\text{ all }n\in\mathbb{N}. \tag{3.10}\]
From (3.8), using the test function \(h=u_{n}\in W_{0}^{1,\eta}(\Omega)\), we obtain that
\[\left\{u_{n}\right\}_{n\in\mathbb{N}}\subseteq W_{0}^{1,\eta}(\Omega)\ \text{ is bounded }.\]
So, from Proposition 2.1 we may assume that
\[u_{n}\rightharpoonup u_{\lambda}^{*}\text{ in }W_{0}^{1,\eta}(\Omega),\ u_{n} \to u_{\lambda}^{*}\text{ in }L^{p}(\Omega). \tag{3.11}\]
In (3.8) we use the test function \(h=(u_{n}-u_{\lambda}^{*})\in W_{0}^{1,\eta}(\Omega)\), pass to the limit as \(n\to\infty\) and use (3.11). Then, by Proposition 2.3, we have
\[\lim_{n\to\infty}\left\langle V\left(u_{n}\right),u_{n}-u_{\lambda}^{*} \right\rangle=0\Rightarrow u_{n}\to u_{\lambda}^{*}\text{ in }W_{0}^{1,\eta}(\Omega). \tag{3.12}\]
Suppose that \(u_{\lambda}^{*}=0\). Then from (3.10) it follows that
\[u_{n}\to 0\text{ in }L^{\infty}(\Omega)\text{ \ as \ }n\to+\infty.\]
Then, on account of hypothesis \((H_{1})(2)\), there exists \(n_{0}\in\mathbb{N}\) such that for a.e. \(z\in\Omega\) and all \(n\geqslant n_{0}\),
\[0\leq u_{n}(z)\leq\delta\Rightarrow\lambda u_{n}(z)^{\tau-1}\leqslant\lambda u _{n}(z)^{\tau-1}+f\left(z,u_{n}(z)\right). \tag{3.13}\]
We fix \(n\geqslant n_{0}\) and introduce the Carathéodory function \(\gamma_{\lambda}^{+}(z,x)\) defined by
\[\gamma_{\lambda}^{+}(z,x)=\begin{cases}\lambda\left(x^{+}\right)^{\tau-1}& \text{ if }x\leqslant u_{n}(z),\\ \lambda u_{n}(z)^{\tau-1}&\text{ if }u_{n}(z)<x.\end{cases} \tag{3.14}\]
We set \(\Gamma_{\lambda}^{+}(z,x)=\int_{0}^{x}\gamma_{\lambda}^{+}(z,s)ds\) and consider the \(C^{1}\)-functional \(\sigma_{\lambda}^{+}:W_{0}^{1,\eta}(\Omega)\to\mathbb{R}\) defined by
\[\sigma_{\lambda}^{+}(u)=\frac{1}{p}\rho_{a}(\nabla u)+\frac{1}{q}\|\nabla u\| _{q}^{q}-\int_{\Omega}\Gamma_{\lambda}^{+}(z,u)dz,\text{ \ for all }u\in W_{0}^{1,\eta}(\Omega).\]
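Note that, from (3.14), the primitive \(\Gamma_{\lambda}^{+}\) can be written explicitly as

\[\Gamma_{\lambda}^{+}(z,x)=\begin{cases}\frac{\lambda}{\tau}(x^{+})^{\tau}&\text{if }x\leq u_{n}(z),\\ \frac{\lambda}{\tau}u_{n}(z)^{\tau}+\lambda u_{n}(z)^{\tau-1}(x-u_{n}(z))&\text{if }u_{n}(z)<x,\end{cases}\]

which is the form used in the estimate below.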
From (3.14) we see that \(\sigma_{\lambda}^{+}(\cdot)\) is coercive. Also, it is sequentially weakly lower semicontinuous. So, we can find \(\tilde{u}_{\lambda}\in W_{0}^{1,\eta}(\Omega)\) such that
\[\sigma_{\lambda}^{+}\left(\tilde{u}_{\lambda}\right)=\inf\left\{\sigma_{ \lambda}^{+}(u):u\in W_{0}^{1,\eta}(\Omega)\right\}. \tag{3.15}\]
Let \(u\in C_{0}^{1}(\bar{\Omega})\) with \(u(z)>0\) for all \(z\in\Omega\) and let \(t\in(0,1)\). Then, by (3.14), we have
\[\sigma_{\lambda}^{+}(tu)\leq \frac{t^{q}}{q}\rho_{\eta}(\nabla u)-\int_{\Omega}\Gamma_{\lambda}^{+}(z,tu)dz\] \[= \frac{t^{q}}{q}\rho_{\eta}(\nabla u)-\frac{\lambda t^{\tau}}{\tau}\int_{\{0\leq tu\leq u_{n}\}}u^{\tau}dz-\frac{\lambda}{\tau}\int_{\{u_{n}<tu\}}u_{n}^{\tau}dz-\lambda\int_{\{u_{n}<tu\}}u_{n}^{\tau-1}\left(tu-u_{n}\right)dz\] \[\leq \frac{t^{q}}{q}\rho_{\eta}(\nabla u)-\frac{\lambda t^{\tau}}{\tau}\int_{\{0\leq tu\leq u_{n}\}}u^{\tau}dz\] \[= \frac{t^{q}}{q}\rho_{\eta}(\nabla u)-\frac{\lambda t^{\tau}}{\tau}\int_{\Omega}u^{\tau}dz+\frac{\lambda t^{\tau}}{\tau}\int_{\{u_{n}<tu\}}u^{\tau}dz.\]
Moreover, we have
\[\frac{\sigma_{\lambda}^{+}(tu)}{t^{\tau}}\leqslant\frac{t^{q-\tau}}{q}\rho_{ \eta}(\nabla u)-\frac{\lambda}{\tau}\|u\|_{\tau}^{\tau}+\frac{\lambda}{\tau} \int_{\{u_{n}<tu\}}u^{\tau}dz. \tag{3.16}\]
Note that
\[\frac{t^{q-\tau}}{q}\rho_{\eta}(\nabla u)\to 0\text{ \ as \ }t\to 0^{+},\] \[\frac{\lambda}{\tau}\int_{\{u_{n}<tu\}}u^{\tau}dz\to 0\text{ \ as \ }t\to 0^{+}.\]
Therefore from (3.16) we have
\[\limsup_{t\to 0^{+}}\frac{\sigma_{\lambda}^{+}(tu)}{t^{\tau}}\leq-\frac{\lambda}{\tau}\|u\|_{\tau}^{\tau}<0.\]
So, for \(t\in(0,1)\) small, one has
\[\sigma_{\lambda}^{+}(tu)<0,\] \[\Rightarrow \sigma_{\lambda}^{+}\left(\tilde{u}_{\lambda}\right)<0=\sigma_{ \lambda}^{+}(0)\] \[\Rightarrow \tilde{u}_{\lambda}\neq 0.\]
From (3.15) we have for all \(h\in W_{0}^{1,\eta}(\Omega)\) that
\[\left\langle\left(\sigma_{\lambda}^{+}\right)^{\prime}\left(\tilde{u}_{ \lambda}\right),h\right\rangle=0\ \Rightarrow\left\langle V\left(\tilde{u}_{\lambda}\right),h\right\rangle= \int_{\Omega}\gamma_{\lambda}^{+}\left(z,\tilde{u}_{\lambda}\right)hdz.\]
Choosing \(h=-\tilde{u}_{\lambda}^{-}\in W_{0}^{1,\eta}(\Omega)\), by (3.14), we obtain
\[\rho_{\eta}(\nabla\tilde{u}_{\lambda}^{-})=0\Rightarrow\tilde{u}_{\lambda} \geq 0,\ \tilde{u}_{\lambda}\neq 0.\]
Also, if we use the test function \(h=\left(\tilde{u}_{\lambda}-u_{n}\right)^{+}\in W_{0}^{1,\eta}(\Omega)\), since \(u_{n}\in S_{\lambda}^{+}\) and (3.13), we have
\[\left\langle V\left(\tilde{u}_{\lambda}\right),\left(\tilde{u}_{ \lambda}-u_{n}\right)^{+}\right\rangle\] \[= \int_{\Omega}\lambda u_{n}^{\tau-1}\left(\tilde{u}_{\lambda}-u_{ n}\right)^{+}dz\] \[\leq \int_{\Omega}\left(\lambda u_{n}^{\tau-1}+f\left(z,u_{n}\right) \right)\left(\tilde{u}_{\lambda}-u_{n}\right)^{+}dz\] \[= \left\langle V\left(u_{n}\right),\left(\tilde{u}_{\lambda}-u_{n} \right)^{+}\right\rangle,\] \[\Rightarrow \tilde{u}_{\lambda}\leq u_{n}.\]
So, we have proved that
\[\tilde{u}_{\lambda}\in\left[0,u_{n}\right],\ \tilde{u}_{\lambda}\neq 0.\]
From (3.14) it follows that \(\tilde{u}_{\lambda}\) is a positive solution of \(\left(A_{\lambda}\right)\). Then by Proposition 2.5, we have
\[\tilde{u}_{\lambda}=\bar{u}_{\lambda}\Rightarrow\bar{u}_{\lambda}\leq u_{n}\ \text{ for all }\ n\geqslant n_{0}.\]
But this contradicts the fact that \(u_{n}\to 0\) in \(W_{0}^{1,\eta}(\Omega)\) (recall that we have assumed that \(u_{\lambda}^{*}=0\)). Therefore \(u_{\lambda}^{*}\neq 0\). In (3.8), passing to the limit as \(n\rightarrow\infty\) and using (3.12), we obtain
\[\left\langle V\left(u_{\lambda}^{*}\right),h\right\rangle=\int_{\Omega}\lambda \left(u_{\lambda}^{*}\right)^{\tau-1}hdz+\int_{\Omega}f\left(z,u_{\lambda}^{*} \right)hdz,\ \text{ for all }\ h\in W_{0}^{1,\eta}(\Omega).\]
Moreover, we have
\[u_{\lambda}^{*}\in S_{\lambda}^{+},\ u_{\lambda}^{*}=\inf S_{\lambda}^{+}.\]
Similarly we produce a maximal element for \(S_{\lambda}^{-}\). The set \(S_{\lambda}^{-}\) is upward directed. So, we can find an increasing sequence \(\left\{v_{n}\right\}_{n\in\mathbb{N}}\subseteq S_{\lambda}^{-}\) such that \(\sup S_{\lambda}^{-}=\sup_{n\in\mathbb{N}}v_{n}\).
In the next section, we will use \(u_{\lambda}^{*}\) and \(v_{\lambda}^{*}\) to generate a nodal solution. The idea is to look for nontrivial solutions of \((P_{\lambda})\)\((\lambda\in(0,\lambda_{*}))\) in the order interval \([v_{\lambda}^{*},\,u_{\lambda}^{*}]\) which are different from \(u_{\lambda}^{*}\) and \(v_{\lambda}^{*}\). On account of the extremality of \(u_{\lambda}^{*}\) and \(v_{\lambda}^{*}\), such a solution must be nodal. To produce this solution, we shall use truncation and comparison techniques and critical groups.
## 4 Nodal solutions
In order to focus on the order interval \([v_{\lambda}^{*},u_{\lambda}^{*}]\)\((\lambda\in(0,\lambda_{*}))\), we introduce the following truncation of the reaction of \((P_{\lambda})\):
\[g_{\lambda}(z,x)=\begin{cases}\lambda|v_{\lambda}^{*}(z)|^{\tau-2}v_{\lambda} ^{*}(z)+f\left(z,v_{\lambda}^{*}(z)\right)&\text{ if }x<v_{\lambda}^{*}(z),\\ \lambda|x|^{\tau-2}x+f(z,x)&\text{ if }v_{\lambda}^{*}(z)\leq x\leq u_{ \lambda}^{*}(z),\\ \lambda u_{\lambda}^{*}(z)^{\tau-1}+f\left(z,u_{\lambda}^{*}(z)\right)&\text{ if }u_{\lambda}^{*}(z)<x.\end{cases} \tag{4.1}\]
Also we consider the positive and negative truncations of \(g_{\lambda}(z,\cdot)\), namely the functions
\[g_{\lambda}^{\pm}(z,x)=g_{\lambda}(z,\pm x^{\pm}). \tag{4.2}\]
All three are Caratheodory functions. We set
\[G_{\lambda}(z,x)=\int_{0}^{x}g_{\lambda}(z,s)ds\text{ and }G_{\lambda}^{\pm}(z,x )=\int_{0}^{x}g_{\lambda}^{\pm}(z,s)ds,\]
and consider the \(C^{1}\)-functionals \(\beta_{\lambda}\), \(\beta_{\lambda}^{\pm}:W_{0}^{1,\eta}(\Omega)\to\mathbb{R}\) defined by
\[\beta_{\lambda}(u)=\frac{1}{p}\rho_{a}(\nabla u)+\frac{1}{q}\| \nabla u\|_{q}^{q}-\int_{\Omega}G_{\lambda}(z,u)dz,\] \[\beta_{\lambda}^{\pm}(u)=\frac{1}{p}\rho_{a}(\nabla u)+\frac{1}{ q}\|\nabla u\|_{q}^{q}-\int_{\Omega}G_{\lambda}^{\pm}(z,u)dz,\]
for all \(u\in W_{0}^{1,\eta}(\Omega)\). From (4.1) and (4.2), we can see that
\[K_{\beta_{\lambda}}\subseteq\left[v_{\lambda}^{*},u_{\lambda}^{*}\right],\ K_{\beta_{ \lambda}^{+}}\subseteq\left[0,u_{\lambda}^{*}\right],\ K_{\beta_{\lambda}^{-} }\subseteq\left[v_{\lambda}^{*},0\right].\]
The extremality of \(u_{\lambda}^{*},v_{\lambda}^{*}\) implies that
\[K_{\beta_{\lambda}}\subseteq\left[v_{\lambda}^{*},\ \ u_{\lambda}^{*}\right],K_{ \beta_{\lambda}^{+}}=\left\{0,u_{\lambda}^{*}\right\},\ K_{\beta_{\lambda}^{-} }=\left\{v_{\lambda}^{*},0\right\}. \tag{4.3}\]
Also let \(\varphi_{\lambda}:W_{0}^{1,\eta}(\Omega)\to\mathbb{R}\) be the energy functional of problem \((P_{\lambda})\) defined by
\[\varphi_{\lambda}(u)=\frac{1}{p}\rho_{a}(\nabla u)+\frac{1}{q}\|\nabla u\|_{q}^{q}-\frac{\lambda}{\tau}\|u\|_{\tau}^{\tau}-\int_{\Omega}F\left(z,u\right)dz,\ \text{ for all }\ u\in W_{0}^{1,\eta}(\Omega).\]
Evidently \(\varphi_{\lambda}\in C^{1}\left(W_{0}^{1,\eta}(\Omega)\right)\).
As mentioned in the introduction, we will address the challenges arising from the absence of a global regularity theory by employing critical groups. We will compute the critical groups of \(\beta_{\lambda}(\cdot)\) and \(\beta_{\lambda}^{\pm}(\cdot)\).
First of all, we will compute the critical groups of \(\beta_{\lambda}(\cdot)\) at \(0\).
**Proposition 4.1**.: _If hypotheses \((H_{0}),(H_{1})\) hold and \(\lambda\in(0,\lambda_{*})\), then_
\[C_{k}\left(\beta_{\lambda},0\right)=0,\ \ \text{for all}\ \ k\in\mathbb{N}_{0}.\]
Proof.: For any \(u\in W_{0}^{1,\eta}(\Omega)\), we have
\[\left|\varphi_{\lambda}(u)-\beta_{\lambda}(u)\right|\] \[= \left|\int_{\Omega}\left(\frac{\lambda}{\tau}|u|^{\tau}+F(z,u)-G_{\lambda}(z,u)\right)dz\right|\] \[\leq \int_{\left\{u<v_{\lambda}^{*}\right\}}\left|\frac{\lambda}{\tau}(|u|^{\tau}-|v_{\lambda}^{*}|^{\tau})-\lambda|v_{\lambda}^{*}|^{\tau-2}v_{\lambda}^{*}\left(u-v_{\lambda}^{*}\right)\right|dz\] \[+\int_{\left\{u<v_{\lambda}^{*}\right\}}\left|F(z,u)-F\left(z,v_{\lambda}^{*}\right)-f\left(z,v_{\lambda}^{*}\right)\left(u-v_{\lambda}^{*}\right)\right|dz\] \[+\int_{\left\{u_{\lambda}^{*}<u\right\}}\left|\frac{\lambda}{\tau}\Big{(}u^{\tau}-(u_{\lambda}^{*})^{\tau}\Big{)}-\lambda(u_{\lambda}^{*})^{\tau-1}\left(u-u_{\lambda}^{*}\right)\right|dz\] \[+\int_{\left\{u_{\lambda}^{*}<u\right\}}\left|F(z,u)-F\left(z,u_{\lambda}^{*}\right)-f\left(z,u_{\lambda}^{*}\right)\left(u-u_{\lambda}^{*}\right)\right|dz. \tag{4.4}\]
Note that, from the continuous embedding \(W_{0}^{1,\eta}(\Omega)\hookrightarrow L^{\tau}(\Omega)\), we have that for some \(c_{8}>0\)

\[\int_{\left\{u<v_{\lambda}^{*}\right\}}\left|\frac{\lambda}{\tau}(|u|^{\tau}-|v_{\lambda}^{*}|^{\tau})-\lambda|v_{\lambda}^{*}|^{\tau-2}v_{\lambda}^{*}\left(u-v_{\lambda}^{*}\right)\right|dz\leq\lambda c_{8}\|u\|^{\tau}. \tag{4.5}\]
Since \(F(z,\cdot)\) is an \(L^{\infty}\)-locally Lipschitz integrand and
\[|v_{\lambda}^{*}|\leq|u|,\ \ \text{on}\ \ \left\{u<v_{\lambda}^{*}\right\},\]
we obtain
\[\int_{\left\{u<v_{\lambda}^{*}\right\}}\left|F(z,u)-F\left(z,v_{\lambda}^{*} \right)-f\left(z,v_{\lambda}^{*}\right)\left(u-v_{\lambda}^{*}\right)\right| dz\leq c_{9}\|u\|\]
for some \(c_{9}>0\).
Similarly, we have for some \(c_{10}>0\)
\[\int_{\left\{u_{\lambda}^{*}<u\right\}}\left|\frac{\lambda}{\tau}(u^{\tau}-(u _{\lambda}^{*})^{\tau})-\lambda(u_{\lambda}^{*})^{\tau-1}(u-u_{\lambda}^{*}) \right|dz\leq\lambda c_{10}\|u\|^{\tau} \tag{4.6}\]
and for some \(c_{11}>0\)
\[\int_{\left\{u_{\lambda}^{*}<u\right\}}\left|F(z,u)-F(z,u_{\lambda}^{*})-f(z, u_{\lambda}^{*})(u-u_{\lambda}^{*})\right|dz\leq c_{11}\|u\|. \tag{4.7}\]
Returning to (4.4) and using (4.5), (4.6) and (4.7), we have for some \(c_{12}>0\) and all \(\|u\|\leq 1\)
\[|\varphi_{\lambda}(u)-\beta_{\lambda}(u)|\leq c_{12}\|u\|^{\tau}. \tag{4.8}\]
Next we conduct a similar estimation for the difference of the two derivatives. So let \(u,h\in W^{1,\eta}_{0}(\Omega)\), we have
\[|\langle\varphi^{\prime}_{\lambda}(u)-\beta^{\prime}_{\lambda}(u),h\rangle|\] \[\leq \int_{\left\{u<v^{*}_{\lambda}\right\}}\lambda\left||u|^{\tau-2}u-|v^{*}_{\lambda}|^{\tau-2}v^{*}_{\lambda}\right||h|dz+\int_{\left\{u<v^{*}_{\lambda}\right\}}|f(z,u)-f\left(z,v^{*}_{\lambda}\right)||h|dz\] \[+\int_{\left\{u^{*}_{\lambda}<u\right\}}\lambda\left|u^{\tau-1}-\left(u^{*}_{\lambda}\right)^{\tau-1}\right||h|dz+\int_{\left\{u^{*}_{\lambda}<u\right\}}|f(z,u)-f(z,u^{*}_{\lambda})||h|dz.\]
Note that \(|u|^{\tau-1}\in L^{\tau^{\prime}}(\Omega)\ \left(\frac{1}{\tau}+\frac{1}{\tau^{\prime}}=1\right)\), while from Proposition 2.1, we know that \(|h|\in L^{\tau}(\Omega)\). Also \(|u|\in L^{(q^{*})^{\prime}}(\Omega),|h|\in L^{q^{*}}(\Omega)\), here \(2\leq q^{*}\). So, using Hölder's inequality and Proposition 2.1, we have for some \(c_{13}>0\) and all \(\|u\|\leq 1\),
\[|\langle\varphi^{\prime}_{\lambda}(u)-\beta^{\prime}_{\lambda}(u),h\rangle| \leq(\lambda+1)c_{13}\|u\|\|h\|\Rightarrow\|\varphi^{\prime}_{\lambda}(u)- \beta^{\prime}_{\lambda}(u)\|_{*}\leq(\lambda+1)c_{13}\|u\|. \tag{4.9}\]
From (4.8) and (4.9), for any \(\varepsilon>0\), there exists \(\delta\in(0,1)\) such that
\[\|\varphi_{\lambda}-\beta_{\lambda}\|_{C^{1}(\bar{B}_{\delta})}\leq\varepsilon\]
where \(\bar{B}_{\delta}=\{u\in W^{1,\eta}_{0}(\Omega):\|u\|\leq\delta\}\).
We assume that \(K_{\beta_{\lambda}}\) is finite. Otherwise, on account of (4.3), we already have infinitely many nodal solutions and so we are done. Therefore we can use the \(C^{1}\)-continuity property of critical groups (see [4, Theorem 5.126]) and obtain
\[C_{k}(\varphi_{\lambda},0)=C_{k}(\beta_{\lambda},0)\quad\text{for all $k\in\mathbb{N}_{0}$}. \tag{4.10}\]
Since \(0\leq\lambda|x|^{\tau}\leq|x|^{\tau}+f(z,x)x\) for a.e. \(z\in\Omega\) and all \(|x|\leq\delta\), from [10, Proposition 9] and (4.10) we have
\[C_{k}(\varphi_{\lambda},0)=0\text{ for all $k\in\mathbb{N}_{0}$}\Rightarrow C_{k}( \beta_{\lambda},0)=0\text{ for all $k\in\mathbb{N}_{0}$}.\]
**Proposition 4.2**.: _If hypotheses \((H_{0}),(H_{1})\) hold and \(\lambda\in(0,\lambda_{*})\), then_
\[C_{k}(\beta_{\lambda},u^{*}_{\lambda})=C_{k}(\beta^{+}_{\lambda},u^{*}_{\lambda})\,\text{ and }\,C_{k}(\beta_{\lambda},v^{*}_{\lambda})=C_{k}(\beta^{-}_{\lambda},v^{*}_{\lambda})\,\text{ for all $k\in\mathbb{N}_{0}$}.\]
Proof.: Let \(W_{+}=\left\{u\in W^{1,\eta}_{0}(\Omega):0\leq u(z)\text{ for a.e. }z\in\Omega\right\}\). Note that
\[\beta_{\lambda}|_{W_{+}}=\beta^{+}_{\lambda}|_{W_{+}}. \tag{4.11}\]
For any \(u\in W^{1,\eta}_{0}(\Omega)\), by (4.1) and (4.11) we have
\[|\beta_{\lambda}(u)-\beta^{+}_{\lambda}(u)|\] \[\leq \int_{\Omega}|G_{\lambda}(z,u)-G^{+}_{\lambda}(z,u)|dz\] \[\leq \int_{\Omega}|G_{\lambda}(z,u)-G_{\lambda}(z,u^{*}_{\lambda})|dz+ \int_{\Omega}|G^{+}_{\lambda}(z,u^{*}_{\lambda})-G^{+}_{\lambda}(z,u)|dz. \tag{4.12}\]
We will now estimate the two integrals on the right-hand side of (4.12),
\[\int_{\Omega}|G_{\lambda}(z,u)-G_{\lambda}(z,u_{\lambda}^{*})|dz\] \[\leq \int_{\{u<v_{\lambda}^{*}\}}\left|\frac{\lambda}{\tau}\left(|v_{\lambda}^{*}|^{\tau}-(u_{\lambda}^{*})^{\tau}\right)+\lambda|v_{\lambda}^{*}|^{\tau-2}v_{\lambda}^{*}(u-v_{\lambda}^{*})\right|dz\] \[+\int_{\{u<v_{\lambda}^{*}\}}|F(z,v_{\lambda}^{*})-F(z,u_{\lambda}^{*})+f(z,v_{\lambda}^{*})(u-v_{\lambda}^{*})|\,dz\] \[+\int_{\{v_{\lambda}^{*}\leq u\leq u_{\lambda}^{*}\}}\left|\frac{\lambda}{\tau}(|u|^{\tau}-(u_{\lambda}^{*})^{\tau})+F(z,u)-F(z,u_{\lambda}^{*})\right|dz\] \[+\int_{\{u_{\lambda}^{*}<u\}}\left|\lambda(u_{\lambda}^{*})^{\tau-1}(u-u_{\lambda}^{*})+f(z,u_{\lambda}^{*})(u-u_{\lambda}^{*})\right|dz. \tag{4.13}\]
We make the following observations.
\(\bullet\) If \(|v_{\lambda}^{*}|\leq u_{\lambda}^{*}\), then \(|v_{\lambda}^{*}|^{\tau}-(u_{\lambda}^{*})^{\tau}\leq 0\).
If \(u_{\lambda}^{*}<|v_{\lambda}^{*}|\), since \(u_{\lambda}^{*},v_{\lambda}^{*}\in L^{\infty}(\Omega)\), we have for some \(c_{14}>0\)
\[0\leq|v_{\lambda}^{*}|^{\tau}-(u_{\lambda}^{*})^{\tau}\leq c_{14} \left\{\begin{array}{ll}(|v_{\lambda}^{*}|-u_{\lambda}^{*})^{\tau}&\text{ if }\tau\leq 2,\\ |v_{\lambda}^{*}|-u_{\lambda}^{*}&\text{ if }2<\tau.\end{array}\right.\]
On \(\{u<v_{\lambda}^{*}\}\), we have
\[|v_{\lambda}^{*}|\leq|u|.\]
Therefore on \(\{u<v_{\lambda}^{*}\}\), we have
\[0\leq|v_{\lambda}^{*}|^{\tau}-(u_{\lambda}^{*})^{\tau} \leq c_{14}\left\{\begin{array}{ll}(|u|-u_{\lambda}^{*})^{\tau}& \text{ if }\tau\leq 2,\\ |u|-u_{\lambda}^{*}&\text{ if }2<\tau.\end{array}\right.\] \[\leq c_{14}\left\{\begin{array}{ll}|u-u_{\lambda}^{*}|^{\tau}& \text{ if }\tau\leq 2,\\ |u-u_{\lambda}^{*}|&\text{ if }2<\tau.\end{array}\right.\]
Similarly on \(\{v_{\lambda}^{*}\leq u\leq u_{\lambda}^{*}\}\), we have for some \(c_{15}>0\)
\[||u|^{\tau}-(u_{\lambda}^{*})^{\tau}|\leq c_{15}\left\{\begin{array}{ll}|u- u_{\lambda}^{*}|^{\tau}&\text{ if }\tau\leq 2,\\ |u-u_{\lambda}^{*}|&\text{ if }2<\tau.\end{array}\right.\]
\(\bullet\)\(F(z,\cdot)\) is an \(L^{\infty}\)-locally Lipschitz integrand.
Using these observations in (4.13), we see that for some \(c_{16}>0\) and all \(u\) with \(\|u-u_{\lambda}^{*}\|\leq 1\) small,
\[\int_{\Omega}|G_{\lambda}(z,u)-G_{\lambda}\left(z,u_{\lambda}^{*} \right)|\,dz\leq c_{16}\left\|u-u_{\lambda}^{*}\right\|^{\tau}. \tag{4.14}\]
Also we have
\[\int_{\Omega}\left|G_{\lambda}^{+}\left(z,u_{\lambda}^{*}\right)-G_{\lambda}^{+}(z,u)\right|dz\] \[\leq \int_{\left\{u<v_{\lambda}^{*}\right\}}\left|G_{\lambda}\left(z,u_{\lambda}^{*}\right)\right|dz\] \[+ \int_{\left\{v_{\lambda}^{*}\leq u\leq u_{\lambda}^{*}\right\}}\left|\frac{\lambda}{\tau}\left(\left(u_{\lambda}^{*}\right)^{\tau}-\left(u^{+}\right)^{\tau}\right)+\left(F\left(z,u_{\lambda}^{*}\right)-F\left(z,u^{+}\right)\right)\right|dz\] \[+ \int_{\left\{u_{\lambda}^{*}<u\right\}}\left|\lambda\left(u_{\lambda}^{*}\right)^{\tau}+f\left(z,u_{\lambda}^{*}\right)u_{\lambda}^{*}-\frac{\lambda}{\tau}\left(u^{\tau}-\left(u_{\lambda}^{*}\right)^{\tau}\right)-\left(F(z,u)-F\left(z,u_{\lambda}^{*}\right)\right)\right|dz. \tag{4.15}\]
For \(\delta^{\prime}\in(0,1)\), let \(\bar{B}_{\delta^{\prime}}(u_{\lambda}^{*})=\left\{u\in W_{0}^{1,\eta}(\Omega):\|u-u_{\lambda}^{*}\|\leq\delta^{\prime}\right\}\). Recalling that \(v_{\lambda}^{*}\prec 0\prec u_{\lambda}^{*}\), we have
\[\int_{\left\{u<v_{\lambda}^{*}\right\}}\left|G_{\lambda}\left(z,u_{\lambda}^{ *}\right)\right|dz\to 0,\text{ and }\int_{\left\{u_{\lambda}^{*}<u\right\}}\left( \lambda\left(u_{\lambda}^{*}\right)^{\tau}+f\left(z,u_{\lambda}^{*}\right)u_{ \lambda}^{*}\right)dz\to 0\text{ as }\delta^{\prime}\to 0^{+}. \tag{4.16}\]
Also, as above, we have for some \(c_{17}>0\) and all \(u\in\bar{B}_{\delta^{\prime}}\left(u_{\lambda}^{*}\right)\) with \(\delta^{\prime}\in(0,1)\) small
\[\int_{\left\{v_{\lambda}^{*}\leq u\leq u_{\lambda}^{*}\right\}}\left|\frac{\lambda}{\tau}\left(\left(u_{\lambda}^{*}\right)^{\tau}-\left(u^{+}\right)^{\tau}\right)+\left(F\left(z,u_{\lambda}^{*}\right)-F(z,u^{+})\right)\right|dz\leq c_{17}\left\|u-u_{\lambda}^{*}\right\|^{\tau}. \tag{4.17}\]
Similarly we have for some \(c_{18}>0\), all \(u\in\bar{B}_{\delta^{\prime}}\left(u_{\lambda}^{*}\right)\) with \(\delta^{\prime}\in(0,1)\) small,
\[\int_{\left\{u_{\lambda}^{*}<u\right\}}\left|\frac{\lambda}{\tau}\left(u^{ \tau}-\left(u_{\lambda}^{*}\right)^{\tau}\right)+\left(F(z,u)-F\left(z,u_{ \lambda}^{*}\right)\right)\right|dz\leq c_{18}\|u-u_{\lambda}^{*}\|^{\tau}. \tag{4.18}\]
Returning to (4.15) and using (4.16), (4.17) and (4.18), we obtain, for some \(c_{19}>0\) and \(\delta^{\prime}\in(0,1)\) small,
\[\int_{\Omega}\left|G_{\lambda}^{+}\left(z,u_{\lambda}^{*}\right)-G_{\lambda}^ {+}(z,u)\right|dz\leq O\left(\delta^{\prime}\right)+c_{19}\left\|u-u_{\lambda} ^{*}\right\|^{\tau}. \tag{4.19}\]
From (4.14) and (4.19) it follows that
\[\left|\beta_{\lambda}(u)-\beta_{\lambda}^{+}(u)\right|\leqslant c_{20}\|u-u_ {\lambda}^{*}\|^{\tau}+O\left(\delta^{\prime}\right),\]
for some \(c_{20}>0\).
Therefore given \(\varepsilon>0\), we can find \(\delta_{0}^{\prime}\in(0,1)\) small such that
\[\left|\beta_{\lambda}(u)-\beta_{\lambda}^{+}(u)\right|\leqslant c_{20}\delta^ {\prime}+\frac{\varepsilon}{4}\quad\text{for all }\delta^{\prime}\in(0,\delta_{0}^{\prime}].\]
Hence if \(\delta_{0}^{\prime}\in\left(0,\frac{\varepsilon}{4c_{20}}\right)\), we have
\[\left|\beta_{\lambda}(u)-\beta_{\lambda}^{+}(u)\right|\leqslant\frac{ \varepsilon}{2}\quad\text{for all }u\in\bar{B}_{\delta^{\prime}}\left(u_{\lambda}^{*}\right),\ \delta^{\prime}\in(0,\delta_{0}^{\prime}]. \tag{4.20}\]
Next we estimate the difference of the two derivatives for \(u,h\in W^{1,\eta}_{0}(\Omega)\) as follows,
\[\left|\langle\beta^{\prime}_{\lambda}(u)-(\beta^{+}_{\lambda})^{ \prime}(u),h\rangle\right|\] \[\leq \int_{\Omega}\left|g_{\lambda}(z,u)-g^{+}_{\lambda}(z,u)\right| \left|h\right|dz\] \[\leq \int_{\Omega}\left|g_{\lambda}(z,u)-g_{\lambda}(z,u^{*}_{ \lambda})\right|\left|h\right|dz+\int_{\Omega}\left|g^{+}_{\lambda}(z,u^{*}_{ \lambda})-g^{+}_{\lambda}(z,u)\right|\left|h\right|dz. \tag{4.21}\]
We know that continuous convex (and concave) functions are locally Lipschitz (see, for example, [19, Corollary 5.1.23]). Therefore both \(g_{\lambda}(z,\cdot)\) and \(g^{+}_{\lambda}(z,\cdot)\) are \(L^{\infty}\)-locally Lipschitz integrands (see (4.1) and (4.2)). So, from (4.21) we have that for some \(c_{21}>0\),
\[\left|\langle\beta^{\prime}_{\lambda}(u)-(\beta^{+}_{\lambda})^{ \prime}(u),h\rangle\right|\leq c_{21}\int_{\Omega}|u-u^{*}_{\lambda}||h|dz. \tag{4.22}\]
From Proposition 2.1, we know that \(W^{1,\eta}_{0}(\Omega)\hookrightarrow L^{q^{*}}(\Omega)\) continuously. By hypotheses \((H_{0})\), \(2\leq q^{*}\), hence \((q^{*})^{\prime}=\frac{q^{*}}{q^{*}-1}\leq 2\) and so \(W^{1,\eta}_{0}(\Omega)\hookrightarrow L^{(q^{*})^{\prime}}(\Omega)\) continuously. In (4.22), we can use Hölder's inequality and the continuous embeddings \(W^{1,\eta}_{0}(\Omega)\hookrightarrow L^{q^{*}}(\Omega)\), \(W^{1,\eta}_{0}(\Omega)\hookrightarrow L^{(q^{*})^{\prime}}(\Omega)\) to obtain for some \(c_{22}>0\)
\[\left|\langle\beta^{\prime}_{\lambda}(u)-\left(\beta^{+}_{\lambda }\right)^{\prime}(u),h\rangle\right|\leq c_{22}\left\|u-u^{*}_{\lambda}\right\| \left\|h\right\|\] \[\Rightarrow \left\|\beta^{\prime}_{\lambda}(u)-\left(\beta^{+}_{\lambda} \right)^{\prime}(u)\right\|_{*}\leq c_{22}\left\|u-u^{*}_{\lambda}\right\|. \tag{4.23}\]
From (4.20) and (4.23) we see that given \(\varepsilon>0\), we can find \(\delta^{\prime}>0\) such that
\[\left\|\beta_{\lambda}-\beta^{+}_{\lambda}\right\|_{C^{1}(\bar{B}_{\delta^{ \prime}}(u^{*}_{\lambda}))}\leq\varepsilon.\]
As before, the \(C^{1}\)-continuity property of the critical groups implies that
\[C_{k}(\beta_{\lambda},u^{*}_{\lambda})=C_{k}(\beta^{+}_{\lambda},u^{*}_{ \lambda})\quad\text{for all }k\in\mathbb{N}_{0}.\]
In a similar fashion, we show that
\[C_{k}(\beta_{\lambda},v^{*}_{\lambda})=C_{k}(\beta^{-}_{\lambda},v^{*}_{ \lambda})\quad\text{for all }k\in\mathbb{N}_{0}.\]
Now we are ready to produce nodal solutions which vanish asymptotically as \(\lambda\to 0^{+}\).
**Proposition 4.3**.: _If hypotheses \((H_{0}),(H_{1})\) hold and \(\lambda\in(0,\lambda_{*})\), then problem \((P_{\lambda})\) has a nodal solution \(y_{\lambda}\in[v^{*}_{\lambda},u^{*}_{\lambda}]\) and_
\[y_{\lambda}\to 0\text{ in }W^{1,\eta}_{0}(\Omega)\cap L^{\infty}(\Omega)\text{ as } \lambda\to 0^{+}.\]
Proof.: From (4.1) and (4.2) it is clear that \(\beta_{\lambda}^{+}(\cdot)\) is coercive. Also using Proposition 2.1, we see that \(\beta_{\lambda}^{+}(\cdot)\) is sequentially weakly lower semi-continuous. So, we can find \(\widetilde{u}_{\lambda}^{*}\in W_{0}^{1,\eta}(\Omega)\) such that
\[\beta_{\lambda}^{+}(\widetilde{u}_{\lambda}^{*})=\inf\Big{\{}\beta_{\lambda}^{ +}(u):u\in W_{0}^{1,\eta}(\Omega)\Big{\}}. \tag{4.24}\]
If \(u\in C_{0}^{1}(\overline{\Omega})\) with \(u(z)>0\) for all \(z\in\Omega\), then for \(t\in(0,1)\) small, we have \(0\leq tu\leq\delta\) with \(\delta>0\) as in hypotheses \((H_{1})(2)\). We have
\[0\leq F(z,tu(z))\quad\text{for a.e. }z\in\Omega.\]
It follows from (4.1), (4.2), \(t\in(0,1)\) and \(q<p\) that
\[\beta_{\lambda}^{+}(tu)\leq\frac{t^{q}}{q}\rho_{\eta}(\nabla u)-\frac{\lambda t ^{\tau}}{\tau}\|u\|_{\tau}^{\tau}\]
Since \(\tau<q\), choosing \(t\in(0,1)\) even smaller if necessary, we infer that
\[\beta_{\lambda}^{+}(tu)<0,\] \[\Rightarrow \beta_{\lambda}^{+}(\widetilde{u}_{\lambda}^{*})<0=\beta_{\lambda}^{+}(0),\] \[\Rightarrow \widetilde{u}_{\lambda}^{*}\neq 0.\]
From (4.24) we have that \(\widetilde{u}_{\lambda}^{*}\in K_{\beta_{\lambda}^{+}}\). Therefore (4.3) implies that \(\widetilde{u}_{\lambda}^{*}=u_{\lambda}^{*}\). So, from Proposition 4.2, we can write that
\[C_{k}(\beta_{\lambda}^{+},u_{\lambda}^{*})=\delta_{k,0}\mathbb{Z} \quad\text{for all }k\in\mathbb{N}_{0},\] \[\Rightarrow C_{k}(\beta_{\lambda},u_{\lambda}^{*})=\delta_{k,0}\mathbb{Z} \quad\text{for all }k\in\mathbb{N}_{0}. \tag{4.25}\]
Similarly, using this time \(\beta_{\lambda}^{-}\), we obtain
\[C_{k}(\beta_{\lambda},v_{\lambda}^{*})=\delta_{k,0}\mathbb{Z}\quad\text{for all }k\in\mathbb{N}_{0}. \tag{4.26}\]
From (4.1) we see that \(\beta_{\lambda}(\cdot)\) is coercive. Then [17, Proposition 6.2.24] implies that
\[C_{k}(\beta_{\lambda},\infty)=\delta_{k,0}\mathbb{Z}\quad\text{for all }k\in\mathbb{N}_{0}. \tag{4.27}\]
Suppose that \(K_{\beta_{\lambda}}=\{0,u_{\lambda}^{*},v_{\lambda}^{*}\}\). From Proposition 4.1, (4.25), (4.26), (4.27) and the Morse relation with \(t=-1\) (see (2.1)), the origin contributes nothing, each of \(u_{\lambda}^{*}\) and \(v_{\lambda}^{*}\) contributes \((-1)^{0}\), and the right-hand side equals \((-1)^{0}\); hence we have
\[2(-1)^{0}=(-1)^{0},\]
which is a contradiction. Therefore there exists \(y_{\lambda}\in W_{0}^{1,\eta}(\Omega)\) such that
\[y_{\lambda}\in K_{\beta_{\lambda}}\subseteq[v_{\lambda}^{*},u_{\lambda}^{*}],y _{\lambda}\notin\{0,u_{\lambda}^{*},v_{\lambda}^{*}\},\]
thus, we know that \(y_{\lambda}\) is a nodal solution of problem \((P_{\lambda})\) for \(\lambda\in(0,\lambda_{*})\).
On account of Proposition 3.2, we have
\[u_{\lambda}^{*},v_{\lambda}^{*}\to 0\text{ in }W_{0}^{1,\eta}(\Omega)\cap L^{ \infty}(\Omega)\ \text{ as }\ \lambda\to 0^{+},\] \[\Rightarrow y_{\lambda}\to 0\text{ in }W_{0}^{1,\eta}(\Omega)\cap L^{ \infty}(\Omega)\ \text{ as }\ \lambda\to 0^{+}.\]
Summarizing our findings for problem \((P_{\lambda})\), we can give the proof of Theorem 1.1. Note that we provide sign information for all solutions, that the solutions are ordered, and that we describe their asymptotic behavior as \(\lambda\to 0^{+}\).
## 5 Proof of Theorem 1.1
From Proposition 3.2, 3.3 and 4.3, we know that for all \(\lambda>0\) small, problem \((P_{\lambda})\) has at least three solutions \(v_{\lambda}^{*},y_{\lambda},u_{\lambda}^{*}\in W_{0}^{1,\eta}(\Omega)\cap L^{ \infty}(\Omega)\) such that
\[v_{\lambda}^{*}\leqslant y_{\lambda}\leqslant u_{\lambda}^{*},\] \[v_{\lambda}^{*}\prec 0,\ y_{\lambda}\ \text{is nodal},\ 0\prec u_{ \lambda}^{*},\] \[\text{and}\ v_{\lambda}^{*},y_{\lambda},u_{\lambda}^{*}\to 0\ \text{in}\ W_{0}^{1,\eta}(\Omega)\cap L^{\infty}(\Omega)\ \text{as}\ \lambda\to 0^{+}.\]
The proof is completed.
## Acknowledgements
C. Ji was partially supported by National Natural Science Foundation of China (No. 12171152).
|
2305.06936 | An Option-Dependent Analysis of Regret Minimization Algorithms in
Finite-Horizon Semi-Markov Decision Processes | A large variety of real-world Reinforcement Learning (RL) tasks is
characterized by a complex and heterogeneous structure that makes end-to-end
(or flat) approaches hardly applicable or even infeasible. Hierarchical
Reinforcement Learning (HRL) provides general solutions to address these
problems thanks to a convenient multi-level decomposition of the tasks, making
their solution accessible. Although often used in practice, few works provide
theoretical guarantees to justify this outcome effectively. Thus, it is not yet
clear when to prefer such approaches compared to standard flat ones. In this
work, we provide an option-dependent upper bound to the regret suffered by
regret minimization algorithms in finite-horizon problems. We illustrate that
the performance improvement derives from the planning horizon reduction induced
by the temporal abstraction enforced by the hierarchical structure. Then,
focusing on a sub-setting of HRL approaches, the options framework, we
highlight how the average duration of the available options affects the
planning horizon and, consequently, the regret itself. Finally, we relax the
assumption of having pre-trained options to show how in particular situations,
learning hierarchically from scratch could be preferable to using a standard
approach. | Gianluca Drappo, Alberto Maria Metelli, Marcello Restelli | 2023-05-10T15:00:05Z | http://arxiv.org/abs/2305.06936v1 | # An Option-Dependent Analysis of Regret Minimization Algorithms
###### Abstract
A large variety of real-world Reinforcement Learning (RL) tasks is characterized by a complex and heterogeneous structure that makes end-to-end (or flat) approaches hardly applicable or even infeasible. Hierarchical Reinforcement Learning (HRL) provides general solutions to address these problems thanks to a convenient multi-level decomposition of the tasks, making their solution accessible. Although often used in practice, few works provide theoretical guarantees to justify this outcome effectively. Thus, it is not yet clear when to prefer such approaches compared to standard flat ones. In this work, we provide an option-dependent upper bound to the regret suffered by regret minimization algorithms in finite-horizon problems. We illustrate that the performance improvement derives from the planning horizon reduction induced by the temporal abstraction enforced by the hierarchical structure. Then, focusing on a sub-setting of HRL approaches, the options framework, we highlight how the average duration of the available options affects the planning horizon and, consequently, the regret itself. Finally, we relax the assumption of having pre-trained options to show how in particular situations, learning hierarchically from scratch could be preferable to using a standard approach.
## 1 Introduction
Hierarchical Reinforcement Learning (HRL, Pateria et al., 2021) is a learning paradigm that decomposes a long-horizon Reinforcement Learning (RL, Sutton and Barto, 2018) task into a sequence of potentially shorter and simpler sub-tasks. The sub-tasks themselves could be further divided, generating a hierarchical structure organized in an arbitrary number of levels. Each of these defines a different problem, where the original action space is replaced by the set of sub-tasks available on the lower level, and the same could be replicated for multiple levels. However, the actual state transitions are induced only once the control reaches the leaf nodes, where the policies choose among the primitive actions (i.e., actions of the original MDP on top of which the hierarchy is constructed). For the higher levels, once a sub-task is selected, the control passes to the corresponding internal policy until its termination. This introduces the concept of _temporal abstraction_ (Precup and Sutton, 1997): from the high-level policy's perspective, an action persists for a certain time, resulting in an actual reduction of the original planning horizon.
Several algorithms demonstrate outstanding performance compared to standard RL approaches in several long-horizon problems (Levy et al., 2019, Vezhnevets et al., 2017, Bacon et al., 2017, Nachum et al., 2018). However, such evidence is mainly emerging in practical applications, and the theoretical understanding of the inherent reasons for these advantages is still underdeveloped. Only a few papers tried to justify these advantages theoretically, focusing on different aspects. For instance, Mann et al. (2015) studies the convergence of an algorithm that uses temporally extended actions instead of primitive ones. Fruit et al. (2017) and the extension Fruit and Lazaric (2017) focus on the exploration benefit of using options in average reward problems. More recently, Wen et al. (2020) show how the MDP structure affects regret. In this paper, we seek to further bridge this theory-practice gap, following the intuition that a hierarchical structure in a finite-horizon problem would positively affect the sample complexity by reducing the planning horizon. This could help to discriminate among situations in which a hierarchical approach could be more effective than a standard one for this particular family of problems.
**Contributions** The contributions of the paper can be summarized as follows. (1) We propose a new algorithm for the finite-horizon setting that exploits a set of _fixed_ options
[Sutton et al., 1999] to solve the HRL problem. (2) We conducted a regret analysis of this algorithm, providing an option-dependent upper bound, which, to the best of our knowledge, is the first in the Finite Horizon setting. This result could be used to define new objective functions for options discovery methods that would search for options that minimize this regret. (3) For the sake of our analysis, we formulate the notion of Finite Horizon SMDP and a _performance difference lemma_ for this setting. (4) Lastly, we provide an algorithm to relax the assumption of having options with fixed policies, and we demonstrate that there are situations in which such an approach could provide better guarantees in terms of sample complexity.
**Outline** In the following sections, we first introduce the problem and the notion used. Then, Section 3 describes the main motivation behind this work and the focus on the finite-horizon settings, and Section 4 describes the new formalism introduced. The algorithm and its extension are described in Section 5, and the main result and its derivation are discussed in Sections 6 and 7. Finally, we describe in detail the related works (Section 8) and discuss some further directions of research beyond the present work.
## 2 Preliminaries
In this section, we provide the necessary background that will be employed in the remainder of the paper.
**Finite-Horizon MDPs** A Finite Horizon Markov Decision Process [MDP, Puterman, 2014] is defined as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},p,r,H)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the finite state and the primitive action spaces, respectively, \(p(s^{\prime}|s,a,h)\) is the transition probability function defining the probability of transitioning to state \(s^{\prime}\in\mathcal{S}\) by taking action \(a\in\mathcal{A}\) in state \(s\in\mathcal{S}\) at stage \(h\in[H]\). \(r(s,a,h)\) is the reward function that evaluates the quality of action \(a\in\mathcal{A}\) when taken in state \(s\in\mathcal{S}\) at stage \(h\in[H]\), and \(H\) is the horizon, which defines the duration of each episode of interaction with the environment. The behavior of an agent is modeled by a deterministic policy \(\pi:\mathcal{S}\times[H]\rightarrow\mathcal{A}\) that maps states \(s\in\mathcal{S}\) and stages \(h\in[H]\) to actions.
**Semi-MDP** A Semi-Markov Decision Process [SMDP, Baykal-Gürsoy, 2010, Cinlar, 2013] is a generalization of the MDP formalism. It admits _temporally extended_ actions, which, contrary to _primitive_ ones (i.e., actions that execute for a single time step), can execute for a certain time during which the agent has no control over the decision process. A usual notion when treating SMDPs is the _duration_ or _holding time_, \(\tau(s,a,h)\), which is the number of primitive steps taken inside a temporally extended action.
HRL builds upon the theory of Semi-MDPs, characterizing the concept of temporally extended action with basically two main formalisms [Pateria et al., 2021]: sub-tasks [Dietterich, 2000] and options [Sutton et al., 1999]. For the sake of this paper, we focus on the options framework.
**Options** An option [Sutton et al., 1999] is a possible formalization of a temporally extended action. It is characterized by three components \(o=(\mathcal{I}_{o},\beta_{o},\pi_{o})\). \(\mathcal{I}_{o}\subseteq\mathcal{S}\times[H]\) is the subset of states and stages pairs \((s,h)\) in which the option can start, \(\beta_{o}:\mathcal{S}\times[H]\rightarrow[0,1]\) defines the probability that an option terminates in a specific state-stage pair, \(\pi_{o}:\mathcal{S}\times[H]\rightarrow\mathcal{A}\) is the policy executed until its termination. An example of an option could be a pre-trained policy to execute a specific task in a control problem, such as picking up an object.
Exactly as stated by Sutton et al. [1999, Theorem 1], _an MDP in which primitive actions \(\mathcal{A}\) are replaced by options \(\mathcal{O}\) becomes an SMDP._ In this paper, we consider a hierarchical approach working on a two-level hierarchy. At the top, the goal is to find the optimal policy \(\mu:\mathcal{S}\times[H]\rightarrow\mathcal{O}\), which determines the optimal option for each state-stage pair. Once an option is selected, its policy is executed until termination, outside the SMDP's scope, and the control then returns to the high-level policy.
**Assumption 2.1** (Admissible options).: The set of options \(\mathcal{O}\) is assumed admissible, i.e., all options terminate in finite time with probability 1 and, \(\{\forall o\in\mathcal{O},\,s\in\mathcal{S},\,\text{and}\,h\in[H],\;\exists o^{ \prime}\in\mathcal{O}:\beta_{o}(s,h)>0\text{ and }(s,h)\in\mathcal{I}_{o^{\prime}}\}\).
Lastly, we introduce an essential quantity for our analysis.
**Regret** The _regret_ is a performance metric for algorithms frequently used in provably efficient RL. For any starting state \(s\in\mathcal{S}\), and up to the episode \(K\), it is defined as:
\[\text{\emph{Regret(K)}}\overset{\text{\emph{def}}}{=}\sum_{k=1}^{K}V^{*}(s,1 )-V^{\mu_{k}}(s,1) \tag{1}\]
and evaluates the performance of the policy learned until episode \(k\), \(V^{\mu_{k}}\), compared to the value of an optimal policy, \(V^{*}\).
### Notation
In the following, we will use \(\tilde{O}(\cdot)\) to indicate quantities that depend on \((\cdot)\) up to logarithmic terms. \(\mathbb{1}\,(x=a)\) defines the indicator function
\[\mathbb{1}(x=a)\overset{\text{\emph{def}}}{=}\begin{cases}0,&\text{if }x\neq a\\ 1,&\text{if }x=a\end{cases}\]
In the analysis, we denote optimistic terms with \(\sim\) and empirical ones with \(\wedge\); e.g., \(\tilde{p}\) and \(\hat{r}\) are, respectively, the optimistic transition model and the estimated reward function.
## 3 Motivation and intuition
Usually, in Reinforcement Learning, the complexity of a problem is highly correlated with the planning horizon, and this is even more evident in finite-horizon MDPs. The regret analyses in the literature provide both lower and upper bounds on the regret paid by an algorithm in this setting, and both exhibit a dependency on \(H\).
Here comes our intuition: by using a hierarchical approach, we can intrinsically reduce the planning horizon, because the number of decisions taken in \(H\) time steps is scaled by a term closely related to the average duration of each action; thus, the complexity scales with this quantity as well. In addition, if the sub-tasks themselves need to be learned because their policies are not provided, a simplification can also be induced in these new problems. Under certain assumptions, they could have shorter horizons, and during training an agent can focus just on smaller regions of the entire state space. Furthermore, the learning could be further guided by an additional reward that better specifies each individual sub-problem.
Starting from this intuition, we analyze the performance of an algorithm in a Finite Horizon Semi-MDP, considering a set of pre-trained options and, afterward, its extension, which incorporates a first phase of options learning. Lastly, we provide a comparative study with its flat counterpart.
## 4 Finite horizon SMDP
In this section, we present a new formalism, _Finite-Horizon Semi-Markov Decision Processes_, that combines the notion used in FH-MDP with the concept of temporal abstraction.
A finite-horizon semi-MDP is defined as a tuple \(\mathcal{SM}=(\mathcal{S},\mathcal{O},p,r,H)\), where \(\mathcal{S}\) and \(\mathcal{O}\) are the finite state and the temporally extended action spaces, respectively, \(p(s^{\prime},h^{\prime}|s,o,h)\) is the probability of ending to state \(s^{\prime}\in\mathcal{S}\), after \((h^{\prime}-h)\) steps, by playing the temporally extended action \(o\in\mathcal{O}\) in state \(s\in\mathcal{S}\) at stage \(h\in[H]\). On the other hand, \(r(s,o,h)\) is the expected reward accumulated until the termination of the temporally extended action \(o\in\mathcal{O}\) played in state \(s\in\mathcal{S}\) at stage \(h\in[H]\) of the episode. Finally, \(H\) is the horizon of interaction, and still \(\tau(s,o,h)\) is the number of primitive steps taken inside the temporally extended action. The agent's behavior is modeled by a deterministic policy \(\mu:\mathcal{S}\times[H]\rightarrow\mathcal{O}\) mapping a state \(s\in\mathcal{S}\) and a stage \(h\in[H]\) to a temporally extended action. The goal of the agent is to find a policy \(\mu^{*}\) that maximizes the value function, defined as the expected sum of the rewards collected over the horizon of interaction and recursively defined as:
\[V^{\mu}(s,h)=\underset{(s^{\prime},h^{\prime})\sim p(\cdot|s,\mu(s,h),h)}{ \mathbb{E}}\Big{[}r(s,\mu(s,h),h)+V^{\mu}(s^{\prime},h^{\prime})\Big{]},\]
with the convention that \(V^{\mu}(s,H)=0\). The value function of any optimal policy is denoted by \(V^{*}(s,h)\coloneqq V^{\mu^{*}}(s,h)\).
## 5 Algorithms
FH-SMDP-UCRL is a variant of the algorithm presented in Fruit and Lazaric (2017), which in turn is inspired by UCRL2 (Auer et al., 2008), adapted to FH-SMDPs. This family of algorithms implements the principle of "_optimism in the face of uncertainty_", which states that, when interacting with unknown environments--with an unknown model--the decisions have to be guided by a trade-off between what we believe is the best option and a term representing the uncertainty of our estimates. More formally, the so-called _exploration bonus_ is introduced, which quantifies the level of uncertainty in our estimates of the model, computed from the observed samples. This exploration bonus is used to regulate the _exploration-exploitation_ dilemma, inducing the algorithm to explore regions of the space with high uncertainty instead of sticking to what currently seems to be the optimal solution, thus overcoming situations in which the optimal solution resides in a region not yet discovered.

However, a direct application of these algorithms to our setting is infeasible, as they are designed for the infinite-horizon average-reward setting. Due to the lack of methods operating in our setting, we design a new algorithm for finite-horizon SMDPs following the same paradigm.
As displayed by Algorithm 1, at each episode \(k\) we estimate the SMDP model by computing, from the samples collected up to episode \(k\), the empirical transition probability \(\hat{p}_{k}\) and the empirical reward function \(\hat{r}_{k}\).
\[\hat{p}_{k}(s^{\prime},h^{\prime}|s,o,h)=\frac{\sum_{i=1}^{k-1}\mathbb{1}\left((s,o,h,s^{\prime},h^{\prime})_{i}=(s,o,h,s^{\prime},h^{\prime})\right)}{n_{k}(s,o,h)} \tag{2}\]
\[\hat{r}_{k}(s,o,h)=\frac{\sum_{i=1}^{k-1}r_{i}(s,o,h)}{n_{k}(s,o,h)} \tag{3}\]
We then define the confidence intervals of these two quantities, \(\beta_{k}^{r}\) and \(\beta_{k}^{p}\), respectively, as
\[\beta_{k}^{r}(s,o,h) \propto \sqrt{\frac{2\hat{\mathbb{Var}}(r)\ln(2/\delta)}{n_{k}(s,o,h)}}+\frac{7\ln(2/\delta)}{3(n_{k}(s,o,h)-1)}, \tag{4}\] \[\beta_{k}^{p}(s,o,h) \propto \sqrt{\frac{S\log\big{(}\frac{n_{k}(s,o,h)}{\delta}\big{)}}{n_{k}(s,o,h)}}, \tag{5}\]
where \(\hat{\mathbb{Var}}(r)\) is the sample variance of the reward function. From the estimates and the confidence intervals just defined, we can build the confidence sets \(B_{k}^{r}\) and \(B_{k}^{p}\), which contain the true model with high probability. Let \(\mathcal{SM}_{k}\) be the set of plausible SMDPs, characterized by rewards and transitions within the confidence sets. With \(\mathcal{SM}_{k}\) and an adaptation of _extended value iteration_ (Auer et al., 2008) to FH-SMDPs (Algorithm 2), we can compute the optimistic policy \(\tilde{\mu}_{k}\) and the corresponding optimistic value function \(\tilde{V}^{\mu_{k}}\). Then, by playing this policy for an entire episode, we collect new samples and restart the process for episode \(k+1\).
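For concreteness, the estimation step of Equations 2-5 could be implemented as in the following Python sketch. This is our own illustrative code, not the authors': names such as `estimate_model` are hypothetical, and the bonuses are kept only up to the proportionality constants of Equations 4 and 5.

```
import math
from collections import defaultdict

def estimate_model(data, delta):
    """Empirical SMDP model and exploration bonuses (Eqs. 2-5, up to
    constants) from tuples (s, o, h, s_next, h_next, reward)."""
    n = defaultdict(int)          # visit counts n_k(s, o, h)
    p_counts = defaultdict(int)   # counts of full transitions (s, o, h, s', h')
    r_sum = defaultdict(float)    # cumulative rewards per (s, o, h)
    r_sq = defaultdict(float)     # squared rewards, for the sample variance

    for (s, o, h, s2, h2, r) in data:
        n[(s, o, h)] += 1
        p_counts[(s, o, h, s2, h2)] += 1
        r_sum[(s, o, h)] += r
        r_sq[(s, o, h)] += r * r

    p_hat = {key: c / n[key[:3]] for key, c in p_counts.items()}   # Eq. 2
    r_hat = {key: r_sum[key] / cnt for key, cnt in n.items()}      # Eq. 3

    S = len({t[0] for t in data} | {t[3] for t in data})           # observed states
    beta_r, beta_p = {}, {}
    for key, cnt in n.items():
        var = max(r_sq[key] / cnt - r_hat[key] ** 2, 0.0)          # sample variance of r
        beta_r[key] = (math.sqrt(2 * var * math.log(2 / delta) / cnt)
                       + 7 * math.log(2 / delta) / (3 * max(cnt - 1, 1)))  # Eq. 4
        beta_p[key] = math.sqrt(S * math.log(cnt / delta) / cnt)           # Eq. 5
    return p_hat, r_hat, beta_r, beta_p
```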
### Option Learning
By relaxing the assumption of having a set of pre-trained options, and considering as known just their initial-state sets and termination conditions, we can highlight the characteristics of problems that are more suited to be solved with a hierarchical approach, even when no pre-trained policies are provided.
We present a model-based algorithm divided into two phases, which first learns each option policy individually and then exploits them to solve the SMDP with FH-SMDP-UCRL. As Algorithm 3 shows, each option is considered as a single FH-MDP, defined from its initial-state set and termination probability as \(\mathcal{M}_{o}=(\mathcal{S}_{o},\mathcal{A}_{o},p,r_{o},H_{o})\), where \(S_{o}\subseteq S\), \(A_{o}\subseteq A\), and \(H_{o}\leq H\); this means that each option operates on a restricted portion of the original problem, for a certain fixed horizon \(H_{o}\). The option's optimal policy is the policy of the corresponding sub-FH-MDP computed until episode \(K_{o}\), the number of episodes assigned to each option (see the sketch after Assumption 5.1).
Nevertheless, if no assumption on the reward function is made, the options' optimal policies could be sub-optimal with respect to the optimal policy computed by a standard approach on that portion of the MDP, since the option's scope is limited to a certain part of the MDP and it receives no feedback on what happens after its termination.
Therefore we need to state:
**Assumption 5.1**.: Given an MDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},p,r,H)\) and a set of options \(o\in\mathcal{O}\). Define \(\pi_{o}^{*}\) as the optimal policy of the option \(o\) learned individually on the sub-MDP \(\mathcal{M}_{o}=(\mathcal{S}_{o},\mathcal{A}_{o},p,r_{o},H_{o})\) with \(S_{o}\subseteq S\), \(A_{o}\subseteq A\), and \(H_{o}\leq H\). The reward function \(r_{o}\) of the sub-MDP \(\mathcal{M}_{o}\), which could differ from \(r\), ensures that
\[\pi^{*}(s)=\pi_{o}^{*}(s)\;\;\forall s\in S_{o}\]
with \(\pi^{*}(s)\) the optimal policy on \(\mathcal{M}\).
This assumption guarantees that the computed option's optimal policy equals the optimal policy of the entire problem in the option's region.
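Since the listing of Algorithm 3 is not reproduced above, the following Python skeleton sketches the two-phase procedure exactly as described in the text; the helpers `learn_option_policy` (the sub-FH-MDP learner) and `fh_smdp_ucrl` (Algorithm 1 run with the learned policies held fixed) are hypothetical stand-ins.

```
def two_phase_hrl(options, K, K_o, learn_option_policy, fh_smdp_ucrl):
    """Phase 1: learn every option policy on its own sub-FH-MDP M_o for
    K_o episodes. Phase 2: freeze the learned policies and run
    FH-SMDP-UCRL (Algorithm 1) on the induced SMDP."""
    # Phase 1: each option o is treated as a separate FH-MDP
    # M_o = (S_o, A_o, p, r_o, H_o) and solved independently.
    for o in options:
        o.policy = learn_option_policy(o.sub_mdp, episodes=K_o)

    # Phase 2: with fixed option policies, the high-level problem is an
    # FH-SMDP, solved with the remaining episode budget.
    K_2 = K - K_o * len(options)
    return fh_smdp_ucrl(options, episodes=K_2)
```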
```
Input: \(\mathcal{S}\), \(\mathcal{O}\) with fixed option policies, \(H\)
Initialize \(\mu_{0}\) at random and \(Q_{1}(s,o,h)=0\) for all \((s,o,h)\in\mathcal{S}\times\mathcal{O}\times[H]\)
1: Execute \(\mu_{0}\) for \(H\) steps and collect tuples \((s,o,h,s^{\prime},h^{\prime})\) and \(r(s,o,h)\) to store in \(\mathcal{D}_{1}\)
2: for \(k=1,\ldots,K\) do
3:   Compute the counts \(n_{k}(s,o,h)\)
4:   Estimate the empirical SMDP \(\widehat{\mathcal{SM}}_{k}=(\mathcal{S},\mathcal{O},\hat{p}_{k},\hat{r}_{k})\) with Equations 2 and 3
5:   Compute the confidence sets \(B_{k}^{r}(s,o,h)\) and \(B_{k}^{p}(s,o,h)\) using the confidence intervals (Eq. 4, 5)
6:   Plan by backward induction for \(\mu_{k}\), using the finite-horizon adaptation of _Extended Value Iteration_ (Auer et al., 2008) in Algorithm 2
7:   for \(h=1,\ldots,H\) do
8:     Execute \(o=\mu_{k}(s,h)\) until it terminates
9:     Observe \((s^{\prime},h^{\prime})\) and \(r(s,o,h)\)
10:    Add the tuple \((s,o,h,s^{\prime},h^{\prime})_{k}\) and \(r_{k}(s,o,h)\) to \(\mathcal{D}_{k+1}\)
11:    Set \(h=h^{\prime}\)
12:  end for
13: end for
```
**Algorithm 1** FH-SMDP-UCRL
## 6 Main Results
In this section, we present the main contributions of the paper, which in particular are an upper bound on the regret of FH-SMDP-UCRL that highlights particular problem-dependent features and, an upper bound on the regret of its extension including a first phase of options learning.
**Theorem 6.1**.: _Considering a non-stationary Finite Horizon SMDP \(\mathcal{SM}\) and a set of options \(\mathcal{O}\), with bounded primitive reward \(r(s,a)\in[0,1]\). The regret suffered by algorithm FH-SMDP-UCRL, in \(K\) episodes of horizon \(H\) is bounded as:_
\[\text{Regret(K)}\leq\tilde{O}\left(\left(\sqrt{SOKd^{2}}\right)\left(\overline{ T}+\sqrt{S}H\right)\right)\]
_with probability \(1-\delta\). Where:_
\[\overline{T} =\max_{s,o,h}\sqrt{\mathbb{E}[\tau(o,s,h)^{2}]}\] \[=\max_{s,o,h}\sqrt{\mathbb{E}[\tau(o,s,h)]^{2}+\mathrm{Var}[\tau(s,o,h)]},\]
\(\tau\) _is the holding time, and \(d\) describes the expected number of decisions taken in one episode that is \(d\approx H/\bar{\tau}\), with \(\bar{\tau}\) the average duration of the set of options._
This result introduces one of the main contributions, an option-dependent upper bound on the regret in FH-MDP with options, not worst-case as in Fruit and Lazaric (2017). A dependency on the properties of an option set is introduced, embodied into both \(\overline{T}\) and \(d\). The former gives the same interesting consideration already underlined by Fruit
and Lazaric (2017), whereby the extra cost of having actions with random duration is only partially additive rather than multiplicative. On the other hand, the latter emphasizes the real benefit of using a hierarchical approach over a flat one. The longer the expected duration of the set of options, the more the effective horizon of the SMDP, \(d\), decreases. Indeed, \(d\approx\frac{H}{\tau}\), with \(\bar{\tau}\) the average holding time of the options. Notice that there is a \(\sqrt{d}\) worsening factor, which comes from the fact that we consider a non-stationary MDP in the analysis. This outcome is common in finite-horizon literature Azar et al. (2017); Dann et al. (2017); Zanette and Brunskill (2018), where, instead, the regret increases by a factor of \(\sqrt{H}\).
Let's now analyze the regret suffered by the two-phase algorithm that first learns each option policy and then solves the corresponding SMDP.
```
1: Input: \(\mathcal{S},\mathcal{O},B_{k}^{r},B_{k}^{p}\)
2: Set \(Q_{H+1}(s,o)=0\) for all \((s,o)\in\mathcal{S}\times\mathcal{O}\)
3: for \(h=H,\ldots,1\) do
4:   for \((s,o)\in\mathcal{S}\times\mathcal{O}\) do
5:     Compute, summing over the possible arrival stages \(h^{\prime}=h+1,\ldots,H+1\),
       \(Q_{h}(s,o)=\max_{r\in B_{k}^{r}(s,o,h)}r+\max_{p\in B_{k}^{p}(s,o,h)}\sum_{h^{\prime}=h+1}^{H+1}\sum_{s^{\prime}\in\mathcal{S}}p(s^{\prime},h^{\prime}|s,o,h)\max_{o^{\prime}\in\mathcal{O}}Q_{h^{\prime}}(s^{\prime},o^{\prime})\)
6:   end for
7: end for
8: Output: optimistic policy \(\mu_{k}(s,h)=\arg\max_{o\in\mathcal{O}}Q_{h}(s,o)\)
```
**Algorithm 2** Extended Value Iteration for FH-SMDPs
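The planning step could be sketched in Python as follows. Note the deliberate simplification, which is ours: Algorithm 2 maximizes over a confidence ball of transition models, whereas this sketch uses the cruder additive bonus \(\beta_{k}^{p}H\) on the empirical model, which preserves optimism but is not the exact inner maximization; all function and variable names are illustrative.

```
def optimistic_planning(S, O, H, p_hat, r_hat, beta_r, beta_p):
    """Backward induction on the empirical FH-SMDP with additive optimism.
    The confidence-ball maximization of Algorithm 2 is replaced by the
    simpler bonus beta_p * H, which keeps the sketch short."""
    V = {(s, h): 0.0 for s in S for h in range(1, H + 2)}
    mu = {}
    for h in range(H, 0, -1):
        for s in S:
            best_q, best_o = 0.0, None
            for o in O:
                key = (s, o, h)
                q = r_hat.get(key, 0.0) + beta_r.get(key, 1.0)
                # expected value over the possible arrival pairs (s', h')
                q += sum(prob * V.get((s2, h2), 0.0)
                         for (s_, o_, h_, s2, h2), prob in p_hat.items()
                         if (s_, o_, h_) == key)
                q += beta_p.get(key, 1.0) * H   # optimism on the transitions
                if best_o is None or q > best_q:
                    best_q, best_o = q, o
            V[(s, h)] = min(best_q, H)          # primitive rewards lie in [0, 1]
            mu[(s, h)] = best_o
    return mu, V
```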
### Renewal Process
The expected number of options played in one episode, \(d\), clearly depends on the random duration of each of these options; hence, it is itself a random variable, and we would like to bound it with some quantity. Resorting to _Renewal Theory_ (Smith, 1958), this quantity corresponds to the _Renewal Function_ \(m(t)\).
**Definition 6.3** (Renewal Process).: Let \(S_{1},S_{2}\dots\) be a sequence of i.i.d. random variables with finite and non-zero mean, representing the random time elapsed between two consecutive events, defined as the holding time. For each \(n>0\) we define \(J_{n}=\sum_{i=1}^{n}S_{i}\), as the time at which the \(n^{th}\) event of the sequence terminates. Then, the sequence of random variables \(X_{t}\), characterized as
\[X_{t}=\sum_{n=1}^{\infty}\mathbb{1}_{\{J_{n}\leq t\}}=\sup\{n:J_{n}\leq t\} \tag{7}\]
constitutes a Renewal Process \((X_{t})_{t\geq 0}\), representing the number of consecutive events that occurred up to time \(t\).
**Definition 6.4** (Renewal Function).: Considering a renewal process \((X_{t})_{t\geq 0}\), the renewal function \(m(t)\) is the expected number of consecutive events that occurred by time \(t\).
\[m(t)=\mathbb{E}[X_{t}]\]
Hence, it is possible to take inspiration from a bound of the renewal function to bound the expected number of options played in one episode.
**Lemma 6.5**.: _[Bound on the number of options played in one episode] Consider a Finite Horizon SMDP \(\mathcal{SM}\) with horizon \(H\) and \(O\) options with duration \(\tau_{min}\leq\tau\leq\tau_{max}\), and let \(\min_{o}(\mathbb{E}[\tau_{o}])\) be the expected duration of the shortest option. The expected number of options played in one episode, \(d\), can be seen as the renewal function \(m(t)\) of a renewal process up to the instant \(H\). With probability \(1-\delta\), this quantity is bounded by_
\[d<\sqrt{\frac{32(\tau_{max}-\tau_{min})H(\ln 2-\ln\delta)}{(\min_{o}(\mathbb{E} [\tau_{o}]))^{3}}}+\frac{H}{\min_{o}(\mathbb{E}[\tau_{o}])}\]
Refer to the appendix for detailed proof of this result.
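As an illustrative sanity check (ours, not from the paper), one can simulate the renewal process of Definition 6.3 and compare the empirical number of completed options in an episode against the bound of Lemma 6.5; the uniform holding-time distribution below is an arbitrary choice.

```
import random

def simulate_d(H, tau_min, tau_max, runs=10000, seed=0):
    """Monte Carlo estimate of d, the number of options completed in one
    episode of H primitive steps, with i.i.d. uniform holding times."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        t, count = 0, 0
        while True:
            t += rng.randint(tau_min, tau_max)   # next holding time S_n
            if t > H:                            # J_n exceeds the horizon
                break
            count += 1
        total += count
    return total / runs

# Example: H = 100 and holding times uniform on {5, ..., 15} (mean 10).
# The estimate concentrates around H / E[tau] = 10, below the
# high-probability bound of Lemma 6.5.
print(simulate_d(100, 5, 15))
```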
### Fixed Option Length
Let's now analyze a particular case to clarify the claim introduced with Theorem 6.1: a scenario in which the given options are deterministic, with fixed length.
**Corollary 6.6**.: _Considering a non-stationary Finite Horizon SMDP \(\mathcal{SM}\) and a set of deterministic options \(O\) with fixed duration \(\bar{\tau}\), the regret paid by FH-SMDP-UCRL after \(K\) episodes is upper bounded by:_
\[\text{Regret(K)}\leq\tilde{O}\Bigg{(}\frac{H}{\bar{\tau}}\Big{(}\sqrt{SOK} \Big{)}\bigg{(}\bar{\tau}+\sqrt{S}H\bigg{)}\Bigg{)}\]
Proof.: The result is trivially derived by substituting \(d\) with the actual number of decisions taken in one episode, which is now a deterministic number equal to \(H/\bar{\tau}\). The same applies to \(\overline{T}\), which, for options of fixed length \(\bar{\tau}\), is exactly \(\bar{\tau}\).
This bound clearly shows a dependency on the choice of the options set. The second term, which is the dominant one, is mitigated by the \(\sqrt{\bar{\tau}}\), thus reducing the sample complexity as expected. The other \(\sqrt{\frac{H}{\bar{\tau}}}\), as for Theorem 6.1, comes from the non-stationarity of the SMDP.
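To make the two factors mentioned above explicit, the dominant term of Corollary 6.6 can be rewritten through a simple algebraic identity (ours, not part of the original statement):

\[\frac{H}{\bar{\tau}}\sqrt{SOK}\cdot\sqrt{S}H\;=\;H^{3/2}S\sqrt{OK}\cdot\sqrt{\frac{H}{\bar{\tau}}}\cdot\frac{1}{\sqrt{\bar{\tau}}},\]

where \(1/\sqrt{\bar{\tau}}\) is the mitigation due to temporal abstraction and \(\sqrt{H/\bar{\tau}}\) is the non-stationarity factor.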
### Derivation of FH-MDP and Parallelism with AR-SMDP
To further strengthen the obtained result, we can show that considering some assumptions, we can derive the upper bound by Auer et al. (2008) adapted to FH-MDPs(Ghavamzadeh et al., 2020).
Finite Horizon MDP. Referring to the result provided by Auer et al. (2008), adapted to the finite-horizon case (Ghavamzadeh et al., 2020), the regret in finite-horizon MDPs scales with \(\tilde{O}(HS\sqrt{AT})\), or with \(\tilde{O}(H^{\frac{3}{2}}S\sqrt{AT})\) when the MDP is non-stationary. Suppose that, in our upper bound, we substitute the cardinality of the option set \(O\) with that of the primitive-action space \(A\). This leads to \(\overline{T}=1\) and \(d=H\), because primitive actions, by definition, terminate after a single time step: the average duration of these single-step options is 1, and the number of decisions taken in one episode is exactly \(H\). Then, having bounded primitive rewards \(r(s,a)\in[0,1]\), we can write our result as \(\tilde{O}(H^{\frac{3}{2}}S\sqrt{AKH})\), and considering the definition \(T=KH\) (Dann et al., 2017; Azar et al., 2017; Zanette and Brunskill, 2018), we obtain the same bound.
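Spelling out this substitution (our own check, up to constants and logarithmic terms), Theorem 6.1 with \(O\to A\), \(\overline{T}=1\), and \(d=H\) gives

\[\sqrt{SAKH^{2}}\Big{(}1+\sqrt{S}H\Big{)}\;\approx\;SH^{2}\sqrt{AK}\;=\;H^{3/2}S\sqrt{AKH}\;=\;H^{3/2}S\sqrt{AT},\]

using \(T=KH\).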
**Remark** We are aware of the tighter upper bounds in the Finite Horizon literature by Azar et al. (2017), that get rid of a \(\sqrt{HS}\), by tightly bounding the estimation error \((\tilde{p}-p)\tilde{V}^{\mu_{k}}\) and their exploration bonus in terms of the variance of \(V^{*}\) at the next state, and by using empirical Bernstein and Freedman's inequalities (Maurer and Pontil, 2009; Freedman, 1975). However, with this work, our main focus is to emphasize the role played by the options set's composition instead of providing a very tight analysis. We still think the same tricks could be used in our analysis to tighten the bound, but we leave that for future work.
Parallelism with Average Reward Setting. Fruit and Lazaric (2017) showed that the regret in SMDPs with options, when considering bounded holding times and \(R_{max}=1\), scales with \(\tilde{O}(D_{\mathcal{O}}S_{\mathcal{O}}\sqrt{On}+T_{max}\sqrt{S_{\mathcal{O}}On})\). On the other hand, considering the same assumptions, our result becomes of order \(\tilde{O}(HS\sqrt{OKd^{2}}+T_{max}\sqrt{SOKd^{2}})\). It is clearly impossible to derive one setting from the other. Nevertheless, we can spot some similarities between the two bounds. We can say that for finite-horizon problems the diameter \(D\) coincides with the horizon \(H\) (Ghavamzadeh et al., 2020). Besides, \(Kd\) is exactly equal to \(n\), the number of decisions made up to episode \(K\). The state space \(S\) is the state space of the SMDP in our formalism, which is the definition provided for \(S_{\mathcal{O}}\) in Fruit and Lazaric (2017). Consider, again, that the additional \(\sqrt{d}\) comes from the fact that we refer to a non-stationary FH-SMDP.
Thus, we prove that our result is a generalization of the case of FH-MDP and closely relates to the result presented for the Average Reward Setting.
## 7 Proof Sketches
In this section, we provide the proof sketches of Theorems 6.1 and 6.2. Please refer to the appendix for all the details.
### Sketch Proof of Theorem 6.1
We defined the regret in finite-horizon problems as in Eq. 1. Optimistic algorithms work by finding an optimistic estimate of the model of the environment, which is used to compute the optimistic value function and the optimistic policy. Considering how the confidence sets are constructed, we can state that \(\tilde{p}\geq p\) and \(\tilde{r}\geq r\), where the terms without tilde are the true ones; hence, \(V^{*}(s,h)\leq\tilde{V}^{\mu_{k}}(s,h)\) for all \(h\). Thus, we can bound Eq. 1 with
\[\text{{Regret}}(\text{{K}})\overset{opt}{\leq}\sum_{k=1}^{K}\tilde{V}^{\mu_{k} }(s,1)-V^{\mu_{k}}(s,1) \tag{8}\]
Let's now introduce a Performance Difference Lemma for FH-SMDPs.
**Lemma 7.1**.: _[Performance Difference Lemma for FH-SMDP] Given two FH-SMDPs \(\tilde{M}\) and \(\tilde{M}\) with horizon \(H\), and respectively rewards \(\hat{r}\), \(\tilde{r}\) and transition probabilities \(\hat{p}\), \(\tilde{p}\). The difference in the performance of a policy \(\mu_{k}\) is:_
\[\tilde{V}^{\mu_{k}}(s,1)-\hat{V}^{\mu_{k}}(s,1)\] \[=\hat{\mathbb{E}}\bigg{[}\sum_{i=1}^{H}\Big{(}\big{(}\tilde{r}(s_ {i},o_{i},h_{i})-\hat{r}(s_{i},o_{i},h_{i})\big{)}\] \[+\big{(}\tilde{p}(s_{i+1},h_{i+1}|s_{i},o_{i},h_{i})-\hat{p}(s_{i+ 1},h_{i+1}|s_{i},o_{i},h_{i})\big{)}\] \[\tilde{V}^{\mu_{k}}(s_{i+1},h_{i+1})\Big{)}\mathbbm{1}\{h_{i}<H \}\bigg{]}\]
_where \(\hat{\mathbb{E}}\) is the expectation taken w.r.t. \(\hat{p}\) and \(\mu_{k}\)._
Note that the summation steps are not unitary but skip according to the length of the transitions \(h^{\prime}-h\). The derivation of this lemma follows the one provided by Dann et al. (2017) for FH-MDPs that is commonly used in literature (Azar et al., 2017; Zanette and Brunskill, 2018). Check the appendix for further details.
Now we can use Lemma 7.1 to substitute the difference of value functions in Eq. 8; we can then upper bound both the difference in \(r\) and the difference in \(p\) with twice their confidence intervals, and the optimistic value \(\tilde{V}^{\mu_{k}}(s_{i+1},h_{i+1})\) with the horizon \(H\) (we consider bounded primitive rewards \(r(s,a)\in[0,1]\)).
\[\text{{Regret}}(\text{{K}})\leq\sum_{k=1}^{K}\mathbb{E}\bigg{[}\sum_{i=1}^{H} \Big{(}2\beta_{k}^{r}+2\beta_{k}^{p}H\Big{)}\mathbbm{1}\{h_{i}<H\}\bigg{]}\]
In the Finite-Horizon literature (Dann et al., 2017; Zanette and Brunskill, 2018), two terms are commonly used in the proofs: (1) \(w_{k}(s,o,h)\), the probability of taking option \(o\) in state \(s\) at time step \(h\), which clearly depends on the policy \(\mu_{k}\) and on the transition probability of the real SMDP; (2) \(L_{k}\), the set of \((s,o,h)\) triples visited sufficiently often, whose complement collects the triples that were not visited often enough to cause high regret. Therefore, to conduct the proof with the same approach, we can substitute the expectation \(\mathbb{E}\), which is taken w.r.t. the policy \(\mu_{k}\) and the real transition probability \(p(s^{\prime},h^{\prime}|s,o,h)\), with
\[\sum_{(s,o,h)\in L_{k}}w_{k}(s,o,h)\]
We defined the confidence intervals of \(r\) and \(p\), as in the equations 4, 5, respectively using Empirical Bernstein Inequality (Maurer and Pontil, 2009), Hoeffding (1963) and Weissman et al. (2003).
By substituting these definitions and the term just introduced, we get, up to numerical constants, that the regret is bounded by
\[\sum_{k}\sum_{i\in[H]}\sum_{(s,o,h)\in L_{k}}\frac{w_{k}(s_{i},o_{i},h_{i})}{ \sqrt{n_{k}(s_{i},o_{i},h_{i})}}\bigg{(}\sqrt{\text{{Var}}(r)}+\sqrt{S}H+ \frac{1}{\sqrt{n_{k}(s,o,h)}}\bigg{)}\]
**Lemma 7.2**.: _Consider a non-stationary MDP \(M\) with a set of options, seen as an SMDP \(M_{\mathcal{O}}\) (Sutton et al., 1999). In \(M_{\mathcal{O}}\), the number of decisions taken in the \(k^{th}\) episode is a random variable \(d\) and_

\[\sum_{i\in[H]}\sum_{(s,o,h)\in L_{k}}w_{k}(s_{i},o_{i},h_{i})\mathbb{1}\{h_{i}<H\}=d\ \text{ with }\ \{\forall k:d\leq H\}\]

_Therefore, the following holds true:_

\[\sum_{k}\sum_{i\in[H]}\sum_{(s,o,h)\in L_{k}}w_{k}(s_{i},o_{i},h_{i})\sqrt{\frac{1}{n_{k}(s_{i},o_{i},h_{i})}}=\tilde{O}\bigg{(}\sqrt{SOKd^{2}}\bigg{)}\]

_or, using the same notation as Fruit and Lazaric (2017), \(\tilde{O}(\sqrt{SOnd})\), with \(n=Kd\) the number of decisions taken up to episode \(K\)._
Substituting the result of Lemma 7.2 in the equation of the regret, we get
\[\text{\emph{Regret(K)}}\leq\tilde{O}\Bigg{(}\Big{(}\sqrt{dSOn}\Big{)}\bigg{(}\sqrt{\hat{\mathbb{Var}}(r)}+\sqrt{S}H\bigg{)}+dSO\Bigg{)}\]
where, as mentioned above, \(d\) is the expected number of decision steps taken in one episode, \(n\) is the total number of decisions taken up to episode \(k\), and \(\hat{\mathbb{Var}}(r)\) is the empirical variance of the reward, which emerged from the use of the empirical Bernstein inequality. A dependency on the variance of the reward is not informative for what we want to show; hence, we upper bound this term with \(\overline{T}\), the square root of the empirical second moment of the duration of the options seen up to episode \(k\), and this completes the proof.
### Sketch Proof of Theorem 6.2
To bound the regret paid by the two-phase algorithm, we first note that the regret can be written as the sum of the regret paid in the two phases, plus an additional bias term. We pay full regret for each option learning problem in the first phase; then the bias term, given by the maximum average regret of an option learning problem, seen as a finite-horizon MDP with horizon \(H_{o}\), accumulated over the episodes of the second phase; and finally the regret of the SMDP learning with fixed options:
\[\text{\emph{Regret(K)}}\leq\sum_{o\in O}K_{o}H_{o}+K_{2}\max_{o\in O}\frac{1}{K_{o}}H_{o}^{2}S_{o}\sqrt{A_{o}K_{o}}+HS\sqrt{Od^{2}K_{2}} \tag{9}\]
where \(K_{2}\) is the number of episodes used for the SMDP learning, and \(K=\sum_{o\in O}K_{o}+K_{2}\). Then, considering that we allocate \(K_{o}\) episodes to the learning of each option, and that \(A_{o}\) and \(S_{o}\) are, respectively, upper bounds on the action-space and state-space cardinalities over the options set, we can get rid of the \(\max_{o\in\mathcal{O}}\). Now, bounding \(K_{2}\leq K\), we can find the optimal \(K_{o}\) in closed form, and substituting it into Equation 9 concludes the proof.
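For intuition, the closed-form step can be sketched as follows (a heuristic balancing of ours, suppressing constants and logarithmic terms): keeping only the \(K_{o}\)-dependent part of Equation 9 and bounding \(K_{2}\leq K\), we minimize

\[OK_{o}H_{o}+K\frac{H_{o}^{2}S_{o}\sqrt{A_{o}}}{\sqrt{K_{o}}}\ \text{ over }K_{o},\ \text{ which yields }\ K_{o}\propto\left(\frac{KH_{o}S_{o}\sqrt{A_{o}}}{O}\right)^{2/3},\]

so that both terms scale with \(K^{2/3}\).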
## 8 Related Works
In the FH-MDP literature, several works analyze the regret of different algorithms. Osband and Van Roy (2016) present a lower bound that scales as \(\Omega(\sqrt{HSAT})\). On the other hand, many works propose upper bounds for their algorithms. The most common upper bound is the adaptation of Auer et al. (2008) proposed by Ghavamzadeh et al. (2020), which is of order \(O(HS\sqrt{AT})\). This result has since been improved in subsequent papers; an example is Azar et al. (2017), which proposes a method with a regret upper bound of \(O(\sqrt{HSAT})\) that matches the lower bound. As mentioned above, both upper and lower bounds depend on \(H\).
Nevertheless, few works have focused on theoretically understanding the benefits of hierarchical reinforcement learning approaches, and, to the best of our knowledge, this is the first to analyze these aspects in FH-SMDPs. To conduct our analysis, we take inspiration from the paper by Fruit and Lazaric (2017), which proposes an adaptation of UCRL2 (Auer et al., 2008) for SMDPs. They first study the regret of the algorithm for general SMDPs and then focus on the case of MDPs with options, providing both a lower bound and a worst-case upper bound. That work was the first to theoretically compare the use of options against primitive actions for learning in SMDPs. Nonetheless, it focuses on the average-reward setting to study how options can induce more efficient exploration, and it assumes fixed options. In contrast, we aim to analyze the advantages of using options to reduce the sample complexity of the problem, building on the intuition that temporally extended actions can intrinsically reduce the planning horizon in FH-SMDPs. Furthermore, we provide an _option-dependent_ upper bound, instead of a worst-case one, that better quantifies the impact of the option duration on the regret. Another work providing a theoretical analysis of hierarchical reinforcement learning approaches is Fruit et al. (2017), an extension of the aforementioned work in which the need for prior knowledge of the distributions of the cumulative reward and the duration of each option is relaxed. Even in this case, they consider the average-reward setting, and the objective is identical.
Then, Mann et al. (2015) study the convergence properties of Fitted Value Iteration (FVI) with temporally extended actions, showing that a longer option duration and pessimistic estimates of the value function lead to faster convergence. Finally, Wen et al. (2020) demonstrate how patterns and substructures in the MDP provide benefits in terms of planning speed and statistical efficiency. They present a Bayesian approach exploiting this information, and analyze how sub-structure similarities and sub-problems' complexity contribute to the regret of their algorithm.
## 9 Conclusions
In conclusion, we propose a new algorithm for Finite-Horizon Semi-Markov decision processes, called FH-SMDP-UCRL, and we provide theoretical evidence that supports our original claim: using hierarchical reinforcement learning, it is provably possible to reduce the complexity of a finite-horizon problem when using a well-defined set of options. This analysis is the first for FH-SMDPs and provides an option-dependent analysis of the regret that could be used to better define objectives for option-discovery methods. Furthermore, by relaxing the assumption of having a set of fixed option policies, we were able to provide insights into classes of problems in which a hierarchical approach from scratch would still be beneficial compared to a flat one. In the future, we would like to improve the proposed option-learning algorithm to tighten the theoretical guarantees and further characterize this family of problems. Finally, following the ideas of Wen et al. (2020), we would like to investigate how the structure of the MDP could appear in our bound, which, in our opinion, is a fundamental step toward a full understanding of the promising power of HRL.
|
2302.05240 | Resonance between planar self-affine measures | We show that if $\lbrace \varphi_i\rbrace_{i\in \Gamma}$ and $\lbrace
\psi_j\rbrace_{j\in\Lambda}$ are self-affine iterated function systems on the
plane that satisfy strong separation, domination and irreducibility, then for
any associated self-affine measures $\mu$ and $\nu$, the inequality $$\dim_{\rm
H}(\mu*\nu) < \min \lbrace 2, \dim_{\rm H} \mu + \dim_{\rm H} \nu \rbrace$$
implies that there is algebraic resonance between the eigenvalues of the linear
parts of $\varphi_i$ and $\psi_j$. This extends to planar non-conformal setting
the existing analogous results for self-conformal measures on the line. | Aleksi Pyörälä | 2023-02-10T13:37:07Z | http://arxiv.org/abs/2302.05240v4 | # Resonance between planar self-affine measures
###### Abstract.
We show that if \(\{\phi_{i}\}_{i\in\Gamma}\) and \(\{\psi_{j}\}_{j\in\Lambda}\) are self-affine iterated function systems on the plane that satisfy strong separation, domination and irreducibility, then for any associated self-affine measures \(\mu\) and \(\nu\), the inequality
\[\dim_{\mathrm{H}}(\mu*\nu)<\min\{2,\dim_{\mathrm{H}}\mu+\dim_{\mathrm{H}}\nu\}\]
implies that there is algebraic resonance between the eigenvalues of the linear parts of \(\phi_{i}\) and \(\psi_{j}\). This extends to planar non-conformal setting the existing analogous results for self-conformal measures on the line.
Key words and phrases: Self-affine measures, Hausdorff dimension, convolution of measures, resonance.

2020 Mathematics Subject Classification: Primary 28A80; Secondary 37A10.

I thank Ville Suomala and Meng Wu for their reading of and comments on the manuscript.
## 1. Introduction
In the 1960s, Furstenberg conjectured that if \(X,Y\subseteq[0,1]\) are closed sets invariant under multiplication by integers \(m\) and \(n\), respectively, then for any \(s\neq 0\), the inequality
\[\dim(X+sY)<\min\{1,\dim X+\dim Y\} \tag{1.1}\]
implies that \(\frac{\log m}{\log n}\in\mathbb{Q}\). Here and in the following, \(\dim\) denotes the (lower) Hausdorff dimension for both sets and measures. This conjecture was one of several that aimed to capture the idea that if \(\frac{\log m}{\log n}\not\in\mathbb{Q}\), then expansions in base \(m\) and \(n\) should have no common structure: Indeed, the right-hand side of (1.1) is always an upper bound for \(\dim(X+sY)\), while the strict inequality (1.1) implies that many of the fibers \(\{(x,y)\in X\times sY:\ x+y=z\}\) are large, which heuristically means that \(X\) and \(Y\) should have arithmetically similar structure in many places. The phenomenon (1.1) is usually referred to as resonance: \(X\) and \(Y\) are said to _resonate_ if they satisfy (1.1) for some \(s\), and otherwise they are said to _dissonate_.
It is also natural to ask if a similar phenomenon holds in a more general setting: For dynamical systems \(X\) and \(Y\), does (1.1) imply some kind of algebraic or arithmetic similarity between the sets or the dynamics? The first result in this direction is due to Moreira [8] from 1998, who proved that for two self-conformal sets on the line, (1.1) cannot hold in the presence of an irrationality assumption if one of the
sets is totally non-linear. Recall that a set \(K\subseteq\mathbb{R}\) is called self-conformal if
\[K=\bigcup_{i=1}^{m}f_{i}(K) \tag{1.2}\]
for some \(C^{1+\varepsilon}\)-contractions \(f_{i}\).
In 2009, Peres and Shmerkin [22] established analogous results for sums of self-similar sets on the line: The dimension of the sum is maximal unless the contraction ratios of the defining similarities form an _arithmetic set_. A set \(A\subseteq\mathbb{R}\) is called arithmetic if \(A\subseteq\alpha\mathbb{N}\) for some \(\alpha\in\mathbb{R}\). Recall that a set is self-similar if it satisfies (1.2) with \(f_{i}\) being similarities. An analogous result for convolutions of Cantor measures on the line was obtained shortly after by Nazarov, Peres and Shmerkin [20]: Indeed, by replacing sum with convolution in (1.1), one can formulate the concept of resonance for measures.
A major breakthrough on the topic of resonance between dynamical systems was achieved by Hochman and Shmerkin in 2012 [16], who introduced a powerful method called the _local entropy averages_ to attack problems regarding projections (and therefore sums) of dynamically defined fractals. Hochman and Shmerkin managed to both prove the original conjecture of Furstenberg, and extend to the setting of measures the existing results on the sums of self-similar and self-conformal sets on the line.
Namely, they proved that if \(\{f_{i}\}_{i\in\Gamma}\) and \(\{g_{j}\}_{j\in\Lambda}\) are families of \(C^{1+\varepsilon}\)-contractions on \(\mathbb{R}\) that satisfy the open set condition, then for any associated self-conformal measures \(\mu\) and \(\nu\) and any \(t>0\),
\[\dim(\mu*S_{t}\nu)=\min\{1,\dim\mu+\dim\nu\}\]
unless the asymptotic contraction ratios of \(f_{i}\) and \(g_{j}\) form an arithmetic set. Recall that \(\mu\) is self-conformal if
\[\mu=\sum_{i\in\Gamma}p_{i}\cdot\mu\circ f_{i}^{-1}\]
for a probability vector \((p_{i})_{i\in\Gamma}\) and \(f_{i}\) as above. Due to well-known variational principles, the result for self-conformal measures implies the result for sets as well. Recently, this result was generalized so as not to require any separation conditions by Bárány, Käenmäki, Wu and the author in a work in progress [7].
Perhaps surprisingly, almost nothing seems to be known of this phenomenon in higher dimensions. While there certainly exists literature on bounding the size of sums or convolutions from below, such as the famous inverse theorem of Hochman [14, 15], it primarily focuses on showing that, for very general \(X\) and \(Y\), the sum (or convolution) \(X+Y\) is _strictly larger_ than \(X\), unless \(X\) and \(Y\) have a very special structure. See also [11, 24] and the references therein for progress in related phenomena. However, the existing results do not aim to capture the spirit of the phenomenon predicted by Furstenberg, that "geometric resonance" of dynamically defined sets should imply a kind of "algebraic resonance" between the dynamics.
The purpose of the present work is to provide an extension of this principle to the planar setting. However, one has to be careful in formulating an extension of this
idea beyond the line: The direct extension, that
\[\dim(X+Y)<\min\{2,\dim X+\dim Y\} \tag{1.3}\]
unless there is algebraic resonance between the dynamics of \(X\) and \(Y\), breaks down easily. Indeed, one can of course isometrically embed any sets \(X\) and \(Y\) on the line to the plane, and their sum will always have dimension at most \(1\). It is also not difficult to construct examples of \(X\) and \(Y\) with dimension strictly greater than one by taking product sets. Thus, in order to expect (1.3) to imply algebraic resonance, one has to assume that \(X\) and \(Y\) are "spread out" in sufficiently many directions, in some sense.
In this paper, we consider the size of \(\mu*\nu\) when \(\mu\) and \(\nu\) are self-affine measures on the plane. Let \(\mathbb{RP}^{1}\) denote the collection of one-dimensional subspaces of \(\mathbb{R}^{2}\). For a \(2\times 2\)-matrix \(A\), let \(|\lambda_{1}(A)|\leq|\lambda_{2}(A)|\) denote its eigenvalues. Let \(A\) also denote the action induced by \(A\) on \(\mathbb{RP}^{1}\). We say that a system \(\Phi=\{f_{i}(x)=A_{i}x+a_{i}\}_{i\in\Gamma}\) of affine contractions on \(\mathbb{R}^{2}\) satisfies
1. the _strong separation condition_ if there exists a bounded open set \(V\neq\emptyset\) such that for every \(i\neq j\in\Gamma\), \(f_{i}(\operatorname{cl}(V))\subseteq V\) and \(f_{i}(\operatorname{cl}(V))\cap f_{j}(\operatorname{cl}(V))=\emptyset\),
2. _hyperbolicity_ if there exists at least one \(i\in\Gamma\) such that \(|\lambda_{1}(A_{i})|<|\lambda_{2}(A_{i})|\),
3. _irreducibility_ if for every \(\theta\in\mathbb{RP}^{1}\), there exists \(i\in\Gamma\) such that \(A_{i}\theta\neq\theta\), and
4. the _domination condition_ if there exists a multicone \(\mathcal{C}\subseteq\mathbb{RP}^{1}\), i.e. a finite union of closed cones, such that \(A_{i}\mathcal{C}\subseteq\operatorname{int}(\mathcal{C})\) for each \(i\in\Gamma\).
**Theorem 1.1**.: _Let \(\Phi=\{\varphi_{i}(x)=A_{i}x+a_{i}\}_{i\in\Gamma}\) and \(\Psi=\{\psi_{j}(x)=B_{j}x+b_{j}\}_{j\in\Lambda}\) be systems of affine contractions on \(\mathbb{R}^{2}\) that satisfy the strong separation condition, hyperbolicity, and irreducibility. Let \(\mu\) and \(\nu\) be self-affine measures associated to \(\Phi\) and \(\Psi\), and suppose that \(\dim\mu\geq\dim\nu\). If_
\[\dim(\mu*\nu)<\min\{2,\dim\mu+\dim\nu\},\]
_then \(\dim\mu>1>\dim\nu\) and, if \(\Phi\) and \(\Psi\) also satisfy the domination condition, then_
\[\{\log|\lambda_{1}(A_{i})|:\ i\in\Gamma\}\cup\{\log|\lambda_{2}(B_{j})|:\ j\in\Lambda\}\]
_is an arithmetic set._
Let us comment on the assumptions of the theorem. The assumption of strong separation is classical in the study of iterated function systems, since it makes it possible to view the attractor as a dynamical system, giving access to a multitude of tools from ergodic theory. However, during recent years, much attention has been directed towards establishing existing results without assuming any separation conditions, and we expect this assumption can be removed from our result as well.
The assumption of hyperbolicity ensures that the systems \(\Phi\) and \(\Psi\) are "strictly self-affine". This is crucial in our approach, since strictly self-affine measures have very special tangential structure that we will heavily use. Without the hyperbolicity assumption, the contractions are similarities up to a change of basis, meaning that the self-affine measures are essentially self-similar. Of course, it would be interesting to find an analogous result for convolutions of self-similar measures on the plane.
The assumption of irreducibility is our way to ensure that the measures \(\mu\) and \(\nu\) are "spread out" in sufficiently many directions, which is something one has to assume as explained in the preceding discussion. Without this assumption, it is easy to construct examples for which the conclusion of the theorem does not hold, by e.g. constructing measures on the Bedford-McMullen carpets. However, we do not know if it is enough to assume irreducibility for just one of the systems \(\Phi\) and \(\Psi\).
If it happens that \(\dim\mu\geq\dim\nu\geq 1\) or \(1\geq\dim\mu\geq\dim\nu\), then the strong separation, hyperbolicity and irreducibility are enough to ensure that \(\mu*\nu\) has the maximal dimension. The case \(\dim\mu>1>\dim\nu\) is more delicate, since now \(\mu\) might be large enough to "absorb" some of \(\nu\) in the convolution, and some kind of independence of their local structures is required as in the analogous one-dimensional results. In establishing this independence, we require the domination condition since it gives us a way to connect the eigenvalues of \(A_{i}\) and \(B_{j}\) to the dynamics of the scenery processes of \(\mu\) and \(\nu\) around typical points. However, it is likely that the assumption is just a by-product of our argument.
### On the proof of Theorem 1.1
For simplicity, we suppose that the contractions in \(\Phi\) and \(\Psi\) map the unit ball into disjoint ellipses. While stronger than the separation condition that we assume, the same argument works for the classical strong separation condition up to minor technical additions.
We will prove the statement by contradiction: we show that if any of the conditions
* \(\dim\mu\geq\dim\nu\geq 1\),
* \(1\geq\dim\mu\geq\dim\nu\), or
* \(\{\log|\lambda_{1}(A_{i})|:\ i\in\Gamma\}\cup\{\log|\lambda_{2}(B_{j})|:\ j\in\Lambda\}\) is not an arithmetic set
holds, then we will have \(\dim(\mu*\nu)=\min\{2,\dim\mu+\dim\nu\}\). Our proof is based on the local entropy averages of Hochman-Shmerkin [16]: instead of proving directly that \(\dim(\mu*\nu)\) is large, we will show that the entropy of \(\mu^{\prime}*\nu^{\prime}\) is large on many scales, where \(\mu^{\prime}\) and \(\nu^{\prime}\) are _magnifications_ of \(\mu\) and \(\nu\) along properly chosen filtrations of their supports.
The machinery of Hochman-Shmerkin [16] asserts that \(\dim(\mu*\nu)\) is at least the average of finite-scale entropies of magnifications of \(\mu\) and \(\nu\). With this machinery, the cases i) and ii) above are very simple in principle. Indeed, magnifying both \(\mu\) and \(\nu\) along the "cylinder ellipses" \(\varphi_{i_{1}}\circ\cdots\circ\varphi_{i_{n}}(B(0,1))\) and \(\psi_{j_{1}}\circ\cdots\circ\psi_{j_{n}}(B(0,1))\), on the limit they both resemble orthogonal projections of the original measures, by the hyperbolicity assumption. As such, they will have entropy close to either \(1\) in the case i), or \(\dim\mu\) and \(\dim\nu\) in the case ii), by the strong projection theorem for self-affine measures [2, Theorem 7.1], due to Bárány, Hochman and Rapaport. Since the ellipses on which the magnifications are supported have major axes pointing in different directions by the irreducibility assumption, their convolution has a product-like structure on the plane and thus has entropy close to \(\min\{2,\dim\mu+\dim\nu\}\).
The situation \(\dim\mu>1>\dim\nu\) is more delicate, since the magnifications of \(\mu\) along the construction ellipses are no longer enough to store a sufficient amount
of the dimension of \(\mu\). Instead, we will apply the local entropy averages with \(\mu\) magnified along dyadic squares and \(\nu\) along the cylinder ellipses.
For any such magnifications \(\mu^{\prime}\) and \(\nu^{\prime}\) of \(\mu\) and \(\nu\), and any orthogonal projection \(\pi\), applying the chain rule of entropy yields that the entropy of \(\mu^{\prime}*\nu^{\prime}\) is equal to
\[\text{entropy of }\pi\mu^{\prime}*\pi\nu^{\prime}\ +\ \text{ conditional entropy of }\mu^{\prime}*\nu^{\prime}\text{ w.r.t. }\pi\] \[\geq \text{entropy of }\pi\mu^{\prime}*\pi\nu^{\prime}\ +\ \text{ entropy of }\mu^{\prime}-\text{entropy of }\pi\mu^{\prime}. \tag{1.4}\]
The key geometric ingredient in our proof is the observation that each \(\mu^{\prime}\) has a _fiber structure_ in the sense that \(\pi\mu^{\prime}\) is (close to) a _slice measure_ of \(\mu\), for a properly chosen \(\pi\). The precise form of this structure is stated in Proposition 4.1, which is the main technical contribution of this paper. Similar fiber structures have been previously observed for self-affine sets by Käenmäki, Koivusalo and Rossi [18] and for self-affine measures with an additional projection condition by Kempton [19]. Combining this with the dimension conservation phenomenon that follows for planar self-affine measures from the Ledrappier-Young formula of Bárány [1], and with the general fact that the average entropy of \(\mu^{\prime}\) over many scales is close to the dimension of \(\mu\), we see that upon averaging, (1.4) is in fact close to
\[\text{entropy of }\pi\mu^{\prime}*\pi\nu^{\prime}\ +\ 1, \tag{1.5}\]
recalling that we are assuming \(\dim\mu>1\).
Thus, if we manage to show that for the measures \(\pi\mu^{\prime}\) and \(\pi\nu^{\prime}\) _on the line_ we have
\[\text{entropy of }\pi\mu^{\prime}*\pi\nu^{\prime}\geq\min\{1,\text{entropy of }\pi\mu^{\prime}\ +\ \text{entropy of }\pi\nu^{\prime}\}-o(1), \tag{1.6}\]
then by (1.5), we have that the average entropy of \(\mu^{\prime}*\nu^{\prime}\) is at least
\[\min\{2,\text{entropy of }\pi\mu^{\prime}\ +\ 1\ +\ \text{ entropy of }\pi\nu^{\prime}\}-o(1)\] \[= \min\{2,\dim\mu+\dim\nu\}-o(1)\]
completing the proof by the local entropy averages and another application of the dimension conservation of [1] and the projection theorem of [2].
Proving (1.6) is the part where the assumption iii) on the eigenvalues of \(A_{i}\) and \(B_{j}\) steps in. We need to inspect the dynamics of the sequences \((\pi\mu^{\prime})\) and \((\pi\nu^{\prime})\) obtained by continuously magnifying \(\mu\) and \(\nu\), and show that they are sufficiently independent of each other. Since \(\pi\mu^{\prime}\) is (close to) a slice of \(\mu\) and \(\pi\nu^{\prime}\) is an orthogonal projection of \(\nu\), they both have essentially conformal structure, and we are able to use methods similar to those in the conformal case on the line.
### Structure
In Section 2 we introduce our setting more rigorously, and collect some general known results and short lemmas on self-affine measures, dynamical systems and linear algebra. Section 3 is devoted to translating the local entropy averages machinery of [16] to our setting, while in Section 4 we state our main geometric and dynamical results, and explain how the proof of Theorem 1.1 is concluded using these. Section 5 is perhaps the most technical one, devoted to the proof of the main geometric result, Proposition 4.1. The arguments here were inspired by the work of Kempton [19]. Finally, in Section 6 we investigate the dynamics of the
sequences of magnifications of \(\mu\) and \(\nu\), and prove the required lower bounds for their average entropies.
Table 1. Notation

\begin{tabular}{l l} \hline \hline
\(\Gamma,\Lambda\) & Finite alphabets \\
\(\mathtt{i},\mathtt{j},\mathtt{k},\dots\) & Infinite words of \(\Gamma^{\mathbb{N}}\), \(\Lambda^{\mathbb{N}}\) \\
\(\mathtt{a},\mathtt{b},\dots\) & Finite words \\
\(\Phi=\{\varphi_{i}\}_{i\in\Gamma},\Psi=\{\psi_{j}\}_{j\in\Lambda}\) & Systems of affine invertible contractions \\
\(\bar{\mu},\bar{\nu}\) & Bernoulli measures on \(\Gamma^{\mathbb{N}}\) and \(\Lambda^{\mathbb{N}}\) \\
\(\Pi\) & The natural projections \(\Gamma^{\mathbb{N}}\to\mathbb{R}^{2}\) and \(\Lambda^{\mathbb{N}}\to\mathbb{R}^{2}\) \\
\(\mu,\nu\) & Projections of \(\bar{\mu}\), \(\bar{\nu}\) through \(\Pi\) \\
\(\mu_{D}\) & Normalized restriction on \(D\) \\
\(\mu^{D}\) & Measure \(\mu_{D}\) linearly rescaled onto \([-1,1)^{d}\) \\
\(\pi_{\theta}\) & Orthogonal projection onto the line \(\theta\) \\
\(\pi^{i}\) & Projection onto the \(i\)th coordinate \\
\(\mu_{\mathtt{i},\theta}\) & A "slice measure" of \(\mu\); see (4.1) \\
\(R_{\theta}\) & A rotation taking \(\theta\) onto the \(x\)-axis \\
\(Y_{\mathtt{i},\theta,r_{1},r_{2}}\) & A rectangle with side lengths \(2^{-r_{2}}\leq 2^{-r_{1}}\) \\
\(H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\) & An affine map rescaling \(Y_{\mathtt{i},\theta,r_{1},r_{2}}\) onto \(R_{\theta}^{-1}[-1,1]^{2}\) \\
\(A=UDV^{-1}\) & The singular value decomposition \\
\(L_{\mathtt{i},\theta,k}\) & Non-singular linear map \(\mathbb{R}^{2}\to\mathbb{R}^{2}\) \\
\(Q_{\mathtt{i},\theta,k}\) & \(Y_{\sigma^{\ell_{k}}\mathtt{i},\theta,kN+\log\alpha_{1}(\mathtt{i}|_{\ell_{k}}),kN+\log\alpha_{2}(\mathtt{i}|_{\ell_{k}})}\) \\
\(E_{\mathtt{i},\theta,k}\) & The largest ellipse contained in \(Q_{\mathtt{i},\theta,k}\) \\
\(\theta(A)\) & The direction of the longer axis of \(A(B(0,1))\) \\
\(\theta(\mathtt{i}),\theta^{-}(\mathtt{i}),\theta^{*}(\mathtt{i})\) & Limit orientations; see Lemma 2.1 \\
\(\|A\|\) & The operator norm of \(A\) \\
\(A|_{\theta}\) & The restriction of \(A\) onto the line \(\theta\) \\
\(\alpha_{1}(A)\leq\alpha_{2}(A)\) & The singular values of \(A\) \\
\(i_{k}(\mathtt{i}),\ i_{k}(\mathtt{j})\) & Stopping times; \(\|A_{\mathtt{i}|_{i_{k}}}\|\approx\|B_{\mathtt{j}|_{i_{k}}}\|\approx 2^{-kN}\) \\
\(\ell_{k}=\ell_{k}(\mathtt{i})\) & An increasing sequence; see (5.1) \\
\(\mu_{\mathtt{i}|_{i_{k}}},\ \nu_{\mathtt{j}|_{i_{k}}},\ \mu^{\mathtt{i},k}\) & Magnifications of \(\mu\) and \(\nu\); see Notation 3.1 \\
\(\rho(n,(\mathtt{i},\theta))\) & The reflection done by \(A_{\mathtt{i}|_{n}}^{-1}\) on the line \(\theta\) \\
\(\mathcal{Z}_{\Phi},\mathcal{Z}_{\Psi}\) & Suspension flows; see Section 6 \\
\(\mathcal{Z}_{\Phi}^{\prime},\mathcal{Z}_{\Psi}^{\prime}\) & Projections of \(\mathcal{Z}_{\Phi}\) and \(\mathcal{Z}_{\Psi}\) through \(\pi^{1,2,4}\) \\
\(F:\mathcal{Z}_{\Phi}\to\mathcal{P}(\mathbb{R}^{2})\) & Coding of \(\pi_{\theta(\mathtt{i})^{\perp}}\mu^{\mathtt{i},k}\) via \(\mathcal{Z}_{\Phi}\); see (6.5) \\
\(G:\mathcal{Z}_{\Psi}\to\mathcal{P}(\mathbb{R}^{2})\) & Coding of \(\pi_{\theta}\nu_{\mathtt{j}|_{i_{k}}}\) via \(\mathcal{Z}_{\Psi}\); see (6.7) \\
\(F^{\prime},G^{\prime}\) & Functions \(F\) and \(G\) without reflection \\ \hline \hline
\end{tabular}
## 2. Preliminaries
In this paper, a measure refers to a Radon measure on a metrizable topological space. The notation \(\mathcal{P}(X)\) stands for probability measures on the space \(X\). For a measure \(\mu\) on \(X\) and a subset \(Y\subseteq X\), \(\mu|_{Y}\) denotes the restriction of \(\mu\) onto \(Y\), and \(\mu_{Y}:=\mu(Y)^{-1}\mu|_{Y}\) the normalized restriction when \(\mu(Y)>0\). For a measurable function \(f\), let \(f\mu:=\mu\circ f^{-1}\) denote the push-forward. The space of probability measures is always equipped with the weak-\({}^{*}\) topology which we metrize using the Lévy-Prokhorov metric \(d_{\mathrm{LP}}\),
\[d_{\mathrm{LP}}(\mu,\nu)=\inf\{\varepsilon>0:\ \mu(A)\leq\nu(A^{\varepsilon})+ \varepsilon,\ \nu(A)\leq\mu(A^{\varepsilon})+\varepsilon\text{ for all Borel }A\},\]
where \(A^{\varepsilon}\) denotes the open \(\varepsilon\)-neighbourhood of \(A\). We measure the size of measures with the lower Hausdorff dimension,
\[\dim\mu=\dim_{\mathrm{H}}\mu=\inf\{\dim_{\mathrm{H}}(E):\ \mu(E)>0\},\]
and occasionally with upper and lower local dimensions,
\[\underline{\dim}_{\mathrm{loc}}\mu(x)=\liminf_{r\to 0}\frac{\log\mu(B(x,r))}{ \log r},\qquad\overline{\dim}_{\mathrm{loc}}\mu(x)=\limsup_{r\to 0}\frac{\log\mu(B(x,r))} {\log r},\]
where \(B(x,r)\) denotes the closed ball centered at \(x\) and of radius \(r\). The measure \(\mu\) is called exact dimensional if \(\underline{\dim}_{\mathrm{loc}}\mu(x)=\overline{\dim}_{\mathrm{loc}}\mu(x)\) almost everywhere. The following connection between Hausdorff and lower local dimension is well-known: For any measure \(\mu\),
\[\dim\mu=\mathrm{ess}\inf_{x\sim\mu}\underline{\dim}_{\mathrm{loc}}\mu(x).\]
### Symbolic dynamics
Let \(\Gamma\) be a finite set with \(\#\Gamma\geq 2\), and let \(\Phi=\{\varphi_{i}\}_{i\in\Gamma}\) be an iterated function system of contractions on \(\mathbb{R}^{d}\). It is well-known that there exists a unique compact set \(K\), called the _attractor_ of \(\Phi\), such that
\[K=\bigcup_{i\in\Gamma}\varphi_{i}(K).\]
We refer to Falconer's book [9] for standard properties of iterated function systems. If the functions \(\varphi_{i}\) are of the form \(\varphi_{i}(x)=A_{i}x+a_{i}\), where \(A_{i}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) are invertible linear maps with \(\|A_{i}\|<1\) and \(a_{i}\in\mathbb{R}^{d}\), then the IFS \(\Phi\) is called _self-affine_ and its attractor a self-affine set.
Write \(\Gamma^{*}=\bigcup_{n}\Gamma^{n}\) for the set of finite words composed of elements of \(\Gamma\). For a finite word \(\mathtt{a}=i_{0}i_{1}\ldots i_{n}\in\Gamma^{*}\) we write \(\varphi_{\mathtt{a}}=\varphi_{i_{0}}\circ\cdots\circ\varphi_{i_{n}}\). Let \(|\mathtt{a}|\) denote the number of elements in \(\mathtt{a}\). For finite words \(\mathtt{a}\) and \(\mathtt{b}\), let \(\mathtt{a}\mathtt{b}\in\Gamma^{|\mathtt{a}|+|\mathtt{b}|}\) denote their concatenation.
For a word \(\mathtt{i}\in\Gamma^{\mathbb{N}}\cup\Gamma^{*}\) and an integer \(k\leq|\mathtt{i}|\), let \(\mathtt{i}|_{k}\in\Gamma^{k}\) denote its projection to the first \(k\) coordinates. When \(k=0\), set \(\mathtt{i}|_{k}=\emptyset\). For \(\mathtt{i},\mathtt{j}\in\Gamma^{\mathbb{N}}\), let \(\mathtt{i}\wedge\mathtt{j}:=\mathtt{i}|_{k}\), where \(k\) is the largest integer for which \(\mathtt{i}|_{k}=\mathtt{j}|_{k}\). Define a distance \(d\) on \(\Gamma^{\mathbb{N}}\) by
\[d(\mathtt{i},\mathtt{j})=2^{-|\mathtt{i}\wedge\mathtt{j}|}\]
for every \(\mathtt{i},\mathtt{j}\in\Gamma^{\mathbb{N}}\). For a finite word \(\mathtt{a}\in\Gamma^{*}\), write \([\mathtt{a}]\) for the cylinder set \(\{\mathtt{i}\in\Gamma^{\mathbb{N}}:\ \mathtt{i}|_{|\mathtt{a}|}=\mathtt{a}\}\). It is not difficult to see that the cylinder sets are closed and open in the topology generated by \(d\).
It is sometimes convenient to consider the two-sided sequence space \(\Gamma^{\mathbb{Z}}\). For \(\mathtt{i}=\ldots i_{-2}i_{-1};i_{0}i_{1}i_{2}\ldots\in\Gamma^{\mathbb{Z}}\) and \(m\leq n\in\mathbb{Z}\), write \(\mathtt{i}|_{m}^{n}=i_{m}i_{m+1}\ldots i_{n}\). The metric \(d\) extends to \(\Gamma^{\mathbb{Z}}\) by replacing \(\mathtt{i}|_{k}\) by \(\mathtt{i}|_{-k}^{k}\) in the definition of \(\mathtt{i}\wedge\mathtt{j}\). The cylinder sets of \(\Gamma^{\mathbb{Z}}\) are given by \([\mathtt{i}]_{m}^{n}:=\{\mathtt{j}\in\Gamma^{\mathbb{Z}}:\ \mathtt{j}|_{m}^{n}=\mathtt{i}|_{m}^{n}\}\). There is a natural surjection \(\Gamma^{\mathbb{Z}}\to\Gamma^{\mathbb{N}}\) given by the restriction to the "positive coordinates", \(\ldots i_{-1};i_{0}i_{1}\ldots=:\mathtt{i}\mapsto\mathtt{i}^{+}:=i_{0}i_{1}\ldots\). Similarly, we define the projection to the "negative coordinates" by \(\mathtt{i}^{-}:=i_{-1}i_{-2}\ldots\in\Gamma^{\mathbb{N}}\).
We let \(\sigma\) denote the left-shift on both \(\Gamma^{\mathbb{N}}\) and \(\Gamma^{\mathbb{Z}}\), given by \(\sigma(i_{0}i_{1}\ldots)=i_{1}i_{2}\ldots\) and \(\sigma(\ldots i_{-1};i_{0}i_{1}\ldots)=\ldots i_{0};i_{1}i_{2}\ldots\). The tuples \((\Gamma^{\mathbb{N}},\sigma)\) and \((\Gamma^{\mathbb{Z}},\sigma)\) are referred to as the one-sided and two-sided shift spaces, respectively. For any \(\sigma\)-invariant probability measure \(\nu\) on \(\Gamma^{\mathbb{N}}\), there is a unique \(\sigma\)-invariant probability measure on \(\Gamma^{\mathbb{Z}}\) which we also denote by \(\nu\), given by \(\nu([\mathtt{i}]_{m}^{n}):=\nu([\mathtt{i}|_{m}^{n}])\) for each \(\mathtt{i}\in\Gamma^{\mathbb{Z}}\), \(m\leq n\in\mathbb{Z}\). This is referred to as the natural extension of \(\nu\).
### Linear algebra and matrix products
Let \(A\) be a real-valued \(2\times 2\)-matrix. Recall that the eigenvalues of \(A\) are denoted by \(\lambda_{1}(A)\) and \(\lambda_{2}(A)\) with \(|\lambda_{1}(A)|\leq|\lambda_{2}(A)|\). The matrix \(A\) is called _hyperbolic_ (or _proximal_) if it has two real eigenvalues of different absolute value. If a collection of matrices satisfies the domination condition, then each of the matrices is hyperbolic; see [4, Corollary 2.4]. A hyperbolic matrix \(A\) maps the unit ball onto an ellipse with semiaxes of length \(\alpha_{1}(A)<\alpha_{2}(A)\), where \(\alpha_{1}(A),\alpha_{2}(A)\) are the singular values of \(A\). Let \(\theta(A)\in\mathbb{RP}^{1}\) denote the line parallel to the longer semiaxis of this ellipse.
For a finite word \(\mathtt{a}=i_{0}i_{1}\ldots i_{n}\in\Gamma^{*}\), write \(A_{\mathtt{a}}=A_{i_{0}}A_{i_{1}}\ldots A_{i_{n}}\) and \(A_{\mathtt{a}}^{-1}=(A_{\mathtt{a}})^{-1}=A_{i_{n}}^{-1}\ldots A_{i_{1}}^{-1}A_ {i_{0}}^{-1}\). We define a distance \(d\) on \(\mathbb{RP}^{1}\), given by the smaller angle between lines. The following lemma is, for us, the key technical consequence of the domination assumption in Theorem 1.1.
**Lemma 2.1**.: _Let \(\{A_{i}\}_{i\in\Gamma}\) be a collection of \(2\times 2\)-matrices satisfying the domination condition. Then the limits_
\[\theta(\mathtt{i}) :=\lim_{n\to\infty}\theta(A_{\mathtt{i}|_{n}}),\] \[\theta^{-}(\mathtt{i}) :=\lim_{n\to\infty}\theta(A_{i_{0}}^{-1}A_{i_{1}}^{-1}\ldots A_{i_{ n}}^{-1}),\] \[\theta^{*}(\mathtt{i}) :=\lim_{n\to\infty}\theta(A_{i_{0}}^{*}A_{i_{1}}^{*}\ldots A_{i_{ n}}^{*})\]
_exist for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\), the convergences are uniform, and the functions \(\mathtt{i}\mapsto\theta(\mathtt{i})\), \(\mathtt{i}\mapsto\theta^{-}(\mathtt{i})\) and \(\mathtt{i}\mapsto\theta^{*}(\mathtt{i})\) are Hölder continuous._
Proof.: See [23, Lemma 2.1] and [18, Lemma 2.1].
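As a quick numerical sanity check of Lemma 2.1, one can watch the orientation of longer and longer products stabilise. The sketch below is ours, not from the paper: the two matrices are placeholders with positive entries (so they preserve the standard cone, hence are dominated), and \(\theta(A_{\mathtt{i}|_{n}})\) is read off the top left-singular direction.

```python
import numpy as np

# Minimal numerical sketch (placeholder matrices, not from the paper):
# estimate the limit orientation theta(i) of Lemma 2.1 by computing the
# top singular direction of the products A_{i|n} along a random word i.
A = {0: np.array([[0.6, 0.3], [0.1, 0.2]]),
     1: np.array([[0.5, 0.2], [0.2, 0.4]])}

def theta_of(M):
    """Angle in [0, pi) of the longer semiaxis of the ellipse M(B(0,1))."""
    U, s, Vt = np.linalg.svd(M)
    return np.arctan2(U[1, 0], U[0, 0]) % np.pi

rng = np.random.default_rng(0)
word = rng.integers(0, 2, size=40)   # a finite prefix i|_n of i
P = np.eye(2)
for n, letter in enumerate(word, 1):
    P = P @ A[letter]                # A_{i|n} = A_{i_0} ... A_{i_{n-1}}
    if n % 10 == 0:
        print(n, theta_of(P))        # the angles stabilise as n grows
```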
**Lemma 2.2**.: _For any \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) and \(\theta\in\mathbb{RP}^{1}\setminus\{\theta(\mathtt{i})\}\),_
\[\lim_{n\to\infty}d(A_{\mathtt{i}|_{n}}^{-1}\theta,\ \theta(A_{\mathtt{i}|_{n}}^{-1}))\to 0.\]
_In particular, \(\theta(\Gamma^{\mathbb{N}})\subseteq\mathcal{C}\), where \(\mathcal{C}\) is the strongly invariant multicone of \(\{A_{i}\}_{i\in\Gamma}\)._
Proof.: Let \(A_{\mathtt{i}|_{n}}=U_{\mathtt{i}|_{n}}D_{\mathtt{i}|_{n}}V_{\mathtt{i}|_{n}}^{-1}\) denote the singular value decomposition. By Lemma 2.1, \(U_{\mathtt{i}|_{n}}\) takes the \(x\)-axis onto a line which tends to \(\theta(\mathtt{i})\) as \(n\to\infty\). In particular, \(U_{\mathtt{i}|_{n}}^{-1}\theta\) remains uniformly bounded away from the \(x\)-axis. By [5, Theorem B], \(\frac{\alpha_{1}(A_{\mathtt{i}|_{n}})}{\alpha_{2}(A_{\mathtt{i}|_{n}})}\to 0\), whence \(U_{\mathtt{i}|_{n}}^{-1}\theta\) is pulled very close to the \(y\)-axis by \(D_{\mathtt{i}|_{n}}^{-1}\) and finally taken close to \(\theta(A_{\mathtt{i}|_{n}}^{-1})\) by \(V_{\mathtt{i}|_{n}}\).
For the second statement, note that \(A_{\mathtt{i}|_{n}}\mathcal{C}\subseteq\mathcal{C}\) and \(d(A_{\mathtt{i}|_{n}}\mathcal{C},\theta(A_{\mathtt{i}|_{n}}))\to 0\) as \(n\to\infty\) by the above, since \(\mathcal{C}\) contains more than one point. Since \(\mathcal{C}\) is closed, the claim follows.
We require the following technical observation, that the directions \(\theta(\mathtt{i})^{\perp}\) and \(\theta^{*}(\mathtt{i})\) are bounded away from each other, uniformly in \(\mathtt{i}\).
**Lemma 2.3**.: _Let \(\{A_{i}\}_{i\in\Gamma}\) be a dominated tuple of matrices. Then there exist a constant \(C>0\) such that_
\[d(\theta(\mathtt{i})^{\perp},\ \theta^{*}(\mathtt{i}))\geq C\]
_for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\)._
Proof.: Let \(f\) denote the function \(\mathtt{i}\mapsto d(\theta(\mathtt{i})^{\perp},\ \theta^{*}(\mathtt{i}))\). Since \(d(\theta(\mathtt{i})^{\perp},\theta^{*}(\mathtt{i}))=d(\theta(\mathtt{i}), \theta^{*}(\mathtt{i})^{\perp})\), and it is not difficult to see from the singular value decomposition that \(\theta^{*}(\mathtt{i})=\theta^{-}(\mathtt{i})^{\perp}\), we have that \(f(\mathtt{i})=d(\theta(\mathtt{i}),\ \theta^{-}(\mathtt{i}))\). Since \(f\) is continuous by Lemma 2.1 and \(\Gamma^{\mathbb{N}}\) is compact, it suffices to show that \(f(\mathtt{i})\neq 0\) for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\).
By the assumption, \(\{A_{i}\}_{i\in\Gamma}\) has a strongly invariant multicone \(\mathcal{C}\). By Lemma 2.2, \(\theta(\mathtt{i})\in\operatorname{int}(\mathcal{C})\) for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\). On the other hand, \(A_{i}^{-1}(\mathbb{RP}^{1}\setminus\mathcal{C})\subseteq\mathbb{RP}^{1}\setminus \mathcal{C}\) for every \(i\in\Gamma\), whence \(\theta^{-}(\mathtt{i})\in\operatorname{cl}(\mathbb{RP}^{1}\setminus\mathcal{C})\) for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\).
In particular, \(\theta(\mathtt{i})\neq\theta^{-}(\mathtt{i})\) and the result follows.
For a matrix \(A\), let \(v_{i}(A)\) denote the eigenspace associated to \(\lambda_{i}(A)\) for \(i=1,2\). When \(A\) is hyperbolic, \(v_{i}(A)\in\mathbb{RP}^{1}\) for \(i=1,2\). The following observation follows immediately from the eigenvalue decomposition.
**Lemma 2.4**.: _For a hyperbolic matrix \(A\), \(\lim_{n\to\infty}\theta(A^{n})=v_{2}(A)\)._
### The Furstenberg measures
For any probability vector \((p_{i})_{i\in\Gamma}\) and Bernoulli measure \(\bar{\mu}:=p^{\mathbb{N}}\) on \(\Gamma^{\mathbb{N}}\), there are associated measures \(\mu_{F}\) and \(\mu_{F}^{*}\) on \(\mathbb{RP}^{1}\), called the _Furstenberg measures_, which are given by
\[\mu_{F}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\delta_{\theta(A_{i_{k}}^{-1}\ldots A_{i_{0}}^{-1})},\]
\[\mu_{F}^{*}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\delta_{\theta(A_{i_{k}}^{*}\ldots A_{i_{0}}^{*})}\]
for \(\bar{\mu}\)-almost every \(\mathtt{i}\). The measures \(\mu_{F}\) and \(\mu_{F}^{*}\) are supported on the sets \(\theta(\Gamma^{\mathbb{N}})\) and \(\theta^{*}(\Gamma^{\mathbb{N}})=\theta(\Gamma^{\mathbb{N}})^{\perp}\), respectively. The product measures \(\bar{\mu}\times\mu_{F}\) and \(\bar{\mu}\times\mu_{F}^{*}\) are
invariant and ergodic under the maps
\[M:(\mathtt{i},\theta) \mapsto(\sigma\mathtt{i},A_{i_{0}}^{-1}\theta),\] \[M_{*}:(\mathtt{i},\theta) \mapsto(\sigma\mathtt{i},A_{i_{0}}^{*}\theta),\]
respectively. When \(\Phi\) satisfies the domination condition, the measures \(\bar{\mu}\times\mu_{F}\) and \(\bar{\mu}\times\mu_{F}^{*}\) are images of the Bernoulli measure \(p^{\mathbb{Z}}\) on \(\Gamma^{\mathbb{Z}}\) through the factor maps \(\mathtt{i}\mapsto(\mathtt{i}^{+},\theta^{-}(\mathtt{i}^{-}))\) and \(\mathtt{i}\mapsto(\mathtt{i}^{+},\theta^{*}(\mathtt{i}^{-}))\), which easily implies the existence, invariance and ergodicity of \(\bar{\mu}\times\mu_{F}\) and \(\bar{\mu}\times\mu_{F}^{*}\). These measures also exist, with the prescribed properties, without the domination assumption; this is a classical result of Furstenberg. When \(\Phi\) is irreducible, it is known that the measures \(\mu_{F}\) and \(\mu_{F}^{*}\) are non-atomic.
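To make the Birkhoff averages above concrete, here is a small numerical sketch; it is illustrative only, with placeholder matrices and weights of our choosing (any dominated, irreducible pair would do).

```python
import numpy as np

# Illustrative sketch (placeholder data, not from the paper): the empirical
# distribution of theta(A_{i_n}^{-1} ... A_{i_0}^{-1}) along a p-random
# word i approximates the Furstenberg measure mu_F on RP^1.
A = [np.array([[0.6, 0.3], [0.1, 0.2]]),
     np.array([[0.5, 0.2], [0.2, 0.4]])]
p = [0.5, 0.5]

def theta_of(M):
    """Angle in [0, pi) of the longer semiaxis of the ellipse M(B(0,1))."""
    U, _, _ = np.linalg.svd(M)
    return np.arctan2(U[1, 0], U[0, 0]) % np.pi

rng = np.random.default_rng(1)
P, angles = np.eye(2), []
for _ in range(5000):
    i = rng.choice(len(A), p=p)
    P = np.linalg.inv(A[i]) @ P      # A_{i_n}^{-1} ... A_{i_0}^{-1}
    P /= np.linalg.norm(P)           # renormalise to avoid overflow
    angles.append(theta_of(P))
# The histogram of `angles` approximates mu_F.
print(np.histogram(angles, bins=8, range=(0.0, np.pi))[0])
```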
### Self-affine measures
Recall that \(K\) denotes the attractor of the system of affine contractions \(\Phi\). Let \(\Pi:\Gamma^{\mathbb{N}}\to K\) denote the surjection
\[\mathtt{i}\mapsto\lim_{k\to\infty}\varphi_{\mathtt{i}|_{k}}(0)\]
which we call the natural projection. Fix a probability vector \(p=(p_{i})_{i\in\Gamma}\) and let \(\bar{\mu}=p^{\mathbb{N}}\) denote the Bernoulli measure on \(\Gamma^{\mathbb{N}}\) with marginal \(p\). The measure \(\mu:=\Pi\bar{\mu}=\bar{\mu}\circ\Pi^{-1}\) is called the _self-affine measure_ associated to \(p\), and is well-known to be a Radon measure supported on \(K\) that satisfies
\[\mu=\sum_{i\in\Gamma}p_{i}\cdot\varphi_{i}\mu.\]
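Concretely, a self-affine measure can be sampled with the classical "chaos game": by Elton's ergodic theorem, the empirical distribution of the random orbit below converges weakly to \(\mu\) almost surely. The maps and weights in this sketch are placeholders of our choosing, not taken from the paper.

```python
import numpy as np

# A minimal "chaos game" sketch (illustrative, not from the paper): the
# points x_n = phi_{i_n}(x_{n-1}), with i_n ~ p i.i.d., equidistribute
# almost surely according to mu = sum_i p_i . phi_i mu.
A = [np.array([[0.5, 0.0], [0.1, 0.3]]),
     np.array([[0.3, 0.1], [0.0, 0.5]])]
a = [np.array([0.0, 0.0]), np.array([0.5, 0.4])]
p = [0.5, 0.5]

rng = np.random.default_rng(2)
x = np.zeros(2)
samples = []
for n in range(10_000):
    i = rng.choice(len(A), p=p)
    x = A[i] @ x + a[i]              # x_n = phi_{i_n}(x_{n-1})
    if n > 100:                      # discard a burn-in prefix
        samples.append(x.copy())
samples = np.array(samples)          # empirical approximation of mu
print(samples.mean(axis=0))
```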
Throughout the paper, we will use \(\dim\) to denote the Hausdorff dimension of both sets and measures. The following strong projection theorem for self-affine measures is due to Bárány, Hochman and Rapaport [2]. For \(\theta\in\mathbb{RP}^{1}\), let \(\pi_{\theta}:\mathbb{R}^{2}\to\theta\) denote the orthogonal projection onto the line \(\theta\).
**Theorem 2.5** ([2], Theorem 7.1).: _Let \(\mu\) be a self-affine measure associated to a totally irreducible, hyperbolic IFS \(\Phi\) with the strong separation condition. Then for any orthogonal projection \(\pi\),_
\[\dim\pi\mu=\min\{1,\dim\mu\}.\]
We remark that the result holds also without the hyperbolicity assumption; this is an earlier result due to Hochman and Shmerkin [16].
It follows from the Ledrappier-Young formula due to Bárány [1] that planar self-affine measures satisfy dimension conservation in directions typical for the Furstenberg measure \(\mu_{F}\). The Ledrappier-Young formula was shortly after generalized to higher dimensions by Bárány and Käenmäki [3]. The application to dimension conservation was recently generalized by Feng [10] to higher dimensions and to more general self-affine measures. Let \(\mu=\int\mu_{x}^{\theta}\,d\pi_{\theta}\mu(x)\) denote the disintegration of \(\mu\) with respect to the orthogonal projection \(\pi_{\theta}\).
**Theorem 2.6** (Corollary of [1], Theorem 2.7).: _Let \(\mu\) be a self-affine measure associated to an affine system of contractions \(\Phi\) with the strong separation and domination
conditions, and \(\mu_{F}\) the associated Furstenberg measure. Then for \(\mu_{F}\)-a.e. \(\theta\) and \(\pi_{\theta}\mu\)-a.e. \(x\), the measure \(\mu_{x}^{\theta}\) is exact dimensional and_
\[\dim\mu=\dim\pi_{\theta^{\perp}}\mu+\dim\mu_{x}^{\theta}.\]
### Flows and eigenvalues
Let \(I\) be either \(\mathbb{R}\) or \([0,+\infty)\), and let \(\{T_{s}\}_{s\in I}\) be a family of measurable functions on a measure space \((X,\mu)\) with the property that \(T_{s}\circ T_{t}=T_{s+t}\) for each \(s,t\in I\). Recall that a real number \(c\in\mathbb{R}\) is an _eigenvalue_ of the flow \((X,T_{s},\mu)\) if there exists a measurable function \(f:X\to\mathbb{C}\) such that \(f\circ T_{s}=e(cs)f\) almost everywhere, for every \(s\in I\), where we write \(e(x):=\exp(2\pi ix)\). If the flow is measure-preserving, i.e. \(T_{s}\mu=\mu\) for every \(s\), then it is not difficult to see that any eigenfunction \(f\) satisfies \(|f|\equiv 1\). We now record some standard properties of eigenvalues.
**Lemma 2.7** (Special case of Lemma 3.11, [13]).: _Let \((X,T_{s},\mu)\) be an ergodic measure-preserving flow, and let \(c\in\mathbb{R}\). Then the discrete-time dynamical system \((X,T_{c},\mu)\) is ergodic if and only if \(c^{-1}\) is not an eigenvalue of \((X,T_{s},\mu)\)._
**Lemma 2.8**.: _A measure-preserving flow \((X,T_{s},\mu)\) on a standard Borel space \(X\) has at most countably many eigenvalues._
Proof.: It is not difficult to see that eigenfunctions for different eigenvalues are orthogonal. Since the space \(L^{2}(X)\) is separable, any collection of orthogonal functions has to be countable.
The proof of the following result for the product of discrete-time dynamical systems can be found in many textbooks. Most proofs utilize the spectral theorem, and applying the spectral theorem for one-parameter families of unitary operators, the same proofs go through for flows as well.
**Proposition 2.9**.: _The product of two ergodic flows is ergodic if and only if the flows have no common eigenvalues._
Let \((\Gamma^{\mathbb{N}},\sigma)\) be a shift space. Given a function \(f:\Gamma^{\mathbb{N}}\to(0,+\infty)\), we may build a flow from \((\Gamma^{\mathbb{N}},\sigma)\) by "flowing up" from a point \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) until we reach the time \(f(\mathtt{i})\), then switch to the point \(\sigma\mathtt{i}\) and continue flowing until the time \(f(\sigma\mathtt{i})\), and so on. Formally, we let \(Z=(\Gamma^{\mathbb{N}}\times\mathbb{R})/\sim\) equipped with the quotient topology, where \(\sim\) denotes the equivalence relation generated by \((\mathtt{i},f(\mathtt{i}))\sim(\sigma\mathtt{i},0)\) for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\). Denoting each equivalence class \([(\mathtt{i},t)]\) by the unique representative \((\mathtt{i},t)\) with \(0\leq t<f(\mathtt{i})\), we may write
\[Z=\{(\mathtt{i},t):\ \mathtt{i}\in\Gamma^{\mathbb{N}},\ 0\leq t<f(\mathtt{i})\}.\]
Let \(T_{s}:(\mathtt{i},t)\mapsto(\mathtt{i},t+s)\) for every \(s\geq 0\). If there is a \(\sigma\)-invariant measure \(\mu\) on \(\Gamma^{\mathbb{N}}\) for which \(f\) is integrable, a natural \(T_{s}\)-invariant measure on \(Z\) is given by \(\lambda:=(\mu\times\mathcal{L})_{Z}\). The triple \((Z,T_{s},\lambda)\) is called the _suspension_ of \((\Gamma^{\mathbb{N}},\sigma,\mu)\) under the roof function \(f\). It is well-known that if \((\Gamma^{\mathbb{N}},\sigma,\mu)\) is ergodic, then so is \((Z,T_{s},\lambda)\).
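The time-\(s\) maps admit a simple algorithmic description: flow upwards, and whenever the roof is hit, shift the base point and carry over the leftover time. The sketch below is an informal illustration of ours (finite word prefixes and a placeholder locally constant roof), not part of the paper.

```python
# Informal sketch (not from the paper) of the time-s map T_s on the
# suspension: add s to the height, and whenever the roof f is reached,
# shift the base point and keep the leftover time. Words are truncated
# to finite prefixes; `roof` is a placeholder locally constant function.
def roof(word):                      # f(i), here depending on i_0 only
    return 1.0 if word[0] == 0 else 1.5

def T(word, t, s):
    """Return the representative (sigma^m i, t') of T_s(i, t)."""
    t += s
    while t >= roof(word):
        t -= roof(word)
        word = word[1:]              # switch to sigma(i)
    return word, t

word = (0, 1, 1, 0, 1, 0, 0, 1, 1, 0)
print(T(word, 0.2, 3.7))             # here the orbit lands three shifts deep
```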
A special property of regular enough suspension flows is that to any eigenvalue corresponds a _continuous_ eigenfunction.
**Proposition 2.10** ([21], Proposition 6.2).: _Let \((Z,T_{s},\lambda)\) be the suspension of a shift space \((\Gamma^{\mathbb{N}},\sigma,\mu)\) under a locally Hölder continuous roof function, where \(\mu\) is the equilibrium state for a locally Hölder continuous potential on \(\Gamma^{\mathbb{N}}\). Then a number \(\alpha\in\mathbb{R}\) is an eigenvalue of \((Z,T_{s},\lambda)\) for a continuous eigenfunction if and only if it is an eigenvalue for a measurable eigenfunction._
It is well-known that Bernoulli measures on \(\Gamma^{\mathbb{N}}\) are equilibrium states for locally constant potentials.
**Lemma 2.11**.: _Let \(f:\Gamma^{\mathbb{Z}}\to(0,+\infty)\) be a continuous function such that \(f(\mathtt{i})=f(\mathtt{j})\) whenever \(\mathtt{i}^{+}=\mathtt{j}^{+}\), and let \(f^{+}:\Gamma^{\mathbb{N}}\to(0,+\infty)\) denote the function given by \(f^{+}(\mathtt{i})=f(\mathtt{j})\) for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) and \(\mathtt{j}\in\Gamma^{\mathbb{Z}}\) such that \(\mathtt{j}^{+}=\mathtt{i}\)._
_Let \(\mu\) be a shift-invariant ergodic measure on \(\Gamma^{\mathbb{Z}}\) for which \(f\) is integrable, and write \(\mu^{+}\) for the projection onto the positive coordinates. Let \((Z,T_{s},\lambda)=:\mathcal{Z}\) denote the suspension of \((\Gamma^{\mathbb{Z}},\sigma,\mu)\) over \(f\), and \((Z^{+},T_{s},\lambda^{+})=:\mathcal{Z}^{+}\) the suspension of \((\Gamma^{\mathbb{N}},\sigma,\mu^{+})\) over \(f^{+}\)._
_The flows \(\mathcal{Z}\) and \(\mathcal{Z}^{+}\) have the same eigenvalues._
Proof.: Since \(\mathcal{Z}\) factors onto \(\mathcal{Z}^{+}\) through \((\mathtt{i},t)\mapsto(\mathtt{i}^{+},t)\), it is not difficult to see that the eigenvalues of \(\mathcal{Z}^{+}\) are also eigenvalues of \(\mathcal{Z}\).
For the other direction, suppose that \(\alpha\) is not an eigenvalue of \(\mathcal{Z}^{+}\). Then by Lemma 2.7, the discrete-time system \((Z^{+},T_{\alpha^{-1}},\lambda^{+})\) is ergodic. Let \(U\subseteq Z\) be an open set of the form \([\mathtt{i}]_{m}^{n}\times I\) for some \(\mathtt{i}\in\Gamma^{\mathbb{Z}}\), \(m\leq n\in\mathbb{Z}\) and an interval \(I\subseteq\mathbb{R}\). Since any continuous function on \(Z\) can be approximated arbitrarily well (in \(L^{1}\)) by simple functions on this kind of sets, in order to show that \(\lambda\) is ergodic under \(T_{1/\alpha}\), it suffices to show that
\[\lim_{n\to\infty}\frac{1}{n}\#\{0\leq k\leq n:\ T_{k/\alpha}(\mathtt{i},t)\in U \}=\lambda(U)\]
for a.e. \((\mathtt{i},t)\).
Let \(\ell\in\mathbb{N}\) be large enough so that for every \((\mathtt{i},t)\in T_{-\ell/\alpha}U\), also \((\mathtt{j},t)\in T_{-\ell/\alpha}U\) whenever \(\mathtt{j}^{+}=\mathtt{i}^{+}\). Write \((T_{-\ell/\alpha}U)^{+}\) for the projection of \(T_{-\ell/\alpha}U\) onto \(Z^{+}\), and note that \(T_{-\ell/\alpha}U\) equals the embedding of \((T_{-\ell/\alpha}U)^{+}\) to \(Z\).
Let \(A^{+}\) be the set of full \(\lambda^{+}\)-measure such that for each \((\mathtt{j},t)\in A^{+}\), we have
\[\lim_{n\to\infty}\frac{1}{n}\#\{0\leq k\leq n:\ T_{k/\alpha}(\mathtt{j},t)\in( T_{-\ell/\alpha}U)^{+}\}=\lambda^{+}((T_{-\ell/\alpha}U)^{+})=\lambda(T_{- \ell/\alpha}U)=\lambda(U).\]
by Birkhoff's ergodic theorem. The second-to-last equality follows from the choice of \(\ell\), and the last follows from \(T_{t}\)-invariance of \(\lambda\). Now, if \(A\) is the embedding of \(A^{+}\) to \(Z\), then \(T_{\ell/\alpha}A\) has full \(\lambda\)-measure, and for each \((\mathtt{i},t)\in T_{\ell/\alpha}A\),
\[\lim_{n\to\infty}\frac{1}{n}\#\{0\leq k\leq n:\ T_{k/\alpha}( \mathtt{i},t)\in U\}\] \[= \lim_{n\to\infty}\frac{1}{n}\#\{0\leq k\leq n:\ T_{(k-\ell)/ \alpha}(\mathtt{i},t)\in T_{-\ell/\alpha}U\}\] \[= \lambda(U).\]
Therefore \(\lambda\) is ergodic under \(T_{1/\alpha}\) and by Lemma 2.7, \(\alpha\) is not an eigenvalue of \(\mathcal{Z}\).
### Shannon entropy
For a probability measure \(\mu\) on \(\mathbb{R}^{d}\) and any measurable partitions \(\mathcal{E}\) and \(\mathcal{F}\) of \(\mathbb{R}^{d}\), we write \(H(\mu,\mathcal{E})=-\sum_{E\in\mathcal{E}}\mu(E)\log\mu(E)\) for the _(Shannon) entropy_ of \(\mu\) with respect to \(\mathcal{E}\), and \(H(\mu,\mathcal{E}|\mathcal{F})=\sum_{F\in\mathcal{F}}\mu(F)H(\mu_{F},\mathcal{E})\) for the _conditional entropy_ of \(\mu\) with respect to \(\mathcal{E}\), given \(\mathcal{F}\).
Let \(\mathcal{D}_{n}=\mathcal{D}_{n}(\mathbb{R}^{d})\) denote the partition of \(\mathbb{R}^{d}\) into dyadic cubes of side-length \(2^{-n}\) which we call the "level-\(n\)" dyadic partition. When \(D\in\mathcal{D}_{n}\) and \(\mu\) is a measure with \(\mu(D)>0\), let \(\mu^{D}:=F\mu_{D}\), where \(F\) is the homothety sending \(D\) onto \([-1,1)^{d}\). For \(x\in\mathbb{R}^{d}\), write \(D_{n}(x)\) for the element of \(\mathcal{D}_{n}\) that contains \(x\). For entropy with respect to the dyadic partition we use the short-hand notation \(H_{n}(\mu)=H(\mu,\mathcal{D}_{n})=-\sum_{D\in\mathcal{D}_{n}}\mu(D)\log\mu(D)\). In the following, we record some elementary properties of entropy.
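For intuition, the dyadic entropies are easy to estimate empirically. The following sketch is ours, not from the paper: it computes \(H_{n}\) for an empirical measure on \([0,1)^{2}\) by binning sample points into the \(2^{-n}\)-grid; for Lebesgue-typical samples, the normalised entropy \(\frac{1}{n\log 2}H_{n}\) should be close to \(2\).

```python
import numpy as np

# Illustrative sketch (not from the paper): the level-n dyadic entropy
# H_n(mu) of an empirical measure on [0,1)^2, computed by binning the
# sample points into the 2^{-n}-grid D_n and summing -mu(D) log mu(D).
def dyadic_entropy(points, n):
    cells = np.floor(points * 2**n).astype(int)       # index of D_n(x)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    masses = counts / len(points)                     # mu(D) for D in D_n
    return -(masses * np.log(masses)).sum()

rng = np.random.default_rng(3)
pts = rng.random((50_000, 2))        # samples of a measure on [0,1)^2
for n in (2, 4, 6):
    # For Lebesgue measure, H_n(mu)/(n log 2) should be close to 2.
    print(n, dyadic_entropy(pts, n) / (n * np.log(2)))
```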
**Lemma 2.12**.: _Let \(\mu\) be a probability meausure on \(\mathbb{R}^{d}\), and let \(\mathcal{E}\) and \(\mathcal{F}\) be partitions of \(\mathbb{R}^{d}\) such that each element of \(\mathcal{E}\) intersects at most \(k\) elements of \(\mathcal{F}\) and vice versa. Then_
\[|H(\mu,\mathcal{E})-H(\mu,\mathcal{F})|\leq O(k).\]
**Lemma 2.13** (Chain rule).: _Let \(\mu\) be a probability measure on \(\mathbb{R}^{2}\), and let \(\mathcal{E}\), \(\mathcal{F}\) be partitions of \(\mathbb{R}^{2}\) such that \(\mathcal{E}\) refines \(\mathcal{F}\). Then_
\[H(\mu,\mathcal{E})=H(\mu,\mathcal{E}|\mathcal{F})+H(\mu,\mathcal{F}).\]
Let \(\mathcal{D}_{n}(\theta)\) denote the level-\(n\) dyadic partition of \(\theta\in\mathbb{RP}^{1}\). The following is a simple application of the chain rule; let \(R_{\theta}\) denote the "shortest" rotation which takes \(\theta\in\mathbb{RP}^{1}\) onto the \(x\)-axis, with \(R_{y\text{-axis}}\) given by the clockwise rotation.
**Lemma 2.14**.: _Let \(\mu\) be a probability measure on \(\mathbb{R}^{2}\), and let \(\theta\in\mathbb{RP}^{1}\). Then, denoting \(H_{n}(\mu|\pi_{\theta}):=H(\mu,R_{\theta^{\perp}}^{-1}\mathcal{D}_{n}(\mathbb{R}^{2})|\pi_{\theta}^{-1}\mathcal{D}_{n}(\theta))\), we have_
\[|H_{n}(\mu)-(H_{n}(\pi_{\theta}\mu)+H_{n}(\mu|\pi_{\theta}))|\leq O(1)\]
_for every \(n\in\mathbb{N}\)._
**Lemma 2.15** (Concavity and almost-convexity).: _For any probability measures \(\mu_{1},\dots,\mu_{k}\) and a probability vector \((p_{1},\dots,p_{k})\),_
\[\sum_{i=1}^{k}p_{i}H_{n}(\mu_{i})\leq H_{n}\Big(\sum_{i=1}^{k}p_{i}\mu_{i}\Big)\leq\sum_{i=1}^{k}p_{i}H_{n}(\mu_{i})-\sum_{i=1}^{k}p_{i}\log p_{i}.\]
Taking a convolution can decrease entropy at most by an additive constant that depends only on the dimension of the ambient space.
**Lemma 2.16**.: _For any probability measures \(\mu\) and \(\nu\) and any \(n\),_
\[H_{n}(\mu*\nu)\geq H_{n}(\mu)-O(1).\]
Proof.: The claim follows by approximating \(\nu\) in weak-\({}^{*}\) sense with a convex combination of Dirac measures, and then applying Lemmas 2.15 and 2.12.
**Lemma 2.17**.: _Let \(\mu\) and \(\nu\) be probability measures on \([0,1]\), and suppose that \(\mu\) is non-atomic. Then for every \(r>0\) and \(\varepsilon>0\), there exists \(N_{0}\in\mathbb{N}\) such that for any interval \(I\) with \(\mu(I)\geq r\), we have_
\[\frac{1}{N}H_{N}(\mu_{I}*\nu)\geq\dim(\mu*\nu)-\varepsilon\]
_for every \(N\geq N_{0}\)._
Proof.: Let \(K_{r}=\{(a,b)\in\mathbb{R}^{2}:\ \mu([a,b])\geq r\}\). We will show that for every \(r>0\) such that \(K_{r}\) is nonempty, \(\frac{1}{N}H_{N}(\mu_{[a,b]}*\nu)\) is continuous in \((a,b)\in K_{r}\) (in the subspace topology) and the continuity is uniform in \(N\).
Let \(r,\varepsilon>0\) be given, and let \(\delta>0\) be small with respect to \(\varepsilon\) and \(r\). Fix \((a_{0},b_{0})\in K_{r}\) and let \(I_{\delta}\) be the largest interval contained in \([a,b]\) for every \((a,b)\in B((a_{0},b_{0}),\delta)\). If \(\delta\) is small enough, we have \(\frac{\mu(I_{\delta})}{\mu([a,b])}\in[1-\varepsilon,1+\varepsilon]\) for every \((a,b)\in B((a_{0},b_{0}),\delta)\), by non-atomicity of \(\mu\) and the assumption \(\mu([a_{0},b_{0}])\geq r\).
Now, for every \((a,b),(a^{\prime},b^{\prime})\in B((a_{0},b_{0}),\delta)\), applying bilinearity of convolution and Lemma 2.15 in the first and second-to-last inequalities, we have
\[H_{N}(\mu_{[a,b]}*\nu)\] \[\leq \frac{\mu(I_{\delta})}{\mu([a,b])}H_{N}(\mu_{I_{\delta}}*\nu)+ \left(1-\frac{\mu(I_{\delta})}{\mu([a,b])}\right)H_{N}(\mu_{[a,b]\setminus I_ {\delta}}*\nu)+\varepsilon\] \[\leq (1+\varepsilon)\frac{\mu(I_{\delta})}{\mu([a^{\prime},b^{\prime} ])}H_{N}(\mu_{I_{\delta}}*\nu)+2N\varepsilon\] \[\leq (1+\varepsilon)H_{N}(\mu_{[a^{\prime},b^{\prime}]}*\nu)+2N\varepsilon\] \[\leq H_{N}(\mu_{[a^{\prime},b^{\prime}]}*\nu)+3N\varepsilon.\]
Thus \((a,b)\mapsto\frac{1}{N}H_{N}(\mu_{[a,b]}*\nu)\) is continuous in \(K_{r}\), uniformly in \(N\). On the other hand, for every \((a,b)\in K_{r}\),
\[\dim(\mu*\nu)\leq\dim(\mu_{[a,b]}*\nu)\leq\liminf_{N\to\infty}\frac{1}{N}H_{N }(\mu_{[a,b]}*\nu)\]
by Fatou's lemma. By uniform continuity, this convergence is uniform in \(K_{r}\), which is what we wanted to prove.
### Magnifying measures
It turns out that the magnifications of a strictly self-affine measure along a conformal partition have a very special structure. This structure makes it much easier to analyse the convolutions of magnifications instead of the convolution of self-affine measures directly, which in turn is put to use in the proof of Theorem 1.1 through the local entropy averages. In the following, we recap the standard terminology of magnifying measures, and some statistical properties regarding sequences of magnifications.
For \(x\in\mathbb{R}^{d}\) and \(r\geq 0\), we let \(T_{x}:\ y\mapsto y-x\) denote the translation taking \(x\) to the origin, and \(S_{r}\) the exponential "magnification" operation on measures, given for a measure \(\mu\) whose support contains the origin by
\[S_{r}\mu(A)=\mu(B(0,2^{-r}))^{-1}\mu(2^{-r}A\cap B(0,2^{-r}))\]
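Empirically, the magnification has a direct sample-based analogue. The sketch below is ours and purely illustrative: it approximates \(S_{r}T_{x}\mu\) from samples of \(\mu\) by translating by \(x\), keeping the points of \(B(0,2^{-r})\), and rescaling by \(2^{r}\); the surviving fraction of points plays the role of the normalising factor \(\mu(B(x,2^{-r}))\).

```python
import numpy as np

# Rough empirical sketch (not from the paper) of the magnification
# S_r T_x mu: translate x to the origin, keep the points of B(0, 2^{-r}),
# and rescale them by 2^r onto B(0,1).
def magnify(points, x, r):
    shifted = points - x                              # T_x
    keep = np.linalg.norm(shifted, axis=1) <= 2.0**(-r)
    return shifted[keep] * 2.0**r                     # rescale onto B(0,1)

rng = np.random.default_rng(4)
pts = rng.random((100_000, 2))                        # samples of mu
x = pts[0]                                            # a mu-typical point
for r in (1.0, 2.0, 3.0):
    print(r, len(magnify(pts, x, r)))                 # scenery at scale 2^{-r}
```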
for every Borel \(A\subseteq B(0,1)\). The following property of \(S_{r}\)-ergodic measures appears also in [17].
**Lemma 2.18**.: _Let \(\mu\) be a Radon measure on \(\mathbb{R}^{2}\), and let \(P\) be a measure on \(\mathcal{P}(\mathbb{R}^{2})\) which is invariant and ergodic under \((S_{r})_{r\geq 0}\). Then for any \(\theta\in\mathbb{RP}^{1}\) such that \(\pi_{\theta}\mu\) is exact dimensional, we have_
\[\dim(\pi_{\theta}\nu*\pi_{\theta}\mu)=\min\{1,\dim\pi_{\theta}\nu+\dim\pi_{\theta}\mu\}\]
_for \(P\)-almost every \(\nu\)._
Proof.: For any probability measure \(\nu\), it is not difficult to see that \(\pi_{\theta}S_{r}\nu\ll S_{r}\pi_{\theta}\nu\), hence \(\pi_{\theta}\mu*\pi_{\theta}S_{r}\nu\ll\pi_{\theta}\mu*S_{r}\pi_{\theta}\nu\) and \(\dim(\pi_{\theta}\mu*\pi_{\theta}S_{r}\nu)\geq\dim(\pi_{\theta}\mu*S_{r}\pi_ {\theta}\nu)\). Since the set of functions \(\{\mathbb{R}^{2}\to\mathbb{R},\ (x,y)\mapsto x+e^{r}y:\ r\geq 0\}\) is a parametrization of orthogonal projections onto lines with slope in \([1,\pi/2)\), the classical projection theorem of Marstrand-Mattila asserts that
\[\dim(\pi_{\theta}\mu*S_{r}\pi_{\theta}\nu)=\min\{1,\dim\pi_{\theta}\mu+\dim\pi_ {\theta}\nu\}\]
for \(\mathcal{L}\)-a.e. \(r\). Since \(\dim\pi_{\theta}S_{r}\nu\geq\dim\pi_{\theta}S_{t}\nu\) for \(r\geq t\) and \(P\)-a.e. \(\nu\), it follows from \(S_{r}\)-ergodicity of \(P\) that \(\dim\pi_{\theta}\nu=:C\) is constant \(P\)-a.e.
Thus, using the \(S_{r}\)-invariance of \(P\),
\[\int\dim(\pi_{\theta}\mu*\pi_{\theta}\nu)\,dP(\nu) =\int_{0}^{1}\int\dim(\pi_{\theta}\mu*\pi_{\theta}\nu)\,dS_{r}P( \nu)\,dr\] \[=\int_{0}^{1}\int\dim(\pi_{\theta}\mu*\pi_{\theta}S_{r}\nu)\,dP( \nu)\,dr\] \[\geq\int\int_{0}^{1}\dim(\pi_{\theta}\mu*S_{r}\pi_{\theta}\nu)\, dr\,dP(\nu)\] \[=\int\min\{1,\dim\pi_{\theta}\mu+\dim\pi_{\theta}\nu\}\,dP(\nu)\] \[=\min\{1,\dim\pi_{\theta}\mu+C\}.\]
This implies that \(\dim(\pi_{\theta}\mu*\pi_{\theta}\nu)\geq\min\{1,\dim\pi_{\theta}\mu+\dim\pi_ {\theta}\nu\}\) for \(P\)-a.e. \(\nu\). The other direction is standard.
There is a natural way in which measures on \(\mathbb{R}^{d}\) give rise to measures on \(\mathcal{P}(\mathbb{R}^{d})\). Namely, consider the sequence \((S_{r}T_{x}\mu)_{r\geq 0}\), called the _scenery_ of \(\mu\) at \(x\). The statistical properties of this sequence are described by the accumulation points of the sequence \(\left(\frac{1}{t}\int_{0}^{t}\delta_{S_{r}T_{x}\mu}\,dr\right)_{t\geq 1}\), called the _scenery flow_ of \(\mu\) at \(x\). The accumulation points of the scenery flow in the weak-\({}^{*}\) topology are measures on \(\mathcal{P}(\mathbb{R}^{d})\), and are called _tangent distributions_ of \(\mu\) at \(x\). The measure \(\mu\) is called _uniformly scaling_ if the scenery flow converges almost everywhere to a unique tangent distribution \(P\). It is then said that \(\mu\) generates \(P\).
A remarkable result of Hochman [12] is that tangent distributions at almost every point are _fractal distributions_, objects which enjoy strong spatial invariance properties. We will give the definition here for completeness, although we will not use it directly.
**Definition 2.19**.: An \(S_{r}\)-invariant measure \(P\) on \(\mathcal{P}(\mathbb{R}^{d})\) is called a _fractal distribution_ if for any measurable \(A\), \(P(A)=1\) if and only if for every \(r>0\), \(P\)-almost every \(\nu\) satisfies
\[S_{r}T_{x}\nu\in A\]
for \(\nu\)-almost every \(x\) with \(B(x,e^{-r})\subseteq B(0,1)\).
**Theorem 2.20** (Theorem 1.7, [12]).: _Let \(\mu\) be a Radon measure on \(\mathbb{R}^{d}\). Then for \(\mu\)-almost every \(x\), every tangent distribution at \(x\) is a fractal distribution._
**Lemma 2.21**.: _Let \(P\) be a fractal distribution. Then for \(P\)-a.e. \(\nu\), any line \(L\) with \(\nu(L)>0\) must contain the origin._
Proof.: We first show that \(P\)-a.e. atomic measure is the point mass at the origin. This is almost immediate from the results of [12].
For a contradiction, let \(A=\{\nu:\ \nu(\{x\})>0,\ x\neq 0\}\) and suppose that \(P(A)>0\). Let \(P^{\prime}\) be an ergodic component of \(P\) with \(P^{\prime}(A)>0\), and let \(\eta\in A\) be a uniformly scaling measure generating \(P^{\prime}\). Indeed, it is not difficult to verify from the definition of a fractal distribution that \(P^{\prime}\)-almost every measure is uniformly scaling. Let \(x\) be such that \(\eta(\{x\})>0\). Since \(\eta|_{\{x\}}\ll\eta\), also \(\eta|_{\{x\}}\) generates \(P^{\prime}\) by an application of the Lebesgue-Besicovitch differentiation theorem, whence \(P^{\prime}\) is supported on the point mass at the origin. In particular, \(P^{\prime}(A)=0\), a contradiction.
Now, to prove the statement of the lemma, suppose that there exists a set \(B\) with \(P(B)>0\) such that for every \(\nu\in B\), there exists a line \(L_{\nu}\) with \(\nu(L_{\nu})>0\) and \(0\not\in L_{\nu}\). Let \(P^{\prime}\) be an ergodic component of \(P\) with \(P^{\prime}(B)>0\), and let \(\eta\in B\) be a uniformly scaling measure generating \(P^{\prime}\). Now, since \(\eta(L_{\eta})>0\), also the measure \(\eta|_{L_{\eta}}\) generates \(P^{\prime}\). Let \(L\) denote the line \(L_{\eta}\) translated so that it contains the origin. Clearly, all tangent measures of \(\eta|_{L_{\eta}}\) are supported on \(L\), whence \(P^{\prime}\) is supported on measures which are supported on \(L\). Since any line not containing the origin intersects \(L\) in at most one point, such a line has \(P^{\prime}\)-almost surely zero measure by the above. Thus \(P^{\prime}(B)=0\) which is a contradiction.
We can also choose to magnify \(\mu\) along a discrete sequence of scales. For \(N\in\mathbb{N}\), we call the sequence \(\big{(}\frac{1}{nN}\sum_{k=1}^{n}\delta_{S_{kN}T_{x}\mu}\big{)}_{n\in\mathbb{N}}\) the \(N\)_-scenery process_ of \(\mu\) at \(x\). Similarly, weak-\({}^{*}\) accumulation points of this sequence we call \(N\)_-tangent distributions_. While these are a-priori different objects from the accumulation points of the continuous-time scenery flow, most properties that are typical for tangent distributions are also typical for \(N\)-tangent distributions.
**Lemma 2.22**.: _Let \(\mu\) be a Radon measure on \(\mathbb{R}^{d}\). For \(\mu\)-almost every \(x\), any \(N\in\mathbb{N}\) and any \(N\)-tangent distribution \(P\) of \(\mu\) at \(x\), the following holds: For \(P\)-almost every \(\nu\), any line \(L\) with \(\nu(L)>0\) that intersects the interior of \(B(0,1)\) must contain the origin._
Proof.: It follows from [12, Proposition 5.5] that for \(\mu\)-almost every \(x\), if \(P\) is an \(N\)-tangent distribution at \(x\), then the measure
\[Q:=\int\int_{0}^{N}\delta_{S_{r}\nu}\,dr\,dP(\nu)\]
is a fractal distribution. Therefore by Lemma 2.21, for \(P\)-a.e. \(\nu\) and \(\mathcal{L}\)-almost every \(r\in[0,N]\), any line \(L\) with \(S_{r}\nu(L)>0\) must contain the origin. Taking \(r\to 0\) along a countable sequence, we see that for \(P\)-almost every \(\nu\), any line \(L\) with \(\nu(L)>0\) that intersects the interior of \(B(0,1)\) must contain the origin.
### Conditional measures on lines
For a measure \(\mu\) on \(\mathbb{R}^{2}\) and \(\theta\in\mathbb{RP}^{1}\), let \(\mu=\int\mu_{x}^{\theta}\,d\pi_{\theta}\mu(x)\) denote the disintegration of \(\mu\) with respect to \(\pi_{\theta}\). It is well-known that for almost every \(x\in\theta\), the measure \(\mu_{x}^{\theta}\) is supported on the line \(x+\theta^{\perp}\), and that \(\mu_{x}^{\theta}\) is the limit of normalized restrictions of \(\mu\) on thinner and thinner tubes centered at \(x+\theta^{\perp}\). It will be useful for us to know that these tubes can be replaced by preimages of sets of relatively large measure.
**Lemma 2.23**.: _Let \(\mu\) be a measure on \(\mathbb{R}^{2}\), let \(\delta>0\) and \(\theta\in\mathbb{RP}^{1}\). For every \(r>0\), let_
\[\mathcal{I}(x,r,\delta)=\{I\subseteq B(x,r):\ \frac{\pi_{\theta}\mu(I)}{\pi_{ \theta}\mu(B(x,r))}\geq\delta\}.\]
_Then for \(\pi_{\theta}\mu\)-almost every \(x\), we have_
\[\lim_{r\to 0}\sup_{I\in\mathcal{I}(x,r,\delta)}d_{\mathrm{LP}}\left(\mu_{\pi_{ \theta}^{-1}(I)},\ \mu_{x}^{\theta}\right)=0.\]
Proof.: Let \(\varepsilon>0\), and for each \(n\in\mathbb{N}\) let \(E_{n}\subseteq\theta\) be a compact set given by Lusin's theorem with \(\pi_{\theta}\mu(E_{n})\geq 1-1/n\), on which the function \(y\mapsto\mu_{y}^{\theta}\) is continuous. Since \(\pi_{\theta}\mu(\bigcup_{n\in\mathbb{N}}E_{n})=1\), it suffices to prove the statement for almost every \(x\in E_{n}\), for every \(n\in\mathbb{N}\). Now, for almost every \(x\in E_{n}\), if \(B:=B(x,r)\) and \(I\in\mathcal{I}(x,r,\delta)\),
\[\frac{\pi_{\theta}\mu(I\cap E_{n})}{\pi_{\theta}\mu(I)} =1-\frac{\pi_{\theta}\mu(I\setminus E_{n})}{\pi_{\theta}\mu(I)}\] \[\geq 1-\frac{\pi_{\theta}\mu(B\setminus E_{n})}{\pi_{\theta}\mu(B)}\frac{\pi_{\theta}\mu(B)}{\pi_{\theta}\mu(I)}\] \[\geq 1-\frac{1}{\delta}\frac{\pi_{\theta}\mu(B\setminus E_{n})}{\pi_{\theta}\mu(B)}\] \[=1-o(1)/\delta\]
by an application of the Lebesgue-Besicovitch differentiation theorem; here and below, \(o(1)\) denotes a quantity tending to \(0\) as \(r\to 0\).
Therefore, for any Borel set \(A\subseteq\mathbb{R}^{2}\), \(I\in\mathcal{I}(x,r,\delta)\) and \(\varepsilon>0\),
\[\mu_{\pi_{\theta}^{-1}I}(A^{\varepsilon}) =\frac{1}{\pi_{\theta}\mu(I)}\mu(\pi_{\theta}^{-1}I\cap A^{ \varepsilon})\] \[=\frac{1}{\pi_{\theta}\mu(I)}\int_{I}\mu_{y}^{\theta}(A^{ \varepsilon})\,d\pi_{\theta}\mu(y)\] \[\geq\frac{1-o(1)/\delta}{\pi_{\theta}\mu(I\cap E_{n})}\int_{I\cap E_{n}}\mu_{y}^{\theta}(A^{\varepsilon})\,d\pi_{\theta}\mu(y)\] \[\geq(1-o(1)/\delta)(\mu_{x}^{\theta}(A)-\varepsilon)\]
if \(r\) is small enough, by continuity of \(y\mapsto\mu_{y}^{\theta}\). Similarly,
\[\mu_{\pi_{\theta}^{-1}I}(A)\leq\frac{1}{\pi_{\theta}\mu(I\cap E_{n})}\int_{I \cap E_{n}}\mu_{y}^{\theta}(A^{\varepsilon})\,d\pi_{\theta}\mu(y)+ o(1)/\delta\leq\mu_{x}^{\theta}(A^{\varepsilon})+o(1)/\delta+\varepsilon,\]
so \(d_{\mathrm{LP}}(\mu_{\pi_{\theta}^{-1}I},\ \mu_{x}^{\theta})\leq 2\varepsilon\) for small enough \(r\).
## 3. On local entropy averages
In this section, we recall the local entropy averages of [16] and introduce the different notions of magnifications of measures that we use. Let \(\mu\) and \(\nu\) be self-affine measures as in the statement of Theorem 1.1, and denote by \(\bar{\mu}\), \(\bar{\nu}\) the associated Bernoulli measures.
By Lemmas 2.1, 2.3 and basic geometry,
\[\|B_{\mathsf{j}|_{n}}\|\approx|\pi_{\theta(\mathsf{j})}B_{\mathsf{j}|_{n}}B(0, 1)|\approx|\pi_{\theta^{*}(\mathsf{j})}B_{\mathsf{j}|_{n}}B(0,1)|\]
for all \(\mathsf{j}\in\Lambda^{\mathbb{N}}\) and \(n\in\mathbb{N}\), where the constant of comparability depends only on the IFS \(\Psi\). In order to simplify notation, suppose that \(1/2\leq\|B_{\mathsf{j}|_{n}}\|^{-1}|\pi_{\theta^{*}(\mathsf{j})}B_{\mathsf{j}| _{n}}B(0,1)|\leq 2\) for every \(\mathsf{j}\) and \(n\).
For every \(\mathsf{i}\in\Gamma^{\mathbb{N}}\) and \(\mathsf{j}\in\Lambda^{\mathbb{N}}\), we define the "stopping times"
\[i_{k}=i_{k}(\mathsf{i}) =\min\{n\in\mathbb{N}:\ |\pi_{\theta^{*}(\mathsf{i})}A_{\mathsf{i}|_{n}}B (0,1)|\leq 2^{-kN}\},\] \[i_{k}=i_{k}(\mathsf{j}) =\min\{n\in\mathbb{N}:\ |\pi_{\theta^{*}(\mathsf{j})}B_{\mathsf{j}| _{n}}B(0,1)|\leq 2^{-kN}\}.\]
Although the stopping times on \(\Gamma^{\mathbb{N}}\) and \(\Lambda^{\mathbb{N}}\) are denoted by the same letter \(i_{k}\), the choice of domain will always be clear from the context: for example, \(\mathsf{i}|_{i_{k}}:=\mathsf{i}|_{i_{k}(\mathsf{i})}\) and \(\mathsf{j}|_{i_{k}}:=\mathsf{j}|_{i_{k}(\mathsf{j})}\). Then for every \(k\),
\[\|A_{\mathsf{i}|_{i_{k}}}\|\approx\|B_{\mathsf{j}|_{i_{k}}}\|\approx 2^{-kN}.\]
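To get a feel for the stopping times (a back-of-the-envelope illustration, not used later): in the degenerate case where every map of \(\Phi\) is a similarity of ratio \(2^{-c}\), we have \(|\pi_{\theta^{*}(\mathsf{i})}A_{\mathsf{i}|_{n}}B(0,1)|\asymp 2^{-cn}\), so
\[i_{k}\approx kN/c\qquad\text{and}\qquad i_{k+1}-i_{k}\approx N/c,\]
that is, consecutive stopping times advance by a bounded number of symbols. In the genuinely affine setting the increments are instead governed by the contraction of \(A_{\mathsf{i}|_{n}}\) along the direction \(\theta^{*}(\mathsf{i})\).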
We now define the different notions of scale-\(k\) magnifications of \(\mu\) and \(\nu\).
**Notation 3.1**.: For each \(\mathsf{i}\in\Gamma^{\mathbb{N}}\), \(\mathsf{j}\in\Lambda^{\mathbb{N}}\) and \(k\in\mathbb{N}\), let
\[\nu_{\mathsf{j}|_{i_{k}}} :=S_{kN}T_{\Pi(\sigma^{i_{k}}\mathsf{j})}\psi_{\mathsf{j}|_{i_{k}}}\nu,\] \[\mu_{\mathsf{i}|_{i_{k}}} :=S_{kN}T_{\Pi(\sigma^{i_{k}}\mathsf{i})}\varphi_{\mathsf{i}|_{i_{k }}}\mu,\] \[\mu^{\mathsf{i},k} :=\mu^{D_{kN}(\Pi(\mathsf{i}))}.\]
Note that \(\mu_{\mathtt{i}|_{i_{k}}}\) and \(\nu_{\mathtt{j}|_{i_{k}}}\) are measures supported on thin ellipses whose major axes have length comparable to \(1\) and are oriented in the directions \(\theta(A_{\mathtt{i}|_{i_{k}}})\) and \(\theta(B_{\mathtt{j}|_{i_{k}}})\), respectively. On the other hand, \(\mu^{\mathtt{i},k}\) is a measure supported on \([-1,1]^{2}\), the "dyadic magnification" of \(\mu\).
We require the following form of the local entropy averages of [16]:
**Theorem 3.2**.: _For any \(\varepsilon>0\) there exists \(N\in\mathbb{N}\) such that if_
\[\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\mu^{\mathtt{ i},k}*\nu_{\mathtt{j}|_{i_{k}}})\geq c\ \ \text{or}\ \ \liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\mu_{ \mathtt{i}|_{i_{k}}}*\nu_{\mathtt{j}|_{i_{k}}})\geq c\]
_for \(\bar{\mu}\times\bar{\nu}\)-a.e. \((\mathtt{i},\mathtt{j})\), then \(\dim(\mu*\nu)\geq c-\varepsilon\)._
The proof is essentially in [16], but we provide a sketch for the proof of the first half of the statement for the convenience of the reader. The second half goes through similarly. We begin by recalling some terminology of [16]. In the following, let \(X\) and \(Y\) be finite sets, let \(0<\rho<1\), and let \(d\) be a metric on \(X^{\mathbb{N}}\) such that \(d(\mathtt{x},\mathtt{x}^{\prime})\approx\rho^{|\mathtt{x}\wedge\mathtt{x}^{\prime}|}\). Following [16], we say that
* a map \(g:X^{\mathbb{N}}\to Y^{\mathbb{N}}\) is a _tree morphism_ if, for every \(n\in\mathbb{N}\) and length-\(n\) cylinder \([x_{1}\dots x_{n}]\subseteq X^{\mathbb{N}}\) there exists a length-\(n\) cylinder \([y_{1}\dots y_{n}]\subseteq Y^{\mathbb{N}}\) such that \(g([x_{1}\dots x_{n}])\subseteq[y_{1}\dots y_{n}]\),
* a map \(h:X^{\mathbb{N}}\to\mathbb{R}^{d}\) is _faithful_ if there exists a constant \(C\) such that for any \([x_{1}\dots x_{n}]\subseteq X^{\mathbb{N}}\), no point in \(h([x_{1}\dots x_{n}])\) is covered by more than \(C\) of the sets \(h([x_{1}\dots x_{n}x])\) for \(x\in X\), and \(h([x_{1}\dots x_{n}])\) contains a ball of radius \((C^{-1}\rho)^{n}\) and is contained in a ball of radius \((C\rho)^{n}\).
Proof of Theorem 3.2.: Let \(N\in\mathbb{N}\). As in [16], the idea is to lift \(\mu\times\nu\) to a measure \(\eta\) on the tree
\[\Sigma:=\{1,\dots,2^{N}\}^{\mathbb{N}}\times\Lambda^{\mathbb{N}},\]
and then find a tree morphism \(g_{N}:\Sigma\to Y^{\mathbb{N}}_{N}\) and a faithful map \(h_{N}:Y^{\mathbb{N}}_{N}\to\mathbb{R}^{2}\) such that \((h_{N}\circ g_{N})\eta=\mu*\nu\).
On \(\Sigma\), let the distance between pairs \((\mathtt{k},\mathtt{j})\) and \((\mathtt{k}^{\prime},\mathtt{j}^{\prime})\) be defined as the number
\[\max\{2^{-N\cdot\max\{n:\ \mathtt{k}|_{n}=\mathtt{k}^{\prime}|_{n}\}},2^{-N \cdot\max\{n:\ \mathtt{j}|_{i_{n}}=\mathtt{j}^{\prime}|_{i_{n}}\}}\}.\]
Then the topology on \(\Sigma\) is generated by sets of the form \([\mathtt{k}|_{n}]\times[\mathtt{j}|_{i_{n}}]\), \(\mathtt{k}\in\{1,\dots,2^{N}\}^{\mathbb{N}}\), \(\mathtt{j}\in\Lambda^{\mathbb{N}}\).
Write \(D_{2^{N}}(\mathbb{R}^{2})=\{D_{1},\dots,D_{2^{N}}\}\) and let \(F_{k}\) denote the map which sends \([-1,1]^{2}\) onto the closure of \(D_{k}\). Let \(\Pi_{1}\) denote the map \(\{1,\dots,2^{N}\}^{\mathbb{N}}\to[-1,1]^{2},\mathtt{k}\mapsto\lim_{n\to\infty}F_{\mathtt{ k}|_{n}}(0)\), and \(\Pi_{2}\) the map \(\Lambda^{\mathbb{N}}\to\mathbb{R}^{2},\mathtt{j}\mapsto\lim_{k\to\infty}\psi_{ \mathtt{j}|_{k}}(0)\). Using the maps \(\Pi_{1}\) and \(\Pi_{2}\), the measure \(\mu\times\nu\) can be lifted to a measure \(\eta\) on \(\Sigma\) (by possibly translating the dyadic partition so that the boundaries of dyadic squares have \(\mu\)-measure \(0\)). Let \(\Pi_{0}\) denote the map \(\Sigma\to\mathbb{R}^{2}\times\mathbb{R}^{2},(\mathtt{k},\mathtt{j})\mapsto( \Pi_{1}(\mathtt{k}),\Pi_{2}(\mathtt{j}))\) and \(*\) the map \(\mathbb{R}^{2}\times\mathbb{R}^{2}\to\mathbb{R}^{2},(x,y)\mapsto x+y\), so that \(\Pi_{0}\eta=\mu\times\nu\) and \((*\circ\Pi_{0})\eta=\mu*\nu\).
We will now construct a tree \(Y_{N}^{\mathbb{N}}\), a tree morphism \(g_{N}:\Sigma\to Y_{N}^{\mathbb{N}}\) and a faithful map \(h_{N}:Y_{N}^{\mathbb{N}}\to\mathbb{R}^{2}\) such that \(h_{N}\circ g_{N}=*\circ\Pi_{0}\).
Let \(Y_{N}=\{(\frac{k}{2^{N}},\frac{\ell}{2^{N}}):\;0\leq k,\ell\leq 2^{N}-1\}\), and associate to each \(x\in Y_{N}\) the square \(Q_{x}\) of side length \(2^{-N+4}\) and bottom left corner at \(x\). This gives an overlapping cover for \([-1,1]^{2}\). Writing \(S_{x}\) for the affine contraction which sends \([-1,1]^{2}\) to \(Q_{x}\), there is a faithful surjection \(h_{N}:Y_{N}^{\mathbb{N}}\to[-1,1]^{2}\), \(h_{N}(x_{1}x_{2}x_{3}\ldots)=\lim_{n\to\infty}S_{x_{1}}\circ\cdots\circ S_{x_{ n}}(0)\). Let the metric on \(Y_{N}^{\mathbb{N}}\) be given by \(d(\mathtt{x},\mathtt{y})=2^{-N|\mathtt{x}\wedge\mathtt{y}|}\) for \(\mathtt{x},\mathtt{y}\in Y_{N}^{\mathbb{N}}\).
We construct \(g_{N}\) iteratively. First observe that for any \([\mathtt{k}|_{n}]\times[\mathtt{j}|_{i_{n}}]\), we have \(\Pi_{1}[\mathtt{k}|_{1}]+\Pi_{2}[\mathtt{j}|_{i_{1}}]\subseteq Q_{a_{1}}\) for some \(Q_{a_{1}}\), since \(\|B_{\mathtt{j}|_{i_{1}}}\|\leq 2|\pi_{\theta^{*}(\mathtt{j})}B_{\mathtt{j}|_{i_{1}}}B(0,1)|\leq 2^{-N+1}\) and thus \(\operatorname{diam}(\Pi_{1}[\mathtt{k}|_{1}]+\Pi_{2}[\mathtt{j}|_{i_{1}}]) \leq 2^{-N+2}\). Supposing that the sequence \(a_{1}\ldots a_{n}\) has been determined, we choose \(a_{n+1}\) so that \(\Pi_{1}[\mathtt{k}|_{n+1}]+\Pi_{2}[\mathtt{j}|_{i_{n+1}}]\subseteq Q_{a_{1} \ldots a_{n}a_{n+1}}\). Since \(\operatorname{diam}(Q_{a_{1}\ldots a_{n}})\to 0\), we may define \(g_{N}(\mathtt{k},\mathtt{j})=a_{1}a_{2}\ldots\). By construction, we have \(h_{N}\circ g_{N}=*\circ\Pi_{0}\).
Since \(h_{N}\) is faithful, [16, Proposition 5.3] asserts that it preserves entropy in the sense that
\[|H_{(k+1)N}(g_{N}\eta_{[\mathtt{k}|_{k}]\times[\mathtt{j}|_{i_{k}}]})-H_{(k+1 )N}(\mu_{\Pi_{1}[\mathtt{k}|_{k}]}*\psi_{\mathtt{j}|_{i_{k}}}\nu)|\leq O(1)\]
for every \(k\in\mathbb{N}\). Since rescaling a measure does not change its entropy much if we change the scale of the entropy by the same amount, by Lemma 2.12, we have
\[|H_{(k+1)N}(\mu_{\Pi_{1}[\mathtt{k}|_{k}]}*\psi_{\mathtt{j}|_{i_{k}}}\nu)-H_{N }(\mu^{\mathtt{i},k}*\nu_{\mathtt{j}|_{i_{k}}})|\leq O(1)\]
where \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) is such that \(\Pi(\mathtt{i})=\Pi_{1}(\mathtt{k})\). Therefore, by the assumption and the local entropy averages for measures on trees [16, Theorem 4.4], we have \(\dim g_{N}\eta\geq c\). Finally, taking \(g_{N}\eta\) through the faithful map \(h_{N}\) yields \(\mu*\nu\) and distorts the dimension by at most \(O(1/N)\), by [16, Proposition 5.2], which completes the proof.
## 4. Proof of Theorem 1.1
In this section, we state our key technical results, and apply them to prove Theorem 1.1. We begin with some notation.
Recall that we defined \(\mathbb{RP}^{1}\) as the collection of one-dimensional subspaces of \(\mathbb{R}^{2}\). Through the identification \(\mathbb{RP}^{1}\cong[0,\pi)\) we use \(\theta\in[0,\pi)\) to denote both angles and lines (making that angle with the positive \(x\)-axis). Recall that \(R_{\theta}\) denotes the "shortest" rotation which takes \(\theta\) onto \(0\) (the \(x\)-axis), and for \(\theta=0^{\perp}\), choose the clockwise rotation. For almost every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) and \(\theta\in\mathbb{RP}^{1}\), we let \(\mu_{\mathtt{i},\theta}\) denote the probability measure on \(\mathbb{R}\) obtained by taking the conditional measure \(\mu_{\Pi(\mathtt{i})}^{\theta}\) from
the disintegration \(\mu=\int\mu_{x}^{\theta}\,d\pi_{\theta}\mu=\int\mu_{\Pi(\mathfrak{i})}^{\theta}\,d \bar{\mu}\) supported on the line \(\theta^{\perp}+\Pi(\mathfrak{i})\), translating it by \(T_{\Pi(\mathfrak{i})}\) and finally rotating it by \(R_{\theta}\). That is,
\[\mu_{\mathfrak{i},\theta}=R_{\theta}T_{\Pi(\mathfrak{i})}\mu_{\Pi(\mathfrak{i })}^{\theta}. \tag{4.1}\]
For \(t\geq 0\), write \(\mu_{\mathfrak{i},\theta,t}:=S_{t}\mu_{\mathfrak{i},\theta}\). The measures \(\mu_{\mathfrak{i},\theta}\) are occasionally called _slices_ of \(\mu\).
Fix a large integer \(N\), and for \(\mathfrak{i}\in\Gamma^{\mathbb{N}}\) and \(k\in\mathbb{N}\), let \(\ell_{k}=\ell_{k}(\mathfrak{i})\) be any increasing sequence such that \(\lim_{k\to\infty}\ell_{k}=\infty\) and
\[B(\Pi(\mathfrak{i}),2^{-kN})\cap\Pi(\Gamma^{\mathbb{N}})=B(\Pi(\mathfrak{i}), 2^{-kN})\cap\varphi_{\mathfrak{i}|_{\ell_{k}}}(B(0,1)) \tag{4.2}\]
for every \(k\). Such a sequence exists for every \(\mathfrak{i}\) by the strong separation condition.
Our main geometric observation is that for any self-affine measure \(\mu\) with \(\dim\mu>1\), the measures \(\mu^{\mathfrak{i},k}\) have a fiber structure in the sense that \(\pi_{\theta(\mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k}\) is very close to a slice of the original measure \(\mu\), in a direction typical to the Furstenberg measure \(\mu_{F}\). This is true also when \(\dim\mu\leq 1\), but in this case the proof is slightly different. Throughout the paper, we adopt the convention that \(-\mu\) denotes the push-forward of \(\mu\) through the map \(x\mapsto-x\).
**Proposition 4.1** (Fiber structure).: _Let \(\mu\) be as in Theorem 1.1 with \(\dim\mu>1\), and suppose that the domination condition is satisfied. Let \(\varepsilon>0\) and let \(n\in\mathbb{N}\) be large. For every \(\mathfrak{a}\in\Gamma^{n}\), the following holds after an expanding affine change of coordinates:_
_For \(\bar{\mu}\)-almost every \(\mathfrak{i}\in[\mathfrak{a}]\) and all \(\theta\) in a set of positive \(\mu_{F}\)-measure, there exists \(m\in\mathbb{N}\), a sequence of intervals \((I_{k})_{k}\) of length \(2^{-m}\) and a set \(\mathcal{N}_{\varepsilon}\subseteq\mathbb{N}\) with \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}\cap[0,n])}{n}\geq 1-\varepsilon\) such that_
\[d_{\mathrm{LP}}(\pi_{\theta(\mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k+m},\ R_{ \theta(\mathfrak{i})^{\perp}}^{-1}\left(\rho(\ell_{k},(\mathfrak{i},\theta)) \mu_{M^{\ell_{k}}(\mathfrak{i},\theta),kN+\log\alpha_{1}(\mathfrak{i}|_{\ell _{k}})}\right)^{I_{k}})<\varepsilon\]
_for every \(k\in\mathcal{N}_{\varepsilon}\), where \(\rho:\mathbb{N}\times\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\to\{-1,1\}\) is a cocycle._
The domination assumption can be removed from the proposition at the cost of having the change of coordinates depend on the word \(\mathfrak{i}\in\Gamma^{\mathbb{N}}\). The proof of the proposition is given in Section 5.
### Estimating the local entropy averages
Recall from Section 3 that we have to find a lower bound for either \(\frac{1}{N}H_{N}(\mu^{\mathfrak{i},k}*\nu_{\mathfrak{j}|_{i_{k}}})\) or \(\frac{1}{N}H_{N}(\mu_{\mathfrak{i}|_{i_{k}}}*\nu_{\mathfrak{j}|_{i_{k}}})\), for most \(k\). Bounding the second quantity is easier, but the bound we obtain is only efficient when \(\dim\mu\geq\dim\nu\geq 1\) or \(1\geq\dim\mu\geq\dim\nu\):
**Claim 4.2**.: _For \(\bar{\mu}\)-a.e. \(\mathfrak{i}\in\Gamma^{\mathbb{N}}\), \(\bar{\nu}\)-a.e. \(\mathfrak{j}\in\Lambda^{\mathbb{N}}\) and every \(\varepsilon>0\),_
\[\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\mu_{ \mathfrak{i}|_{i_{k}}}*\nu_{\mathfrak{j}|_{i_{k}}})\geq\min\{1,\dim\mu\}+\min \{1,\dim\nu\}-\varepsilon\]
_for all large enough \(N\)._
The proof is given in Section 6. Applying Theorem 3.2, we get that
\[\dim(\mu*\nu)\geq\min\{1,\dim\mu\}+\min\{1,\dim\nu\}.\]
From this it readily follows that if
\[\dim(\mu*\nu)<\min\{2,\dim\mu+\dim\nu\},\]
then \(\dim\mu>1>\dim\nu\). This was one of the statements of Theorem 1.1. From now on, we suppose that \(\dim\mu>1>\dim\nu\), and that \(\Phi\) and \(\Psi\) satisfy the domination condition.
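To spell out the deduction of \(\dim\mu>1>\dim\nu\): if \(\dim\mu,\dim\nu\leq 1\), then
\[\min\{1,\dim\mu\}+\min\{1,\dim\nu\}=\dim\mu+\dim\nu=\min\{2,\dim\mu+\dim\nu\},\]
while if \(\dim\mu,\dim\nu\geq 1\), the left-hand side equals \(2=\min\{2,\dim\mu+\dim\nu\}\). In both cases the lower bound above already matches \(\min\{2,\dim\mu+\dim\nu\}\), so the strict inequality can only occur when one of the two dimensions is strictly larger than \(1\) and the other strictly smaller; the labeling \(\dim\mu>1>\dim\nu\) is then just a choice of names.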
In this case, bounding the quantity \(\frac{1}{N}H_{N}(\mu^{\mathfrak{i},k}*\nu_{\mathfrak{j}|_{i_{k}}})\) is more efficient. Applying Lemmas 2.13 and 2.16 we see that
\[\frac{1}{N}H_{N}(\mu^{\mathfrak{i},k}*\nu_{\mathfrak{j}|_{i_{k}}})\] \[\geq \frac{1}{N}H_{N}(\pi_{\theta(\mathfrak{i})^{\perp}}\mu^{ \mathfrak{i},k}*\pi_{\theta(\mathfrak{i})^{\perp}}\nu_{\mathfrak{j}|_{i_{k}}}) +\frac{1}{N}H_{N}(\mu^{\mathfrak{i},k}|\pi_{\theta(\mathfrak{i})^{\perp}})-O( 1/N)\] \[= \frac{1}{N}H_{N}(\pi_{\theta(\mathfrak{i})^{\perp}}\mu^{ \mathfrak{i},k}*\pi_{\theta(\mathfrak{i})^{\perp}}\nu_{\mathfrak{j}|_{i_{k}}}) +\frac{1}{N}H_{N}(\mu^{\mathfrak{i},k})-\frac{1}{N}H_{N}(\pi_{\theta( \mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k})-O(1/N)\]
for all \(N\). Here, the average of the terms \(\frac{1}{N}H_{N}(\mu^{\mathfrak{i},k})\) over \(k\) is close to \(\dim\mu\). Thus, Theorem 1.1 follows from the following inequalities together with the local entropy averages:
**Claim 4.3**.: _For \(\bar{\mu}\times\mu_{F}\)-a.e. \((\mathfrak{i},\theta)\), for every \(\varepsilon>0\) and any large enough \(N\), after the affine change of coordinates of Proposition 4.1 we have_

\[\limsup_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\pi_{\theta( \mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k})\leq\dim\mu-1+\varepsilon.\]
**Claim 4.4**.: _Suppose that for some \((i,j)\in\Gamma\times\Lambda\), \(\frac{\log|\lambda_{1}(A_{i})|}{\log|\lambda_{2}(B_{j})|}\not\in\mathbb{Q}\). Then for any \(\varepsilon>0\) and large enough \(N\), there exists \(n\in\mathbb{N}\) such that for any \(\mathfrak{a}\in\Gamma^{n}\), the following holds:_
_For \(\bar{\mu}\)-a.e. \(\mathfrak{i}\in[\mathfrak{a}]\) and \(\bar{\nu}\)-a.e. \(\mathfrak{j}\in\Lambda^{\mathbb{N}}\), after the affine change of coordinates of Proposition 4.1 we have_
\[\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\pi_{\theta( \mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k}*\pi_{\theta(\mathfrak{i})^{\perp}} \nu_{\mathfrak{j}|_{i_{k}}})\geq\min\{1,\dim\mu-1+\dim\nu\}-\varepsilon.\]
These claims are also proved in Section 6. We show how to conclude the proof of Theorem 1.1 from this.
Proof of Theorem 1.1.: Let \(\mu\) and \(\nu\) be as in the statement, with \(\dim\mu>1>\dim\nu\). For a contradiction, suppose that
\[\{|\lambda_{1}(A_{i})|:\ i\in\Gamma\}\cup\{|\lambda_{2}(B_{j})|:\ j\in\Lambda\}\]
is not an arithmetic set. It is not difficult to see that there must then exist a pair \((i,j)\in\Gamma\times\Lambda\) such that \(\frac{\log|\lambda_{1}(A_{i})|}{\log|\lambda_{2}(B_{j})|}\not\in\mathbb{Q}\).
Let \(\varepsilon>0\) and let \(N\in\mathbb{N}\) be large enough as in Claims 4.3 and 4.4 and Theorem 3.2. Let \(n\in\mathbb{N}\) be large as in Claim 4.4, and fix a word \(\mathtt{a}\in\Gamma^{n}\). Apply also the affine change of coordinates offered by Proposition 4.1. First of all, by [16, Lemma 4.3], we have
\[\dim\mu=\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\mu^{ \mathtt{i},k}),\]
so by the previous claims we have
\[\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}( \mu^{\mathtt{i},k}*\nu_{\mathtt{j}|_{i_{k}}})\] \[\geq \liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\left(\frac{1}{N}H _{N}(\pi_{\theta(\mathtt{i})^{\perp}}\mu^{\mathtt{i},k}*\pi_{\theta(\mathtt{i })^{\perp}}\nu_{\mathtt{j}|_{i_{k}}})+\frac{1}{N}H_{N}(\mu^{\mathtt{i},k})-\frac{ 1}{N}H_{N}(\pi_{\theta(\mathtt{i})^{\perp}}\mu^{\mathtt{i},k})-O(1/N)\right)\] \[\geq \min\{1,\dim\mu-1+\dim\nu\}+\dim\mu-(\dim\mu-1)-\varepsilon\] \[= \min\{2,\dim\mu+\dim\nu\}-\varepsilon\]
for \(\bar{\mu}\)-almost every \(\mathtt{i}\in[\mathtt{a}]\) and \(\bar{\nu}\)-almost every \(\mathtt{j}\in\Lambda^{\mathbb{N}}\). Since \(\mathtt{a}\in\Gamma^{n}\) was arbitrary, it now follows from Theorem 3.2 that \(\dim(\mu*\nu)\geq\min\{2,\dim\mu+\dim\nu\}-2\varepsilon\), which is a contradiction if \(\varepsilon\) is small enough.
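For the reader's convenience, we note that the final equality in the long display rests on the elementary identity
\[\min\{1,x\}+1=\min\{2,x+1\},\qquad\text{applied with }x=\dim\mu-1+\dim\nu.\]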
## 5. The scenery of the self-affine measure
In this section, our aim is to prove Proposition 4.1, whence we assume throughout the section that \(\mu\) is a self-affine measure as in Theorem 1.1 with \(\dim\mu>1\). The arguments of this section were inspired by the work of Kempton [19] on a similar result for a more special class of self-affine measures, and some of our lemmas are analogous to those in [19].
### Restrictions of \(\mu\) on thin rectangles
The strong separation condition asserts the existence of a bounded open set \(V\neq\emptyset\) whose closure is mapped into disjoint subsets of \(V\) by the maps \(\varphi_{i}\). For the sake of notational simplicity, we suppose that \(V=B(0,1)\); since all the statements in the following are local in nature, everything works also for a general \(V\) by restricting onto small balls centered in the point of interest.
Let \(\mathtt{i}\in\Gamma^{\mathbb{N}}\), let \(B\) be a small ball centered at \(\Pi(\mathtt{i})\) and let \(n\) be the largest integer so that \(\varphi_{\mathtt{i}|_{n}}(B(0,1))\supseteq B\). By the strong separation condition, we have \(\mu_{B}=\varphi_{\mathtt{i}|_{n}}\varphi_{\mathtt{i}|_{n}}^{-1}\mu_{B}= \varphi_{\mathtt{i}|_{n}}\mu_{\varphi_{\mathtt{i}|_{n}}^{-1}B}\). Therefore, in order for us to understand the magnifications \(\mu_{B}\), we have to understand the measures \(\mu_{\varphi_{\mathtt{i}|_{n}}^{-1}B}\), the restrictions of \(\mu\) on the thinner and thinner ellipsoids \(\varphi_{\mathtt{i}|_{n}}^{-1}B\) whose major axes have length comparable to \(1\). It is intuitive that these measures should approximate the slice measures of \(\mu\) in different directions, as \(n\to\infty\). But since slices can be defined as limits of the measures supported on thinner and thinner tubes, it is better to work with rectangles instead of ellipsoids for a moment.
For \(\mathtt{i}\in\Gamma^{\mathbb{N}}\), \(\theta\in\mathbb{RP}^{1}\), \(r_{2}\geq r_{1}\geq 0\), write \(Y_{\mathtt{i},\theta,r_{1},r_{2}}\) for the rectangle centered at \(\Pi(\mathtt{i})\) with sidelengths \(2^{-r_{1}}\geq 2^{-r_{2}}\) and the longer side oriented in the direction \(\theta\). For a rectangle \(Y\) with center at \(x\) and major side oriented in the direction \(\theta\), write \(H_{Y}\) for the map which translates \(x\) to the origin and stretches \(T_{x}Y\) onto \(R_{\theta}^{-1}[-1,1]^{2}\). It follows from Lemma 2.23 and standard geometric measure theory that the measures \(H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\) have a fiber structure in the following sense.
**Lemma 5.1**.: _For \(\bar{\mu}\times\mu_{F}\)-a.e. \((\mathtt{i},\theta)\in\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\) and any \(\delta,\varepsilon,c,r_{1}>0\), there exists \(t>0\) such that the following holds:_
_If \((Q_{r_{2}})_{r_{2}\geq 0}\) is a sequence of squares that contain the origin with one side parallel to \(\theta\), \(|Q_{r_{2}}|\geq c>0\) and_
\[H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y_{\mathtt{i},\theta,r_{1},r_{2}}} (Q_{r_{2}})\geq\delta\]
_for every \(r_{2}\geq t\), then_
\[d_{\mathrm{LP}}\left(\pi_{\theta}((H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y _{\mathtt{i},\theta,r_{1},r_{2}}})_{Q_{r_{2}}}),\ (R_{\theta}^{-1}\mu_{\mathtt{i},\theta,r_{1}})_{\pi_{\theta}Q_{r_{2}}} \right)<\varepsilon\]
_for every \(r_{2}\geq t\)._
For the proof, we record the following elementary observation.
**Lemma 5.2**.: _For any \(r,\delta>0\), the following holds for all small enough \(\varepsilon\geq\varepsilon^{\prime}>0\):_
_If \(\mu\) and \(\nu\) are probability measures on \([-1,1]^{d}\) with \(d_{\mathrm{LP}}(\mu,\nu)<\varepsilon^{\prime}\) and \(B=B(x,r)\) is a closed ball with \(\min\{\mu(B),\nu(B)\}\geq\delta\) and \(\nu(B(x,r+\varepsilon^{\prime}))\leq\nu(B(x,r-\varepsilon^{\prime}))+\varepsilon\), then_
\[d_{\mathrm{LP}}(\mu_{B},\nu_{B})<O(\varepsilon/\delta).\]
Proof of Lemma 5.1.: Since \(\dim\mu>1\), it follows from Theorems 2.5 and 2.6 that \(\dim\mu_{\mathtt{i},\theta}>0\) for \(\bar{\mu}\times\mu_{F}\)-almost every \((\mathtt{i},\theta)\). In particular, the slices \(\mu_{\mathtt{i},\theta}\) are non-atomic. Recall that the slices \(\mu_{\mathtt{i},\theta}\) have been translated so that the origin belongs to their supports. For \(\bar{\mu}\times\mu_{F}\)-a.e. \((\mathtt{i},\theta)\), it follows from non-atomicity of \(\mu_{\mathtt{i},\theta}\) that
\[\inf\{\mu_{\mathtt{i},\theta}([a,b]):\ a\leq 0\leq b,\ b-a\geq c\}>0.\]
Thus, for a given \(\varepsilon>0\), there exists \(c^{\prime}>0\) and a set \(A_{\varepsilon}\) with \(\bar{\mu}\times\mu_{F}(A_{\varepsilon})>1-\varepsilon\) such that \(\mu_{\mathtt{i},\theta}([a,b])\geq c^{\prime}\) for every \((\mathtt{i},\theta)\in A_{\varepsilon}\) and \(a\leq 0\leq b\) with \(b-a\geq c\). Since \(\bar{\mu}\times\mu_{F}(\bigcup_{n\in\mathbb{N}}A_{1/n})=1\), it suffices to prove the statement for \((\mathtt{i},\theta)\in A_{\varepsilon}\), for a given \(\varepsilon>0\).
Now, let \(E_{r_{2}}=\pi_{\theta^{\perp}}^{-1}(\pi_{\theta^{\perp}}Q_{r_{2}})\). Lemma 2.23 translated to the language of this section states that
\[d_{\mathrm{LP}}\left(\pi_{\theta}(H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y_ {\mathtt{i},\theta,r_{1},r_{2}}})_{E_{r_{2}}},\ R_{\theta}^{-1}\mu_{\mathtt{i},\theta,r_{1}}\right)\to 0.\]
See Figure 1. Since \(E_{r_{2}}\cap\pi_{\theta}^{-1}(\pi_{\theta}Q_{r_{2}})=Q_{r_{2}}\), \(\mu_{\mathtt{i},\theta}(\pi_{\theta}Q_{r_{2}})\geq c^{\prime}\) and \(\mu_{\mathtt{i},\theta}\) is non-atomic, Lemma 5.2 asserts that also
\[d_{\mathrm{LP}}\left(\pi_{\theta}(H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y_ {\mathtt{i},\theta,r_{1},r_{2}}})_{Q_{r_{2}}},\ (R_{\theta}^{-1}\mu_{\mathtt{i},\theta,r_{1}})_{\pi_{\theta}Q_{r_{2}}} \right)\to 0.\]
We will next relate the measures \(\mu^{\mathtt{i},k}\) to the measures \(H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\). To make the statements more economical, we introduce some additional notation.
### Magnifications of \(\mu\)
For \(\mathtt{a}\in\Gamma^{*}\), let \(A_{\mathtt{a}}=U_{\mathtt{a}}D_{\mathtt{a}}V_{\mathtt{a}}^{-1}\) denote the singular value decomposition, where \(U_{\mathtt{a}},V_{\mathtt{a}}\) are orthogonal and \(D_{\mathtt{a}}=\operatorname{diag}(\alpha_{1}(A_{\mathtt{a}}),\alpha_{2}(A_{ \mathtt{a}}))\). Fix a large integer \(N\), and for \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) and \(k\in\mathbb{N}\), let \(\ell_{k}=\ell_{k}(\mathtt{i})\) be any increasing sequence such that \(\lim_{k\to\infty}\ell_{k}=\infty\) and
\[B(\Pi(\mathtt{i}),2^{-kN})\cap\Pi(\Gamma^{\mathbb{N}})=B(\Pi(\mathtt{i}),2^{- kN})\cap\varphi_{\mathtt{i}|_{\ell_{k}}}(B(0,1)) \tag{5.1}\]
for every \(k\). As in Section 4, such a sequence exists for every \(\mathtt{i}\) by the strong separation condition. Throughout the following, we will use the short-hand notation
\[Q_{\mathtt{i},\theta,k}:=Y_{\sigma^{\ell_{k}}\mathtt{i},\theta,kN+\log\alpha _{1}(\mathtt{i}|_{\ell_{k}}),kN+\log\alpha_{2}(\mathtt{i}|_{\ell_{k}})}.\]
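Unpacking the short-hand (with logarithms to base \(2\), matching the dyadic scales): \(Q_{\mathtt{i},\theta,k}\) is the rectangle centered at \(\Pi(\sigma^{\ell_{k}}\mathtt{i})\) with sidelengths
\[2^{-kN}\alpha_{1}(\mathtt{i}|_{\ell_{k}})^{-1}\ \geq\ 2^{-kN}\alpha_{2}(\mathtt{i}|_{\ell_{k}})^{-1},\]
its longer side pointing in the direction \(\theta\); roughly speaking, it is a bounding rectangle for the ellipsoid \(\varphi_{\mathtt{i}|_{\ell_{k}}}^{-1}B(\Pi(\mathtt{i}),2^{-kN})\) when \(\theta\) is aligned with that ellipsoid's major axis.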
**Proposition 5.3**.: _For every \((\mathtt{i},\theta)\in\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\), there exists an integer \(m\geq 0\) and a sequence of non-singular linear maps \((L_{\mathtt{i},\theta,k})_{k\in\mathbb{N}}\) such that_
\[S_{kN+m}T_{\Pi(\mathtt{i})}\mu=S_{m}U_{\mathtt{i}|_{\ell_{k}}}V_{\mathtt{i}|_ {\ell_{k}}}^{-1}L_{\mathtt{i},\theta,k}H_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1 }\theta,k}}\mu_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}}\]
_for all large enough \(k\)._
An important point of the proposition is that the direction \(\theta\) can be chosen arbitrarily. In proving this we require the following geometric lemma, analogous to [19, Lemma 8.2]. Write \(E_{\mathtt{i},\theta,k}\subseteq Q_{\mathtt{i},\theta,k}\) for the largest ellipsoid contained in \(Q_{\mathtt{i},\theta,k}\).
**Lemma 5.4** (Figure 2).: _For \(\bar{\mu}\times\mu_{F}\)-almost every \((\mathtt{i},\theta)\), there exists \(0<C<1\) such that_
\[CT_{\Pi(\sigma^{\ell_{k}}\mathtt{i})}E_{\mathtt{i},\theta(A_{\mathtt{i}|_{ \ell_{k}}}^{-1}),k}\subseteq T_{\Pi(\sigma^{\ell_{k}}\mathtt{i})}Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}\]
_for all large enough \(k\)._
Proof.: For any \(\theta,\;\mathtt{i}\in\Gamma^{\mathbb{N}}\) and \(n\in\mathbb{N}\), we have
\[\tan(d(A_{\mathtt{i}|_{n}}^{-1}\theta,\;\theta(A_{\mathtt{i}|_{n}}^{-1})))= \frac{\alpha_{1}(\mathtt{i}|_{n})}{\alpha_{2}(\mathtt{i}|_{n})}\tan(d(\theta, \;\theta(A_{\mathtt{i}|_{n}})^{\perp})). \tag{5.2}\]
See Figure 3.
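In lieu of the figure, here is a short derivation of (5.2) in singular coordinates (a sketch, with the ordering \(\alpha_{1}(\mathtt{i}|_{n})\leq\alpha_{2}(\mathtt{i}|_{n})\) implicit in the definition of the rectangles \(Y\)). Write \(A_{\mathtt{i}|_{n}}=UDV^{-1}\) with \(D=\operatorname{diag}(\alpha_{1},\alpha_{2})\), so that \(Ue_{1}\) spans \(\theta(A_{\mathtt{i}|_{n}})^{\perp}\) and \(Ve_{1}\) spans \(\theta(A_{\mathtt{i}|_{n}}^{-1})\). A unit vector spanning \(\theta\) has \(U\)-coordinates \((\cos\psi,\sin\psi)\) with \(\psi=d(\theta,\theta(A_{\mathtt{i}|_{n}})^{\perp})\), and
\[A_{\mathtt{i}|_{n}}^{-1}\begin{pmatrix}\cos\psi\\ \sin\psi\end{pmatrix}\parallel\begin{pmatrix}\alpha_{1}^{-1}\cos\psi\\ \alpha_{2}^{-1}\sin\psi\end{pmatrix}\ \text{in }V\text{-coordinates},\qquad\text{so}\qquad\tan(d(A_{\mathtt{i}|_{n}}^{-1}\theta,\ \theta(A_{\mathtt{i}|_{n}}^{-1})))=\frac{\alpha_{2}^{-1}\sin\psi}{\alpha_{1}^{-1}\cos\psi}=\frac{\alpha_{1}}{\alpha_{2}}\tan\psi.\]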
Write \(a=\alpha_{2}(\mathtt{i}|_{n})\) and \(b=\alpha_{1}(\mathtt{i}|_{n})\). Let \(X=[-b,b]\times[-a,a]\) and \(Y=R_{\eta}^{-1}([-Cb,Cb]\times[-Ca,Ca])\) with \(\eta=d(A_{\mathtt{i}|_{n}}^{-1}\theta,\;\theta(A_{\mathtt{i}|_{n}}^{-1}))\). It is not difficult to see that \(Y\subseteq X\) if we have \(\sin\eta\cdot Ca+Cb\leq b\), or equivalently,
\[\sin\eta\leq\frac{(1-C)b}{Ca}.\]
Since \(\eta\to 0\) as \(n\to\infty\), we have \(\sin\eta\leq 2\tan\eta\) for large enough \(n\). Therefore, we have
\[\sin\eta\leq\frac{2b}{a}\tan(d(\theta,\;\theta(A_{\mathtt{i}|_{n}})^{\perp})) \leq\frac{(1-C)b}{Ca},\]
when we choose \(C\leq\frac{1}{4\tan(d(\theta,\;\theta(\mathtt{i})^{\perp}))+1}\).
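Unwinding the choice of \(C\): by the above, the required bound holds as soon as
\[2\tan(d(\theta,\ \theta(A_{\mathtt{i}|_{n}})^{\perp}))\leq\frac{1-C}{C},\qquad\text{i.e.}\qquad C\leq\frac{1}{2\tan(d(\theta,\ \theta(A_{\mathtt{i}|_{n}})^{\perp}))+1},\]
and since \(\theta(A_{\mathtt{i}|_{n}})\to\theta(\mathtt{i})\), the displayed choice of \(C\), with the safety factor \(4\) in place of \(2\), satisfies this for all large enough \(n\).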
Proof of Proposition 5.3.: Let \((\mathtt{i},\theta)\in\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\), let \(k\) be large, and let \(\ell_{k}\) be defined as in (5.1). Note that with the above notation, \((\varphi_{\mathtt{i}|_{\ell_{k}}})^{-1}B(\Pi(\mathtt{i}),2^{-(k-1)N})=E_{ \mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}\). Write \(B_{\mathtt{i}|_{\ell_{k}}}:=U_{\mathtt{i}|_{\ell_{k}}}D_{\mathtt{i}|_{\ell_{ k}}}^{-1}U_{\mathtt{i}|_{\ell_{k}}}^{-1}\) and note that \(S_{(k-1)N}B_{\mathtt{i}|_{\ell_{k}}}^{-1}\) is the linear map which scales the origin-centered ellipsoid \(U_{\mathtt{i}|_{\ell_{k}}}V_{\mathtt{i}|_{\ell_{k}}}^{-1}T_{\varphi_{\mathtt{ i}|_{\ell_{k}}}^{-1}\Pi(\mathtt{i})}E_{\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{ k}}}^{-1}),k}\) onto \(B(0,1)\) without rotating it.
By the strong separation condition, with this notation we may write
\[S_{(k-1)N}T_{\Pi(\mathtt{i})}\mu =S_{(k-1)N}T_{\Pi(\mathtt{i})}\varphi_{\mathtt{i}|_{\ell_{k}}} \mu_{E_{\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}\] \[=S_{(k-1)N}B_{\mathtt{i}|_{\ell_{k}}}^{-1}B_{\mathtt{i}|_{\ell_{ k}}}A_{\mathtt{i}|_{\ell_{k}}}T_{\varphi_{\mathtt{i}|_{\ell_{k}}}^{-1}\Pi( \mathtt{i})}\mu_{E_{\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}\] \[=S_{(k-1)N}B_{\mathtt{i}|_{\ell_{k}}}^{-1}U_{\mathtt{i}|_{\ell_{ k}}}V_{\mathtt{i}|_{\ell_{k}}}^{-1}T_{\varphi_{\mathtt{i}|_{\ell_{k}}}^{-1}\Pi( \mathtt{i})}\mu_{E_{\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}\] \[=U_{\mathtt{i}|_{\ell_{k}}}V_{\mathtt{i}|_{\ell_{k}}}^{-1}H_{Q_{ \mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}\mu_{E_{\mathtt{i},\theta (A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}.\]
by switching the order of scaling and rotation in the last equality.
Let \(0<C<1\) be the constant given by Lemma 5.4 so that
\[CT_{\Pi(\sigma^{\ell_{k}}\mathtt{i})}E_{\mathtt{i},\theta(A_{\mathtt{i}|_{ \ell_{k}}}^{-1}),k}\subseteq T_{\Pi(\sigma^{\ell_{k}}\mathtt{i})}Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}.\]
Then \(S_{-\log C}H_{Q_{\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}\mu_{E_ {\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}=S_{-\log C}H_{Q_{ \mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{-1}),k}}\mu_{Q_{\mathtt{i},A_{ \mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}}\); see Figure 4.
Writing
\[L_{\mathtt{i},\theta,k}:=H_{Q_{\mathtt{i},\theta(A_{\mathtt{i}|_{\ell_{k}}}^{- 1}),k}}H_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}}^{-1},\]
we have obtained the representation
\[S_{(k-1)N-\log C}T_{\Pi(\mathtt{i})}\mu=S_{-\log C}U_{\mathtt{i}|_{\ell_{k}}}V _{\mathtt{i}|_{\ell_{k}}}^{-1}L_{\mathtt{i},\theta,k}H_{Q_{\mathtt{i},A_{ \mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}}\mu_{Q_{\mathtt{i},A_{\mathtt{i}|_ {\ell_{k}}}^{-1}\theta,k}}.\]
This proves the statement with \(m:=\lceil-\log C\rceil\).
In particular, since \(2^{kN}T_{\Pi(\mathtt{i})}D_{kN+m}(\Pi(\mathtt{i}))=T_{2^{kN}\Pi(\mathtt{ i})\mod 1}D_{m}(2^{kN}\Pi(\mathtt{i})\mod 1)\), where we write
\[(x,y)\mod 1:=(x\mod 1,y\mod 1)\]
for \((x,y)\in\mathbb{R}^{2}\), and \(D_{kN+m}(\Pi(\mathtt{i}))\subseteq B(\Pi(\mathtt{i}),2^{-kN-m})\), we have
\[\mu^{\mathtt{i},k+m}=(U_{\mathtt{i}|_{\ell_{k}}}V_{\mathtt{i}|_{\ell_{k} }}^{-1}L_{\mathtt{i},\theta,k}H_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}} }^{-1}\theta,k}}\mu_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}})^{T_{2^{ kN}\Pi(\mathtt{i})\mod 1}D_{m}(2^{kN}\Pi(\mathtt{i})\mod 1)}\]
for every large enough \(k\).
### The distortion \(U_{\mathfrak{i}|_{\ell_{k}}}V_{\mathfrak{i}|_{\ell_{k}}}^{-1}L_{\mathfrak{i},\theta,k}\)
If the map \(U_{\mathfrak{i}|_{\ell_{k}}}V_{\mathfrak{i}|_{\ell_{k}}}^{-1}L_{\mathfrak{i}, \theta,k}\) were just a rotation, then all that would be left to do to obtain the fiber structure of \(\mu^{\mathfrak{i},k+m}\) would be to apply Lemma 5.1. However, it turns out that this is not necessarily the case, and \(U_{\mathfrak{i}|_{\ell_{k}}}V_{\mathfrak{i}|_{\ell_{k}}}^{-1}L_{\mathfrak{i}, \theta,k}\) instead takes squares onto parallelograms in a way which depends on \(\mathfrak{i}\) and \(\theta\).

To overcome the difficulties brought by this additional distortion, we begin by breaking \(U_{\mathfrak{i}|_{n}}V_{\mathfrak{i}|_{n}}^{-1}\) into three components: the reflection, and two rotations which do not reflect.
For a linear map \(A:\mathbb{R}^{2}\to\mathbb{R}^{2}\) and \(\theta\in\mathbb{RP}^{1}\), let \(A|_{\theta}:\ \theta\to\mathbb{R}^{2}\) denote the map obtained by restricting \(A\) onto \(\theta\). Let \(O\subset\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\) be the open set of those \((\mathfrak{i},\theta)\) for which \(\langle\pi^{1}|_{\theta}\ |\ \pi^{1}\circ A_{\mathfrak{i}_{0}}^{-1}|_{\theta}\rangle<0\). Define the function \(\rho:\ \mathbb{N}\times\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\to\{-1,1\}\),
\[\rho(n,(\mathfrak{i},\theta))=\prod_{k=1}^{n}(-1)^{\chi_{O}(M^{k}(\mathfrak{i },\theta))}. \tag{5.3}\]
This map captures the reflections done by \(A_{\mathfrak{i}|_{n}}^{-1}\) on the line \(\theta\), or by \(A_{\mathfrak{i}|_{n}}\) on the line \(A_{\mathfrak{i}|_{n}}^{-1}\theta\). Indeed, for any \(x\in\theta\), we may write
\[A_{\mathfrak{i}|_{n}}^{-1}x=\|A_{\mathfrak{i}|_{n}}^{-1}x\|R_{A_{\mathfrak{i} |_{n}}^{-1}\theta}^{-1}\rho(n,(\mathfrak{i},\theta))R_{\theta}x. \tag{5.4}\]
The map \(\rho\) is easily seen to satisfy the _cocycle equation_
\[\rho(n+k,(\mathfrak{i},\theta))=\rho(n,M^{k}(\mathfrak{i},\theta))\rho(k,( \mathfrak{i},\theta)).\]
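Indeed, the cocycle equation is immediate from (5.3) together with \(M^{\ell+k}=M^{\ell}\circ M^{k}\):
\[\rho(n+k,(\mathfrak{i},\theta))=\prod_{\ell=1}^{n+k}(-1)^{\chi_{O}(M^{\ell}( \mathfrak{i},\theta))}=\prod_{\ell=1}^{n}(-1)^{\chi_{O}(M^{\ell}(M^{k}( \mathfrak{i},\theta)))}\cdot\prod_{\ell=1}^{k}(-1)^{\chi_{O}(M^{\ell}( \mathfrak{i},\theta))}=\rho(n,M^{k}(\mathfrak{i},\theta))\rho(k,(\mathfrak{i},\theta)).\]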
**Lemma 5.5**.: _Let \(\mathfrak{i}\in\Gamma^{\mathbb{N}}\) and let \(U_{\mathfrak{i}|_{n}}D_{\mathfrak{i}|_{n}}V_{\mathfrak{i}|_{n}}^{-1}\) be the singular value decomposition of \(A_{\mathfrak{i}|_{n}}\). For any \(\theta\in\mathbb{RP}^{1}\setminus\{\theta(\mathfrak{i})\}\), we have_
\[\|U_{\mathfrak{i}|_{n}}V_{\mathfrak{i}|_{n}}^{-1}|_{A_{\mathfrak{i}|_{n}}^{-1} \theta}-R_{\theta(A_{\mathfrak{i}|_{n}})^{\perp}}^{-1}\rho(n,(\mathfrak{i}, \theta))R_{A_{\mathfrak{i}|_{n}}^{-1}\theta}|_{A_{\mathfrak{i}|_{n}}^{-1} \theta}\|\to 0\]
_as \(n\to\infty\)._
Proof.: Let \(B_{\mathfrak{i}|_{n}}:=U_{\mathfrak{i}|_{n}}D_{\mathfrak{i}|_{n}}^{-1}U_{ \mathfrak{i}|_{n}}^{-1}\) and note that
\[A_{\mathfrak{i}|_{n}}=B_{\mathfrak{i}|_{n}}^{-1}B_{\mathfrak{i}|_{n}}A_{ \mathfrak{i}|_{n}}=B_{\mathfrak{i}|_{n}}^{-1}\,U_{\mathfrak{i}|_{n}}D_{ \mathfrak{i}|_{n}}^{-1}U_{\mathfrak{i}|_{n}}^{-1}U_{\mathfrak{i}|_{n}}D_{ \mathfrak{i}|_{n}}V_{\mathfrak{i}|_{n}}^{-1}=B_{\mathfrak{i}|_{n}}^{-1}U_{ \mathfrak{i}|_{n}}V_{\mathfrak{i}|_{n}}^{-1}.\]
In particular, \(U_{\mathfrak{i}|_{n}}V_{\mathfrak{i}|_{n}}^{-1}=B_{\mathfrak{i}|_{n}}A_{ \mathfrak{i}|_{n}}\).
Now, by (5.4),
\[A_{\mathfrak{i}|_{n}}|_{A_{\mathfrak{i}|_{n}}^{-1}\theta}=\|A_{\mathfrak{i}|_ {n}}|_{A_{\mathfrak{i}|_{n}}^{-1}\theta}\|R_{\theta}^{-1}\rho(n,(\mathfrak{i},\theta))R_{A_{\mathfrak{i}|_{n}}^{-1}\theta}|_{A_{\mathfrak{i}|_{n}}^{-1} \theta}.\]
On the other hand,
\[\left\|B_{\mathfrak{i}|_{n}}|_{\theta}-(-1)^{k_{n}}\|A_{\mathfrak{i}|_{n}}^{- 1}|_{\theta}\|R_{\theta(\mathfrak{i})^{\perp}}^{-1}R_{\theta}|_{\theta}\right\|\to 0\]
as \(n\to\infty\), for some sequence \((k_{n})_{n\in\mathbb{N}}\subseteq\mathbb{N}\), by Lemma 2.4. Since \(B_{\mathfrak{i}|_{n}}\) is positive definite and \(\theta\neq\theta(\mathfrak{i})\), the sequence \((-1)^{k_{n}}\) has to be eventually constant. By absorbing the eventual value of this sequence into the definition of \(\rho(n,(\mathfrak{i},\theta))\), we may without loss of generality suppose that
\[\lim_{n\to\infty}\|A_{\mathfrak{i}|_{n}}^{-1}|_{\theta}\|^{-1}B_{\mathfrak{i}| _{n}}|_{\theta}=R_{\theta(\mathfrak{i})^{\perp}}^{-1}R_{\theta}|_{\theta}.\]
Thus
\[\|B_{\mathfrak{i}|_{n}}A_{\mathfrak{i}|_{n}}|_{A_{\mathfrak{i}|_{n}}^{-1} \theta}-R_{\theta(\mathfrak{i})^{\perp}}^{-1}\rho(n,(\mathfrak{i},\theta))R_{ A_{\mathfrak{i}|_{n}}^{-1}\theta}|_{A_{\mathfrak{i}|_{n}}^{-1}\theta}\|\to 0.\]
It remains to study the behaviour of the linear map \(L_{\mathfrak{i},\theta,k}\) as \(k\to\infty\), in the statement of Proposition 5.3. The content of the following lemma is that \(U_{\mathfrak{i}|_{\ell_{k}}}V_{\mathfrak{i}|_{\ell_{k}}}^{-1}L_{\mathfrak{i}, \theta,k}\) takes \(H_{Q_{\mathfrak{i},A_{\mathfrak{i}|_{\ell_{k}}}^{-1}\theta,k}}Q_{\mathfrak{i},A_{\mathfrak{i}|_{\ell_{k}}}^{-1}\theta,k}=R_{A_{\mathfrak{i}|_{\ell_{k}}}^{-1}\theta}^{-1}[-1,1]^{2}\) onto a parallelogram of bounded eccentricity and one side in direction \(\theta(\mathfrak{i})\).
**Lemma 5.6**.: _Let \(F_{\theta,\mathfrak{i}}^{j}=\begin{bmatrix}1&j\cdot\tan(\theta-\theta( \mathfrak{i})^{\perp})\\ 0&1\end{bmatrix}\) for \(j\in\{-1,1\}\). Then for every \(\mathfrak{i}\in\Gamma^{\mathbb{N}}\) and \(\theta\in\mathbb{RP}^{1}\setminus\{\theta(\mathfrak{i})\}\), there is a sequence \((j_{k})_{k\in\mathbb{N}}\subseteq\{-1,1\}\) such that_
\[\|U_{\mathfrak{i}|_{\ell_{k}}}V_{\mathfrak{i}|_{\ell_{k}}}^{-1}L_{\mathfrak{i},\theta,k}-R_{\theta(\mathfrak{i})^{\perp}}^{-1}\rho(\ell_{k},(\mathfrak{i}, \theta))F_{\theta,\mathfrak{i}}^{j_{k}}R_{A_{\mathfrak{i}|_{\ell_{k}}}^{-1} \theta}\|\to 0\]
_as \(k\to\infty\)._
Sketch of proof.: Recall that \(L_{\mathfrak{i},\theta,k}=H_{Q_{\mathfrak{i},\theta(A_{\mathfrak{i}|_{\ell_{k} }}^{-1}),k}}H_{Q_{\mathfrak{i},A_{\mathfrak{i}|_{\ell_{k}}}^{-1}\theta,k}}^{-1 }\). As \(L_{\mathfrak{i},\theta,k}\) acts on the set \(R_{A_{\mathfrak{i}|_{\ell_{k}}}^{-1}\theta}^{-1}[-1,1]^{2}\), this set is first taken to the thin rectangle \(H_{Q_{\mathfrak{i},A_{\mathfrak{i}|_{\ell_{k}}}^{-1}\theta,k}}^{-1}R_{A_{ \mathfrak{i}|_{\ell_{k}}}^{-1}\theta}^{-1}[-1,1]^{2}\) (see Figure 5), and then stretched onto a parallelogram by \(H_{Q_{\mathfrak{i},\theta(A_{\mathfrak{i}|_{\ell_{k}}}^{-1}),k}}\).
The angle between these thin rectangles is \(d(\theta(A^{-1}_{\mathbf{i}|_{\ell_{k}}}),\ A^{-1}_{\mathbf{i}|_{\ell_{k}}}\theta)\), so by (5.2) and basic geometry, this parallelogram tends to \(R^{-1}_{\theta(A^{-1}_{\mathbf{i}|_{\ell_{k}}})}F^{1}_{\theta,\mathbf{i}}[-1,1]^ {2}\) as \(k\to\infty\). In particular,
\[\|L_{\mathbf{i},\theta,k}-R^{-1}_{\theta(A^{-1}_{\mathbf{i}|_{\ell_{k}}})}F^{1 }_{\theta,\mathbf{i}}R_{A^{-1}_{\mathbf{i}|_{\ell_{k}}}\theta}\|\to 0\]
as \(k\to\infty\). Using the fact that \(d(\theta(A^{-1}_{\mathbf{i}|_{\ell_{k}}}),\ A^{-1}_{\mathbf{i}|_{\ell_{k}}} \theta)\to 0\), Lemma 5.5 and incorporating the possible reflection for the line \(\theta(\mathbf{i})\) as the sequence \((j_{k})_{k\in\mathbb{N}}\subseteq\{-1,1\}\) completes the proof.
### Fiber structure for \(\mu^{\mathbf{i},k}\)
We will now explain how to combine Proposition 5.3 and Lemmas 5.1 and 5.6 to prove Proposition 4.1 and to obtain the fiber structure of \(\mu^{\mathbf{i},k}\). Recall that the problem is the additional distortion brought by \(U_{\mathbf{i}|_{\ell_{k}}}V^{-1}_{\mathbf{i}|_{\ell_{k}}}L_{\mathbf{i},\theta,k}\); see Figure 6.
Figure 6. For us to be able to directly apply Lemma 5.1, the above parallelogram would have to be the unit square.
However, because of our freedom in choosing the direction \(\theta\), this issue can be resolved in the following way. Essentially, we restrict the measure \(\mu\) to a cylinder where the directions \(\theta(\mathtt{i})\) are close to each other, and then apply a change of coordinates to take the parallelogram in Figure 6 close to the unit square.
Let \(\varepsilon_{1}>0\) be a small number. We will later see how small it has to be. Let also \(\varepsilon^{\prime}>0\) be small with respect to \(\varepsilon_{1}\). Let \(n=n(\varepsilon^{\prime})\) be large and fix a word \(\mathtt{a}\in\Gamma^{n}\). By Lemma 2.1, if \(n\) is large enough, there exists \(\theta_{0}\in\mathbb{RP}^{1}\) such that \(d(\theta(\mathtt{a}\mathtt{i}),\theta_{0})<\varepsilon^{\prime}\) for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\).
Let \(A=A(\operatorname{spt}\mu_{F},\theta_{0},\varepsilon_{1})\) be a non-singular linear map preserving \(\theta_{0}\) and \(\theta_{0}^{\perp}\), such that \(d(A(\operatorname{spt}\mu_{F}),\theta_{0}^{\perp})<\varepsilon_{1}\). This map will be the change of coordinates in the statement of Proposition 4.1. Let us see that it has the required properties.
If \(\varepsilon^{\prime}\) was chosen small enough (w.r.t. \(\operatorname{spt}\mu_{F}\)), we have for every \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) that \(d(A\theta(\mathtt{a}\mathtt{i}),\theta_{0})<\varepsilon_{1}\) by continuity of \(A\). By rotating coordinates, we may suppose that \(\theta_{0}=0\), and upon changing coordinates by \(A\) we replace the IFS \(\Phi\) by the conjugate IFS \(\{A\varphi_{i}A^{-1}\}_{i\in\Gamma}\), \(\mu\) by the self-affine measure \(A\mu\), and the Furstenberg measure \(\mu_{F}\) by the measure \(A\mu_{F}\) (which is the Furstenberg measure induced by \(A\mu\)); see Figure 7.
To recap, after the change of coordinates by \(A\) we have
\[d(\operatorname{spt}\mu_{F},0^{\perp})<\varepsilon_{1} \tag{5.5}\]
and
\[d(\theta(\mathtt{i}),0)<\varepsilon_{1}\text{ for every }\mathtt{i}\in[\mathtt{a}]. \tag{5.6}\]
Let \(F^{j}_{\theta,\mathtt{i}}\) be as in Lemma 5.6. It follows from (5.5) and (5.6) that for \(\mathtt{i}\in[\mathtt{a}]\) and for \(\theta\) in a set of positive \(\mu_{F}\)-measure, the map \(F^{j}_{\theta,\mathtt{i}}\) is within distance \(\varepsilon_{1}\) of the identity, and thus the map \(U_{\mathtt{i}|_{\ell_{k}}}V^{-1}_{\mathtt{i}|_{\ell_{k}}}L_{\mathtt{i},\theta,k}\) is within distance \(\varepsilon_{1}\) of \(R^{-1}_{\theta(\mathtt{i})^{\perp}}\rho(\ell_{k},(\mathtt{i},\theta))R_{A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta}\) for large \(k\), by Lemma 5.6; see Figure 8.
Figure 7. If the ellipse \(\varphi_{\mathtt{a}}(B(0,1))\) is thin enough w.r.t. \(\operatorname{diam}\operatorname{spt}\mu_{F}\), it remains thin after mapping it through \(A\) even though many directions \(\theta\in\operatorname{spt}\mu_{F}\) are pulled very close to \(\theta_{0}^{\perp}\).
We are now ready to prove Proposition 4.1.
Proof of Proposition 4.1.: Let \(\varepsilon>0\), let \(n\in\mathbb{N}\) be large and apply the change of coordinates that was described above. Let \(\mathsf{a}\in\Gamma^{n}\) be arbitrary.
Let \(m\) be the constant given by Proposition 5.3. Our first goal is to ensure that the origin is often close to the center of the square \(T_{2^{kN}\Pi(\mathtt{i})\mod 1}D_{m}(2^{kN}\Pi(\mathtt{i})\mod 1)\), so that the square has relatively large mass which is required to apply Lemma 5.1. In this proof, write
\[\mathtt{i}^{k}:=2^{k}\Pi(\mathtt{i})\mod 1\]
and note that
\[S_{kN}T_{\Pi(\mathtt{i})}D_{kN+m}(\Pi(\mathtt{i}))=T_{\mathtt{i}^{kN}}D_{m}( \mathtt{i}^{kN}).\]
For any \(\delta>0\),
\[B(\Pi(\mathtt{i}),2^{-(kN+m)}\delta)\subseteq D_{kN+m}(\Pi(\mathtt{i}))\text{ if and only if }B(\mathtt{i}^{kN+m},\delta)\subseteq[-1,1]^{2}.\]
We begin by showing that this inclusion holds for most \(k\), for small enough \(\delta\).
By Fubini's theorem, after applying a random translation to \(\mu\) (which does not affect the dimension of \(\mu*\nu\)), we may suppose that for \(\bar{\mu}\)-almost every \(\mathtt{i}\), the sequence \((\mathtt{i}^{kN+m})_{k\in\mathbb{N}}\) equidistributes for the Lebesgue measure on \([-1,1]^{2}\). In particular, for a small enough \(\delta_{0}>0\) there exists \(\mathcal{N}_{\varepsilon}^{1}\subseteq\mathbb{N}\) with \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{1}\cap[0,n])}{n}\geq 1- \varepsilon/4\) such that
\[B(\mathtt{i}^{kN+m},\delta_{0})\subseteq[-1,1]^{2} \tag{5.7}\]
for every \(k\in\mathcal{N}_{\varepsilon}^{1}\). On the other hand, by [12, Proposition 1.19],
\[\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{\log S_{kN}T_{\Pi(\mathtt{i})} \mu(B(0,\delta_{0}))}{\log\delta_{0}}\leq\overline{\dim}_{\text{loc}}\mu(\Pi( \mathtt{i}))<\infty\]
for \(\bar{\mu}\)-a.e. \(\mathtt{i}\), so there exists a \(\delta>0\) and a set \(\mathcal{N}_{\varepsilon}^{2}\subseteq\mathbb{N}\) such that \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{2}\cap[0,n])}{n}\geq 1- \varepsilon/4\) and
\[S_{kN}T_{\Pi(\mathtt{i})}\mu(B(0,\delta_{0}))\geq\delta>0 \tag{5.8}\]
for every \(k\in\mathcal{N}_{\varepsilon}^{2}\).
Now, for \(k\in\mathcal{N}_{\varepsilon}^{1}\cap\mathcal{N}_{\varepsilon}^{2}\), we have
\[S_{kN}T_{\Pi(\mathtt{i})}\mu(T_{\mathtt{i}^{kN}}D_{m}(\mathtt{i}^{kN}))\geq\delta.\]
The remaining task is now to ensure that the measure \(U_{\mathtt{i}|_{\ell_{k}}}V_{\mathtt{i}|_{\ell_{k}}}^{-1}L_{\mathtt{i},\theta,k}H_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}}\mu_{Q_{\mathtt{ i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}}\) does not give too much mass near the boundaries of squares, which was another requirement of Lemma 5.1.
For \(s,t>0\) define the set
\[A_{s,t}=\{\nu\in\mathcal{P}(\mathbb{R}^{2}):\text{ there exists a line }\ell\text{ with }d(\ell,0)>\delta_{0}\text{ and }\nu(\ell^{s})\geq t\}.\]
By Lemma 2.22, for \(\bar{\mu}\)-almost every \(\mathtt{i}\), for any \(\varepsilon^{\prime}>0\) there exists \(\varepsilon_{1}>0\) such that
\[\limsup_{n\to\infty}\frac{\#\{0\leq k\leq n:\ S_{kN}T_{\Pi(\mathtt{i})}\mu\in A _{\varepsilon_{1},\varepsilon^{\prime}}\}}{n}\leq\varepsilon/4.\]
Indeed, if this was not the case, we could find a sequence of \(N\)-tangent distributions \((P_{n})_{n\in\mathbb{N}}\) such that \(P_{n}(A_{1/n,\varepsilon^{\prime}})\geq\varepsilon/4\) by the fact that \(A_{1/n,\varepsilon^{\prime}}\) is closed. In particular, any accumulation point \(P\) of this sequence (which is again an \(N\)-tangent distribution) would have
\[P\left(\bigcap_{n}A_{1/n,\varepsilon^{\prime}}\right)=\lim_{n\to\infty}P(A_{1 /n,\varepsilon^{\prime}})\geq\lim_{n\to\infty}\limsup_{m\to\infty}P_{m}(A_{1/n,\varepsilon^{\prime}})\geq\varepsilon/4,\]
contradicting Lemma 2.22. Let \(\mathcal{N}_{\varepsilon}^{3}\) be the set of those \(k\) for which \(S_{kN}T_{\Pi(\mathtt{i})}\mu\not\in A_{\varepsilon_{1},\varepsilon^{\prime}}\). Then \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{3}\cap[0,n])}{n}\geq 1 -\varepsilon/4\).
Let \(X\) be the set given by Lemma 5.1 (applied with \(c=\delta_{0}/2\)), such that \(\bar{\mu}\times\mu_{F}(X)>1-\varepsilon^{\prime}\) and for all large enough \(r_{2}\) and every \((\mathtt{i},\theta)\in X\), we have
\[d_{\mathrm{LP}}\left(\pi_{\theta}((H_{Y_{\mathtt{i},\theta,r_{1},r_{2}}}\mu_{Y _{\mathtt{i},\theta,r_{1},r_{2}}})_{Q_{r_{2}}}),\ (R_{\theta}^{-1}\mu_{\mathtt{i},\theta,r_{1}})_{\pi_{\theta}Q_{r_{2}}} \right)<\varepsilon/2.\]
Finally, for every \((\mathtt{i},\theta)\) we may let \(\mathcal{N}_{\varepsilon}^{4}\subseteq\mathbb{N}\) be the set of those \(k\) for which \(M^{\ell_{k}}(\mathtt{i},\theta)\in X\). For \(\bar{\mu}\times\mu_{F}\)-almost every \((\mathtt{i},\theta)\), we have \(\liminf_{n\to\infty}\frac{\#\{0\leq k\leq n:\ M^{k}(\mathtt{i},\theta)\in X\} }{n}\geq 1-\varepsilon^{\prime}\) by Birkhoff's ergodic theorem, and since it is not difficult to verify from the definition of \(\ell_{k}\) (in (5.1)) that \(\ell_{k}\leq O_{N}(k)\), we have \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{4}\cap[0,n])}{n}\geq 1 -\varepsilon/4\) if \(\varepsilon^{\prime}\) is small enough.
Let now \(\mathtt{i}\in[\mathtt{a}]\) be chosen from the set of full \(\bar{\mu}\)-measure and \(\theta\in\mathbb{RP}^{1}\) from a set of positive \(\mu_{F}\)-measure such that for \((\mathtt{i},\theta)\), all of the sets \(\mathcal{N}_{\varepsilon}^{j}\) as above exist, and let \(\mathcal{N}_{\varepsilon}=\mathcal{N}_{\varepsilon}^{1}\cap\mathcal{N}_{ \varepsilon}^{2}\cap\mathcal{N}_{\varepsilon}^{3}\cap\mathcal{N}_{\varepsilon}^ {4}\). By Proposition 5.3 and Lemma 5.6,
\[d_{\mathrm{LP}}(S_{kN+m}T_{\Pi(\mathtt{i})}\mu,\ R_{\theta(\mathtt{i})^{\perp}}^{-1 }\rho(\ell_{k},(\mathtt{i},\theta))F_{\theta,\mathtt{i}}^{j_{k}}R_{A_{\mathtt{ i}|_{\ell_{k}}}^{-1}\theta}H_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1} \theta,k}}\mu_{Q_{\mathtt{i},A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta,k}})<\varepsilon \tag{5.9}\]
for some \((j_{k})_{k\in\mathbb{N}}\subseteq\{-1,1\}\). For every \(k\in\mathcal{N}_{\varepsilon}\), we have
\[S_{kN}T_{\Pi(\mathtt{i})}\mu(T_{\mathtt{i}^{kN}}D_{m}(\mathtt{i}^{kN}))\geq\delta\]
by (5.7) and (5.8), and
\[S_{kN}T_{\Pi(\mathfrak{i})}\mu((T_{\mathfrak{i}^{kN}}D_{m}(\mathfrak{i}^{kN}))^{ \varepsilon_{1}})\leq S_{kN}T_{\Pi(\mathfrak{i})}\mu(T_{\mathfrak{i}^{kN}}D_{m }(\mathfrak{i}^{kN}))+\varepsilon^{\prime}\]
by the definition of \(\mathcal{N}_{\varepsilon}^{3}\). Since \(\|F_{\theta,\mathtt{i}}^{j_{k}}-\mathrm{Id}\|<\varepsilon_{1}\) by the properties (5.5) and (5.6) of our change of coordinates, we have by Lemma 5.2, the definition of \(\mathcal{N}_{\varepsilon}^{4}\) and (5.9) that
\[d_{\mathrm{LP}}\left(\pi_{\theta(\mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k+m},\ \left(R_{\theta(\mathfrak{i})^{\perp}}^{-1}\rho(\ell_{k},(\mathfrak{i},\theta) )\mu_{M^{\ell_{k}}(\mathfrak{i},\theta),kN+\log\alpha_{1}(\mathfrak{i}|_{\ell _{k}})}\right)^{I_{k}}\right)<\varepsilon/2+O(\varepsilon^{\prime}/\delta)<\varepsilon,\]
for every \(k\in\mathcal{N}_{\varepsilon}\), when \(I_{k}:=\pi_{\theta(\mathfrak{i})^{\perp}}T_{\mathfrak{i}^{kN}}D_{m}( \mathfrak{i}^{kN})\).
## 6. Bounding the local entropy averages
In this section, we prove Claims 4.2, 4.3 and 4.4. For this, we need to understand the dynamics of the sequences \((\pi_{\theta(\mathfrak{i})^{\perp}}\mu^{\mathfrak{i},k})_{k\in\mathbb{N}}\) and \((\pi_{\theta(\mathfrak{i})^{\perp}}\nu_{\mathfrak{j}|_{i_{k}}})_{k\in\mathbb{N}}\).
### Auxiliary suspension flows
We require a cocycle acting on \(\Lambda^{\mathbb{N}}\times\mathbb{RP}^{1}\) similar to \(\rho\) of the previous section, which we also denote by \(\rho\) since it plays an identical role: Let \(O\subseteq\Lambda^{\mathbb{N}}\times\mathbb{RP}^{1}\) denote the open set of those \((\mathfrak{j},\theta)\) for which \(\langle\pi_{\theta}\circ B_{j_{0}}|\pi_{B_{j_{0}}^{*}\theta}\rangle<0\), and set
\[\rho(k,(\mathfrak{j},\theta))=\prod_{\ell=1}^{k}(-1)^{\chi_{O}(M_{*}^{\ell}( \mathfrak{j},\theta))}. \tag{6.1}\]
Define the Hölder continuous functions

\[f:\Gamma^{\mathbb{Z}}\times\mathbb{RP}^{1}\to\mathbb{R},\,( \mathfrak{i},\theta)\mapsto-\log\|A_{i_{0}}|_{A_{i_{0}}^{-1}\theta}\|\] \[g:\Lambda^{\mathbb{Z}}\times\mathbb{RP}^{1}\to\mathbb{R},\,( \mathfrak{j},\theta)\mapsto-\log\|B_{j_{0}}^{*}|_{\theta}\|\]
and the sets
\[Z_{\Phi}:=\{(\mathfrak{i},\theta,u,t):\ \mathfrak{i}\in \Gamma^{\mathbb{Z}},\theta\in\mathbb{RP}^{1},u\in\{-1,1\},0\leq t\leq f( \mathfrak{i},\theta)\},\] \[Z_{\Psi}:=\{(\mathfrak{j},\theta,u,t):\ \mathfrak{j}\in \Lambda^{\mathbb{Z}},\theta\in\mathbb{RP}^{1},u\in\{-1,1\},0\leq t\leq g( \mathfrak{j},\theta)\},\]
both equipped with the identifications \((\mathfrak{i},\theta,u,f(\mathfrak{i},\theta))=(M(\mathfrak{i},\theta),\rho( 1,(\mathfrak{i},\theta))u,0)\) and \((\mathfrak{j},\theta,u,g(\mathfrak{j},\theta))=(M_{*}(\mathfrak{j},\theta), \rho(1,(\mathfrak{j},\theta))u,0)\).
Let \(\mathcal{T}_{s}\) denote the flow induced by the positive reals on both \(Z_{\Phi}\) and \(Z_{\Psi}\), given by
\[\mathcal{T}_{s}:(\mathfrak{k},\theta,u,t)\mapsto(\mathfrak{k},\theta,u,t+s)\]
for every \(s\geq 0\) and \((\mathfrak{k},\theta,u,t)\in Z_{\Phi}\cup Z_{\Psi}\).
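Unwinding the identifications (a sketch, with \(M\) extended to \(\Gamma^{\mathbb{Z}}\times\mathbb{RP}^{1}\) in the natural way, \(M(\mathfrak{i},\theta)=(\sigma\mathfrak{i},A_{i_{0}}^{-1}\theta)\)): for \((\mathfrak{i},\theta,u,t)\in Z_{\Phi}\) and \(s\geq 0\),
\[\mathcal{T}_{s}(\mathfrak{i},\theta,u,t)=\Big(M^{n}(\mathfrak{i},\theta),\ \rho(n,(\mathfrak{i},\theta))u,\ t+s-\sum_{\ell=0}^{n-1}f(M^{\ell}(\mathfrak{i},\theta))\Big),\]
where \(n\) is the unique integer for which the last coordinate lands in \([0,f(M^{n}(\mathfrak{i},\theta)))\); the sign coordinate accumulates the cocycle \(\rho\) by the cocycle equation, and since norms restricted to lines are multiplicative along the orbit, the Birkhoff sum equals \(-\log\|A_{\mathfrak{i}|_{n}}|_{A_{\mathfrak{i}|_{n}}^{-1}\theta}\|\). The analogous formula holds on \(Z_{\Psi}\) with \(g\), \(M_{*}\) and the cocycle of (6.1).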
Let \(\gamma=\delta_{-1}/2+\delta_{1}/2\). Denote by \(\lambda_{\Phi}\) the measure \(\bar{\mu}\times\mu_{F}\times\gamma\times\mathcal{L}\) restricted and normalized on \(Z_{\Phi}\), and let \(\lambda_{\Psi}\) denote the normalized restriction of \(\bar{\nu}\times\nu_{F}^{*}\times\gamma\times\mathcal{L}\) on \(Z_{\Psi}\).
Denote by \(\mathcal{Z}_{\Phi}\) and \(\mathcal{Z}_{\Psi}\) the suspensions \((Z_{\Phi},\lambda_{\Phi},\mathcal{T}_{s})\) and \((Z_{\Psi},\lambda_{\Psi},\mathcal{T}_{s})\). It is not difficult to see that they are measure-preserving. Since the skew-product maps
\[(\mathtt{i},\theta,u) \mapsto(M(\mathtt{i},\theta),\rho(1,(\mathtt{i},\theta))u),\] \[(\mathtt{j},\theta,u) \mapsto(M_{*}(\mathtt{j},\theta),\rho(1,(\mathtt{j},\theta))u)\]
are \(\bar{\mu}\times\mu_{F}\times\gamma\)- and \(\bar{\nu}\times\nu_{F}^{*}\times\gamma\)-ergodic, respectively, by [25], it is standard that these suspensions are also ergodic.
Let \(\mathcal{Z}_{\Phi}^{\prime}=\pi^{1,2,4}\mathcal{Z}_{\Phi}\) and \(\mathcal{Z}_{\Psi}^{\prime}=\pi^{1,2,4}\mathcal{Z}_{\Psi}\). These are suspension flows of \(\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\) and \(\Lambda^{\mathbb{N}}\times\mathbb{RP}^{1}\) over \(f\) and \(g\), respectively, and preserve the measures \(\lambda_{\Phi}^{\prime}:=\bar{\mu}\times\mu_{F}\times\mathcal{L}\) and \(\lambda_{\Psi}^{\prime}:=\bar{\nu}\times\nu_{F}^{*}\times\mathcal{L}\), restricted and normalized on the sets \(Z_{\Phi}^{\prime}=\pi^{1,2,4}Z_{\Phi}\) and \(Z_{\Psi}^{\prime}=\pi^{1,2,4}Z_{\Psi}\).
The key observation we require on the dynamics of \(\mathcal{Z}_{\Phi}^{\prime}\times\mathcal{Z}_{\Psi}^{\prime}\) is the following.
**Proposition 6.1**.: _If the IFSs \(\Phi\) and \(\Psi\) are dominated and there exists a pair \((i,j)\in\Gamma\times\Lambda\) with_
\[\frac{\log|\lambda_{1}(A_{i})|}{\log|\lambda_{2}(B_{j})|}\not\in\mathbb{Q},\]
_then the flow \(\mathcal{Z}_{\Phi}^{\prime}\times\mathcal{Z}_{\Psi}^{\prime}\) is ergodic under the discrete-time map \(\mathcal{T}_{N}\) for some \(N\in\mathbb{N}\)._
Let \(f^{\prime}:\Gamma^{\mathbb{Z}}\to(0,+\infty)\) denote the Hölder continuous map \(\mathtt{i}\mapsto f(\mathtt{i}^{+},\theta^{-}(\mathtt{i}^{-}))\), and let
\[Z_{\Phi}^{\prime\prime}=\{(\mathtt{i},t):\ \mathtt{i}\in\Gamma^{\mathbb{Z}},0 \leq t\leq f^{\prime}(\mathtt{i})\}.\]
Let \(\lambda_{\Phi}^{\prime\prime}\) denote the measure \(\bar{\mu}\times\mathcal{L}\) restricted and normalized on \(Z_{\Phi}^{\prime\prime}\), and write \(\mathcal{Z}_{\Phi}^{\prime\prime}=(Z_{\Phi}^{\prime\prime},\mathcal{T}_{s}, \lambda_{\Phi}^{\prime\prime})\) for the suspension of \(\Gamma^{\mathbb{Z}}\) over \(f^{\prime}\). Similarly, let \(\mathcal{Z}_{\Psi}^{\prime\prime}\) be the suspension of \(\Lambda^{\mathbb{Z}}\) over \(g^{\prime}(\mathtt{j}):=g(\mathtt{j}^{+},\theta^{*}(\mathtt{j}^{-}))\).
We will first show that the semiflow \(\mathcal{Z}_{\Phi}^{\prime\prime}\times\mathcal{Z}_{\Psi}^{\prime\prime}\) is ergodic, and then use this to prove the ergodicity of \(\mathcal{Z}_{\Phi}^{\prime}\times\mathcal{Z}_{\Psi}^{\prime}\). Borrowing an idea of Bowen [6], let \(r:\Gamma^{\mathbb{Z}}\to\Gamma^{\mathbb{Z}}\) denote the function which replaces all the negative coordinates of \(\ldots i_{-1};i_{0}i_{1}\ldots\) by \(i_{0}\). Then \(f^{\prime}\) is cohomologous to \(h:=f^{\prime}\circ r\), that is,
\[f^{\prime}=h+u-u\circ\sigma, \tag{6.2}\]
where \(u\) is the continuous function defined by \(u(\mathtt{i})=\sum_{k=0}^{\infty}(f^{\prime}(\sigma^{k}\mathtt{i})-f^{\prime}( \sigma^{k}r(\mathtt{i})))\). The sum converges since \(f^{\prime}\) is Hölder. In particular, the flow \(\mathcal{Z}_{\Phi}^{\prime\prime}\) is conjugate to the suspension of \(\Gamma^{\mathbb{Z}}\) over \(h\), denoted by \(\mathcal{Z}_{\Phi}^{h}=(Z_{\Phi}^{h},\mathcal{T}_{s},\lambda_{\Phi}^{h})\), and the advantage of this is that the function \(h\) depends only on the positive coordinates of \(\Gamma^{\mathbb{Z}}\).
**Lemma 6.2**.: _The eigenvalues of the flow \(\mathcal{Z}_{\Phi}^{\prime\prime}\) are contained in the set_
\[\bigcap_{i\in\Gamma}(\log|\lambda_{1}(A_{i})|)^{-1}\mathbb{Q}.\]
Proof.: Since \(\mathcal{Z}_{\Phi}^{\prime\prime}\) is conjugate to \(\mathcal{Z}_{\Phi}^{h}\) and conjugate flows have the same eigenvalues, it suffices to prove the statement for \(\mathcal{Z}_{\Phi}^{h}\). Moreover, by Lemma 2.11, we may replace \(\mathcal{Z}_{\Phi}^{h}\) by \((\mathcal{Z}_{\Phi}^{h})^{+}\) which we will in this proof continue to denote by \(\mathcal{Z}_{\Phi}^{h}\), for simplicity of notation.
Let \(i\in\Gamma\), and let \(\mathtt{i}_{0}=\ldots iii\ldots\in\Gamma^{\mathbb{Z}}\).
Now, \(|\lambda_{1}(A_{i})|^{-1}=|\lambda_{2}(A_{i}^{-1})|=\|A_{i}^{-1}|_{\theta(\mathfrak{ i}_{0}^{-})}\|=\|A_{i}|_{\theta^{-}(\mathfrak{i}_{0}^{-})}\|^{-1}\). In particular,
\[f^{\prime}(\mathfrak{i}_{0})=-\log\|A_{i}|_{A_{i}^{-1}\theta^{-}(\mathfrak{i}_ {0}^{-})}\|=-\log|\lambda_{1}(A_{i})|.\]
From (6.2) it is clear that also \(h(\mathfrak{i}_{0})=-\log|\lambda_{1}(A_{i})|\).
Let \(\beta\neq 0\) be an eigenvalue of \(\mathcal{Z}_{\Phi}^{h}\). By Proposition 2.10, there exists a _continuous_ eigenfunction \(\phi\) for \(\beta\) on \(\mathcal{Z}_{\Phi}^{h}\), so we have
\[\phi(\mathcal{T}_{s}(\mathfrak{i},t))=e(\beta s)\phi(\mathfrak{i},t) \tag{6.3}\]
for _every_\((\mathfrak{i},t)\in Z_{\Phi}^{h}\) and \(s\geq 0\).
Since \(\phi\neq 0\), we may let \(\psi:Z_{\Phi}^{h}\to\mathbb{R}\) be the real-valued function defined by \(\phi(\mathfrak{i},t)=e(\psi(\mathfrak{i},t))\) for every \((\mathfrak{i},t)\in Z_{\Phi}^{h}\). We obtain from (6.3) that
\[\psi(\mathcal{T}_{s}(\mathfrak{i},t))=\beta s+\psi(\mathfrak{i},t)+n( \mathfrak{i},t)\]
for some integer-valued function \(n:Z_{\Phi}^{h}\to\mathbb{Z}\). Inserting the value \(s=h(\mathfrak{i})\), we obtain
\[\psi(\sigma\mathfrak{i},t)=\beta h(\mathfrak{i})+\psi(\mathfrak{i},t)+n( \mathfrak{i},t)\]
which is equivalent to
\[h(\mathfrak{i})=\beta^{-1}n(\mathfrak{i},t)+\beta^{-1}\psi(\mathfrak{i},t)- \beta^{-1}\psi(\sigma\mathfrak{i},t).\]
In particular, \(h(\mathfrak{i}_{0})=\beta^{-1}n(\mathfrak{i}_{0},t)\in\beta^{-1}\mathbb{Z}\) since \(\mathfrak{i}_{0}\) is a fixed point for \(\sigma\). However, we saw above that \(h(\mathfrak{i}_{0})=-\log|\lambda_{1}(A_{i})|\), whence it follows that \(\beta\in(\log|\lambda_{1}(A_{i})|)^{-1}\mathbb{Q}\).
**Lemma 6.3**.: _The eigenvalues of \(\mathcal{Z}_{\Psi}^{\prime\prime}\) are contained in the set_
\[\bigcap_{j\in\Lambda}(\log|\lambda_{2}(B_{j})|)^{-1}\mathbb{Q}.\]
Proof.: Let \(\mathfrak{j}_{0}=\ldots jjj\ldots\). Then \(\mathfrak{j}_{0}\) is a fixed point for \(\sigma\), and
\[g^{\prime}(\mathfrak{j}_{0})=-\log\|B_{j}^{*}|_{\theta^{*}(\mathfrak{j}_{0}^{ -})}\|=-\log|\lambda_{2}(B_{j}^{*})|=-\log|\lambda_{2}(B_{j})|\]
and by replacing \(\mathcal{Z}_{\Psi}^{\prime\prime}\) with a conjugate flow, we can proceed exactly as in the proof of the previous claim.
Proof of Proposition 6.1.: By Lemmas 6.2 and 6.3 and the assumption of Proposition 6.1, the flows \(\mathcal{Z}_{\Phi}^{\prime\prime}\) and \(\mathcal{Z}_{\Psi}^{\prime\prime}\) have no common eigenvalues. Thus by Proposition 2.9, the product \(\mathcal{Z}_{\Phi}^{\prime\prime}\times\mathcal{Z}_{\Psi}^{\prime\prime}\) is ergodic. Since it has at most countably many eigenvalues, there exists a real number \(c>0\) such that it is also ergodic under the discrete-time map \(\mathcal{T}_{c}\). By a change of coordinates, we may suppose further that \(c=N\) is an integer.
Now, since we have the domination assumption in place, the system \(\mathcal{Z}_{\Phi}^{\prime}\times\mathcal{Z}_{\Psi}^{\prime}\) is a factor of the system \(\mathcal{Z}_{\Phi}^{\prime\prime}\times\mathcal{Z}_{\Psi}^{\prime\prime}\) through the map
\[(\mathfrak{i},t,\mathfrak{j},s)\mapsto(\mathfrak{i}^{+},\theta^{-}(\mathfrak{i }^{-}),t,\mathfrak{j}^{+},\theta^{*}(\mathfrak{j}^{-}),s).\]
Since factor maps preserve ergodicity, this proves the statement.
### Dynamics of the magnifications of \(\mu\)
We claim that the flows \(\mathcal{Z}_{\Phi}\) and \(\mathcal{Z}_{\Psi}\) capture the dynamics of the sequences \((\pi_{\theta(\mathtt{i})^{\perp}}\mu^{\mathtt{i},k})_{k\in\mathbb{N}}\) and \((\pi_{\theta(\mathtt{i})^{\perp}}\nu_{\mathtt{j}|_{i_{k}}})_{k\in\mathbb{N}}\).
The following simple geometric observation justifies our choice of the roof function.
**Lemma 6.4**.: _For a.e. \((\mathtt{i},\theta)\), there exists a constant \(C(\mathtt{i},\theta)>0\) such that_
\[\frac{\|A_{\mathtt{i}|_{n}}|_{A_{\mathtt{i}|_{n}}^{-1}\theta}\|}{\alpha_{1}(A _{\mathtt{i}|_{n}})}\to C(\mathtt{i},\theta)\]
_as \(n\to\infty\)._
Proof.: Note first that \(A_{\mathtt{i}|_{n}}B(0,1)\) is an ellipse with minor axis of length \(\alpha_{1}(A_{\mathtt{i}|_{n}})\) in direction which tends to \(\theta(\mathtt{i})^{\perp}\). On the other hand, \(\|A_{\mathtt{i}|_{n}}|_{A_{\mathtt{i}|_{n}}^{-1}\theta}\|=|A_{\mathtt{i}|_{n} }B(0,1)\cap\theta|\). Therefore, by Lemma 2.1,
\[\frac{\|A_{\mathtt{i}|_{n}}|_{A_{\mathtt{i}|_{n}}^{-1}\theta}\|}{\alpha_{1}(A _{\mathtt{i}|_{n}})}=\frac{|A_{\mathtt{i}|_{n}}B(0,1)\cap\theta|}{|A_{\mathtt{ i}|_{n}}B(0,1)\cap\theta(A_{\mathtt{i}|_{n}})^{\perp}|}=(1+o(1))\frac{|A_{ \mathtt{i}|_{n}}B(0,1)\cap\theta|}{|A_{\mathtt{i}|_{n}}B(0,1)\cap\theta( \mathtt{i})^{\perp}|}\]
and by basic geometry, the above tends to \((\cos(d(\theta,\theta(\mathtt{i})^{\perp})))^{-1}=:C(\mathtt{i},\theta)\).
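Here the "basic geometry" is the following chord computation: for a centred ellipse with semi-minor axis of length \(a\) in direction \(w\) and semi-major axis of length \(A\), the chord through the centre in a direction at angle \(d\) from the minor axis has length

\[2\Big(\frac{\sin^{2}d}{A^{2}}+\frac{\cos^{2}d}{a^{2}}\Big)^{-1/2}=(1+o(1))\,\frac{2a}{\cos d}\qquad\text{as }a/A\to 0,\]

so the ratio of the chords in directions \(\theta\) and \(w=\theta(\mathtt{i})^{\perp}\) tends to \((\cos d(\theta,w))^{-1}\).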
In particular, asymptotically we have that
\[\mu_{M^{\ell_{k}}(\mathtt{i},\theta),kN+\log\alpha_{1}(A_{\mathtt{i}|_{\ell_{k}}})}=\mu_{M^{\ell_{k}}(\mathtt{i},\theta),kN+\log\|A_{\mathtt{i}|_{\ell_{k}}}|_ {A_{\mathtt{i}|_{\ell_{k}}}^{-1}\theta}\|-\log C(\mathtt{i},\theta)}\]
where the sequence \((\ell_{k})_{k\in\mathbb{N}}=(\ell_{k}(\mathtt{i}))_{k\in\mathbb{N}}\) was chosen to be an increasing one that tends to infinity such that
\[B(\Pi(\mathtt{i}),2^{-kN})\cap\Pi(\Gamma^{\mathbb{N}})\subseteq\varphi_{ \mathtt{i}|_{\ell_{k}}}(B(0,1))\]
for every \(k\). We will now specify our choice of the sequence. Let \(N\) be large enough so that \(2^{-N}<\min\{d(\varphi_{i}(B(0,1)),\varphi_{j}(B(0,1))):\ i,j\in\Gamma,\ i\neq j\}\).
Define the sequence \(\ell_{k}=\ell_{k}(\mathtt{i})\) by
\[\ell_{k} =\max\{n:\ -\log\|A_{\mathtt{i}|_{n}}|_{A_{\mathtt{i}|_{n}}^{-1} \theta}\|\leq(k-1)N-\log C(\mathtt{i},\theta)\}\] \[=\max\{n:\ \|A_{\mathtt{i}|_{n}}|_{A_{\mathtt{i}|_{n}}^{-1}\theta}\|^ {-1}\leq 2^{(k-1)N-\log C(\mathtt{i},\theta)}\}\] \[=\max\{n:\ \alpha_{1}(A_{\mathtt{i}|_{n}})\geq 2^{-(k-1)N}\}\]
and notice for every \(k\),
\[B(\Pi(\mathtt{i}),2^{-kN})\cap\Pi(\Gamma^{\mathbb{N}})\subseteq\varphi_{ \mathtt{i}|_{\ell_{k}}}(B(0,1)).\]
With this choice,
\[(\mathtt{i},\theta,(k-1)N-\log C(\mathtt{i},\theta))=(M^{\ell_{k}}(\mathtt{i}, \theta),(k-1)N-\log C(\mathtt{i},\theta)+\log\|A_{\mathtt{i}|_{\ell_{k}}}|_{A_{ \mathtt{i}|_{\ell_{k}}}^{-1}\theta}\|). \tag{6.4}\]
Let \(F\) denote the map \(Z_{\Phi}\to\mathcal{P}(\mathbb{R}^{2})\),
\[(\mathtt{i},\theta,u,t)\mapsto R_{\theta(\mathtt{i})^{\perp}}^{-1}u\mu_{ \mathtt{i},\theta,t+N}. \tag{6.5}\]
**Claim 6.5**.: _For every \((\mathtt{i},\theta,u)\) and \(t\geq 0\),_
\[F(\mathtt{i},\theta,u,f(\mathtt{i},\theta)+t)=F(M(\mathtt{i},\theta),\rho(1,( \mathtt{i},\theta))u,t).\]
_In particular, \(S_{t}\circ F=F\circ\mathcal{T}_{t}\)._
Proof.: By the choice of \(N\), we have
\[B(\Pi(\mathtt{i}),2^{-(f(\mathtt{i},\theta)+t+N)})\cap\Pi(\Gamma^{\mathbb{N}}) \subseteq\varphi_{i_{0}}(B(0,1))\]
for every \(\mathtt{i},\theta\) and \(t\geq 0\). Since \(A_{i_{0}}|_{A_{i_{0}}^{-1}\theta}=2^{-f(\mathtt{i},\theta)}R_{\theta}^{-1} \rho(1,(\mathtt{i},\theta))R_{A_{i_{0}}^{-1}\theta}\) as was noted in (5.4), it follows that
\[F(\mathtt{i},\theta,u,f(\mathtt{i},\theta)+t) =u\mu_{\mathtt{i},\theta,f(\mathtt{i},\theta)+t+N}\] \[=(\varphi_{i_{0}}\mu)_{\mathtt{i},\theta,f(\mathtt{i},\theta)+t+N}\] \[=R_{\theta}S_{f(\mathtt{i},\theta)+t+N}T_{\Pi(\mathtt{i})}( \varphi_{i_{0}}\mu)_{\Pi(\mathtt{i})}^{\theta}\] \[=R_{\theta}S_{f(\mathtt{i},\theta)+t+N}T_{\Pi(\mathtt{i})}\varphi _{i_{0}}\mu_{\Pi(\sigma\mathtt{i})}^{A_{i_{0}}^{-1}\theta}\] \[=\rho(1,(\mathtt{i},\theta))\mu_{M(\mathtt{i},\theta),t+N}\] \[=F(M(\mathtt{i},\theta),\rho(1,(\mathtt{i},\theta))u,t).\]
By basic linear algebra,
\[\|A_{i_{1}}A_{i_{2}}|_{A_{i_{2}}^{-1}A_{i_{1}}^{-1}\theta}\|=\|A_{i_{1}}|_{A_{ i_{1}}^{-1}\theta}\|\cdot\|A_{i_{2}}|_{A_{i_{2}}^{-1}A_{i_{1}}^{-1}\theta}\|.\]
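Indeed, if \(v\) is a unit vector spanning the line \(A_{i_{2}}^{-1}A_{i_{1}}^{-1}\theta\), then \(A_{i_{2}}v\) spans \(A_{i_{1}}^{-1}\theta\), and hence

\[\|A_{i_{1}}A_{i_{2}}v\|=\Big\|A_{i_{1}}\frac{A_{i_{2}}v}{\|A_{i_{2}}v\|}\Big\|\cdot\|A_{i_{2}}v\|=\|A_{i_{1}}|_{A_{i_{1}}^{-1}\theta}\|\cdot\|A_{i_{2}}|_{A_{i_{2}}^{-1}A_{i_{1}}^{-1}\theta}\|.\]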
Combining this with (6.4), we have
\[F(\mathtt{i},\theta,1,(k-1)N-\log C(\mathtt{i},\theta))=R_{\theta( \mathtt{i})^{\perp}}^{-1}\rho(\ell_{k},(\mathtt{i},\theta))\mu_{M^{\ell_{k}}( \mathtt{i},\theta),kN+\log\alpha_{1}(A_{\mathtt{i}|_{\ell_{k}}})}.\]
Let \(\varepsilon>0\). Then by Proposition 4.1, the above and Birkhoff's ergodic theorem, there exist \(n,m\in\mathbb{N}\) and a sequence of intervals \((I_{k})_{k}\) with \(|I_{k}|\geq 2^{-m}\) such that the following holds:
For every \(\mathtt{a}\in\Gamma^{n}\) and \(\mu\)-a.e. \(\mathtt{i}\in[\mathtt{a}]\) and all \(\theta\) in a set of positive \(\mu_{F}\)-measure, there exists a set \(\mathcal{N}_{\varepsilon}\subseteq\mathbb{N}\) such that \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}\cap[0,n])}{n}\geq 1-\varepsilon\) and after an affine change of coordinates,
\[d_{\mathrm{LP}}(\pi_{\theta(\mathtt{i})^{\perp}}\mu^{\mathtt{i},k+m},\ (F(\mathtt{i},\theta,1,(k-1)N-\log C(\mathtt{i},\theta)))^{I_{k}})<\varepsilon \tag{6.6}\]
for every \(k\in\mathcal{N}_{\varepsilon}\).
### Dynamics of the magnifications of \(\nu\)
Write \(G_{\omega}:Z_{\Psi}\to\mathcal{P}(\mathbb{R}^{2})\) for the map
\[(\mathtt{j},\theta,v,t)\mapsto R_{\omega}S_{t}vR_{\theta}\pi_{\theta}T_{\Pi( \mathtt{j})}\nu. \tag{6.7}\]
It is not difficult to see that for every \(\theta\in\mathbb{RP}^{1}\), \(x\in\mathbb{R}^{2}\) and a matrix \(B\), we have
\[\pi_{\theta}(Bx)=\pm R_{\theta}^{-1}\|B^{*}|_{\theta}\|R_{B^{*}\theta}\pi_{B^{ *}\theta}(x),\]
where the sign is negative if and only if \(\langle\pi_{\theta}\circ B|\pi_{B^{*}\theta}\rangle<0\). Write \(O\subseteq\Lambda^{\mathbb{N}}\times\mathbb{RP}^{1}\) for the open set in which this is the case. In particular,
\[|\pi_{\theta}(B_{\mathsf{j}|_{k}}B(0,1))|=\|B_{\mathsf{j}|_{k}}^{*}|_{\theta}\| =\|B_{j_{0}}^{*}|_{\theta}\|\cdot\ldots\cdot\|B_{j_{k}}^{*}|_{B_{j_{k-1}}^{*} \ldots B_{j_{0}}^{*}\theta}\|\]
and
\[\pi_{\theta}\nu_{\mathsf{j}|_{i_{k}}}=R_{\theta}^{-1}\rho(i_{k},(\mathsf{j}, \theta))S_{kN+\log\|B_{\mathsf{j}|_{i_{k}}}^{*}|_{\theta}\|}R_{B_{\mathsf{j}|_ {i_{k}}}^{*}\theta}\pi_{B_{\mathsf{j}|_{i_{k}}}^{*}\theta}T_{\Pi(\sigma^{i_{k} }\mathsf{j})}\nu,\]
where \(\rho\) denotes the cocycle defined in (6.1). In particular, for any \(\mathsf{j}\), \(\theta\) and \(n\),
\[\pi_{\theta}(\nu_{\mathsf{j}|_{i_{n}}})=G_{\theta}(\mathcal{T}_{nN}(\mathsf{j },\theta,1,0)).\]
### Proof of Claims 4.2, 4.3 and 4.4
Proof of Claim 4.2.: Recall that \(\mu_{\mathsf{i}|_{i_{k}}}\) and \(\nu_{\mathsf{j}|_{i_{k}}}\) are measures supported on ellipses with major axes of length \(\approx 1\) and in directions \(\theta(A_{\mathsf{i}|_{i_{k}}})\) and \(\theta(B_{\mathsf{j}|_{i_{k}}})\), respectively, and minor axes of length tending to \(0\) as \(k\to\infty\). In particular, \(d_{\mathrm{LP}}(\nu_{\mathsf{j}|_{i_{k}}},\;\pi_{\theta(\mathsf{j})}(\nu_{ \mathsf{j}|_{i_{k}}}))\to 0\), whence also
\[d_{\mathrm{LP}}(\nu_{\mathsf{j}|_{i_{k}}},\;G_{\theta(\mathsf{j})}(\mathcal{T} _{kN}(\mathsf{j},\theta(\mathsf{j})^{\perp},1,0)))\to 0\]
as \(k\to\infty\). By Theorem 2.5, \(\dim G_{\theta(\mathsf{j})}(\mathsf{j},\theta,v,t)=\min\{1,\dim\nu\}\) for \(\lambda_{\Psi}\)-a.e. \((\mathsf{j},\theta,v,t)\). Therefore, for any \(\varepsilon>0\), large enough \(N\) and a.e. \((\mathsf{j},\theta,v,t)\) there exists a set \(\mathcal{N}_{\varepsilon}^{1}\subseteq\mathbb{N}\) with \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{1}\cap[0,n])}{n}\geq 1- \varepsilon/2\) such that
\[\frac{1}{N}H_{N}(G_{\theta(\mathsf{j})}(\mathcal{T}_{kN}(\mathsf{j},\theta,v,t )))>\min\{1,\dim\nu\}-\varepsilon/2\]
for every \(k\in\mathcal{N}_{\varepsilon}^{1}\). By Lemma 2.12, this in fact holds for every \(0\leq t<g(\mathsf{j},\theta)\). Replacing \(\eta\mapsto H_{N}(\eta)\) by a weak-\({}^{*}\) continuous substitute such as \(\eta\mapsto\int_{0}^{1}H_{N}(\delta_{x}*\eta)\,dx\), we can replace \((\mathsf{j},\theta,v,t)\) by \((\mathsf{j},\theta(\mathsf{j})^{\perp},1,0)\) since the orbits of the second coordinates under \(\mathcal{T}_{N}\) are asymptotic. Then, for all \(k\in\mathcal{N}_{\varepsilon}^{1}\), we have
\[\frac{1}{N}H_{N}(\nu_{\mathsf{j}|_{i_{k}}})>\min\{1,\dim\nu\}-\varepsilon/2.\]
Arguing similarly, for \(\bar{\mu}\)-almost every \(\mathsf{i}\) we find a set \(\mathcal{N}_{\varepsilon}^{2}\) such that \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{2}\cap[0,n])}{n}\geq 1- \varepsilon/2\) and
\[\frac{1}{N}H_{N}(\mu_{\mathsf{i}|_{i_{k}}})>\min\{1,\dim\mu\}-\varepsilon/2\]
for every \(k\in\mathcal{N}_{\varepsilon}^{2}\). Let \(\mathcal{N}_{\varepsilon}=\mathcal{N}_{\varepsilon}^{1}\cap\mathcal{N}_{ \varepsilon}^{2}\). Using the fact that \(\mu_{\mathsf{i}|_{i_{k}}}\) and \(\nu_{\mathsf{j}|_{i_{k}}}\) are supported on ellipses with different limiting directions and applying the chain rule for entropy (Lemma 2.13), we obtain that for \(\bar{\mu}\)-a.e. \(\mathsf{i}\) and \(\bar{\nu}\)-a.e. \(\mathsf{j}\),
\[\frac{1}{N}H_{N}(\mu_{\mathsf{i}|_{i_{k}}}*\nu_{\mathsf{j}|_{i_{k} }}) \geq\frac{1}{N}H_{N}(\mu_{\mathsf{i}|_{i_{k}}})+\frac{1}{N}H_{N}(\nu_{ \mathsf{j}|_{i_{k}}})\] \[\geq\min\{1,\dim\mu\}+\min\{1,\dim\nu\}-\varepsilon\]
for every \(k\in\mathcal{N}_{\varepsilon}\). Since \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}\cap[0,n])}{n}\geq 1-\varepsilon\), this completes the proof.
Proof of Claim 4.3.: For \(\bar{\mu}\times\mu_{F}\) a.e. \((\mathtt{i},\theta)\) we have
\[\lim_{N\to\infty}\frac{1}{N}H_{N}(\mu_{\mathtt{i},\theta})=\dim\mu_{\mathtt{i}, \theta}=\dim\mu-1\]
by Theorem 2.6. Let \(\varepsilon>0\).
Recall that \(Z^{\prime}_{\Phi}=\pi^{1,2,4}Z_{\Phi}\) is a suspension of \(\Gamma^{\mathbb{N}}\times\mathbb{RP}^{1}\). Since \(\mu_{\mathtt{i},\theta}\) is a.s. exact dimensional and non-atomic, by a slight modification of the proof of Lemma 2.17 there exists \(N_{0}\in\mathbb{N}\) and a set \(S\subseteq Z^{\prime}_{\Phi}\) with \(\lambda^{\prime}_{\Phi}(S)\geq 1-\varepsilon\) such that
\[\left|\frac{1}{N}H_{N}((F^{\prime}(\mathtt{i},\theta,t))^{I})-(\dim\mu-1) \right|<\varepsilon\]
for every \((\mathtt{i},\theta,t)\in S\), \(N\geq N_{0}\) and interval \(I\) that contains the origin and has length \(\geq 2^{-m}\), where \(m\) is the integer given by Proposition 4.1. In fact, for any \((\mathtt{i},\theta,t)\in S\), as a consequence of Lemma 2.12, we may suppose that this holds for any \(0\leq t<f(\mathtt{i},\theta)\). For \(\bar{\mu}\times\mu_{F}\)-almost every \((\mathtt{i},\theta)\) and every \(t\geq 0\), applying Birkhoff's ergodic theorem gives a set \(\mathcal{N}_{\varepsilon}\subset\mathbb{N}\) such that \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}\cap[0,n])}{n}\geq 1-\varepsilon\) and for each \(k\in\mathcal{N}_{\varepsilon}\),
\[\left|\frac{1}{N}H_{N}((F^{\prime}(\mathcal{T}_{kN}(\mathtt{i},\theta,t)))^{I} )-(\dim\mu-1)\right|<\varepsilon.\]
Therefore, for almost every \((\mathtt{i},\theta)\) we have
\[\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(F(\mathtt{i},\theta, 1,kN+\log C(\mathtt{i},\theta))^{I_{k}})\] \[= \frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}((F^{\prime}(\mathtt{ i},\theta,kN+\log C(\mathtt{i},\theta)))^{I_{k}})\] \[\leq \dim\mu-1+\varepsilon\]
for large enough \(n\), where \(C(\mathtt{i},\theta)\) is as in Lemma 6.4. The above also holds after the change of coordinates of Proposition 4.1 for the same value of \(N\), since, as the measure \(F(\mathtt{i},\theta,1,kN+\log C(\mathtt{i},\theta))\) is supported on a line, an expanding affine change of coordinates only results in adding something to the fourth argument of \(F\), which we were free to choose to begin with. Finally, using (6.6), we obtain
\[\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\pi_{\theta(\mathtt{i})^{\perp}} \mu^{\mathtt{i},k+m})\leq\dim\mu-1+2\varepsilon\]
for \(\bar{\mu}\)-almost every \(\mathtt{i}\in[\mathtt{a}]\) and all large enough \(n\).
Proof of Claim 4.4.: Recall the definitions of the functions \(F:Z_{\Phi}\to\mathcal{P}(\mathbb{R}^{2})\) and \(G_{\omega}:Z_{\Psi}\to\mathcal{P}(\mathbb{R}^{2})\) from (6.5) and (6.7). Since \(F\) intertwines the actions \(\mathcal{T}_{r}\) and \(S_{r}\) by Claim 6.5, \(F\lambda_{\Phi}\) is \(S_{r}\)-invariant and ergodic, so by Lemma 2.18 and Theorems 2.5 and 2.6,
\[\dim(\eta*\gamma)=\min\{1,\dim\eta+\dim\gamma\}=\min\{1,\dim\mu-1+\dim\nu\}\]
for \(F\lambda_{\Phi}\times G_{\theta(\mathtt{i})^{\perp}}\lambda_{\Psi}\)-almost every \((\eta,\gamma)\). On the other hand,
\[F\lambda_{\Phi}=\frac{1}{2}F^{\prime}\lambda_{\Phi}^{\prime}+\frac{1}{2}(-F^{ \prime})\lambda_{\Phi}^{\prime}\qquad\text{and}\qquad G_{\theta(\mathtt{i})^ {\perp}}\lambda_{\Psi}=\frac{1}{2}G_{\theta(\mathtt{i})^{\perp}}^{\prime} \lambda_{\Psi}^{\prime}+\frac{1}{2}(-G_{\theta(\mathtt{i})^{\perp}}^{\prime })\lambda_{\Psi}^{\prime},\]
where the negative sign denotes push-forward under the map \(x\mapsto-x\). In particular,
\[\dim((\pm F^{\prime})(\mathtt{i},\theta_{1},t_{1})*(\pm G^{\prime})( \mathtt{j},\theta_{2},t_{2}))=\min\{1,\dim\mu-1+\dim\nu\}\]
for any choice of the signs and \(\lambda_{\Phi}^{\prime}\times\lambda_{\Psi}^{\prime}\)-almost every \((\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\).
Now, for \(\bar{\mu}\times\mu_{F}\)-a.e. \((\mathtt{i},\theta)\), any interval that contains the origin has positive \(\mu_{\mathtt{i},\theta}\)-measure. Therefore, for any \(\varepsilon>0\), applying Lemma 2.17 gives a set \(S\) with \(\lambda_{\Phi}^{\prime}\times\lambda_{\Psi}^{\prime}(S)>1-\varepsilon\) and an integer \(N_{0}\) such that
\[\frac{1}{N}H_{N}((\pm F^{\prime})(\mathtt{i},\theta_{1},t_{1}) ^{I}*(\pm G^{\prime})(\mathtt{j},\theta_{2},t_{2}))\] \[\geq \dim((\pm F^{\prime})(\mathtt{i},\theta_{1},t_{1})*(\pm G^{ \prime})(\mathtt{j},\theta_{2},t_{2}))-\varepsilon\] \[\geq \min\{1,\dim\mu-1+\dim\nu\}-\varepsilon\]
for all \(N\geq N_{0}\), \((\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\in S\) and interval \(I\) of length \(2^{-m}\).
Recalling that \(\mathcal{Z}_{\Phi}^{\prime}\times\mathcal{Z}_{\Psi}^{\prime}\) is ergodic under \(\mathcal{T}_{N}\), by Birkhoff's ergodic theorem there exists for a.e. \((\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\) a set \(\mathcal{N}_{\varepsilon}^{1}\subseteq\mathbb{N}\) with \(\liminf_{n\to\infty}\frac{\#(\mathcal{N}_{\varepsilon}^{1}\cap[0,n])}{n}\geq 1-\varepsilon\) such that for every \(k\in\mathcal{N}_{\varepsilon}^{1}\),
\[\mathcal{T}_{kN}(\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\in S.\]
Now, for almost every \((\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\), let
\[E_{u,v}(n)=\{0\leq k\leq n:\ \pi^{4,8}\mathcal{T}_{kN}(\mathtt{i},\theta_{1},t_{1},1,\mathtt{j},\theta_{2},t_{2},1)=(u,v)\}\]
for each \((u,v)\in\{-1,1\}^{2}\) and \(n\in\mathbb{N}\). Note that we omit the dependence on \((\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\) from the notation. For each \((u,v)\),
\[\frac{\#(\mathcal{N}_{\varepsilon}^{1}\cap E_{u,v}(n))}{n}\geq\frac{\#E_{u,v} (n)}{n}-\varepsilon\]
for all large enough \(n\), whence
\[\frac{1}{n}\sum_{k\in E_{u,v}(n)}\frac{1}{N}H_{N}((uF^{\prime}( \mathcal{T}_{kN}(\mathtt{i},\theta_{1},t_{1})))^{I_{k}}*(vG^{\prime}( \mathcal{T}_{kN}(\mathtt{j},\theta_{2},t_{2}))))\] \[\geq \frac{\#E_{u,v}(n)}{n}\min\{1,\dim\mu-1+\dim\nu\}-\varepsilon\]
for any \((u,v)\) and large enough \(n\). Thus,
\[\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}((F(\mathcal{T}_{kN}( \mathtt{i},\theta_{1},t_{1},1)))^{I_{k}}*G(\mathcal{T}_{kN}(\mathtt{j}, \theta_{2},t_{2},1)))\] \[= \sum_{u,v\in\{-1,1\}}\frac{1}{n}\sum_{k\in E_{u,v}(n)}\frac{1}{ N}H_{N}((uF^{\prime}(\mathcal{T}_{kN}(\mathtt{i},\theta_{1},t_{1})))^{I_{k}}*vG^{ \prime}(\mathcal{T}_{kN}(\mathtt{j},\theta_{2},t_{2})))\] \[\geq \min\{1,\dim\mu-1+\dim\nu\}-4\varepsilon. \tag{6.8}\]
Since this holds for almost every choice of \((\mathtt{i},\theta_{1},t_{1},\mathtt{j},\theta_{2},t_{2})\), it in fact holds for any choice of \(0\leq t_{1}<f(\mathtt{i},\theta_{1})\) and \(0\leq t_{2}<g(\mathtt{j},\theta_{2})\), which follows from Lemma 2.12. Since the affine change of coordinates of Proposition 4.1 only results in a change of the values of \(t_{1}\) and \(t_{2}\), the above also holds after this change.
Note that \(G(\mathtt{j},\theta_{2},1,t_{2})\) is continuous in \(\theta_{2}\), uniformly in \(Z_{\Psi}\). On the other hand, for almost every choice of \(\theta_{2}\) and \(\mathtt{i}\in\Gamma^{\mathbb{N}}\) we have \(d(B_{\mathtt{j}|_{k}}\theta_{2},\;B_{\mathtt{j}|_{k}}\theta(\mathtt{i})^{ \perp})\to 0\) as \(k\to\infty\), by Lemma 2.2. Therefore, by replacing \(H_{N}\) by a weak-\({}^{*}\) continuous substitute, we may replace \(\theta_{2}\) by \(\theta(\mathtt{i})^{\perp}\) so that (6.8) still holds. Finally, applying (6.6), we obtain
\[\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{N}H_{N}(\pi_{\theta( \mathtt{i})^{\perp}}\mu^{\mathtt{i},k}*\pi_{\theta(\mathtt{i})^{\perp}}\nu_{ \mathtt{j}|_{i_{k}}})\geq\min\{1,\dim\mu-1+\dim\nu\}-6\varepsilon\]
for \(\bar{\mu}\)-almost every \(\mathtt{i}\in[\mathtt{a}]\) and \(\bar{\nu}\)-almost every \(\mathtt{j}\in\Lambda^{\mathbb{N}}\).
# Revisiting Leading Quantum Corrections to Near Extremal Black Hole Thermodynamics

Nabamita Banerjee, Muktajyoti Saha (arXiv:2303.12415v1, 2023-03-22, http://arxiv.org/abs/2303.12415v1)
###### Abstract
Computing the 4D Euclidean path integral to one-loop order we find the large quantum corrections that govern the behavior of a spherically symmetric non-supersymmetric near-extremal black hole at very low temperature. These corrections appear from the near-horizon geometry of the near-extremal black hole. Using first-order perturbation theory we find that such corrections arise from the zero modes of the extremal background. In the logarithm of the partition function, these correspond to terms involving the logarithm of temperature. Part of our result matches the existing one in the literature, derived from an effective Schwarzian theory.
## 1 Introduction
Black holes are thermal objects, uniquely described in the General Theory of Relativity by their mass, angular momentum, and charges. A revolutionary discovery in physics is the understanding of the laws of black hole thermodynamics, where the temperature is given by the surface gravity and the entropy is given by the area of the horizon [1; 2]
of the black hole. In [3; 4], it has been shown that the entropy of a black hole can be computed from a semiclassical computation of the Euclidean path integral in the black hole background. Later in [5], it was shown that the area law of entropy for a black hole with non-vanishing temperature can also be obtained as the Noether charge corresponding to the time translation Killing vector, evaluated on the black hole horizon. Beyond the semiclassical regime, the entropy gets universal corrections of the form of logarithm of horizon area [6; 7; 8; 9; 10]. Like ordinary thermodynamic systems, black hole entropy should also have a microscopic description in terms of the degeneracy of states in quantum theory. For a certain class of charged black holes, namely extremal black holes, the microscopic counting is very well understood in the context of string theory [11; 12; 13; 14; 15; 16; 17; 18; 19; 20].
A charged black hole at nonzero temperature, called a non-extremal black hole, has two distinct horizons. Such a non-extremal black hole emits thermal radiation [21; 22] and eventually settles to the ground state which corresponds to the extremal black hole. An extremal black hole is a charged black hole at zero temperature for which the two horizons coincide. For these black holes, Wald's formalism for computing entropy does not apply. Sen in [23; 24] computed their entropy using the entropy function formalism and obtained the correct area law, see also [25; 26]. It was shown that an extremal black hole has an infinitely long AdS\({}_{2}\) throat near the horizon which results in an enhanced isometry. This is particularly important in understanding the dynamics of these black holes. Going beyond the semiclassical limit, in [27; 28; 29] the logarithmic corrections were computed for extremal black holes and agreement with microscopic results in several scenarios was established. Clearly, extremal black holes play a very important role in understanding the microstructure of black holes. The logarithmic terms in black hole entropy were also computed in various other cases [30; 31; 32; 33; 34; 35; 36; 37; 38; 39], although the microscopic results are not available for such systems. These logarithmic corrections do not depend on the explicit ultraviolet structure of the underlying quantum theory of gravity. Rather, these are generated by loops of massless fields present in the theory. These corrections are universal features of the theory that can be extracted from the infrared data and yet these are very important to constrain the UV-complete theories.
For non-extremal black holes, a concrete microscopic understanding is so far lacking. This puts the study of near-extremal black holes on a very important footing. They can be considered as small temperature deviations from the extremal black holes, close enough to retain the enhanced symmetries that arise at extremality, while simultaneously corresponding to excited states on the microscopic side. On the macroscopic side, a naive semiclassical analysis for a near-extremal black hole gives the energy above extremality to be proportional to the square of temperature. However, the average energy of Hawking quanta is proportional to temperature. This seems to suggest that at sufficiently low temperature, the near-extremal black hole does not have enough energy to radiate, which is a clear contradiction to the concept of Hawking radiation. As a resolution to the apparent puzzle, in [40], it was argued that semiclassical physics breaks down at such small temperatures and, to understand the system, one needs to incorporate quantum corrections to the thermodynamics. The authors considered the effective description [40; 41; 42] of the near-extremal black holes, where the low energy physics is described by a Schwarzian theory of slightly broken asymptotic
symmetry modes of the AdS\({}_{2}\) factor of the extremal near-horizon geometry. Using the path integral of Schwarzian theory [43; 44], a large quantum correction of the form \(\log T\) appears in the logarithm of the partition function. These corrections are different from the logarithm of horizon area (or charge) corrections, although both come from the one-loop computation. Using the \(\log T\) term, the average energy expression gets an extra contribution that resolves the apparent contradiction involving Hawking radiation. This is because, in the presence of this correction, the average black hole energy remains greater than that of the Hawking quanta even at very low temperatures.
In this paper, we attempt to extract the \(\log T\) correction from a direct 4D Euclidean path integral computation without resorting to the effective lower-dimensional description. We observe that these corrections cannot be obtained by taking a small temperature limit of the results for a non-extremal black hole. Instead, we carry on the analysis in a limit where the near-extremal solution is treated as a small deviation of the extremal solution. The computation of the partition function for an extremal background is completely captured by the infinite near-horizon throat. Although the throat is finite for a near-extremal black hole, it is very large as the temperature is small. In the asymptotic region, the geometry is well-approximated by the full extremal solution. Here the effects of temperature are highly suppressed. Since the fluctuations die off near asymptotic infinity, the quantum corrections near the horizon have a more significant contribution than that in the asymptotic region. Hence, even in this case, the dynamics is governed by the near-horizon data. In this spirit, we quantize the system in the near-horizon region of the near-extremal black holes.
The computation of the one-loop partition function amounts to evaluating the eigenvalues of the kinetic operator corresponding to small fluctuations around a background. Since the near-horizon near-extremal background is a deviation from the extremal AdS\({}_{2}\times\)S\({}^{2}\) geometry, the near-extremal kinetic operator is a small temperature deviation of the extremal kinetic operator. The eigenfunctions of the extremal kinetic operator are known, which allows us to employ first-order perturbation theory to find the near-extremal eigenvalues. We notice that the \(\log T\) correction originates from the zero modes of the extremal kinetic operator, which get a small non-zero mass due to the near-extremal correction of the background. All other modes give rise to contributions polynomially suppressed in temperature. Therefore, we find the zero modes of the extremal kinetic operator and compute the corresponding eigenvalue corrections. The \(\log T\) correction coming from the tensor zero modes (asymptotic symmetries of AdS\({}_{2}\)) is in agreement with the Schwarzian results. However, we get additional corrections from other zero modes. Finally, we would like to comment that the issues raised in this paper are similar in spirit to those of [45], but the explicit analysis and computations are different. Also, we differ in our interpretation of the results.
The paper is organized as follows: In section 2 we discuss the near-horizon geometry of a near-extremal black hole in 4D Einstein-Maxwell theory and compute the Bekenstein-Hawking entropy from the near-horizon geometry only. This signifies that at least at the semiclassical level, the near-horizon information is enough to find the entropy of the system. In section 3, we discuss the forms of the quantum correction to near-extremal partition function and lay out our strategy of computing \(\log T\) contributions. Using first
order perturbation theory, we compute the \(\log T\) corrections in section 4. In section 5, we present an effective Schwarzian description that captures part of the 4D computations. Finally, we summarize the results in section 6. The appendices contain some relevant computational details.
## 2 Near-extremal black hole in 4D Einstein-Maxwell theory
We consider the 4D Einstein-Maxwell action in Euclidean signature:
\[\mathcal{S}=-\frac{1}{16\pi G_{N}}\int d^{4}x\sqrt{g}(R-F^{2}). \tag{1}\]
We will set \(16\pi G_{N}=1\) for convenience. The Euclidean time direction is compact. For a well-defined variational problem, we add appropriate boundary terms near asymptotic infinity in the spatial direction. Imposing Dirichlet and Neumann boundary conditions on the metric and gauge field respectively, the required boundary term [46; 47; 3; 4] is given by,
\[\mathcal{S}_{\rm bdy}=-2\int\sqrt{\gamma}(K+2n_{A}A_{B}F^{AB}), \tag{2}\]
here \(\gamma\) is the induced metric and \(n_{A}\) is the outward normal to the boundary. Varying the action (1) along with the boundary terms, we have the equations of motion given as:
\[R_{AB}=2F_{AC}F_{B}^{\phantom{B}C}-\frac{1}{2}g_{AB}F^{2}; \qquad R=0;\qquad\nabla_{A}F^{AB}=0. \tag{3}\]
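Note that the second equation is simply the trace of the first: in four dimensions,

\[R=g^{AB}R_{AB}=2F_{AC}F^{AC}-\frac{1}{2}\,g^{AB}g_{AB}\,F^{2}=2F^{2}-2F^{2}=0.\]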
The classical solutions satisfy these equations of motion and also the Bianchi identities, given by,
\[\nabla_{[A}F_{BC]}=0;\qquad R_{A[BCD]}=0. \tag{4}\]
Spherically symmetric black hole solutions in this theory are given by Reissner-Nordstrom geometry, labeled by mass and charge parameters. For a black hole solution, the periodicity of the time direction is fixed by the inverse temperature. We are interested in a near-extremal black hole solution that has a very small temperature. This solution is perturbatively close to the zero-temperature extremal solution. We will now briefly discuss the geometries.
### The full extremal solution and its near horizon geometry
In this subsection, we will discuss the extremal Reissner-Nordstrom solution since we will be treating the near-extremal solution as a small deviation from extremality. We begin with the generic non-extremal Reissner-Nordstrom solution1 in the theory (1),
Footnote 1: Without loss of generality we are considering electric charge only since in 4D, we have electric-magnetic duality.
\[ds^{2}=g_{AB}dx^{A}dx^{B}=f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d \Omega^{2},\quad f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}, \tag{5}\] \[A_{t}=iQ\left(\frac{1}{r_{+}}-\frac{1}{r}\right),\quad F_{rt}= \frac{iQ}{r^{2}}. \tag{6}\]
This solution has two horizons2
Footnote 2: We note that the two horizons are visible in the Lorentzian geometry. The Euclidean geometry starts from \(r=r_{+}\), while the time direction has periodicity equal to the inverse temperature.
\[M=\frac{1}{2r_{+}}(Q^{2}+r_{+}^{2}),\quad r_{-}=\frac{Q^{2}}{r_{+}}. \tag{7}\]
The temperature is given by,
\[T=\frac{1}{4\pi}\big{|}f^{\prime}(r_{+})\big{|}=\frac{1}{4\pi r_{+}^{3}}(r_{+} ^{2}-Q^{2}). \tag{8}\]
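This expression follows directly from (5) and (7): using \(2M=(Q^{2}+r_{+}^{2})/r_{+}\),

\[f^{\prime}(r_{+})=\frac{2M}{r_{+}^{2}}-\frac{2Q^{2}}{r_{+}^{3}}=\frac{Q^{2}+r_{+}^{2}-2Q^{2}}{r_{+}^{3}}=\frac{r_{+}^{2}-Q^{2}}{r_{+}^{3}}.\]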
At extremality, the two horizons coincide such that \(M=Q=r_{0}\), where \(r=r_{0}\) denotes the extremal horizon. For the extremal black hole, \(f(r_{0})=0\) and \(f^{\prime}(r_{0})=0\). Then the \(g_{tt}\) component of the metric takes the following form which now has a double zero at \(r=r_{0}\),
\[g_{tt}=f(r)=\left(1-\frac{r_{0}}{r}\right)^{2}. \tag{9}\]
In the near-horizon region i.e. for \(r-r_{0}=\rho\ll r_{0}\), the solution can be expressed as,
\[ds^{2}=\frac{\rho^{2}}{r_{0}^{2}}dt^{2}+\frac{r_{0}^{2}d\rho^{2}}{\rho^{2}}+r _{0}^{2}d\Omega^{2},\quad F_{rt}=\frac{i}{r_{0}}. \tag{10}\]
Therefore the geometry is AdS\({}_{2}\times\)S\({}^{2}\) near the horizon. In this region, the symmetry gets enhanced due to the AdS\({}_{2}\) factor which plays a very important role in the dynamics of these black holes.
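Explicitly, setting \(r=r_{0}+\rho\) in (9) gives

\[f(r_{0}+\rho)=\Big(1-\frac{r_{0}}{r_{0}+\rho}\Big)^{2}=\frac{\rho^{2}}{(r_{0}+\rho)^{2}}=\frac{\rho^{2}}{r_{0}^{2}}\big(1+\mathcal{O}(\rho/r_{0})\big),\]

so that \(f\,dt^{2}+f^{-1}dr^{2}\) reduces to \(\frac{\rho^{2}}{r_{0}^{2}}dt^{2}+\frac{r_{0}^{2}}{\rho^{2}}d\rho^{2}\), an AdS\({}_{2}\) factor of radius \(r_{0}\), while the sphere freezes at radius \(r_{0}\).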
### The full near-extremal solution and its near horizon geometry
Next, keeping the charge fixed to its extremal value \(r_{0}\), we introduce a small mass above extremality such that the black hole becomes slightly non-extremal. As a consequence, the near-horizon geometry of a near-extremal black hole is described by a small deviation from AdS\({}_{2}\times\)S\({}^{2}\). Before moving ahead with the explicit structure of the geometry, let us briefly mention the effective 2D description of the near-horizon physics of such black holes, as presented in the existing literature [40; 41; 42]. Using the symmetries of the near-horizon region, the 4D theory can be reduced to a two-dimensional manifold which, in the massless sector, gives rise to a 2D theory of gravity coupled to a dilaton. An appropriate Weyl transformation of the 2D metric removes the kinetic term of the dilaton. The constant dilaton solution in this theory corresponds to the near-horizon extremal geometry. The standard procedure to describe near-extremal physics is to consider fluctuations of only the dilaton field around its constant value, while keeping the metric part the same. At first order in fluctuations, the resulting theory turns out to be Jackiw-Teitelboim (JT) gravity 3,
with appropriate boundary conditions [48; 49]. By integrating out the dilaton, JT gravity can be further boiled down to a 1D Schwarzian theory [44; 50], which captures the near-extremal physics. This puts a constraint on the 2D metric, which sets the curvature to a negative constant value i.e. the metric is fixed to asymptotically AdS\({}_{2}\). The falloff of the dilaton also gets fixed near the boundary. Thus the effective JT description suggests that the near-horizon geometry of the near-extremal black hole is a Weyl-transformed AdS\({}_{2}\), with the conformal factor fixed by the dilaton profile, together with a sphere whose slightly varying radius is also set by the dilaton. This form of the solution is, however, problematic, since it does not solve the 4D equations of motion. In this section, we directly compute the near-horizon geometry from the 4D Reissner-Nordstrom solution, which also satisfies the equations of motion to leading order in the deviation from extremality. We argue that this near-horizon geometry (after considering a suitable Weyl factor) cannot be transformed into a locally AdS\({}_{2}\) geometry and hence is not equivalent to the solution coming from JT gravity. Our effective description of the system is presented in section 5.
We now present the near-extremal geometry. Due to the presence of a small temperature, the horizons split slightly from the extremal one. We parametrize the near-extremal solution by \(r_{0}\) and \(\delta\), where \(\delta\ll r_{0}\) characterizes the first-order deviation from extremality4. In terms of these parameters we have,
Footnote 4: Since \(\delta\sim T\), we will use the temperature \(T\) as the perturbation parameter in the computation of one-loop determinant so that we can directly extract out the \(\log T\) dependence. But for the semiclassical computation from the near-horizon geometry, it is instructive to parametrize the solution by \(\delta\).
\[M=r_{0}+\frac{\delta^{2}}{2r_{0}}+\frac{2\delta^{3}}{r_{0}^{2}} +\mathcal{O}(\delta^{4}),\] \[r_{+}=r_{0}+\delta+\frac{5\delta^{2}}{2r_{0}}+\mathcal{O}( \delta^{3}),\] \[T=\frac{\delta}{2\pi r_{0}^{2}}+\mathcal{O}(\delta^{3}),\quad \beta=\frac{2\pi r_{0}^{2}}{\delta}+16\pi\delta-\frac{45\pi\delta^{2}}{2r_{0}} +\mathcal{O}(\delta^{3}). \tag{11}\]
Hence, the full near-extremal solution gets corrected at order \(\delta^{2}\). It is given by (6) with the \(g_{tt}\) component being,
\[f(r)=\Big{(}1-\frac{r_{0}}{r}\Big{)}^{2}-\frac{\delta^{2}}{rr_{0}}. \tag{12}\]
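As a consistency check of (11), substituting \(Q=r_{0}\) and \(r_{+}=r_{0}+\delta+\frac{5\delta^{2}}{2r_{0}}\) into the temperature formula (8) gives

\[T=\frac{r_{+}^{2}-r_{0}^{2}}{4\pi r_{+}^{3}}=\frac{2r_{0}\delta\big(1+\frac{3\delta}{r_{0}}+\mathcal{O}(\delta^{2})\big)}{4\pi r_{0}^{3}\big(1+\frac{3\delta}{r_{0}}+\mathcal{O}(\delta^{2})\big)}=\frac{\delta}{2\pi r_{0}^{2}}\big(1+\mathcal{O}(\delta^{2})\big),\]

with the \(\mathcal{O}(\delta)\) terms canceling between numerator and denominator, in agreement with (11).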
We will split the full near-extremal solution into a near-region and a far-region, which will be important for the computations. From the effective 2D perspective, such a splitting was performed in [40; 41; 42].
#### The geometry in near-horizon region (NHR):
First, we consider the near-horizon geometry of the near-extremal RN black holes. We perform the following coordinate transformations on the RN geometry (6) with parameters (11),
\[r(\eta)=r_{+}+\delta(\cosh\eta-1),\quad t(\theta)=\frac{r_{0}^{2}}{\delta}\theta, \tag{13}\]
where the coordinates range over \(0<\eta<\eta_{0}\) and \(0<\theta<2\pi\). We denote the coordinates on AdS\({}_{2}\) by \(x^{\mu}\) and the coordinates on S\({}^{2}\) by \(x^{i}\). The horizon is located at \(\eta=0\), such that \(r=r_{+}\). In this coordinate system, the near-extremal geometry has the form \(\tilde{g}_{AB}=g^{0}_{AB}+\delta g^{(c)}_{AB},\tilde{F}_{AB}=F^{0}_{AB}+\delta F ^{(c)}_{AB},\tilde{A}_{B}=A^{0}_{B}+\delta A^{(c)}_{B}\) where5
Footnote 5: The same can be obtained by perturbatively solving the 4D equations of motion directly in the near-horizon region as illustrated in appendix C.
\[g^{0}_{AB}dx^{A}dx^{B}=r_{0}^{2}(d\eta^{2}+\sinh^{2}\eta d\theta ^{2})+r_{0}^{2}(d\psi^{2}+\sin^{2}\psi d\varphi^{2}),\] \[F^{0}_{\mu\nu}=\frac{i}{r_{0}}\varepsilon_{\mu\nu},\quad A^{0}_ {\theta}=ir_{0}(\cosh\eta-1). \tag{14}\]
These are the \(\mathcal{O}(1)\) pieces of the expansion that give the near-horizon extremal geometry. Note that at this order, the horizon is located at \(\eta=0\) or at \(r=r_{0}\), which is the extremal horizon. The \(\mathcal{O}(\delta)\) correction is given as,
\[g^{(c)}_{AB}dx^{A}dx^{B}=2r_{0}(2+\cosh\eta)\tanh^{2}\left(\frac {\eta}{2}\right)(d\eta^{2}-\sinh^{2}\eta d\theta^{2})+2r_{0}\cosh\eta d\Omega^ {2},\] \[F^{(c)}_{\mu\nu}=-2ir_{0}^{-2}\cosh\eta\varepsilon_{\mu\nu}, \quad A^{(c)}_{\theta}=-i\sinh^{2}\eta. \tag{15}\]
Here the perturbative parameter is the small deviation of horizon \(\delta\), proportional to the temperature. \(\varepsilon_{\mu\nu}\) is the Levi-Civita tensor on AdS\({}_{2}\), with the non-zero component being \(\varepsilon_{\eta\theta}=r_{0}^{2}\sinh\eta\). This geometry has also been discussed in [45]. Two important points to note are,
* We are considering a near-extremal black hole with a very small temperature \(T\), so that we have \(\delta\ll r_{0}\) or \(r_{0}T\ll 1\). The perturbative expansion of the near-horizon geometry is valid as long as we are very close to the horizon so that the new radial coordinate \(\eta\) does not grow much. Hence, we choose the radial cutoff \(\eta_{0}\) such that \(\delta\mathrm{e}^{\eta_{0}}\ll r_{0}\). For an extremal black hole, this radial cutoff can be taken to infinity, resulting in an infinite AdS\({}_{2}\) throat.
* From the structure of the near-extremal correction, we note that the geometry on the \((\eta,\theta)\) plane is not asymptotically AdS\({}_{2}\). All the corrections to the fields appear at the same order of temperature and they diverge near the cutoff surface at \(\eta=\eta_{0}\). Since the deviation \(g^{(c)}_{\mu\nu}\) is traceless with respect to the AdS\({}_{2}\) metric, it cannot be transformed to even a small Weyl transformation of AdS\({}_{2}\) via coordinate transformations. This point is in contradiction with a 2D effective description of these black holes in terms of a JT-like theory, since, for JT theory, the background must be a locally AdS\({}_{2}\) geometry. We shall expand on this in the discussion section.
#### The geometry in far-horizon region (FHR):
In the far region, we need to consider the full solution, where the corrections appear at \(\mathcal{O}(\delta^{2})\). At large enough distances from the horizon, the geometry closely resembles the full extremal geometry as the horizons appear to be overlapping. Hence in the FHR, the effects of temperature become negligible as compared to that in the NHR.
So far we have split the full near-extremal geometry into near-horizon and far-horizon regions. These regions are separated by a 3D boundary curve located at \(\eta=\eta_{0}\) or \(r=r_{b}\). We denote the boundary as \(\partial N\). The parameters \(\eta_{0}\) and \(r_{b}\) are related through the coordinate transformation (13). The fields are smooth across this artificial boundary. We impose Dirichlet boundary condition on the metric and Neumann boundary condition on the gauge field. Physically these two conditions fix the length of the boundary and the charge of the black hole respectively.
To summarize, the full manifold (\(M\)) is obtained by gluing the two geometries across \(\partial N\). The NHR manifold has a boundary \(\partial N\) whereas the FHR manifold has two boundaries \(\partial N\) and \(\partial M\). The near-horizon boundary \(\partial N\) is shared by both the manifolds and \(\partial M\) is the boundary located near asymptotic infinity. We will work in a limit such that the boundary \(\partial N\) is asymptotically far from the horizon with respect to the NHR but it still lies in the near-horizon region with respect to asymptotic infinity. These limits also have been discussed in [40; 41; 42] and are given in equations (17).
### Semiclassical near-extremal entropy from near-horizon geometry
The thermodynamics of the near-extremal black hole can be studied using the full geometry as discussed in appendix B, where we work in an ensemble with fixed charge and fixed length of the boundary at asymptotic infinity. In this section, we will extract the Bekenstein-Hawking entropy from the near-horizon region only without referring to the far-horizon data. This is because entropy is a near-horizon quantity for any black hole, which can be anticipated from Wald's derivation of entropy as the Noether charge at horizon [5]. For the computation of entropy, we don't need additional counterterms [4], since the role of counterterms is only to regulate the energy via appropriate background subtraction. For computing the entropy, we need to consider the boundary length as an independent parameter for our choice of ensemble. This plays the role of the inverse temperature from the perspective of an observer in the near-horizon boundary. For this purpose, we need to parametrize the black hole solution with charge \(Q=r_{0}\) and the shift \(\delta\) in the horizon
Figure 1: Splitting of the geometry into near-horizon and far-horizon regions
radius (or mass above extremality) instead of parametrizing by temperature, which gives the boundary length near asymptotic infinity.
The near-horizon geometry, which describes the small-temperature physics above extremality, has been discussed in section 2.2. This geometry, given by (14) and (15), remains a good approximation up to a radial distance \(\eta_{0}\) such that \(\eta_{0}\) is large but the near-extremal corrections (terms proportional to \(\delta\)) remain small compared to the extremal geometry. Therefore we have,
\[\delta\mathrm{e}^{\eta_{0}}\ll r_{0}, \tag{16}\] \[\mathrm{e}^{\eta_{0}}\approx\frac{r_{0}}{\delta}\epsilon,\quad \epsilon\ll 1. \tag{17}\]
To get the entropy, we evaluate the action (1) along with the boundary terms (2) for the near-horizon near-extremal solution, where the boundary is located at radial distance \(\eta=\eta_{0}\) in the NHR. The on-shell action is given as,
\[I=16\pi(-\pi r_{0}^{2}-2\pi r_{0}\delta\cosh^{2}\eta_{0}). \tag{18}\]
The boundary length is given as,
\[\beta_{0}=\frac{1}{r_{0}}\int_{\eta=\eta_{0}}d\theta\sqrt{g_{\theta\theta}}= 2\pi\sinh\eta_{0}-\frac{2\pi\delta}{r_{0}}\operatorname{csch}\eta_{0}(2-3 \cosh\eta_{0}+\cosh^{3}\eta_{0}). \tag{19}\]
Now we use the condition (17) so that the near-horizon approximation holds, and we work in the small \(\epsilon\) limit. The entropy is given by,
\[S_{\text{near-ext}}=\beta_{0}\frac{\partial I}{\partial\beta_{0}}-I=\beta_{0 }\frac{\partial I}{\partial\delta}\frac{\partial\delta}{\partial\beta_{0}}-I =16\pi^{2}r_{0}^{2}\left(1+\frac{2\delta}{r_{0}}\right). \tag{20}\]
This result is obtained for small \(\delta\) and \(\epsilon\), and it reproduces the horizon-area law to linear order in \(\delta\). In terms of the temperature parameter, we recover the semiclassical entropy as:
\[S_{\text{near-ext}}=16\pi^{2}r_{0}^{2}\left(1+4\pi r_{0}T\right). \tag{21}\]
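This is nothing but the Bekenstein-Hawking area law: with \(16\pi G_{N}=1\), the horizon area \(A_{H}=4\pi r_{+}^{2}\) gives

\[\frac{A_{H}}{4G_{N}}=16\pi^{2}r_{+}^{2}=16\pi^{2}r_{0}^{2}\Big(1+\frac{2\delta}{r_{0}}\Big)+\mathcal{O}(\delta^{2})=16\pi^{2}r_{0}^{2}\left(1+4\pi r_{0}T\right)+\mathcal{O}(T^{2}),\]

upon using \(\delta=2\pi r_{0}^{2}T\) from (11).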
Therefore, we see that the Wald entropy [5] can be independently computed from the near-horizon geometry only. The result is of course in agreement with the computation using full geometry as presented in appendix B, where we also discuss the computation of energy. In the subsequent sections, we compute the quantum \(\log T\) correction to the semiclassical result, which is the main goal of this paper.
## 3 Quantum corrections to near extremal black hole partition function
The contribution to entropy coming from terms proportional to the logarithm of area has been a subject of huge interest in the context of extremal and non-extremal black holes [27; 28; 29; 30; 32]. This appears from the total one-loop correction to the partition function due to the presence of massless fields. On one hand, these corrections can be computed from the low energy data i.e. the computations do not require the ultraviolet information
of the underlying quantum theory. On the other hand, the universal feature of these log corrections allows more control over the microstructure of the black holes. For certain classes of extremal black holes, these corrections match with the microscopic results [27; 28; 29; 30]. A similar study for near-extremal black holes is also very important, as these systems can be considered as small temperature deviations from extremal black holes. Furthermore, at very low temperatures the semiclassical thermodynamic description is not enough to study the dynamics of these black holes [40], as we describe below.
### Breakdown of semiclassical physics
As noted in the introduction, the semiclassical analysis breaks down at sufficiently low temperature. Let us briefly discuss the importance of quantum corrections for a near-extremal black hole at very low temperatures. It can be understood from the expression of mass (11), which is proportional to the energy of the system (18) under the semiclassical approximation. In terms of temperature, it is given as,
\[E=16\pi(r_{0}+2\pi^{2}r_{0}^{3}T^{2}). \tag{19}\]
Therefore, the thermodynamic energy above extremality goes as \(\sim T^{2}\). But this is inconsistent with Hawking radiation since the average energy of thermal radiation goes as \(\sim T\). Below a certain mass scale \(M_{\rm gap}\sim r_{0}^{-3}\), the semiclassical energy of the black hole is less than the average energy of radiation. This implies that the black hole cannot radiate even though it has a nonzero temperature. To resolve this issue it was conjectured that there exists a literal mass gap of order \(M_{\rm gap}\) between the extremal and lightest near-extremal states, although in a non-supersymmetric theory, the rationale for such a gap was not justified and the conjecture remained questionable. A resolution was proposed in [40], where the authors argued that, at very low temperatures, the semiclassical description breaks down and one has to take quantum effects into account. They further used a 2D effective theory technique to compute the partition function at low temperatures. An interesting result from this approach is the emergence of a quantum correction of the form \(\log T\) in the logarithm of the partition function. It has been shown that, once this correction is taken into account, the average (i.e. thermodynamic) energy remains greater than that of Hawking radiation even at small temperatures. Hence, it was concluded that there is actually no mass gap. In a nutshell, due to the breakdown of the semiclassical analysis at low enough temperatures, it is required to consider the effect of quantum corrections. In this section, we shall address the same in the original 4D description of the near-extremal black holes.
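Parametrically, the crossover scale follows from comparing the semiclassical energy above extremality with the energy \(\sim T\) of a single Hawking quantum:

\[E-16\pi r_{0}=32\pi^{3}r_{0}^{3}T^{2}\lesssim T\quad\Longleftrightarrow\quad T\lesssim\frac{1}{32\pi^{3}r_{0}^{3}},\]

which, up to order-one factors, is the scale \(M_{\rm gap}\sim r_{0}^{-3}\) quoted above.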
### Form of the quantum corrections in near-extremal limit
We attempt to understand the one-loop correction to the partition function for a near-extremal black hole via a Euclidean path integral computation in 4D, without getting into an effective lower-dimensional description. The near-extremal solution is parametrized by two large parameters: the charge (or extremal horizon radius) \(r_{0}\) and the inverse temperature \(\beta\sim 1/T\). We evaluate the large contributions involving these parameters, in particular, the logarithmic contributions. Although computing the full one-loop contribution directly is out of hand, Sen and collaborators have put forward [27; 28; 29; 30; 32; 33] a general
strategy to extract the logarithm of horizon radius contributions for (non-)extremal black holes. As we will argue below, the \(\log T\) contributions cannot be obtained by taking a small temperature limit of these computations. Toward the end of this section, we present our strategy to compute such corrections. We find that, to the leading order, the large quantum contributions are of the form \(\log r_{0}\) and \(\log T\), whereas there are further polynomially suppressed corrections in temperature.
### A brief discussion on the log correction for (non-)extremal black holes
Following [30; 33], to compute the one-loop partition function for a generic black hole solution in Einstein-Maxwell theory (1), the fields are fluctuated around the black hole background,
\[g_{AB}=\tilde{g}_{AB}+h_{AB},\quad A_{B}=\tilde{A}_{B}+\frac{1}{2}a_{B}. \tag{20}\]
The action is expanded to quadratic order in fluctuations. The zeroth order term of the expansion is the on-shell action, evaluated for the background \(\{\tilde{g}_{AB},\tilde{A}_{B}\}\), which is a constant and needs to be regulated properly to get sensible semiclassical physics. By the action principle, in the presence of appropriate boundary terms (2), the first-order term vanishes as the background satisfies the equations of motion. Our goal is to integrate out the Gaussian-like quadratic action and find the one-loop correction to the partition function.
Since the fluctuations have redundancies due to diffeomorphism and \(U(1)\) gauge invariances, we also add gauge-fixing terms of the following form to the quadratic action,
\[S_{\text{diffeo}} =-\frac{1}{2}\int d^{4}x\sqrt{\tilde{g}}\left(\tilde{\nabla}_{A}h ^{AC}-\frac{1}{2}\tilde{\nabla}^{C}h\right)\left(\tilde{\nabla}^{B}h_{BC}- \frac{1}{2}\tilde{\nabla}_{C}h\right), \tag{21}\] \[S_{\text{gauge}} =-\frac{1}{2}\int d^{4}x\sqrt{\tilde{g}}(\tilde{\nabla}_{A}a^{A} )^{2}. \tag{22}\]
The quadratic action of fluctuations takes the form,
\[S^{(2)}\equiv\int d^{4}x\sqrt{\tilde{g}}\ \Psi\tilde{\Delta}\Psi, \tag{23}\]
where \(\Psi\) represents all the fields of the theory and \(\tilde{\Delta}\) is a 2-derivative differential operator, constructed out of the background. The partition function is then given as the integral,
\[Z=\int\mathcal{D}\Psi\mathrm{e}^{-S^{(2)}}=\frac{1}{\sqrt{\det(\tilde{\Delta} )}}. \tag{24}\]
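Written in terms of the eigenvalues \(\tilde{\Lambda}_{n}\) of \(\tilde{\Delta}\) (introduced below), this reads

\[\log Z=-\frac{1}{2}\log\det\tilde{\Delta}=-\frac{1}{2}\sum_{n}\log\tilde{\Lambda}_{n},\]

so the large logarithmic corrections are controlled by how the smallest eigenvalues scale with \(r_{0}\) and \(T\).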
We have omitted the constant semiclassical contribution to avoid notational clutter. To evaluate the integral it is required to compute the eigenvalues of the kinetic operator which in turn gives the determinant. Using the heat-kernel formalism for a generic (non-) extremal background, presented in [32; 33], the logarithm of horizon radius contribution can be computed. In principle, for the computation of partition function, the Lagrangian density should be integrated over the full background. Due to the infinite AdS\({}_{2}\) throat in
the near-horizon region of an extremal black hole, the dynamics is wonderfully captured by the near-horizon geometry. Hence, for an extremal black hole, the background is considered to be the near horizon AdS\({}_{2}\times\)S\({}^{2}\) geometry. An important point to note is that, for non-extremal black holes, one needs to remove the effects of thermal gas to obtain the correct entropy corresponding to the degeneracy of the black hole states.
For an extremal black hole, the log correction can be computed even without the heat-kernel method, since for the extremal AdS\({}_{2}\times\)S\({}^{2}\) background the eigenfunctions of the kinetic operator are known explicitly. Using the explicit form of these eigenfunctions, the log correction has also been computed by finding the eigenvalues for a class of extremal black holes [27; 28; 29; 30]. These corrections are also computed using Sen's quantum entropy function formalism [15; 16; 27].
For a near-extremal black hole, it is natural to consider a small temperature limit of the non-extremal result. The computation for a non-extremal black hole [32] is, however, performed in a limit where the horizon radius \(r_{+}\) and the inverse temperature \(\beta\) are of the same order, i.e. \(r_{+}\sim\beta\). This is not true for a near-extremal black hole, where the full horizon radius depends on two independent large parameters: the extremal radius and the inverse temperature. Moreover, this computation gives the temperature-dependent corrections as a polynomial expansion; the \(\log T\) corrections cannot be obtained through this procedure. Therefore, we treat the near-extremal black hole as a deviation from the extremal one and compute the \(\log T\) corrections directly. We discuss our strategy in the next subsection.
### 3.4 Strategy for the quantum correction computation for near-extremal black holes
We compute the one-loop corrected partition function for a near-extremal black hole by finding the eigenvalues of the kinetic operator. We consider the near-horizon region of the black hole to be a small temperature deviation of the extremal near-horizon geometry. The near-horizon throat of an extremal black hole is infinite and hence, all the computations for an extremal black hole get contributions from the near-horizon region only. For a near-extremal black hole, the throat is finite yet large. Therefore, we expect that many of the physical questions can be answered from the near-horizon region. In the far region near asymptotic infinity, the geometry can be well-approximated by the full extremal geometry. Also, the fluctuations die off in this region. Therefore, in the presence of the large near-horizon throat, the contributions coming from the FHR are very small compared to the contributions of the NHR. Hence, we focus on the near-horizon physics, where the near-extremal geometry is a perturbative, linear-order temperature deviation of the AdS\({}_{2}\times\)S\({}^{2}\) geometry and is given in (2.15). The kinetic operator can also be expanded in the same way. This allows us to apply first-order perturbation theory for the computation of the eigenvalues. The computation is schematically described below.
Due to the perturbative expansion of the background geometry, the kinetic operator splits into two parts given as \(\tilde{\Delta}=\Delta^{0}+T\Delta^{(c)}\). The \(\mathcal{O}(T^{0})\) term \(\Delta^{0}\) is the extremal kinetic operator, whereas the \(\mathcal{O}(T)\) term \(\Delta^{(c)}\) is a differential operator which we treat perturbatively. We denote the eigenvalues of the full kinetic operator by \(\tilde{\Lambda}_{n}\), which are
small deviations from the eigenvalues of the extremal operator as,
\[\tilde{\Lambda}_{n}=\Lambda_{n}^{0}+T\Lambda_{n}^{(c)}. \tag{3.7}\]
Here \(\Lambda_{n}^{0}\) are the eigenvalues of the extremal kinetic operator such that,
\[\Delta^{0}f_{n}^{0}(x)=\Lambda_{n}^{0}f_{n}^{0}(x), \tag{3.8}\]
where \(f_{n}^{0}(x)\) represents the orthonormal eigenfunctions of the operator \(\Delta^{0}\). Now we invoke the standard machinery of first-order perturbation theory. We start with the modified eigenvalue equation, which has the following form,
\[(\Delta^{0}+T\Delta^{(c)})(f_{n}^{0}(x)+Tf_{n}^{(c)}(x))=( \Lambda_{n}^{0}+T\Lambda_{n}^{(c)})(f_{n}^{0}(x)+Tf_{n}^{(c)}(x)). \tag{3.9}\]
The \(\mathcal{O}(1)\) terms vanish due to the eigenvalue equation of the extremal kinetic operator. Thus at \(\mathcal{O}(T)\), we have:
\[\Delta^{(c)}f_{n}^{0}+\Delta^{0}f_{n}^{(c)}=\Lambda_{n}^{(c)}f_{n }^{0}+\Lambda_{n}^{0}f_{n}^{(c)}. \tag{3.10}\]
Taking the inner product with \(f_{n}^{0*}\) on both sides of the equation and using the orthonormality conditions, we obtain the correction to the eigenvalues,
\[\Lambda_{n}^{(c)}=\int d^{4}x\sqrt{g^{0}}\ f_{n}^{0*}(x)\ \Delta^{(c)}\ f_{n}^{0}(x). \tag{3.11}\]
In order to find the corrections to the eigenfunctions, we take the inner product of (3.9) with \(f_{m}^{0*}\) for \(m\neq n\), which gives the following correction,
\[f_{n}^{(c)}(x)=\sum_{m\neq n}\frac{1}{\Lambda_{n}^{0}-\Lambda_{m }^{0}}\left(\int d^{4}x^{\prime}\sqrt{g^{0}}\ f_{m}^{0*}(x^{\prime})\ \Delta^{(c)}\ f_{n}^{0}(x^{\prime})\right)\ f_{m}^{0}(x). \tag{3.12}\]
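As a sanity check of (3.11), the following toy sketch (with a hypothetical six-dimensional matrix standing in for the kinetic operator, not the actual black-hole operator) compares the exact eigenvalues of \(\Delta^{0}+T\Delta^{(c)}\) with the first-order formula; in particular, the zero mode of \(\Delta^{0}\) acquires an eigenvalue linear in \(T\), which is the mechanism behind the \(\log T\) corrections discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1e-3  # plays the role of the small temperature

# Toy 'extremal' operator with one zero mode, plus a symmetric perturbation
D0 = np.diag([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
Dc = rng.standard_normal(D0.shape)
Dc = (Dc + Dc.T) / 2

exact = np.sort(np.linalg.eigvalsh(D0 + T * Dc))
first_order = np.sort(np.diag(D0) + T * np.diag(Dc))  # eq. (3.11) in this basis

print(np.max(np.abs(exact - first_order)))  # mismatch is O(T^2)
```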
To find the one-loop determinant, only the evaluation of the eigenvalues is required. The one-loop correction to the logarithm of partition function can be computed for \(\tilde{\Lambda}_{n}\neq 0\) as given by,
\[\log Z=-\frac{1}{2}\sum_{n}\log\tilde{\Lambda}_{n}. \tag{3.13}\]
**Contribution from extremal zero modes:**
We consider the eigenfunctions of the extremal kinetic operator, which have zero eigenvalues i.e. \(\Lambda_{n}^{0}=0\). For these modes, the corrected eigenvalues are linear in temperature. Therefore, the extremal zero modes acquire some small non-zero mass in the near-extremal background. These modes contribute to the \(\log T\) corrections in the logarithm of the partition function.
**Contribution from extremal non-zero modes:**
From the non-zero modes of the extremal kinetic operator, we get contributions of the form \(\log r_{0}+\mathcal{O}(T)\) to the logarithm of the partition function. In the small-temperature limit, these are subleading compared to the \(\log T\) contribution.
**Contribution from near-extremal zero modes:**
There might be some modes that are zero modes for both extremal and near-extremal backgrounds. For such modes, the eigenvalue correction is \(\mathcal{O}(T^{2})\). Because of the vanishing eigenvalues, we cannot perform the corresponding Gaussian integrals. These modes can affect the partition function only through the measure. We will impose normalization conditions on these zero modes similar to the standard prescription and investigate the contributions. As we will see later, there are indeed such zero modes, but their measure does not give a \(\log T\) contribution.
From this analysis, we understand that the \(\log T\) correction should be given by the contributions of the modes which are exact zero modes of the extremal kinetic operator. The origin of this correction is the small temperature-dependent mass acquired by the zero modes in the presence of the near-extremal correction to the background geometry. In the next section, we carry out this computation.
## 4 Computation of \(\log T\) contributions
In this section, we will compute the eigenvalues of the kinetic operator on the near-horizon near-extremal background using first-order perturbation theory and find the \(\log T\) corrections. First, we consider the quadratic action [30; 33] for the fluctuations \(\{h_{AB},a_{A}\}\). The quadratic Lagrangian density for the graviton is,
\[\mathcal{L}_{hh} =h_{AB}\Big{[}\frac{1}{4}\tilde{g}^{AC}\tilde{g}^{BD}\Box-\frac{1 }{8}\tilde{g}^{AB}\tilde{g}^{CD}\Box+\frac{1}{2}\tilde{R}^{ACBD}+\frac{1}{2} \tilde{R}^{AC}\tilde{g}^{BD}-\frac{1}{2}\tilde{R}^{AB}\tilde{g}^{CD}\] \[+\frac{1}{8}\tilde{F}^{2}\left(2\tilde{g}^{AC}\tilde{g}^{BD}- \tilde{g}^{AB}\tilde{g}^{CD}\right)-\tilde{F}^{AC}\tilde{F}^{BD}-2\tilde{F}^{ AC}\tilde{F}^{C}{}_{E}\tilde{g}^{BD}+\tilde{F}^{AE}\tilde{F}^{B}{}_{E}\tilde{g}^{ CD}\Big{]}h_{CD}. \tag{4.1}\]
The quadratic Lagrangian density for the photon is,
\[\mathcal{L}_{aa}=\frac{1}{2}a_{A}\left(\tilde{g}^{AB}\Box-\tilde{R}^{AB} \right)a_{B}. \tag{4.2}\]
The mixing terms between the graviton and the photon are,
\[\mathcal{L}_{ha}=-h_{AB}\left(4\tilde{g}^{A[C}\tilde{F}^{D]B}+\tilde{F}^{CD} \tilde{g}^{AB}\right)\tilde{\nabla}_{C}a_{D}. \tag{4.3}\]
The ghost Lagrangian is,
\[\mathcal{L}_{\text{ghost}}=b_{A}\left(\tilde{g}^{AB}\Box+\tilde{R}^{AB} \right)c_{B}+b\Box c-2b\tilde{F}^{AB}\tilde{\nabla}_{A}c_{B}. \tag{4.4}\]
We have added the ghost terms to the action due to gauge fixing. Here the background is taken to be near-extremal. Therefore, the full quadratic action is given as,
\[S=\int d^{4}x\sqrt{\tilde{g}}(\mathcal{L}_{hh}+\mathcal{L}_{aa}+\mathcal{L}_{ ha}+\mathcal{L}_{\text{ghost}}). \tag{4.5}\]
### 4.1 The extremal zero modes
For the quantum correction to the partition function, we need to find all the corrected eigenvalues. As discussed earlier, the zero modes of the extremal background can give rise to the \(\log T\) correction, whereas the nonzero modes give rise to polynomial corrections suppressed by powers of \(T\). In appendix A, we review the eigenfunctions of the extremal kinetic operator. There are two classes of normalizable eigenfunctions on AdS\({}_{2}\), which are labeled by some continuous and discrete parameters. The discrete modes physically correspond to large gauge transformations and large diffeomorphisms, whereas the continuous modes are derived from normalizable scalars. Although the large gauge transformations and large diffeomorphisms are non-normalizable, the discrete vector and tensor modes, constructed out of their derivatives, are normalizable. The zero modes are part of the discrete modes [30]. See also [45] for a detailed discussion on the zero modes and their regularization.
Because of orthogonality, all the modes decouple in the extremal background; hence, their contributions can be studied separately. First, we consider the contributions from the discrete modes and identify the zero modes amongst them. We expand the nonzero components of the fields following [30] as linear combinations of discrete eigenfunctions,
\[a_{\mu}=E_{1}v_{\mu}+E_{2}\varepsilon_{\mu\nu}v^{\nu},\] \[h_{\mu i}=\frac{1}{\sqrt{\kappa}}\left(E_{3}\partial_{i}v_{\mu}+ \tilde{E}_{3}\varepsilon_{\mu\nu}\partial_{i}v^{\nu}+E_{4}\varepsilon_{ij} \partial^{j}v_{\mu}+\tilde{E}_{4}\varepsilon_{ij}\varepsilon_{\mu\nu}\partial ^{j}v^{\nu}\right),\] \[h_{\mu\nu}=\frac{r_{0}}{\sqrt{2}}(\nabla_{\mu}\hat{\xi}_{\nu}+ \nabla_{\nu}\hat{\xi}_{\mu}-g_{\mu\nu}\nabla^{\rho}\hat{\xi}_{\rho})+E_{6}w_{ \mu\nu};\quad\hat{\xi}_{\mu}=E_{5}v_{\mu}+\tilde{E}_{5}\varepsilon_{\mu\nu}v^{ \nu}. \tag{4.6}\]
Here, \(v_{\mu}\) is the normalizable vector mode (A.12) constructed out of the discrete non-normalizable scalar modes, multiplied with spherical harmonics. \(w_{\mu\nu}\) is the discrete normalizable tensor mode (A.17) corresponding to non-normalizable diffeomorphisms, multiplied with the spherical harmonics. \(\kappa\) is the \(-\Box_{S^{2}}\) eigenvalue given as \(\frac{l(l+1)}{r_{0}^{2}}\). We have suppressed the mode labels for simplicity since the different labels do not mix among themselves. For each sector, we will evaluate the contribution to the action, and finally, we will take a sum over all modes.
In the \(l=0\) sector of spherical harmonics, the modes \(E_{3},\tilde{E}_{3},E_{4},\tilde{E}_{4}\) are absent since these modes involve derivatives on \(S^{2}\). Therefore, the contribution to the zeroth order (i.e. extremal) action is given as,
\[-\frac{1}{2}\kappa(E_{1}^{2}+E_{2}^{2})-\frac{1}{2}(\kappa+2r_{0}^{-2})(E_{5} ^{2}+\tilde{E}_{5}^{2})-\frac{1}{2}\kappa E_{6}^{2}. \tag{4.7}\]
The contribution is diagonal in the coefficients \(E_{i}\), i.e. the corresponding basis elements are eigenfunctions of the extremal kinetic operator. Since \(\kappa=0\) for \(l=0\), we see that the contributions coming from \(E_{1},E_{2},E_{6}\) vanish. Hence, the corresponding basis elements, i.e. \(v_{\mu},\varepsilon_{\mu\nu}v^{\nu},w_{\mu\nu}\) respectively, are the zero modes of the extremal operator. We will find the corrections to the eigenvalues of these eigenfunctions.
The contribution to the zeroth order action coming from each sector corresponding to \(l\geq 1\) is given as,
\[-\frac{1}{2}\kappa(E_{1}^{2}+E_{2}^{2})-\frac{1}{2}(\kappa+2r_{0}^{- 2})(E_{5}^{2}+\tilde{E}_{5}^{2})-\frac{1}{2}\kappa E_{6}^{2}\] \[-\frac{1}{2}(\kappa-2r_{0}^{-2})(E_{3}^{2}+\tilde{E}_{3}^{2}+E_{4 }^{2}+\tilde{E}_{4}^{2})+2ir_{0}^{-1}\sqrt{\kappa}(E_{1}\tilde{E}_{3}-E_{2}E_{ 3}). \tag{4.8}\]
The modes corresponding to \(E_{1},\tilde{E}_{3}\) and \(E_{2},E_{3}\) mix amongst themselves. For \(l=1\), the \(E_{4},\tilde{E}_{4}\) terms vanish i.e. the corresponding basis elements are zero modes of the extremal operator. Beyond \(l=1\), all modes have nonzero eigenvalues.
#### 4.1.1 Contribution from \(l=0\) tensor modes
The tensor modes \(w^{n}_{\mu\nu}\) are degenerate in the discrete label \(n\). Therefore, we apply degenerate perturbation theory to find the matrix elements between different labels. This matrix turns out to be diagonal. The eigenvalue correction corresponding to \(w^{n}_{\mu\nu}\) is given by the integral of \(w^{n}\cdot\Delta\cdot w^{n}\):
\[\Lambda[w^{n}_{\mu\nu}]=\frac{n\pi T}{256r_{0}}\Big{[} -69+8n(-6+11n+8n^{2})+4(1+n)(-1+8n^{2})\cosh\eta_{0}+\] \[+4(1+4n+2n^{2})\cosh 2\eta_{0}+4(1+n)\cosh 3\eta_{0}+\cosh 4\eta_{0} \Big{]}\] \[\cdot\left(\text{sech}\,\frac{\eta_{0}}{2}\right)^{6}\left(\text {csch}\,\frac{\eta_{0}}{2}\right)^{2}(\coth\eta_{0}+\text{csch}\,\eta_{0})^{- 2n}. \tag{4.9}\]
Using the value of the radial cutoff \(\eta_{0}\) from (2.17), we get,
\[\Lambda[w^{n}_{\mu\nu}]=\frac{n\pi T}{2r_{0}}. \tag{4.10}\]
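As a cross-check, the full expression (4.9) can be verified to reduce to (4.10) symbolically; the sketch below (assuming sympy is available) takes the strict \(\eta_{0}\to\infty\) limit, the finite-cutoff corrections being exponentially small, for a few values of \(n\).

```python
import sympy as sp

n, T, r0, eta0 = sp.symbols('n T r_0 eta_0', positive=True)

bracket = (-69 + 8*n*(-6 + 11*n + 8*n**2)
           + 4*(1 + n)*(-1 + 8*n**2)*sp.cosh(eta0)
           + 4*(1 + 4*n + 2*n**2)*sp.cosh(2*eta0)
           + 4*(1 + n)*sp.cosh(3*eta0)
           + sp.cosh(4*eta0))

Lam = (n*sp.pi*T/(256*r0) * bracket
       * sp.sech(eta0/2)**6 * sp.csch(eta0/2)**2
       * (sp.coth(eta0) + sp.csch(eta0))**(-2*n))

# large-throat limit reproduces eq. (4.10): n*pi*T/(2*r_0)
for nv in (2, 3, 4):
    print(nv, sp.limit(Lam.subs(n, nv).rewrite(sp.exp), eta0, sp.oo))
```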
Equation (4.10) is the first-order correction to the eigenvalue for the tensor modes. The contribution to the logarithm of the partition function, coming from the tensor zero modes6, is given as,
Footnote 6: The real and imaginary parts of the tensor modes have the same eigenvalues. Hence, we multiply with a factor of 2.
\[\log Z_{\text{tensor}} =-2\cdot\frac{1}{2}\sum_{n\geq 2}\log\Lambda[w^{n}_{\mu\nu}]\] \[=-\sum_{n\geq 2}\log\left(\frac{n\pi T}{2r_{0}}\right)\] \[=\log\left(\prod_{n\geq 2}\frac{2r_{0}}{n\pi T}\right). \tag{4.11}\]
The product over \(n\) inside the logarithm can be evaluated using zeta function regularization [44; 51],
\[\prod_{n\geq 2}\frac{\alpha}{nT}=\frac{1}{\sqrt{2\pi}}\ \frac{T^{3/2}}{\alpha^{3/2}}. \tag{4.12}\]
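This regulated value follows from \(\zeta(0)=-\tfrac{1}{2}\) and \(\zeta^{\prime}(0)=-\tfrac{1}{2}\log 2\pi\):

\[\log\prod_{n\geq 1}\frac{\alpha}{nT}=\sum_{n\geq 1}\left[\log\frac{\alpha}{T}-\log n\right]\;\rightarrow\;\zeta(0)\log\frac{\alpha}{T}+\zeta^{\prime}(0)=-\frac{1}{2}\log\frac{\alpha}{T}-\frac{1}{2}\log 2\pi,\]

so that \(\prod_{n\geq 1}\frac{\alpha}{nT}=\frac{1}{\sqrt{2\pi}}\left(\frac{T}{\alpha}\right)^{1/2}\); stripping off the \(n=1\) factor \(\alpha/T\) reproduces (4.12).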
Using this result to compute the product, we have:
\[\log Z_{\rm tensor}\sim\frac{3}{2}\log T. \tag{4.13}\]
The contribution coming from tensor zero modes agrees with the effective 2D theory results as derived in [40; 45]. The contributions to the partition function due to the modified eigenvalues of the extremal tensor zero modes can also be derived from the exact quantization of a Schwarzian theory. We come back to this discussion in section 5. The reason behind getting the same contribution from a one-loop computation stems from the one-loop exact structure of the Schwarzian theory. But the one-loop action (3.5) for the orthonormalized tensor modes does not reproduce the Schwarzian action. The emergence of a Schwarzian-like action from the tensor zero modes has been discussed in [45] where the authors have used a particular normalization for the modes. It differs from that of the standard orthonormal basis discussed in [30], which we have used extensively for our work. The computation of the action that describes the tensor zero modes requires an effective description of the theory, as will be described in section 5.
#### 4.1.2 Contribution from \(l=0\) vector modes
We denote the vector modes as \(v^{a,n}_{\mu}\equiv\{v^{n}_{\mu},\varepsilon_{\mu\nu}v^{n,\nu}\}\), where \(n\) is the discrete label. All these modes are degenerate; therefore, we invoke degenerate first-order perturbation theory and compute the matrix elements:
\[\int d^{4}x\sqrt{g}\ v^{a.p}\cdot\Delta\cdot v^{b,n},\]
where \(\Delta\) is the kinetic operator with the appropriate spacetime index structure. This matrix turns out to be diagonal, i.e. proportional to \(\delta^{pn}\delta_{ab}\). For the eigenvector \(v^{n}_{\mu}\), we find the eigenvalue:
\[\Lambda[v^{n}_{\mu}]=\frac{n\pi T}{2r_{0}}(1+2n+n\cosh\eta_{0})\left(\text{ sech}\,\frac{\eta_{0}}{2}\right)^{4}\left(\tanh\frac{\eta_{0}}{2}\right)^{2n}. \tag{4.14}\]
The eigenvalue corresponding to the eigenvector \(\varepsilon_{\mu\nu}v^{n,\nu}\) is the same, \(\Lambda[\varepsilon_{\mu\nu}v^{n,\nu}]=\Lambda[v^{n}_{\mu}]\). Using the value of the radial cutoff \(\eta_{0}\) in (2.17), the eigenvalue vanishes at first order in temperature, since \(\Lambda[v^{n}_{\mu}]\sim\mathcal{O}(T^{2})\). Therefore, we conclude that these modes remain zero modes even in the near-extremal background, and we cannot perform a Gaussian integral over them.
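Explicitly, for large \(\eta_{0}\) the factors in (4.14) behave as \(\cosh\eta_{0}\sim\frac{1}{2}e^{\eta_{0}}\), \(\left(\text{sech}\,\frac{\eta_{0}}{2}\right)^{4}\sim 16\,e^{-2\eta_{0}}\) and \(\left(\tanh\frac{\eta_{0}}{2}\right)^{2n}\to 1\), so

\[\Lambda[v^{n}_{\mu}]\sim\frac{n\pi T}{2r_{0}}\cdot\frac{n}{2}e^{\eta_{0}}\cdot 16\,e^{-2\eta_{0}}=\frac{4\pi n^{2}T}{r_{0}}\,e^{-\eta_{0}}\sim\mathcal{O}(T^{2}),\]

since the cutoff (2.17) sets \(e^{-\eta_{0}}\) to be of order \(r_{0}T\).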
To understand the structure of the contribution to the partition function coming from the measure of these zero modes, we consider the normalization condition,
\[\int\mathcal{D}a_{\mu}\text{exp}\left(-\int d^{4}x\sqrt{g}g^{\mu\nu}a_{\mu}a_ {\nu}\right)=1. \tag{4.15}\]
Here we have considered the fluctuations \(a_{\mu}\) to be a linear combination of the \(l=0\) vector zero modes, given as \(a_{\mu}=\alpha_{n}v^{n}_{\mu}\). Since these modes are also zero modes of the extremal background, we can readily see that the exponent in this integration has a temperature-independent piece and a term linear in temperature. We get this form using the orthogonality condition of the modes. Considering \(\mathcal{D}a_{\mu}\sim\mathcal{N}^{\prime}\prod_{n}d\alpha_{n}\), the normalization condition
has the following form,
\[\int\mathcal{N}^{\prime}\prod_{n}d\alpha_{n}\text{exp}(-\mathcal{N}_{ n}^{2}\alpha_{n}^{2})=1. \tag{4.16}\]
Performing the Gaussian integral, we have
\[\frac{\mathcal{N}^{\prime}}{\sqrt{\prod_{n}\mathcal{N}_{n}^{2}}}= 1,\quad\mathcal{N}^{\prime}=\prod_{n}\mathcal{N}_{n}\sim\mathcal{O}(1)+ \mathcal{O}(T). \tag{4.17}\]
Therefore, the contribution coming from the measure has an \(\mathcal{O}(1)\), i.e. temperature-independent, piece. In other words, there is no factor of \(T\) multiplying the partition function, hence giving no \(\log T\) contribution to the logarithm of the partition function. These contributions will be polynomially suppressed in temperature.
#### 4.1.3 Contribution from \(l=1\) vector modes
We denote these modes as \(y_{\mu i}^{a,n}=v_{\mu}^{a,n}\xi_{i}^{2;1,m}\equiv\{\frac{1}{\sqrt{\kappa}} \varepsilon_{ij}\partial^{j}v_{\mu},\frac{1}{\sqrt{\kappa}}\varepsilon_{ij} \varepsilon_{\mu\nu}\partial^{j}v^{\nu}\}\). Here \(\kappa=2r_{0}^{-2}\) is the \(-\Box_{S^{2}}\) eigenvalue for the \(l=1\) sector and \(\xi_{i}^{2;1,m}\) is a vector eigenfunction of the Laplacian on \(S^{2}\) as in (A.24). Clearly, \(m\) runs over the values \(-1,0,+1\). Again we invoke degenerate perturbation theory but the correction matrix turns out to be diagonal. Therefore, for each value of the labels \(|m|\leq 1\) and \(n\geq 1\), we have the correction corresponding to \(\varepsilon_{ij}\partial^{j}v_{\mu}\):
\[\Lambda[\varepsilon_{ij}\partial^{j}v_{\mu,n}]=\frac{n\pi T}{32r_ {0}}[7+8n+4(1+n)\cosh\eta_{0}+\cosh 2\eta_{0}]\left(\text{sech}\,\frac{ \eta_{0}}{2}\right)^{4}\left(\tanh\frac{\eta_{0}}{2}\right)^{2n}. \tag{4.18}\]
The eigenvalue correction corresponding to the second kind of eigenfunction is the same, and to order \(T\) the value is given by,
\[\Lambda[\varepsilon_{\mu\nu}\varepsilon_{ij}\partial^{j}v_{n}^{ \nu}]=\Lambda[\varepsilon_{ij}\partial^{j}v_{\mu,n}]=\frac{n\pi T}{4r_{0}}. \tag{4.19}\]
The contributions from these modes to the partition function are given by,
\[\log Z_{l=1\,\text{vector}} =-\frac{1}{2}\sum_{\begin{subarray}{c}n\geq 1,\\ |m|=0,1\end{subarray}}\log\Lambda[\varepsilon_{ij}\partial^{j}v_{\mu,n,m}]- \frac{1}{2}\sum_{\begin{subarray}{c}n\geq 1,\\ |m|=0,1\end{subarray}}\log\Lambda[\varepsilon_{\mu\nu}\varepsilon_{ij} \partial^{j}v_{n,m}^{\nu}]\] \[=-\frac{6}{2}\sum_{n\geq 1}\log\left(\frac{n\pi T}{4r_{0}}\right)\] \[=3\log\left(\prod_{n\geq 1}\frac{4r_{0}}{n\pi T}\right). \tag{4.20}\]
Using (4.12), we compute the product inside the logarithm, where we consider the \(n=1\) contribution separately. Therefore, we have
\[\log Z_{l=1\,\text{vector}}=3\log\Bigg{(}\frac{\pi^{3/2}}{\sqrt{ 2\pi}}\frac{T^{3/2}}{(4r_{0})^{3/2}}\Bigg{)}+3\log\Bigg{(}\frac{4r_{0}}{\pi T }\Bigg{)}. \tag{4.21}\]
Collecting the temperature dependence, \(3\cdot\frac{3}{2}\log T-3\log T=\frac{3}{2}\log T\), so we also have a \(\log T\) contribution from the \(l=1\) zero modes, given by:
\[\log Z_{l=1\,\text{vector}}\sim\frac{3}{2}\log T. \tag{4.22}\]
### 4.2 Total \(\log T\) contribution from extremal zero modes
From our analysis, we find that the tensor modes give rise to the \(\log T\) contribution that matches the Schwarzian result. The \(l=0\) vector modes have zero contribution at first order in temperature, whereas the \(l=1\) vector modes give a non-trivial contribution. The full contribution is given by,
\[\log Z=\log\Bigg{(}\frac{\pi^{3/2}}{\sqrt{2\pi}}\frac{T^{3/2}}{(2r_{0})^{3/2}} \Bigg{)}+3\log\Bigg{(}\frac{\pi^{3/2}}{\sqrt{2\pi}}\frac{T^{3/2}}{(4r_{0})^{3/ 2}}\Bigg{)}+3\log\Bigg{(}\frac{4r_{0}}{\pi T}\Bigg{)}. \tag{4.23}\]
Hence, the dependence from (4.13) and (4.22) is given as,
\[\log Z\sim 3\log T. \tag{4.24}\]
The corrections coming from all other modes at first-order in temperature are suppressed. The large contribution coming from the charge of the black hole can be found in [30].
## 5 Revisiting the 1D effective description
In this section, we revisit the computation of the \(\log T\) corrections to the logarithm of the partition function from an effective theory description. In particular, we show that the physics of the tensor zero modes at low temperatures is described by a Schwarzian theory. For this description, it is enough to work in the s-wave sector of the fields. We first reduce the theory (2.1) along with the boundary terms (2.2) located at the asymptotic boundary of a spherically symmetric Euclidean black hole. In order to understand the quantization of the system, we follow the decomposition of the near-extremal geometry into near-horizon and far-horizon regions as in section 2.2. Because of the long near-horizon throat, the quantum fluctuations in the FHR are suppressed compared to the fluctuations in the NHR. Hence, we put the action on-shell in the FHR, and this effectively _induces_ a local boundary term at the boundary separating the NHR and the FHR, as discussed in appendix B. To understand the quantization in the NHR, we take the following strategy:
* **Finding the 2D effective action:** Since our interest is in spherically symmetric near-extremal black holes, we first reduce the 4-dimensional Einstein-Maxwell theory on an arbitrary spherically symmetric background. This gives us a reduced theory on a 2D manifold. Working in the s-wave sector, we consider the dimensional reduction ansatz as: \[ds^{2}=\frac{r_{0}}{\Phi}g_{\mu\nu}dx^{\mu}dx^{\nu}+\Phi^{2}(x)(d\psi^{2}+\sin^{2}\psi d\varphi^{2}),\quad A_{B}\equiv(A_{\mu},0).\] (5.1) Plugging this ansatz into the action, we get a 2D Einstein-Hilbert-Maxwell action non-minimally coupled to the scalar \(\Phi\). The Weyl factor of the 2D metric is so chosen that the kinetic term of the scalar vanishes. Integrating out the 2D gauge fields, we obtain the 2D effective theory, \[\mathcal{S}=-4\pi\int_{N}d^{2}x\sqrt{g}\left(\Phi^{2}R+\frac{2r_{0}}{\Phi}-\frac{2r_{0}^{3}}{\Phi^{3}}\right)-8\pi\int_{\partial N}dx\sqrt{\gamma}\Phi^{2}K.\] (5.2)
The variational problem is well-defined for this theory when we impose Dirichlet boundary conditions on the fields. It admits a classical solution given by an AdS\({}_{2}\) metric and a constant dilaton as, \[g_{\mu\nu}dx^{\mu}dx^{\nu}=r_{0}^{2}(d\eta^{2}+\sinh^{2}\eta d\theta^{2}),\quad \Phi=r_{0}.\] (5.3) This solution can be uplifted to the 4D extremal near-horizon geometry (2.14).
* **Finding the near-extremal background:** Next, we look for another classical solution of this theory, which is a deviation from the solution (5.3) by a small temperature. We demand that, once obtained, this solution uplift to the near-horizon geometry of a near-extremal black hole in the four-dimensional parent theory. To this end, we first consider a deviation from extremality (5.3), \[\bar{g}_{\mu\nu}dx^{\mu}dx^{\nu}=r_{0}^{2}(d\eta^{2}+\sinh^{2}\eta d\theta^{2})+\delta g,\,\Phi=r_{0}(1+\phi),\] (5.4) such that the variations \(\delta g\) and \(\phi\) do not die off at the boundary \(\partial N\). Expanding the action (5.2) in these deviations and solving the equations of motion corresponding to these fields \(\delta g\) and \(\phi\), we intend to find the background solution that uplifts to the near-horizon near-extremal background as given in (2.15). The expansion of the action is given as, \[\mathcal{S}=16\pi^{2}r_{0}^{2}-16\pi\int_{\partial N}\sqrt{\gamma}\phi K+\mathcal{S}^{(2)}[\delta g,\phi].\] (5.5) The second-order action \(\mathcal{S}^{(2)}\) is important to understand the structure of \(\delta g_{\mu\nu}\equiv\sigma_{\mu\nu}\) and \(\phi\) by solving the equations of motion, for which only the bulk action is enough. The bulk part of the same is given below, \[\mathcal{S}^{(2)}_{\rm bulk}=\int d^{2}x\sqrt{g}r_{0}^{2}\Big{[}\frac{1}{4r_{0}^{2}}\sigma^{2}-\frac{1}{2r_{0}^{2}}\sigma_{\mu\nu}\sigma^{\mu\nu}+\frac{1}{2}\sigma\nabla_{\mu}\nabla_{\nu}\sigma^{\mu\nu}-\frac{1}{4}\sigma\nabla^{2}\sigma+\frac{1}{4}\sigma^{\mu\nu}\nabla^{2}\sigma_{\mu\nu}\] \[\qquad\qquad\qquad-\frac{1}{2}\sigma^{\nu\rho}\nabla_{\mu}\nabla_{\nu}\sigma_{\rho}^{\mu}+2\phi(\nabla_{\mu}\nabla_{\nu}\sigma^{\mu\nu}-\nabla^{2}\sigma+\frac{1}{r_{0}^{2}}\sigma)-\frac{12}{r_{0}^{2}}\phi^{2}\Big{]}.\] (5.6) Here we note that at first order in the variation, the action is a pure boundary term depending only on the dilaton variation \(\phi\), and it is constant on the boundary. Furthermore, even though \(\delta g\) does not vanish at the boundary, all other first-order terms depending on \(\delta g\) vanish7. Footnote 7: This is a consequence of the simple structure of the 1D boundary, for which the extrinsic curvature is a pure trace, i.e. in terms of boundary coordinates, \(K_{ab}=K\gamma_{ab}\). Now we turn to find the near-extremal solution such that the deviation from extremality correctly uplifts to (2.15). To get that, the arbitrary deviations \(\delta g\) may be decomposed into pure trace and traceless parts [51; 52], where the trace is computed with respect to the AdS\({}_{2}\) metric (5.3). Comparing (2.15) and the ansatz (5.1), we notice that for the near-extremal solution, the deviation of the 2D metric (i.e.
\(\frac{r_{0}}{\Phi}\bar{g}-g\)) should be traceless. This fixes the trace of \(\bar{g}\) in terms of the dilaton field. Maintaining these, we consider the form of the deviation as, \[\delta g_{\mu\nu}\ dx^{\mu}dx^{\nu}=\phi(\eta)(d\eta^{2}+\sinh^{2}\eta d\theta^{2})+\alpha(\eta)(d\eta^{2}-\sinh^{2}\eta d\theta^{2}).\] (5.7) Here we have taken a static ansatz, i.e. the corrections are independent of \(\theta\). The equations of motion coming from the second-order action (5.6) are, \[\tanh\eta\ \phi^{\prime\prime}-\phi^{\prime}=0,\] \[\alpha^{\prime\prime}+3\coth\eta\ \alpha^{\prime}+\alpha=4\phi^{\prime\prime}+4(3r_{0}^{2}-1)\phi.\] (5.8) Choosing appropriate integration constants and taking care of the Weyl factor, it can be shown that a generic solution of these equations gets uplifted to the solution described in (2.15), with the functions \(\alpha,\phi\) given as, \[\phi=4\pi r_{0}^{3}T\cosh\eta,\quad\alpha=4\pi r_{0}^{3}T(2+\cosh\eta)\tanh^{2}\Big{(}\frac{\eta}{2}\Big{)}.\] (5.9)
* **Quantization of the linear order action:** Finally, to quantize the theory at one-loop order around the above background, we consider the first-order deviation term of the action. The boundary behavior of the dilaton \(\phi\) can be fixed from the near-extremal solution. The presence of near-extremal deviations makes the asymptotic symmetry modes of AdS\({}_{2}\) slightly nondegenerate. These modes can be realized as a nontrivial wiggly-shaped boundary on rigid AdS\({}_{2}\), and the shape of the boundary can be parametrized by an arbitrary function \(\theta(u)\), where \(u\) is the boundary coordinate. The linear-order boundary term in (5.5) corresponds to the effective action of these boundary gravitons. It is well known in the literature that this boundary theory gives rise to a Schwarzian action [44; 50] of the boundary modes8
Footnote 8: See also [53] for a review on this boundary description.

This action has the form \(\int du\ \text{Sch}\left(\tan\frac{\theta}{2},u\right)\), where the Lagrangian density is a Schwarzian derivative9. This theory is also one-loop exact [43], which allows us to compute the partition function exactly when we consider the leading-order deviation from extremality [40]. The contribution to the logarithm of the partition function turns out to be, \[\log Z\sim\frac{3}{2}\log T.\] This contribution can be traced back to the tensor zero mode contribution discussed in section 4.1.1. The density of states [43; 44] obtained from this computation has the dependence \(\sinh 2\sqrt{E}\), which smoothly vanishes as \(E\to 0\). This effective description does not incorporate the polynomially suppressed contributions in temperature to the logarithm of the partition function.
Footnote 9: The Schwarzian derivative is defined as,
\[\text{Sch}(F,u)=-\frac{1}{2}\left(\frac{F^{\prime\prime}}{F^{\prime}}\right)^ {2}+\left(\frac{F^{\prime\prime}}{F^{\prime}}\right)^{{}^{\prime}}.\]
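As an illustration of this definition, the following sympy sketch implements \(\text{Sch}(F,u)\) and checks that the saddle \(\theta(u)=u\) gives a constant Lagrangian density, \(\text{Sch}\left(\tan\frac{u}{2},u\right)=\frac{1}{2}\).

```python
import sympy as sp

u = sp.symbols('u')

def schwarzian(F, u):
    # Sch(F, u) = -(1/2) (F''/F')^2 + (F''/F')'
    r = sp.diff(F, u, 2) / sp.diff(F, u)
    return sp.simplify(-sp.Rational(1, 2) * r**2 + sp.diff(r, u))

print(schwarzian(sp.tan(u / 2), u))  # prints: 1/2
```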
Thus we find that the quantum (tensor mode) corrections to the partition function of near-extremal black holes can be computed both from a direct four-dimensional analysis as in section 4.1.1 and from an effective two-dimensional analysis as in section 5. We would like to emphasize some points while comparing these two descriptions. To get an effective description, we fluctuate the fields around the extremal background, where the fluctuations do not die off at the boundary. To get the correct near-extremal geometry, we consider the second-order action and solve the equations of motion. The analysis also shows us that the near-horizon geometry of the near-extremal black hole is not locally AdS\({}_{2}\). In fact, the geometry deviates from extremality by a traceless part, which cannot be captured by a conformal rescaling of AdS\({}_{2}\). To get an effective Schwarzian description, the deviations of both the metric and the dilaton are equally important, since they both grow similarly towards the boundary. The Schwarzian theory is one-loop exact, which is reflected in the fact that we recover the same contribution from the large diffeomorphisms in a 4D one-loop computation. These two descriptions of near-extremal black holes are actually gauge-equivalent. In one description, the (tensor zero mode) fluctuations are realized from a bulk perspective in four dimensions whereas, in the 2D effective description, the fluctuations are localized on the near-horizon boundary.
We conclude this section with some important remarks that distinguish the above construction from that of the one presented in [40; 41; 42]. It is well known that the Schwarzian theory appears as an effective description of Jackiw-Teitelboim (JT) gravity. In JT gravity, the large diffeomorphisms of AdS\({}_{2}\) acquire a Schwarzian action. Similarly, as we found above, the dynamics of near-extremal black holes can also be obtained from a Schwarzian description that arises from the effective theory of large diffeomorphisms on AdS\({}_{2}\). But there are interesting differences between the 4D Einstein-Maxwell theory around (near)extremality and JT gravity. In JT gravity, the background geometry is locally AdS\({}_{2}\), which is obtained by integrating out the dilaton field. On this geometry, the non-trivially varying dilaton captures the slight breaking of conformal invariance, giving rise to the Schwarzian theory. But in the case of a near-extremal black hole, the geometry is not locally AdS\({}_{2}\). The fluctuations of the geometry from AdS\({}_{2}\) appear in the same order as that of the fluctuations of the dilaton. These fluctuations of the geometry cannot be gauged away as is evident from the non-constancy of the Ricci scalar, even after taking care of the Weyl factor. Therefore, although the 1D Schwarzian description appears in both the gravity theories, the equivalence of Einstein-Maxwell theory around a near-extremal black hole and JT gravity is questionable. Nevertheless, the effective description of the large diffeomorphisms via a Schwarzian theory is manifest in both scenarios.
## 6 Discussions
In this paper, we have studied the one-loop correction to the Euclidean partition function on a spherically symmetric electrically charged near-extremal background with charge \(r_{0}\) and arbitrarily small temperature \(T\) in 4D Einstein-Maxwell theory. The quantum corrections are particularly important in the small temperature regime \(r_{0}T\ll 1\), where the semiclassical description is insufficient. In addition to the logarithm of the horizon area correction, the one-loop result contains a large contribution of the form \(\log T\), which has been obtained from a Schwarzian effective action in [40; 45]. We extract these \(\log T\) corrections for a near-extremal black hole via a direct computation of the Euclidean path integral in 4D, without referring to the effective lower-dimensional description. Following the standard procedure, we expand all the fields around their background solution and expand the action to quadratic order. Then the one-loop contribution can be obtained from the one-loop determinant of the kinetic operator, i.e. from its eigenvalues.
In the presence of a small temperature deviation, the infinite AdS\({}_{2}\) throat in the near-horizon geometry of an extremal black hole gets cut off at a finite yet very large distance. Hence, the quantum corrections in the near-horizon geometry are much larger than those coming from the asymptotic region of the near-extremal black hole, where the geometry can be approximated by the full extremal one. We compute the one-loop determinant in this near-horizon region. We treat the near-horizon geometry of the near-extremal black hole as a linear-order deviation from the extremal AdS\({}_{2}\times\)S\({}^{2}\) geometry, where the deviations are parametrized by the temperature. Because of this structure of the background, the near-extremal kinetic operator can be expressed as a small temperature correction to the extremal kinetic operator. Thereafter, to evaluate the eigenvalues, we invoke first-order perturbation theory. From this analysis, we understand that the \(\log T\) contribution originates from the temperature-dependent mass acquired by the zero modes of the extremal operator in the near-extremal background. Contributions from other modes are polynomially suppressed in temperature and very small compared to the \(\log r_{0}\) and \(\log T\) contributions. We finally compute the total \(\log T\) corrections coming from the tensor and \(l=1\) vector zero modes. In particular, the tensor mode contribution agrees with the Schwarzian result.
Another important point to note is that the average thermodynamic energy and entropy can be computed as,
\[\langle E\rangle=-\frac{\partial\log Z}{\partial\beta}\sim E_{\rm cl}+3T, \tag{6.1}\] \[\langle S\rangle=(1-\beta\partial_{\beta})\log Z\sim S_{\rm cl}+3\log T. \tag{6.2}\]
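The \(3T\) and \(3\log T\) pieces follow directly from the quantum contribution (4.24): writing \(\log Z\supset 3\log T=-3\log\beta\),

\[-\partial_{\beta}\log Z\supset\frac{3}{\beta}=3T,\qquad(1-\beta\partial_{\beta})\log Z\supset 3\log T+3.\]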
Here, \(\beta\) is the inverse temperature parameter. We see that at very small temperature, the entropy approaches negative infinity and is unphysical10. However, a non-extremal black hole at any low temperature is certainly a physical object. To understand the issue better, we find the density of states of the system11. Since we are considering a spherically symmetric near-extremal black hole, we compute the density of states and entropy in a mixed ensemble (with fixed charge and energy), following [32],
Footnote 10: Similar issues have been raised in [54].
Footnote 11: We thank Ashoke Sen for explaining this point to us.
\[\rho(E)=\int d\beta{\rm e}^{\beta E}Z(\beta),\quad S(E)=\log\rho(E). \tag{6.3}\]
Considering the logarithmic correction (4.24) along with the semiclassical contribution above extremality, we have \(Z(\beta)\sim{\rm e}^{\frac{1}{\beta}}\beta^{-3}\). Therefore the density of states is given as,
\[\rho(E)\sim EJ_{2}(2\sqrt{E})\xrightarrow{E\to 0}\frac{1}{2}E^{2}, \tag{6.4}\]
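The small-\(E\) limit quoted in (6.4) uses \(J_{2}(x)\to x^{2}/8\) for small \(x\), so \(E\,J_{2}(2\sqrt{E})\to E^{2}/2\); a quick numerical check (assuming mpmath is available):

```python
from mpmath import mp, mpf, besselj

mp.dps = 30
for E in (mpf('1e-2'), mpf('1e-4'), mpf('1e-6')):
    rho = E * besselj(2, 2 * mp.sqrt(E))
    print(E, rho / (E**2 / 2))  # ratio -> 1 as E -> 0
```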
In (6.4), \(J_{\alpha}(x)\) is the Bessel function of the first kind. Therefore, as the energy \(E\) above extremality goes to zero, the density of states vanishes. At such low densities, the entropy is ill-defined and hence not an appropriate physical quantity to consider; the system itself is perfectly well defined. We should note that this result for the density of states will receive contributions from the \(\mathcal{O}(T)\) corrections to the logarithm of the partition function. To understand the energy dependence of the low-temperature density of states, it is important to treat the temperature dependence appropriately. An advantage of our strategy of section 3.4 is that it paves the way to compute these \(\mathcal{O}(T)\) corrections to near-extremal thermodynamics. By contrast, it is very difficult to understand these corrections from a lower-dimensional effective theory perspective, where we restrict only to the massless sector. The \(\mathcal{O}(T)\) computation would require keeping track of all the massive Kaluza-Klein modes. We will address the \(\mathcal{O}(T)\) corrections in future work.
Let us conclude the paper with some directions that can be explored further. Recently, localization in supersymmetric theories has been discussed in [55; 56] for understanding the leading quantum corrections to the thermodynamics. It would be interesting to study the leading-order quantum corrections in temperature to the near-extremal partition function in such supersymmetric theories and to try to understand how much of these can be captured by (super)Schwarzian theories [55; 57]. We would also like to address the question in a microscopic description of the black holes and try to see if similar corrections can be extracted from the microscopic side. In our earlier work [53], we studied the validity of the two-dimensional effective description of near-extremal black holes in a gravity theory perturbatively corrected by higher-derivative interactions. In light of the present work, we understand that the effective description via a JT-like theory is questionable. Instead, we should be able to find the correct Schwarzian description, as in section 5. We leave this check for future study.
###### Acknowledgments.
We thank Ashoke Sen for numerous important discussions and suggestions on the work. We are thankful to Shamik Banerjee for discussions and collaborations at the initial stage of this work. We are also thankful to Suvankar Dutta, G. J. Turiaci and V. Suneeta for helpful discussions and comments. NB would like to thank ICTS for its warm hospitality at an important stage of this work. MS would like to thank Arindam Bhattacharjee, Debangshu Mukherjee and Gurmeet for useful discussions and comments. Finally, we would like to thank the people of India for their generous support towards research in basic sciences.
## Appendix A Basis for different fields and conventions
For the sake of completeness, we review the choice of basis on AdS\({}_{2}\times\)S\({}^{2}\) for various fields. These are discussed in great detail in [28; 29; 30]. We will expand the fields in terms of the eigenfunctions of the Laplacian on AdS\({}_{2}\) and S\({}^{2}\). We will denote the four-dimensional coordinates as \(x^{A}\), the coordinates on AdS\({}_{2}\) and S\({}^{2}\) as \(x^{\mu}\) and \(x^{i}\) respectively. Since both \(AdS_{2}\) and \(S^{2}\) are two-dimensional maximally symmetric spaces with characteristic radii \(r_{0}\), we can write,
\[R_{\mu\nu\rho\sigma}=\frac{R}{2}(g_{\mu\rho}g_{\nu\sigma}-g_{\mu \sigma}g_{\nu\rho}),\quad R_{\mu\nu}=\frac{R}{2}g_{\mu\nu},\quad\text{with} \quad R=-\frac{2}{r_{0}^{2}}\] (A.1) \[R_{ijkl}=\frac{R}{2}(g_{ik}g_{jl}-g_{il}g_{jk}),\quad R_{ij}= \frac{R}{2}g_{ij},\quad\text{with}\quad R=\frac{2}{r_{0}^{2}}\] (A.2)
The gauge field strengths, being antisymmetric tensors in 2D, must be proportional to the Levi-Civita tensors. For our electrically charged extremal solution, we have
\[\varepsilon_{\eta\theta}=r_{0}^{2}\sinh\eta,\quad\varepsilon_{ \psi\varphi}=r_{0}^{2}\sin\psi\] (A.3) \[F_{\mu\nu}=i\frac{Q}{r_{0}^{2}}\varepsilon_{\mu\nu},\quad F_{ij}=0\] (A.4)
**Orthonormal basis in \(AdS_{2}\)**
* Eigenfunctions of the Laplacian operator: \[\nabla^{2}W_{p}=-\hat{\kappa}_{p}W_{p}\] (A.5) \[\int_{AdS_{2}}W_{p}W_{q}=\delta_{pq}\] (A.6)
* Explicit expression for the eigenfunctions with the label "\(p\)" representing \((\lambda,n)\) with \(0<\lambda<\infty\) and \(n\in\mathbb{Z}\), \[W_{p}\equiv f_{\lambda,n}(\eta,\theta)= \frac{1}{\sqrt{2\pi r_{0}^{2}}}\frac{1}{2^{|n|}|n|!}\Bigg{|}\frac{ \Gamma(i\lambda+\frac{1}{2}+|n|)}{\Gamma(i\lambda)}\Bigg{|}\mathrm{e}^{in\theta }\sinh^{|n|}\eta\] \[F\left(i\lambda+\frac{1}{2}+|n|,-i\lambda+\frac{1}{2}+|n|;|n|+1 ;-\sinh^{2}\frac{\eta}{2}\right)\] (A.7) \(F\) is the hypergeometric function. This has eigenvalue, \[\hat{\kappa}_{p}\equiv\frac{1}{r_{0}^{2}}\left(\lambda^{2}+\frac{1}{4}\right)\] (A.8)
* Normalized basis for vectors \(\{\hat{\xi}^{I}_{p,\mu}\,,I=1,2\}\), which can be constructed out of the normalizable scalar eigenfunctions \(W_{p}\). The "\(I\)" label corresponds to the number of linearly independent vectors, the "\(p\)" label characterizes the mode and the "\(\mu\)" index is the vector index. Both the vectors have the same \(\nabla^{2}\) eigenvalue. \[\hat{\xi}^{1}_{p,\mu}=\frac{1}{\sqrt{\hat{\kappa}_{p}}}\nabla_{ \mu}W_{p},\quad\hat{\xi}^{2}_{p,\mu}=\frac{1}{\sqrt{\hat{\kappa}_{p}}} \varepsilon_{\mu\nu}\nabla^{\nu}W_{p}\] (A.9) \[\nabla^{2}\hat{\xi}^{I}_{p,\mu}=-\left(\hat{\kappa}_{p}+\frac{1}{ r_{0}^{2}}\right)\hat{\xi}^{I}_{p,\mu}\] (A.10)
In addition to this, there are other normalizable vectors \(v^{I}_{n,\mu}\), \(I=1,2\) which are constructed out of derivatives acting on non-normalizable scalars on AdS\({}_{2}\), labeled by some discrete parameter '\(n\)'. These modes, corresponding to large gauge transformations have the following form, \[d\Phi_{n},\quad\Phi_{n}\equiv\frac{1}{\sqrt{2\pi|n|}}\left(\frac{ \sinh\eta}{1+\cosh\eta}\right)^{|n|}\mathrm{e}^{in\theta},\quad n=\pm 1,\pm 2\cdots\] (A.11) We construct a real basis for vectors by considering the real and imaginary parts of the vector in (A.11), which can be expressed as, \[v^{1}_{n,\mu}\equiv v_{n,\mu},\quad v^{2}_{n,\mu}\equiv\varepsilon _{\mu\nu}v^{\nu}_{n}\] (A.12) \[\nabla^{2}v^{I}_{n,\mu}=-r_{0}^{-2}v^{I}_{n,\mu}\] (A.13) \[\int g^{\mu\nu}\hat{\xi}^{I}_{p,\mu}\hat{\xi}^{J}_{q,\nu}=\delta^ {IJ}\delta_{pq},\quad\int g^{\mu\nu}v^{I}_{p,\mu}v^{J}_{q,\nu}=\delta^{IJ} \delta_{pq},\quad\int g^{\mu\nu}\hat{\xi}^{I}_{p,\mu}v^{J}_{q,\nu}=0\] (A.14) Therefore any vector on AdS\({}_{2}\) must be expanded in the basis \(\{\hat{\xi}^{I}_{p,\mu},v^{I}_{p,\mu}\}\) for \(I=1,2\), where the label '\(p\)' represents all the appropriate labels collectively in different categories.
* Normalized basis for symmetric rank two tensors \(\{\hat{\chi}^{P}_{p,\mu\nu}\,,P=1,2,3\}\), which can be again constructed out of the scalar eigenfunctions \(W_{p}\). The "\(P\)" label corresponds to the number of linearly independent elements, the "\(p\)" label characterizes the mode and the "\(\mu,\nu\)" label are the tensor indices. \[\hat{\chi}^{I}_{p,\mu\nu}=\frac{1}{\sqrt{\kappa_{p}+2r_{0}^{-2}}} (\nabla_{\mu}\hat{\xi}^{I}_{p,\nu}+\nabla_{\nu}\hat{\xi}^{I}_{p,\mu}-g_{\mu\nu }\nabla\cdot\hat{\xi}^{I}_{p}),\quad\hat{\chi}^{3}_{p,\mu\nu}=\frac{1}{\sqrt{ 2}}g_{\mu\nu}W_{p}\] (A.15) \[\nabla^{2}\hat{\chi}^{I}_{p,\mu\nu}=-(\hat{\kappa}_{p}+4r_{0}^{-2 })\hat{\chi}^{I}_{p,\mu\nu},\quad\nabla^{2}\hat{\chi}^{3}_{p,\mu\nu}=-\hat{ \kappa}_{p}\;\hat{\chi}^{3}_{p,\mu\nu}\] (A.16) There are additional normalized tensor modes \(w_{n,\mu\nu}\) corresponding to non-normalizable diffeomorphisms (or large diffeomorphisms), where \(\{n,\,n=\pm 2,\pm 3\cdots\}\) is a discrete label. These are given as, \[\frac{r_{0}}{\sqrt{\pi}}\left(\frac{|n|(n^{2}-1)}{2}\right)^{1/2} \frac{(\sinh\eta)^{|n|-2}}{(1+\cosh\eta)^{|n|}}\mathrm{e}^{in\theta}\left(d \eta^{2}+2i\sinh\eta d\eta d\theta-\sinh^{2}\eta d\theta^{2}\right)\] (A.17) These modes (constructed from the real and imaginary parts of (A.17)) need to be added as linearly independent elements in the basis, which we denote as \(\{\Omega^{P}_{p,\mu\nu},P=1,2,3\}\) which are given as, \[\Omega^{I}_{p,\mu\nu}=\frac{r_{0}}{\sqrt{2}}(\nabla_{\mu}v^{I}_{ p,\nu}+\nabla_{\nu}v^{I}_{p,\mu}-g_{\mu\nu}\nabla\cdot v^{I}_{p}),\quad\Omega^{3}_ {p,\mu\nu}\equiv w_{p,\mu\nu}\] (A.18) \[\nabla^{2}\Omega^{I}_{p,\mu\nu}=-\frac{4}{r_{0}^{2}}\Omega^{I}_{ p,\mu\nu},\quad\nabla^{2}w_{p,\mu\nu}=-\frac{2}{r_{0}^{2}}w_{p,\mu\nu}\] (A.19) \[\int g^{\mu\rho}g^{\nu\sigma}\hat{\chi}^{P}_{p,\mu\nu}\hat{\chi}^ {Q}_{q,\rho\sigma}=\delta^{PQ}\delta_{pq},\quad\int g^{\mu\rho}g^{\nu\sigma} \Omega^{P}_{p,\mu\nu}\Omega^{Q}_{q,\rho\sigma}=\delta^{PQ}\delta_{pq},\] \[\int g^{\mu\rho}g^{\nu\sigma}\hat{\chi}^{P}_{p,\mu\nu}\Omega^{Q} _{q,\rho\sigma}=0\] (A.20)
Therefore any symmetric rank two tensor on AdS\({}_{2}\) can be expanded in the basis \(\{\hat{\chi}^{P}_{p,\mu\nu},\Omega^{P}_{p,\mu\nu}\}\) for \(P=1,2,3\), where the label '\(p\)' represents all the appropriate labels collectively in different categories.
**Orthonormal basis in \(\mathbf{S}^{2}\)**
* Eigenfunctions of the Laplacian operator: \[\nabla^{2}U_{p}=-\kappa_{p}U_{p}\] (A.21) \[\int_{S^{2}}U_{p}U_{q}=\delta_{pq}\] (A.22)
* The explicit expression of the eigenfunctions and eigenvalues with the label "\(p\)" representing \((l,m)\) for \(l\in\mathbb{Z}_{\geq 0}\) and \(-l\leq m\leq l\), \[U_{p}\equiv\frac{1}{r_{0}}Y_{lm}(\psi,\varphi)=\left(\frac{2l+1}{4\pi r_{0}^{2}}\frac{(l+|m|)!}{(l-|m|)!}\right)^{1/2}P_{l}^{-|m|}(\cos\psi)\mathrm{e}^{im\varphi},\quad\kappa_{p}=\frac{l(l+1)}{r_{0}^{2}}\] (A.23) Here \(Y_{lm}\) are the spherical harmonics.
* Normalized basis for vectors \(\{\xi^{I}_{p,i}\,,I=1,2\}\), which can be constructed out of the scalar eigenfunctions \(U_{p}\). The "\(I\)" label corresponds to the number of linearly independent vectors, the "\(p\)" label characterizes the mode and the "\(i\)" label is the vector index. Both the vectors have the same \(\nabla^{2}\) eigenvalues. \[\xi^{1}_{p,i}=\frac{1}{\sqrt{\kappa_{p}}}\nabla_{i}U_{p},\quad\xi^{2}_{p,i}=\frac{1}{\sqrt{\kappa_{p}}}\varepsilon_{ij}\nabla^{j}U_{p}\] (A.24) \[\nabla^{2}\xi^{I}_{p,i}=-\left(\kappa_{p}-\frac{1}{r_{0}^{2}}\right)\xi^{I}_{p,i}\] (A.25) \[\int_{S^{2}}g^{ij}\xi^{I}_{p,i}\xi^{J}_{q,j}=\delta^{IJ}\delta_{pq}\] (A.26)
* Normalized basis for symmetric rank two tensors \(\{\chi^{P}_{p,ij}\,,P=1,2,3\}\), which can be again constructed out of the scalar eigenfunctions \(U_{p}\). The "\(P\)" label corresponds to the number of linearly independent elements, the "\(p\)" label characterizes the mode, and the "\(i,j\)" labels are the tensor indices. \[\chi^{I}_{p,ij}=\frac{1}{\sqrt{\kappa_{p}-2r_{0}^{-2}}}(\nabla_{i}\xi^{I}_{p,j}+\nabla_{j}\xi^{I}_{p,i}-g_{ij}\nabla\cdot\xi^{I}_{p}),\quad\chi^{3}_{p,ij}=\frac{1}{\sqrt{2}}g_{ij}U_{p}\] (A.27) \[\nabla^{2}\chi^{I}_{p,ij}=-(\kappa_{p}-4r_{0}^{-2})\chi^{I}_{p,ij},\quad\nabla^{2}\chi^{3}_{p,ij}=-\kappa_{p}\ \chi^{3}_{p,ij}\] (A.28) \[\int_{S^{2}}g^{ik}g^{jl}\chi^{P}_{p,ij}\chi^{Q}_{q,kl}=\delta^{PQ}\delta_{pq}\] (A.29)
## Appendix B Semiclassical thermodynamics of Reissner-Nordstrom solution
In this section, we will review the computation of thermodynamic quantities of a Reissner-Nordstrom black hole. Unlike the analysis of section 2.3, here we will take the boundary to infinity and perform background subtraction to regulate the action so that we have the correct expression for energy as well. Here, the form of the full geometry is required. The result for Bekenstein-Hawking entropy remains the same.
The regulated action is given as,
\[\mathcal{S}=-\int d^{4}x\sqrt{g}(R-F^{2})-2\int_{r_{\infty}}d^{3}x\sqrt{\gamma}(K+2n_{A}A_{B}F^{AB})+\frac{4}{r_{\infty}}\int_{r_{\infty}}d^{3}x\sqrt{\gamma} \tag{B.1}\]
Here we have added a counterterm at the boundary which essentially regulates the energy by subtracting the contribution coming from flat space. The periodicity of the flat space is so chosen that asymptotically it approaches the black hole geometry [4]. In the computation of thermodynamic quantities, we will consider an ensemble where the charge and temperature are fixed.
### Non-extremal black hole
To compute the thermodynamic quantities, we first compute the on-shell action for the non-extremal RN geometry. For the solution (2.6), we have:
\[n_{A} =\frac{1}{\sqrt{f(r_{\infty})}}(0,1,0,0),\quad\gamma_{ab}=\text{ diag}(f(r_{\infty}),r_{\infty}^{2},r_{\infty}^{2}\sin^{2}\psi)\] \[K =\frac{2}{r_{\infty}}-\frac{Q^{2}+r_{+}^{2}}{2r_{+}r_{\infty}^{2 }}+\mathcal{O}\left(\frac{1}{r_{\infty}^{3}}\right),\quad A_{B}=iQ\left( \frac{1}{r_{+}}-\frac{1}{r}\right)(1,0,0,0)\]
We find the regulated on-shell action as given by,
\[I=\frac{4\pi\beta}{r_{+}}(3Q^{2}+r_{+}^{2}) \tag{B.2}\]
The energy for \(r_{\infty}\to\infty\) is given as,
\[E=\frac{\partial I}{\partial\beta}=\frac{8\pi(Q^{2}+r_{+}^{2})}{r_{+}}=16\pi M \tag{B.3}\]
The entropy is given by,
\[S_{\text{ent}}=\beta E-I=16\pi^{2}r_{+}^{2} \tag{B.4}\]
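One can verify (B.4) by combining (B.2) and (B.3) with the standard Reissner-Nordstrom relation \(\beta=4\pi r_{+}^{3}/(r_{+}^{2}-Q^{2})\):

\[\beta E-I=\frac{4\pi\beta}{r_{+}}\left[2(Q^{2}+r_{+}^{2})-(3Q^{2}+r_{+}^{2})\right]=\frac{4\pi\beta}{r_{+}}(r_{+}^{2}-Q^{2})=16\pi^{2}r_{+}^{2}.\]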
This is in agreement with Wald's formula [5]. It is worth noting that the expression for the entropy does not depend on the location of the boundary, i.e. for this computation the boundary can be placed at any finite location. Nor does it depend on the counterterm.
### Near-extremal black hole
In this subsection, we will compute the on-shell action for the near-extremal background and then compute the semiclassical contribution to the partition function and entropy. This result can be obtained by taking the small temperature limit of the computation for the non-extremal black hole, but we will compute it from the near-horizon geometry and carefully consider the contributions coming from the FHR. This analysis also gives the correct expression for the energy. For the computation of the entropy, however, the near-horizon data is sufficient, as in section 2.3.
The full geometry is split into the NHR and the FHR as described in section 2.2. We will consider the Einstein-Maxwell theory on these two manifolds separately. We add appropriate boundary terms and a counterterm on the boundary \(\partial M\) located at a fixed radial distance \(r=r_{\infty}\) near asymptotic infinity. For the metric and the gauge field, we impose Dirichlet and Neumann boundary conditions, respectively. Now we split the action into two parts given as, \(\mathcal{S}=\mathcal{S}_{1}+\mathcal{S}_{2}\), such that:
\[\mathcal{S}_{1}=-\int_{\eta=0}^{\eta_{0}}d^{4}x\sqrt{g}(R-F^{2}) \tag{B.5}\] \[\mathcal{S}_{2}=-\int_{r=r_{b}}^{r_{\infty}}d^{4}x\sqrt{g}(R-F^{2})-2\int_{\partial M}d^{3}x\sqrt{\mathfrak{h}}(K+2n_{A}A_{B}F^{AB})+\frac{4}{r_{\infty}}\int_{\partial M}d^{3}x\sqrt{\mathfrak{h}} \tag{B.6}\]
Here, the first part (B.5) of the action is evaluated on the NHR. We will see that the action (B.6) in the far part of the manifold generates a boundary term on the near-horizon boundary.
#### On-shell action in FHR:
In the FHR, the full near-extremal geometry is of the form \(\{g=\bar{g}+\delta g,A=\bar{A}+\delta A\}\), where \(\{\bar{g},\bar{A}\}\) denotes the full extremal geometry. Since the departure from extremality is very small, the on-shell action in the far part can be evaluated by plugging the full near-extremal solution into (B.6),
\[I_{2}[g,A]=\mathcal{S}_{2}[\bar{g},\bar{A}]+\delta\mathcal{S}_{2} \tag{B.7}\]
Since the extremal geometry also satisfies the equations of motion in the FHR with the periodicity of the time direction being \(\beta\), the bulk part of the first-order variation term \(\delta\mathcal{S}_{2}\) vanishes. From the bulk action, we have total derivative contributions that generate boundary terms on both the boundaries located at \(r=r_{b}\) and \(r=r_{\infty}\). Since \(\delta g\) dies off near infinity and \(\delta F=0\), the boundary terms generated at \(r=r_{\infty}\) cancel with the Gibbons-Hawking and Maxwell boundary terms, consistent with the variational principle. Hence, we are left with a boundary term on the near-horizon boundary \(r=r_{b}\). Therefore we have,
\[\mathcal{S}_{2}[\bar{g},\bar{A}]=-32\pi^{2}r_{0}^{2}+\frac{64\pi^{3}r_{0}^{3}}{\beta}-\frac{8\pi r_{0}}{r_{b}}(r_{0}-3r_{b})\beta\] \[\delta\mathcal{S}_{2}=-2\int_{\partial N}\sqrt{\mathfrak{h}}\left[(K+2n_{A}A_{B}F^{AB})_{\text{near-ext}}-(K+2n_{A}A_{B}F^{AB})_{\text{ext}}\right] \tag{B.8}\]
The normal on \(\partial N\) points from the horizon to infinity. The on-shell action in the far region is given as,
\[I_{2}[g,A]=I_{\rm FHR}-2\int_{\partial N}\sqrt{\mathfrak{h}}(K+2n_{A}A_{B}F^{AB})_{\text{near-ext}} \tag{B.9}\] \[I_{\rm FHR}=16\pi\beta\left(-r_{0}+\frac{r_{0}^{2}}{r_{b}}+r_{b}\right) \tag{B.10}\]
This analysis shows that the geometry in the FHR can be well-approximated by the extremal geometry, and it effectively generates a boundary term on the near-horizon boundary. We include this term in the NHR part of the action, which is well-suited for the variational problem in this region. Supplementing the action (B.5) with this boundary term, we get:
\[\mathcal{S}_{\rm NHR}=-\int_{\eta=0}^{\eta_{0}}d^{4}x\sqrt{g}(R-F^{2})-2\int_{\partial N}d^{3}x\sqrt{\mathfrak{h}}(K+2n_{A}A_{B}F^{AB}) \tag{B.11}\]
As discussed earlier, the boundary \(\partial N\) is located in the near-horizon region so that we consider it to be a small deviation from the horizon i.e. \(r_{b}=r_{0}(1+\varepsilon)\) for \(\varepsilon\ll 1\). Suppressing higher order terms in \(\varepsilon\), we have:
\[I_{\rm FHR}=16\pi\beta r_{0}(1+\varepsilon^{2}) \tag{B.12}\]
This is a divergent constant. As we will see below, the entire thermodynamics can be understood from the well-defined action (B.11) in the near-horizon region.
#### On-shell action in NHR:
Now we plug the near-horizon near-extremal solution given by (2.14) and (2.15) into the action (B.11) in the NHR,
\[I_{\rm NHR}=-16\pi^{2}r_{0}^{2}-\frac{32\pi^{3}r_{0}^{3}}{\beta}\left(1+\cosh 2\eta_{0}\right) \tag{B.13}\]
The location of the near-horizon boundary \(\partial N\) is so chosen that it is asymptotically far from the horizon, i.e. \(\eta_{0}\) is large. But it should still remain in the near-horizon region with respect to the FHR geometry. This condition also imposes an upper bound on the near-horizon radial coordinate \(\eta\). From (2.13) we have:
\[r_{b}=r_{+}+\frac{2\pi r_{0}^{2}}{\beta}(\cosh\eta_{0}-1)\approx r_{0}(1+\varepsilon)\] \[\frac{\pi r_{0}}{\beta}\mathrm{e}^{\eta_{0}}\simeq\varepsilon\ll 1 \tag{B.14}\]
Therefore, the location of \(\partial N\) is chosen such that the cutoff \(\eta_{0}\) lies in the range,
\[1\ll\mathrm{e}^{\eta_{0}}\ll\frac{\beta}{r_{0}} \tag{B.15}\]
As we will now show, the physical results do not depend on this location as long as the boundary lies in this range. Using (B.14), the on-shell action in the NHR is given as,
\[I_{\rm NHR}=-16\pi^{2}r_{0}^{2}-\frac{32\pi^{3}r_{0}^{3}}{\beta}-16\pi\beta r_{0}\varepsilon^{2} \tag{B.16}\]
We have suppressed the higher order terms in \(\frac{1}{\beta}\) and \(\varepsilon\).
#### Full on-shell action and semiclassical entropy
The full on-shell action is given as,
\[I=I_{\rm NHR}+I_{\rm FHR}=-16\pi^{2}r_{0}^{2}-\frac{32\pi^{3}r_{0}^{3}}{\beta}+16\pi\beta r_{0} \tag{B.17}\]
The semiclassical partition function is given by \(\log Z_{0}=-I\). The thermodynamic energy is given by,
\[E=\frac{\partial I}{\partial\beta}=16\pi r_{0}+\frac{32\pi^{3}r_{0}^{3}}{\beta^{2}} \tag{B.18}\]
This is equal to the mass parameter of the near-extremal solution given in (2.11). The entropy is given by,
\[S_{\rm ent}=\beta E-I=16\pi^{2}r_{0}^{2}\left(1+\frac{4\pi r_{0}}{\beta}\right) \tag{B.19}\]
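For completeness, the intermediate algebra combining (B.17) and (B.18):

\[\beta E-I=\left(\frac{32\pi^{3}r_{0}^{3}}{\beta}+16\pi\beta r_{0}\right)-\left(-16\pi^{2}r_{0}^{2}-\frac{32\pi^{3}r_{0}^{3}}{\beta}+16\pi\beta r_{0}\right)=16\pi^{2}r_{0}^{2}+\frac{64\pi^{3}r_{0}^{3}}{\beta},\]

which is precisely (B.19).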
This result is in agreement with the Bekenstein-Hawking entropy of the near-extremal black hole to order \(\frac{1}{\beta}\).
## Appendix C Solving the equations of motion in NHR
In order to understand the near-horizon geometry of the near-extremal black hole, we solve the equations of motion (2.3) perturbatively in the near-horizon region of the black hole and recover the correct geometry obtained in section 2.2 from the full solution. The near-horizon geometry is a small deviation from the extremal one of the form \(g_{AB}=\bar{g}_{AB}+\tilde{\epsilon}g_{AB}^{(c)},F_{AB}=\bar{F}_{AB}+\tilde{\epsilon}F_{AB}^{(c)}\), i.e. the unperturbed solution is of the AdS\({}_{2}\times\)S\({}^{2}\) form,
\[\bar{g}_{AB}dx^{A}dx^{B}=r_{0}^{2}(d\eta^{2}+\sinh^{2}\eta d\theta^{2})+r_{0}^{2}(d\psi^{2}+\sin^{2}\psi d\varphi^{2}),\quad\bar{F}_{\mu\nu}=\frac{i}{r_{0}}\varepsilon_{\mu\nu} \tag{C.1}\]
Here \(\varepsilon_{\mu\nu}\) is the Levi-Civita tensor on \(AdS_{2}\), with the non-zero component being \(\varepsilon_{\eta\theta}=r_{0}^{2}\sinh\eta\). The perturbation parameter \(\tilde{\epsilon}\) is to be determined by matching the geometry with the full solution. Now we consider the near-extremal correction to the extremal background (C.1) to be of the following form,
\[g_{AB}^{(c)}dx^{A}dx^{B}=\chi(x^{\mu})r_{0}^{2}(d\eta^{2}+\sinh^{2}\eta d\theta^{2})+\alpha(x^{\mu})r_{0}^{2}(d\eta^{2}-\sinh^{2}\eta d\theta^{2})+\phi(x^{\mu})r_{0}^{2}(d\psi^{2}+\sin^{2}\psi d\varphi^{2});\quad F_{\mu\nu}^{(c)}=\frac{i}{r_{0}}\Theta(x^{\mu})\varepsilon_{\mu\nu} \tag{C.2}\]
Solving the equations of motion up to order \(\tilde{\epsilon}\), we get the following solutions for the parameters appearing in the ansatz (C.2),
* **Branch-1: Fluctuating AdS\({}_{2}\) radius and gauge field strength** \[\chi(\eta)=c_{2}+\cosh\eta(c_{1}-c_{2}\tanh^{-1}\left(\text{sech }\,\eta\right));\quad\alpha(\eta)=0;\quad\phi(\eta)=0;\quad\Theta(\eta)=\chi(\eta)\] (116) The small \(\eta\) expansion of the solution is given as, \[\chi(\eta)\xrightarrow{\eta\to 0}c_{1}+c_{2}+c_{2}\ln\frac{\eta}{2}+ \frac{\eta^{2}}{12}(6c_{1}-c_{2}+6c_{2}\ln\frac{\eta}{2})\] (117) Imposing regularity at \(\eta=0\), we set \(c_{2}=0\).
* **Branch-2: Traceless fluctuation on AdS\({}_{2}\)** \[\chi(\eta)=0;\quad\alpha(\eta)=a_{2}+\coth^{2}\eta(-a_{2}+a_{1}\, \text{sech}\,\eta);\quad\phi(\eta)=0;\quad\Theta(\eta)=0;\] (100) We consider the small \(\eta\) expansion of \(\alpha(\eta)\), \[\alpha(\eta)\xrightarrow{\eta\to 0}\frac{a_{1}-a_{2}}{\eta^{2}}+\frac{1}{6}(a_{ 1}+2a_{2})+\frac{1}{120}(-7a_{1}-8a_{2})\eta^{2}\] (101) We set \(a_{1}=a_{2}\) so that the solution does not blow up at \(\eta=0\), then we have: \[\alpha(\eta)\xrightarrow{\eta\to 0}\frac{a_{1}}{2}\left(1-\frac{\eta^{2}}{4}\right)\] (102)
* **Branch-3: Traceless fluctuation on AdS\({}_{2}\), fluctuating \(\mathbf{S}^{2}\) radius and gauge field strength** \[\alpha(\eta)=\frac{1}{2}\coth\eta\,\text{csch}\,\eta\left(1+2b_{1}+\cosh 2\eta-2b_{2}\,\text{sech}\,\eta\right);\] \[\chi(\eta)=0;\quad\phi(\eta)=\cosh\eta;\quad\Theta(\eta)=-\cosh\eta\] (103) We study the behavior of these fluctuations near \(\eta\to 0\). \[\alpha(\eta)\xrightarrow{\eta\to 0}\frac{1+b_{1}-b_{2}}{\eta^{2}}+\frac{1}{6}(7+b_{1}+2b_{2})+\frac{1}{120}(53-7b_{1}-8b_{2})\eta^{2}\] (104) From the demand that it does not blow up at \(\eta=0\), we get \(b_{2}=1+b_{1}\) such that, \[\alpha(\eta)\xrightarrow{\eta\to 0}\frac{3+b_{1}}{2}+\frac{1}{8}(3-b_{1})\eta^{2}\] (105) If we further demand that \(\gamma_{\mu\nu}\to 0\) as \(\eta\to 0\), we get \(b_{1}=-3\) such that, \[\alpha(\eta)=(2+\cosh\eta)\tanh^{2}\left(\frac{\eta}{2}\right)\] (106) On the horizon, i.e. at \(\eta=0\), the time component of the metric should go to zero. Under this demand, the first two branches of solutions are identically zero.
Therefore, the near-extremal deviation (100) in the near-horizon region is given as,
\[g_{AB}^{(c)}dx^{A}dx^{B}= (2+\cosh\eta)\tanh^{2}\Big{(}\frac{\eta}{2}\Big{)}r_{0}^{2}(d\eta ^{2}-\sinh^{2}\eta d\theta^{2})\] \[+\cosh\eta\ r_{0}^{2}(d\psi^{2}+\sin^{2}\psi d\varphi^{2});\quad F _{\mu\nu}^{(c)}=-\frac{i}{r_{0}}\cosh\eta\ \varepsilon_{\mu\nu} \tag{107}\]
This is the same geometry (2.15) that we obtained from the full near-extremal solution with the identification \(\tilde{\epsilon}=\frac{2\delta}{r_{0}}=4\pi r_{0}T\). Therefore, we conclude that the near-horizon geometry discussed in section 2.2 is the unique spherically symmetric solution of the equations of motion to order \(T\) in the NHR. |
2304.07901 | Brain Tumor classification and Segmentation using Deep Learning | Brain tumors are a complex and potentially life-threatening medical condition
that requires accurate diagnosis and timely treatment. In this paper, we
present a machine learning-based system designed to assist healthcare
professionals in the classification and diagnosis of brain tumors using MRI
images. Our system provides a secure login, where doctors can upload or take a
photo of an MRI and our app can classify the image and segment the tumor,
providing the doctor with a folder of each patient's history, name, and
results. Our system can also add results or MRI to this folder, draw on the MRI
to send it to another doctor, and save important results in a saved page in the
app. Furthermore, our system can classify in less than 1 second and allow
doctors to chat with a community of brain tumor doctors.
To achieve these objectives, our system uses a state-of-the-art machine
learning algorithm that has been trained on a large dataset of MRI images. The
algorithm can accurately classify different types of brain tumors and provide
doctors with detailed information on the size, location, and severity of the
tumor. Additionally, our system has several features to ensure its security and
privacy, including secure login and data encryption.
We evaluated our system using a dataset of real-world MRI images and compared
its performance to other existing systems. Our results demonstrate that our
system is highly accurate, efficient, and easy to use. We believe that our
system has the potential to revolutionize the field of brain tumor diagnosis
and treatment and provide healthcare professionals with a powerful tool for
improving patient outcomes. | Belal Badawy, Romario Sameh Samir, Youssef Tarek, Mohammed Ahmed, Rana Ibrahim, Manar Ahmed, Mohamed Hassan | 2023-04-16T21:42:21Z | http://arxiv.org/abs/2304.07901v1 | # Brain Tumor classification and Segmentation using Deep Learning
###### Abstract
Brain tumor classification and segmentation is a project that involves using medical imaging techniques, such as magnetic resonance imaging (MRI) scans, to classify and segment different types of brain tumors. The goal of the project is to accurately identify and segment different types of brain tumors, such as gliomas and meningiomas, in order to improve diagnosis and treatment planning for patients with brain tumors. This is typically done using various machine learning and image processing techniques, such as deep learning, to analyze the images and classify and segment the tumors. The result of this project is a model that can automatically classify and segment brain tumors in medical images.
_Keywords:_ Brain tumors, Classification, Segmentation, Artificial intelligence, Machine learning, Deep learning, Convolutional neural networks, Magnetic resonance imaging (MRI), Computer-aided diagnosis, Image processing, Radiomics, Feature extraction, Tumor detection, Tumor localization, Tumor volume, Medical imaging, Radiology, Oncology
## 1 Introduction
Artificial intelligence uses techniques such as machine learning, deep learning, neural networks, and image processing to diagnose medical conditions without human intervention; it learns by being trained on previous medical diagnoses. The motivation for this work, detailed in the next subsection, lies in the need for accurate and reliable diagnosis and treatment planning for patients with brain tumors, whose tumors can be difficult to distinguish from normal brain tissue using traditional imaging techniques.
### 1.1 Motivation
The motivation for brain tumor classification and segmentation lies in the need for accurate and reliable diagnosis and treatment planning for patients with brain tumors. Brain tumors can have a significant impact on a person's quality of life and can be life-threatening if not properly diagnosed and treated. However, the diagnosis and treatment of brain tumors can be challenging, as the tumors can be difficult to distinguish from normal brain tissue using traditional imaging techniques. The use of advanced medical imaging techniques, such as MRI scans, has improved the ability to visualize brain tumors and make accurate diagnoses. However, the process of manually analyzing these images to
classify and segment tumors can be time-consuming and prone to human error. By using machine learning and image processing techniques to automate the process, the accuracy and reliability of diagnosis and treatment planning can be improved. The motivation for the project is to improve the diagnosis and treatment of brain tumors by providing doctors with a fast and accurate way to classify and segment tumors using MRI images. Additionally, the ability to classify tumors in 2 seconds or less makes the use of the application in real-time practice easier and more efficient. Another motivation is to integrate the application with existing medical imaging systems, which will allow doctors to access the patient's information and results in real-time, and make informed decisions about diagnosis and treatment. This can help to improve the care of patients with brain tumors, and also save time for physicians, allowing them to focus on other aspects of patient care. Overall, the project's main motivation is to leverage the latest technology and machine learning techniques to improve the diagnosis and treatment of brain tumors, and to provide doctors with a fast, accurate, and easy-to-use tool to assist them in their daily practice.
### 1.3 Objective
The main goal of artificial intelligence is to improve patient care by speeding up processes and achieving greater accuracy, which opens the way to the provision of better healthcare in general. Radiographic images, pathology slides, and electronic medical records of patients are evaluated through machine learning, assisting in the process of diagnosing and treating patients; this does not erase the role of the human factor from the work. In this project, we plan to use EfficientNet architectures with convolutional neural networks (CNNs) to achieve high accuracy in brain tumor classification and segmentation. EfficientNet is a family of image classification models that were developed to improve the accuracy of CNNs while also reducing their computational complexity. These models use a compound scaling method to scale up the model's architecture and feature resolution, which leads to improved accuracy. By using EfficientNet with CNNs, we aim to achieve an accuracy higher than previous works in classifying brain tumors using MRI images. We plan to fine-tune the pre-trained EfficientNet models on a large dataset of MRI images of brain tumors to improve their performance on this specific task. Additionally, we will incorporate techniques such as transfer learning and other regularization techniques to further improve the performance of the model. Once the model is trained and fine-tuned, we will test it on a separate dataset to evaluate its performance in terms of accuracy, reliability, and efficiency. If the model meets the desired accuracy and outperforms previous works, we will proceed to integrate it into the application for doctors to use in their daily practice.
### 1.4 Aim
The aim of the project is to develop a machine learning-based application for brain tumor classification and segmentation that can quickly and accurately classify and segment tumors using MRI images. The specific aims include:

1. To create a deep learning-based algorithm that can classify brain tumors into different categories, such as gliomas and meningiomas, with high accuracy.
2. To develop a segmentation module that can accurately segment the tumors from the surrounding tissue, providing a clear visualization of the tumor.
3. To integrate the application with existing medical imaging systems, allowing doctors to access the patient's information and results in real-time.
4. To design the application to be user-friendly and easy to use for doctors in their daily practice, and to classify tumors in 2 seconds or less.
5. To provide doctors with a patient's file, which will include the patient's medical history and the results of the tumor classification and segmentation, allowing them to easily access the patient's information and make informed decisions about diagnosis and treatment.
6. To test the application on a large dataset of MRI images and evaluate its performance in terms of accuracy, reliability, and efficiency.

Overall, the objective of the project is to create a machine learning-based application that can assist doctors in the diagnosis and treatment of brain tumors, by providing accurate and reliable information in real-time, in a fast and easy-to-use way.
### 1.5 Scope
Regarding the scope of the disease: brain tumors are abnormal growths of cells in the brain or skull, which can be benign (non-cancerous) or malignant (cancerous). Brain tumors can have a significant impact on a person's quality of life and can be life-threatening if not properly diagnosed and treated. Some of the common types of brain tumors are:

* gliomas
* meningiomas
* pituitary tumors

The scope of the project is to develop a machine learning-based application for brain tumor classification and segmentation using MRI images. The goal is to use deep learning algorithms to analyze the images and classify and segment the tumors.
### 1.6 General Constraints
The application developed for brain tumor classification and segmentation can be used by a wide range of healthcare professionals, including:

* Radiologists: Radiologists are medical professionals who specialize in interpreting medical images, such as MRI scans. They can use the application to quickly and accurately classify and segment brain tumors in these images, which can help to improve the accuracy and reliability of diagnosis and treatment planning.
* Neurologists: Neurologists are medical professionals who specialize in the diagnosis and treatment of diseases of the brain and nervous system. They can use the application to assist in the diagnosis and treatment of brain tumors by providing them with accurate and reliable information about the tumors in real-time.
* Neurosurgeons: Neurosurgeons are medical professionals who specialize in the surgical treatment of diseases of the brain and nervous system. They can use the application to assist in the planning of surgery for brain tumors by providing them with accurate information about the size and location of the tumors.
* Oncologists: Oncologists are medical professionals who specialize in the diagnosis and treatment of cancer. They can use the application to assist in the treatment of brain tumors by providing them with accurate information about the tumors in real-time.
* Medical students and residents: They can use the application as a tool to learn and practice the diagnosis and treatment of brain tumors.
* Other medical practitioners: They can use the application to assist in the diagnosis and treatment of brain tumors in remote areas where the lack of radiologists and physicians can be an issue.

Overall, the application can be used by a wide range of healthcare professionals to assist in the diagnosis and treatment of brain tumors, by providing them with accurate and reliable information in real-time.
## 2 Results
For this study, we used the Brain Tumor MRI dataset available on Kaggle, which contains MRI images of brain tumors with corresponding segmentation masks. The dataset consists of 2,618 MRI images, of which 1,955 were used for training, 327 for validation, and 336 for testing. We applied the EfficientNet model for classification, achieving an accuracy of 99.5% on the test set, 99% on the validation set, and 100% on the training set. For segmentation, we used the UNet model, achieving an accuracy of 96%. Overall, these results demonstrate the effectiveness of our approach in accurately classifying brain tumor MRI images and segmenting the tumors.
## 3 Model development
For model development, we used the Brain Tumor MRI dataset from Kaggle, which consists of 3,064 MRI images of the brain with and without tumors. We preprocessed the images by resizing them to 224x224 and normalizing the pixel values. We then used the EfficientNet architecture, a state-of-the-art convolutional neural network (CNN), for image classification. We trained the model using 80% of the dataset, validated it using 10%, and tested it on the remaining 10%. The model achieved an accuracy of 99.5% on the test set, 99% on the validation set, and 100% on the training set, demonstrating its effectiveness in classifying brain MRI images. For brain tumor segmentation, we used the UNet architecture, which is a type of CNN commonly used for segmentation tasks. We trained the model on a subset of 500 images from the dataset that were manually segmented by experts to create masks. The model achieved an overall Dice coefficient of 0.96 on the validation set, indicating its high accuracy in segmenting brain tumors.
The Convolutional Neural Network (CNN) is a deep learning algorithm used primarily for image classification tasks. It consists of multiple layers, including convolutional, pooling, and fully connected layers. In the convolutional layer, filters are applied to the input image to extract features. The pooling layer reduces the dimensionality of the feature maps, while the fully connected layer classifies the image. EfficientNet is a convolutional neural network architecture that was developed to achieve state-of-the-art accuracy on image classification tasks while using fewer parameters and computational resources than other models. It uses a compound scaling method to balance the depth, width, and resolution of the network. In our project, we used both the CNN and EfficientNet architectures for the brain tumor classification task. The CNN architecture consisted of multiple convolutional and pooling layers, followed by a fully connected layer. The EfficientNet architecture consisted of multiple blocks, with each block containing multiple convolutional, depth-wise convolutional, and fully connected layers. Both models were trained on the brain tumor MRI dataset to classify images into one of four categories: glioma tumor, meningioma tumor, pituitary tumor, or no tumor. The EfficientNet model achieved higher accuracy than the CNN model, with a test accuracy of 99.5% and a validation accuracy of 99%. The CNN model had a test accuracy of 96.7% and a validation accuracy of 95.6%. These results demonstrate the effectiveness of the EfficientNet architecture for image classification tasks with high accuracy and fewer computational resources.
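To make the classification pipeline concrete, the following is a minimal transfer-learning sketch in TensorFlow/Keras. It is illustrative only: the choice of the B0 variant, the hyperparameters, and the `train_ds`/`val_ds` dataset objects are assumptions, not our exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # glioma, meningioma, pituitary, no tumor
IMG_SIZE = (224, 224)    # matches the preprocessing described above

# EfficientNet-B0 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False   # freeze the backbone for the initial training phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),  # regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# train_ds / val_ds are hypothetical tf.data pipelines yielding (image, label) batches.
```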
In our project, we used a U-Net architecture for image segmentation. U-Net is a convolutional neural network architecture designed specifically for biomedical image segmentation tasks. It consists of a contracting path and an expansive path, which allow for both local and global information to be captured during the segmentation process. The contracting path uses convolutional and max pooling layers to downsample the image, while the expansive path uses transposed convolutional layers to upsample the image. The two paths are connected by skip connections, which help to preserve spatial information and improve segmentation accuracy. We trained our U-Net model on a dataset of brain MRI images with tumor labels. The model achieved an accuracy of 96% on the validation set, demonstrating its effectiveness for segmenting brain tumors in MRI images.
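The sketch below shows a two-level U-Net of this shape together with a Dice coefficient metric, again in TensorFlow/Keras. The depth, filter counts, and input size are illustrative assumptions; our actual segmentation model may use different settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1)):
    inp = layers.Input(input_shape)
    # Contracting path (downsampling).
    c1 = conv_block(inp, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = conv_block(p2, 128)
    # Expansive path (upsampling) with skip connections.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)  # skip connection from c2
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)  # skip connection from c1
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel tumor probability
    return tf.keras.Model(inp, out)

def dice_coefficient(y_true, y_pred, smooth=1.0):
    # Dice overlap between predicted and ground-truth masks.
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    y_pred = tf.cast(tf.reshape(y_pred, [-1]), tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth
    )

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[dice_coefficient])
```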
## 4 Discussion
The development of a mobile app for brain tumor detection and classification is a significant advancement in the field of medical imaging. The use of deep learning models, such as EfficientNet and UNet, has shown remarkable results in the accurate detection and classification of brain tumors. The EfficientNet model achieved an accuracy of 99.5% for testing and 99% for validation, while the UNet model achieved an accuracy of 96% for segmentation. These results demonstrate the potential of the proposed approach for the accurate and efficient detection of brain tumors. The mobile app provides a user-friendly interface for medical students and doctors to upload MRI images and receive accurate classification results, along with detailed information on tumor types, causes, symptoms, and treatments. The app also allows users to view previously saved images and classifications, providing a comprehensive platform for brain tumor diagnosis and management. Overall, the development of this mobile app has the potential to revolutionize the field of medical imaging and enhance the accuracy and efficiency of brain tumor diagnosis and management.
## 5 Conclusions
In conclusion, our mobile application provides an easy-to-use and efficient tool for the classification of brain MRI images and the segmentation of brain tumors. Our model, based on EfficientNet and UNet architectures, achieved high accuracy rates in both tasks. This app can be beneficial for medical students, doctors, and healthcare professionals as a quick reference and aid in diagnosis. However, limitations exist in terms of data availability and generalizability to other datasets. Future work could focus on expanding the dataset, improving the model's robustness, and integrating more features and resources for users. Overall, our project demonstrates the potential of mobile apps in medical imaging analysis and decision support.
Figure 1: CNN
## 6 Materials and methods
1. Preprocessing:
   * Dataset was split into train, validation, and test sets
   * Images were resized to 256x256 and normalized
   * Augmentation techniques (e.g. rotation, flip, zoom) were applied to the training set to increase variability (see the sketch below)
2. Model Training:
   * Two models were developed for classification: a Convolutional Neural Network (CNN) and an EfficientNet
   * The models were trained on the training set and evaluated on the validation and test sets
   * The best-performing model was selected for use in the app
3. Tumor Segmentation:
   * A U-Net model was developed for tumor segmentation
   * The U-Net was trained on a subset of the MRI dataset that included only tumor images with their corresponding masks
   * The trained U-Net was used to segment the tumors in the uploaded MRI images in the app
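As a sketch of how the preprocessing and augmentation steps listed above can be realized in Keras (the directory paths and exact parameter values are hypothetical):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation is applied to the training set only, mirroring the
# transformations listed above (rotation, flip, zoom). Parameter values
# and directory paths are illustrative.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # normalize pixel values
    rotation_range=15,
    horizontal_flip=True,
    zoom_range=0.1,
)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation here

train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(256, 256), class_mode="categorical"
)
val_gen = val_datagen.flow_from_directory(
    "data/val", target_size=(256, 256), class_mode="categorical"
)
```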
|
2301.13172 | Cell Systems for $\overline{\operatorname{Rep}(U_q(\mathfrak{sl}_N))}$
Module Categories | In this paper, we define the KW cell system on a graph $\Gamma$, depending on
parameters $N\in \mathbb{N}$, $q$ a root of unity, and $\omega$ an $N$-th root
of unity. This is a polynomial system of equations depending on $\Gamma$ and
the parameters. Using the graph planar algebra embedding theorem, we prove that
when $q = e^{2\pi i \frac{1}{2(N+k)}}$, solutions to the KW cell system on
$\Gamma$ classify module categories over
$\overline{\mathrm{Rep}(U_q(sl_N))^\omega}$ whose action graph for the object
$\Lambda_1$ is $\Gamma$. The KW cell system is a generalisation of the
Etingof-Ostrik and the De Commer-Yamashita classifying data for
$\overline{\mathrm{Rep}(U_q(sl_2))}$ module categories, and Ocneanu's cell
calculus for $\overline{\mathrm{Rep}(U_q(sl_3))}$ module categories.
To demonstrate the effectiveness of this cell calculus, we solve the KW cell
systems corresponding to the exceptional module categories over
$\overline{\mathrm{Rep}(U_q(sl_4))}$ when $q= e^{2\pi i \frac{1}{2(4+k)}}$, as
well as for all three infinite families of charge conjugation modules. Building
on the work of the second author, this explicitly constructs and classifies all
irreducible module categories over $\mathcal{C}(sl_4, k)$ for all $k\in
\mathbb{N}$. These results prove claims made by Ocneanu on the quantum
subgroups of $SU(4)$. We also construct exceptional module categories over
$\overline{\mathrm{Rep}(U_q(sl_4))^\omega}$ where $\omega\in \{-1, i, -i\}$.
Two of these module categories have no analogue when $\omega=1$.
The main technical contributions of this paper are a proof of the graph
planar algebra embedding theorem for oriented planar algebras, and a refinement
of Kazhdan and Wenzl's skein theory presentation of the category
$\overline{\mathrm{Rep}(U_q(sl_N))^\omega}$. We also explicitly describe the
subfactors coming from a solution to a KW cell system. | Daniel Copeland, Cain Edie-Michell | 2023-01-30T18:41:47Z | http://arxiv.org/abs/2301.13172v2 | # Cell systems for \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{N}))\) module categories
###### Abstract.
In this paper, we define the _KW cell system_ on a graph \(\Gamma\), depending on parameters \(N\in\mathbb{N}\), \(q\) a root of unity, and \(\omega\) an \(N\)-th root of unity. This is a polynomial system of equations depending on \(\Gamma\) and the parameters. Using the graph planar algebra embedding theorem, we prove that when \(q=e^{2\pi i\frac{1}{2(N+k)}}\), solutions to the KW cell system on \(\Gamma\) classify module categories over \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{N}))^{\omega}\) whose action graph for the object \(\Lambda_{1}\) is \(\Gamma\). The KW cell system is a generalisation of the Etingof-Ostrik and the De Commer-Yamashita classifying data for \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{2}))\) module categories, and Ocneanu's cell calculus for \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{3}))\) module categories.
To demonstrate the effectiveness of this cell calculus, we solve the KW cell systems corresponding to the exceptional module categories over \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{4}))\) when \(q=e^{2\pi i\frac{1}{2(4+k)}}\), as well as for all three infinite families of charge conjugation modules. Building on the work of the second author, this explicitly constructs and classifies all irreducible module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\) for all \(k\in\mathbb{N}\). These results prove claims made by Ocneanu on the _quantum subgroups_ of \(SU(4)\). We also construct exceptional module categories over \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{4}))^{\omega}\) where \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). Two of these module categories have no analogue when \(\omega=1\).
The main technical contributions of this paper are a proof of the graph planar algebra embedding theorem for oriented planar algebras, and a refinement of Kazhdan and Wenzl's skein theory presentation of the category \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{N}))^{\omega}\). We also explicitly describe the subfactors coming from a solution to a KW cell system.
## 1. Introduction
One of the largest (and most interesting) classes of tensor categories comes from the representation theory of the quantum groups \(U_{q}(\mathfrak{g})\) at roots of unity \(q\). Namely, one looks at the category of tilting modules of these objects, and takes an appropriate quotient. Equivalently, these categories can also be described as the category of level-\(k\) integrable representations of \(\hat{\mathfrak{g}}\), with non-standard tensor product given by the level-preserving fusion [10]. These categories are typically denoted by either \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{N}))\) or \(\mathcal{C}(\mathfrak{g},k)\), depending on the context. There are many appearances of these categories in various areas of mathematics [21], as well as physics [17]. Notably, these categories are the representation theory of the Wess-Zumino-Witten chiral conformal field theories \(\mathcal{V}(\mathfrak{g},k)\).
A module category over a tensor category \(\mathcal{C}\) is a natural categorification of a module over a ring or group [12]. More specifically, it is a monoidal functor
\[\mathcal{C}\to\operatorname{End}(\mathcal{M})\]
where \(\mathcal{M}\) is some abelian category. The module categories over \(\mathcal{C}\) have various applications. In particular, when \(\mathcal{C}\) is the representation theory of a chiral conformal field theory \(\mathcal{V}\), the module categories over \(\mathcal{C}\) classify full conformal field theories with a chiral half \(\mathcal{V}\)[13].
In the last several years, there has been a revitalisation in the program to classify module categories over the quantum group categories \(\mathcal{C}(\mathfrak{g},k)\) (building on older works e.g. [17, 1]). This is mainly due to works of Schopieray [14], and Gannon [1]. In particular, the latter work classifies and constructs all of the _type I_ module categories1 for the simple Lie algebras of rank \(\leq 6\).
Footnote 1: These are module categories which have the additional compatible structure of a tensor category.
Recent work of the second author and Gannon extended the results of Gannon to classify all module categories for the Lie algebras \(\mathfrak{sl}_{N}\) for \(N\leq 7\) for all \(k\), as well as for all \(N\) for sufficiently large \(k\)[1, 1]. However, this classification result is non-constructive, as it uses the bijection between Lagrangian algebras in the centre, and indecomposable modules over a category [10, 11].
The purpose of this paper is to develop an efficient method of explicitly constructing module categories over \(\mathcal{C}(\mathfrak{sl}_{N},k)\) (and more generally, the twisted categories \(\operatorname{\overline{Rep}}(U_{q}(\mathfrak{sl}_{N}))^{\omega}\)). We achieve this in the following
theorem. This gives a system of polynomial equations, the solutions of which (in the unitary setting) classify module categories over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\).
**Theorem 1.1**.: _Let \(N\in\mathbb{N}_{\geq 2}\), \(q=e^{2\pi i\frac{1}{2(N+k)}}\) for some \(k\in\mathbb{N}\), \(\omega\) an \(N\)-th root of unity, and \(\Gamma\) a finite graph with norm \([N]_{q}\). There is a bijective correspondence between_
1. _pivotal_ \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\)_-modules_ \(\mathcal{M}\) _whose module fusion graph for action by_ \(\Lambda_{1}\) _is_ \(\Gamma\)_, and_
2. _solutions for the Kazhdan-Wenzl cell system on_ \(\Gamma\)_
_where the Kazhdan-Wenzl cell system on \(\Gamma\) is defined in Definition 5.2._
_The equivalence relation on 1) is equivalence of module categories, and the equivalence relation on 2) can be found in Definition 5.5._
The Kazhdan-Wenzl cell system on \(\Gamma\) is a polynomial system of equations. These polynomial equations are fairly reasonable to solve, as demonstrated in Section 6 where we find many solutions.
**Remark 1.2**.: The reader may be interested in constructing module categories over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) in the non-unitary setting (i.e. when \(q\neq e^{2\pi i\frac{1}{2(N+k)}}\)). We offer two remedies.
The first is Lemma 2.14, which shows that when \(q\) is a root of unity, \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) is Galois conjugate to \(\overline{\operatorname{Rep}(U_{q^{\prime}}(\mathfrak{sl}_{N}))^{\omega^{ \prime}}}\) where \(q^{\prime}=e^{2\pi i\frac{1}{2(N+k)}}\) for some \(k\), and \(\omega^{\prime}\) some \(N\)-th root of unity. For this Galois conjugate the full strength of Theorem 1.1 applies, and the module categories are classified by solutions to KW cell systems on graphs. As Galois conjugate categories have the same representation theory, this allows the representation theory of the non-unitary categories to be determined with our theory.
The second approach is discussed in Remark 5.1. This remark explains how the KW cell system makes sense in the non-unitary setting, and how an additional equation can be added to a KW cell system. A solution to this larger system of equations then guarantees the existence of a module category even in the non-unitary setting. This additional polynomial equation has degree the length of the longest word in the symmetric group \(S_{N}\). In practice this additional equation can take weeks to verify on a computer.
The result of Theorem 1.1 reduces the construction of such a module category to a polynomial system of equations which we call a Kazhdan-Wenzl cell system. The Kazhdan-Wenzl cell system depends on the parameters \(N\), \(q\), \(\omega\) and \(\Gamma\), and is a degree \(3\) polynomial system. In the \(N=2\) case, our polynomial system of equations is related to Etingof and Ostrik's classifying data for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{2}))}\) module categories [1] (see also [1] for the case where \(|q|\leq 1\)). In the \(N=3\) case, our polynomial system is related to Ocneanu's cell calculus for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) module categories. See [1] for solutions in the \(SU(3)\) case. Note that in [2], Ocneanu claims a cell calculus for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{4}))}\) module categories, but no definition is given. Our definition holds for all \(N\), and hence generalises the above definitions. See also [1] for a cell calculus for \(SO(3)\) module categories.
Our definition of a KW cell system can be naturally broken into two pieces. The first is a path representation of the Hecke algebra, satisfying the Markov property, and the second is the solution to a linear system, along with a normalisation convention. Solutions to the first piece have appeared many times in the literature [21, 1, 12], including in the physics literature [20] where the connection to integrable lattice models is explained. A solution to this first piece can be thought of as the data of a "\(\overline{\operatorname{Rep}(U_{q}(\mathfrak{gl}_{N}))}\)" module. The second piece of data (to our best knowledge) is completely new, and is precisely the data needed to extend a "\(\overline{\operatorname{Rep}(U_{q}(\mathfrak{gl}_{N}))}\)" module to a \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) module.
In order to obtain our polynomial system, we follow the strategy of [1], using the theory of graph planar algebra embeddings. Let us briefly describe the philosophy of this strategy.
Recall a module category for a tensor category \(\mathcal{C}\) is equivalent to a monoidal functor
\[\mathcal{C}\to\operatorname{End}(\mathcal{M})\]
where \(\mathcal{M}\) is a semi-simple category. This is directly analogous to a module over a group \(G\), which is described by a homomorphism
\[G\to\operatorname{End}(V).\]
Given an explicit group, say \(D_{n}\), the most efficient way to build a module is to use a nice presentation, say \(\langle r,s\mid r^{n}=e,rs=sr^{-1}\rangle\). A module can then be built by given the images of \(r\) and \(s\) in \(\operatorname{End}(V)\), and verifying these images satisfy the relations in the presentation.
As introduced in [1] and [17], an analogous idea holds for modules over a tensor category. In particular for us, we use the Kazhdan-Wenzl presentation [14] for the category \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\), which has a single object generator \(\Lambda_{1}\), and two morphism generators. The image of \(\Lambda_{1}\) in \(\operatorname{End}(\mathcal{M})\) can be described as an oriented graph \(\Gamma\) (whose vertices are the simple objects of \(\mathcal{M}\), and edges determine the action of the endofunctor). The images of the two generating morphisms live in a distinguished subcategory of \(\operatorname{End}(\mathcal{M})\) known as the graph planar algebra on \(\Gamma\), which we denote2\(oGPA(\Gamma)\).
Footnote 2: To distinguish it from the closely related, but distinct, non-oriented version \(GPA(\Gamma)\)[13]. The oriented version was known to Jones, and variants have been defined in [17, 10]
As seen in [13], the distinguished subcategory \(oGPA(\Gamma)\) has an incredibly explicit description in terms of loops on the graph \(\Gamma\). This allows us to describe the images of the two generating morphisms as linear functionals on the space of certain loops in \(\Gamma\). We can then use (a refinement of) the relations of Kazhdan-Wenzl to obtain polynomial equations that these functionals must satisfy. Extracting all this data gives us our definition of a Kazhdan-Wenzl cell system.
There are several natural choices for a presentation for the category \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\). In order to obtain an efficient cell calculus, we desire several conditions on the presentation:
* The presentation is uniform for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) as \(N\) and \(q\) vary,
* The presentation contains as few generating objects and morphisms as possible,
* The relations the generating morphisms satisfy must live in \(\operatorname{Hom}\) spaces between objects of as small length as possible.
The most obvious presentation is the \(6\)-\(j\) symbol presentation, where all simple objects are generating objects, and the collection of all trivalent vertices are the generating morphisms. This presentation only satisfies the third condition, and blows out on the first two. Further, to the authors' knowledge, this presentation is only explicitly described for \(\mathfrak{sl}_{2}\). This immediately rules out this choice.

Another option is the Cautis-Kamnitzer-Morrison presentation [1]. Here the generating objects are the fundamental representations \(\Lambda_{i}\), and the generating morphisms are trivalent vertices between them. This presentation satisfies the third point, and is uniform with respect to \(q\). The practical issue occurs as the number of generating objects and morphisms grows with \(N\). This makes determining the images of these generators in the graph planar algebra unfeasible in general (see [10] for a specific example where this is achieved).
If we were to follow Ocneanu directly, then we can use a presentation of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) with generating object \(\Lambda_{1}\), and single generating morphism the intertwiner \(\Lambda_{1}^{\otimes N}\to\mathbf{1}\). For \(\mathfrak{sl}_{3}\) this is exactly Kuperberg's presentation for the \(\mathfrak{sl}_{3}\) spider [16]. This seems ideal at first, however this presentation is not uniform with respect to \(N\) at all. To the authors' best knowledge, a presentation is only known for \(N\in\{2,3,4\}\). We suspect that the cell system claimed to exist by Ocneanu in [1] was based on this \(\mathfrak{sl}_{4}\) presentation.
Finally we have the Kazhdan-Wenzl presentation for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) from [14] (see also [16]). This has a single generating object which is \(\Lambda_{1}\), and two generating morphisms; the projection onto \(\Lambda_{2}\), and the intertwiner \(\Lambda_{1}^{\otimes N}\to\mathbf{1}\). While this may seem more complicated than the Kuperberg-style presentation, the additional generating morphism allows a presentation which is uniform across all \(N\). For this reason, we use this presentation to describe the module categories.
The major downside of the Kazhdan-Wenzl presentation is that two of the relations occur in \(\operatorname{End}(\Lambda_{1}^{\otimes N})\) and \(\operatorname{Hom}(\Lambda_{1}^{\otimes N+1}\to\Lambda_{1})\). As \(N\) grows, these relations will be computationally infeasible to verify inside \(\operatorname{End}(\mathcal{M})\). To rectify this, we show that these two relations can be replaced with three much simpler relations. This is our first technical result, and can be found in Section 3.
One of the motivations behind this work was to improve on the second author's results of [1, 1]. These results abstractly classify module categories over \(\mathcal{C}(\mathfrak{sl}_{N},k)\) for small \(N\). In particular for \(N=4\) we have the following.
**Theorem 1.3**.: _[_1_]_ _Let \(k\geq 0\), and \(\mathcal{C}(\mathfrak{sl}_{4},k)\) the category of level \(k\) integrable representation of \(\widehat{\mathfrak{sl}_{4}}\). Then there are exactly_
\begin{tabular}{|c|c c c c c c c|} \hline \(k\) & \(1\) & \(2\) & \(4\) & \(6\) & \(8\) & \(k>1\) _odd_ & \(k>8\) _even_ \\ \hline _\# of Modules_ & \(2\) & \(3\) & \(7\) & \(8\) & \(9\) & \(4\) & \(6\) \\ \hline \end{tabular} _irreducible module categories over_ \(\mathcal{C}(\mathfrak{sl}_{4},k)\) _up to equivalence._
The proof of this theorem is non-constructive, as it uses the correspondence between Lagrangian algebras in \(\mathcal{Z}(\mathcal{C})\), and irreducible module categories over \(\mathcal{C}\)[10, 11]. With the results of this paper, we can explicitly construct all of these modules in the sense that we fully describe the functor \(\mathcal{C}(\mathfrak{sl}_{4},k)\to\operatorname{End}(\mathcal{M})\). Hence we upgrade the abstract classification to a concrete classification.
**Remark 1.4**.: We would like to highlight some relevant work regarding the module categories of \(\mathcal{C}(\mathfrak{sl}_{4},k)\). We first note that in [10] a complete description and classification of \(\mathcal{C}(\mathfrak{sl}_{4},k)\) module categories was claimed. No proofs were supplied.
The three type \(I\) exceptional modules can be constructed via conformal inclusions [21]. However, this construction does not immediately give the full structure of the module category. Also note that in [16] a planar algebra presentation for the exceptional type \(I\) module when \(k=6\) is given. The full structure of this module was determined in [11]. Several of these graphs are discussed in [1, Section 6] and [1, Section 8].
In [15] a family of module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\) is constructed. We expect that this family corresponds to the second infinite family of graphs below. In this same paper two families of modules over \(\mathcal{C}(\mathfrak{sl}_{4},k)^{\operatorname{ad}}\) are constructed. These families are the restrictions of the first and third families of modules below, from \(\mathcal{C}(\mathfrak{sl}_{4},k)\) down to \(\mathcal{C}(\mathfrak{sl}_{4},k)^{\operatorname{ad}}\).
In Section 6, we construct \(\operatorname{KW}\) cell systems on the following families of graphs:
for all \(k\) (constructing two families of charge conjugation modules), the family of graphs
when \(k\) is even (constructing a third family of charge conjugation modules), the graph
when \(k=4\) (constructing the sole exceptional module for \(\mathcal{C}(\mathfrak{sl}_{4},4)\)), the graphs
when \(k=6\) (constructing the two exceptional modules for \(\mathcal{C}(\mathfrak{sl}_{4},6)\)), and the graphs
when \(k=8\) (constructing the three exceptional modules for \(\mathcal{C}(\mathfrak{sl}_{4},8)\)).
As the module structure of \(\mathcal{C}(\mathfrak{sl}_{4},k)\) acting on itself is well known (see Figure 1), we neglect to solve the KW cell system in these cases. We expect that such a computation should be routine. In fact, a solution for a path representation of the Hecke algebra on the fusion graph for \(\Lambda_{1}\) is given for all \(N\) and \(k\) in [20]. Hence solving the remainder of the KW cell system on this graph just requires solving a linear system.
The action of \(\mathcal{C}(\mathfrak{sl}_{4},k)\) on the de-equivariantisations \(\mathcal{C}(\mathfrak{sl}_{4},k)_{\operatorname{Rep}(\mathbb{Z}_{m})}\) is also well understood. These exist for \(m=4\) when \(2\mid k\), and for \(m=2\) for all \(k\). The structure of the category \(\mathcal{C}(\mathfrak{sl}_{4},k)_{\operatorname{Rep}(\mathbb{Z}_{m})}\) is well known. In particular, the module fusion graph for action by \(\Lambda_{1}\) is the orbifold of the graph in Figure 1 by the canonical \(\mathbb{Z}_{m}\) action.
A quick count-up shows that the number of modules we have constructed above is exactly the number of modules classified abstractly in Theorem 1.3. Hence the modules appearing above (and explicitly constructed in Section 6) provide a classification of semi-simple module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\) for all \(k\). This confirms claims made by Ocneanu regarding the "quantum subgroups" of \(SU(4)\)[12].
Furthermore, for the \(6\) exceptional graphs above, we also find solutions to the Kazhdan-Wenzl cell systems when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). This gives exceptional module categories over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) at the appropriate \(q\) values. These modules cannot be constructed via conformal inclusions3. To the best of our knowledge, this is the first construction of these module categories. We also find KW cell system solutions on the following
graph when \(q=e^{2\pi i\frac{1}{2\omega}}\) and \(\omega=\pm\mathbf{i}\).
**Acknowledgements.** The second author would like to thank Dietmar Bisch for suggesting this problem back in 2019, Dave Penneys for several useful comments on graph planar algebras, Gwen McKinley for advice on drawing graphs in LaTeX, David Evans for useful feedback on an earlier version of this manuscript, as well as BIRS for hosting them while part of this project was completed. The second author was supported by NSF grant DMS 2245935.
Both authors would like to thank Hans Wenzl for many illuminating conversations, as well as for comments on a preliminary draft of this paper.
## 2. Preliminaries
We refer the reader to [1] for the basics of tensor categories and module categories. In this paper, a multi-tensor category is a \(\mathbb{C}\)-linear, locally finite, rigid monoidal category. A tensor category, for us, is a multi-tensor category whose unit object is simple.
### Oriented Planar Algebras
In this section we introduce oriented planar algebras following Jones [19, Notes 3.12.9], and Morrison [10]. Our definition is technically different, yet essentially identical to both of the cited definitions.
**Definition 2.1**.: An **oriented planar algebra** is a strict monoidal, strictly pivotal \(\mathbb{C}\)-linear category whose objects are parameterized by finite sequences \((\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{r})\) with \(\epsilon_{i}\in\{\pm 1\}\). The tensor product is given by concatenation of sequences:
\[(\epsilon_{1},\ldots,\epsilon_{r})\otimes(\delta_{1},\ldots,\delta_{s})=( \epsilon_{1},\ldots,\epsilon_{r},\delta_{1},\ldots,\delta_{s}).\]
The unit is given by the empty sequence, and is denoted by \((\emptyset)\) or \(\mathbb{1}\). The dual of an object \((\epsilon_{1},\ldots,\epsilon_{k})\) is obtained by reversing signs and order:
\[(\epsilon_{1},\ldots,\epsilon_{r})^{*}=(-\epsilon_{r},\ldots,-\epsilon_{1}).\]
**Remark 2.2**.: The strictly pivotal assumption means that there are fixed duality morphisms that are compatible with tensor product.
**Remark 2.3**.: The adjective "oriented" comes from the fact that it is traditional to use oriented strands in the graphical calculus to specify objects. For instance, reading morphisms bottom to top, the following diagram represents a morphism \((1,1,-1,1)\to(-1)\):
Oriented planar algebras are prevalent: they are strictifications of pivotal categories tensor generated by an object \(X\) and its dual \(X^{*}\).
**Definition 2.4**.: Given any pivotal, monoidal \(\mathbb{C}\)-linear category \(\mathcal{C}\), and an object \(X\) in \(\mathcal{C}\), we can form an oriented planar algebra generated by \(X\) and \(X^{*}\), denoted \(\mathcal{P}_{\mathcal{C};X}\), as follows (this strictification construction is due to Ng and Schauenburg [20, Theorem 2.2]). The oriented planar algebra \(\mathcal{P}_{\mathcal{C};X}\) is defined by
\[\operatorname{Hom}_{\mathcal{P}_{\mathcal{C};X}}\bigl{(}(\epsilon_{1},\dots, \epsilon_{r}),(\delta_{1},\dots,\delta_{s})\bigr{)}:=\operatorname{Hom}_{ \mathcal{C}}\left((\dots(X^{\epsilon_{1}}\otimes X^{\epsilon_{2}})\dots) \otimes X^{\epsilon_{r}},(\dots(X^{\delta_{1}}\otimes X^{\delta_{2}})\otimes \dots)\otimes X^{\delta_{s}}\right),\]
where we set \(X^{1}:=X\) and \(X^{-1}:=X^{*}\). The composition and tensor product of morphisms in \(\mathcal{P}_{\mathcal{C};X}\) are obtained from the composition and tensor product of morphisms in \(\mathcal{C}\). A choice of duality maps in \(\mathcal{C}\) for \(X\), say \(\operatorname{coev}_{X}:1\to X\otimes X^{*}\) and \(\operatorname{ev}_{X}:X^{*}\otimes X\to 1\) can be uniquely extended to duality maps for every object in \(\mathcal{P}_{X}\) in a way that makes \(\mathcal{P}_{\mathcal{C};X}\) strictly pivotal (see [20, Theorem 2.2] for details). Thus \(\mathcal{P}_{\mathcal{C};X}\) is an oriented planar algebra.
If \(\mathcal{C}\) is tensor generated by \(X\) and \(X^{*}\), then it is well-known that the Cauchy completion of \(\mathcal{P}_{\mathcal{C};X}\) is equivalent to \(\mathcal{C}\).
**Remark 2.5**.: When the ambient category \(\mathcal{C}\) is clear and unambiguous, we will use the shorthand notation \(\mathcal{P}_{X}\) instead of \(\mathcal{P}_{\mathcal{C};X}\).
Oriented planar algebras form a category, whose morphisms are strictly pivotal, strict monoidal functors [18, Section 1.7.5] which act as the identity on objects. In particular, strictly pivotal functors are required to preserve the choice of duality morphisms (not just be compatible with duality functors). The construction described above extends to a "strictification functor" from the category of pointed pivotal monoidal categories \((\mathcal{C},X)\) to the category of oriented planar algebras. When restricted to the class of categories tensor-generated by \(X\) and \(X^{*}\), the functor is an equivalence, and this establishes the following folklore result (cf. [1, Section 3] or [22, Theorem 4.1]).
**Theorem 2.6**.: _The map \((\mathcal{C},X)\mapsto\mathcal{P}_{\mathcal{C};X}\) extends to an equivalence of categories_
\[\left\{\begin{aligned} \text{ Pairs }(\mathcal{C},X)\text{ with }\mathcal{C}\text{ a pivotal multi-tensor}\\ \text{ category generated by }X\text{ and }X^{*}\end{aligned}\right\}\cong\{ \text{Oriented planar algebras}\}.\]
### The Oriented Graph Planar Algebra
The oriented graph planar algebra associated to a finite directed graph \(\Gamma\), which we denote \(oGPA(\Gamma)\), is an important example of a unitary oriented planar algebra. In Section 4 we will explain the connection of this oriented planar algebra to the classification of module categories.
For the oriented GPA, it is important to consider paths on our graph which traverse edges backwards. To formalize this, we introduce new edges corresponding to the original edges, but with their directions reversed. This results in a signed graph (a graph where the edges are labeled by \(\pm 1\)), with the original edges labelled \(+1\) and the new edges labeled \(-1\). More precisely, we make the following definitions:
**Definition 2.7**.: Let \(\Gamma=(V,E)\) be a directed graph. Given \(e\in E\), let \(\overline{e}\) denote a new edge with the source and target of \(e\) swapped. The **signed graph associated to**\(\Gamma\) is given by
\[\overline{\Gamma}=(V,E\cup\overline{E}),\]
where \(\overline{E}=\{\overline{e}\ :e\in E\}\). Edges in \(E\) are given the sign \(+1\) while edges in \(\overline{E}\) are given the sign \(-1\).
**Definition 2.8**.: Suppose \(\epsilon=(\epsilon_{1},\dots,\epsilon_{r})\) is a sequence of \(1\)'s and \(-1\)'s. An \(\epsilon\)**-path** is a path \((f_{1},\dots,f_{r})\) in \(\overline{\Gamma}\) such that
\[\operatorname{sign}(f_{i})=\epsilon_{i}\quad\text{ for all }i.\]
When \(\epsilon=(\emptyset)\) then an \(\epsilon\)-path is a path of length zero, i.e. a vertex in \(\Gamma\). We denote such paths by vertex labels, i.e. \(v\in V\).
Any path in \(\overline{\Gamma}\) is an \(\epsilon\)-path for some \(\epsilon\). If \(p\) is a path, let \(s(p)\) denote the first vertex of the path and \(t(p)\) the final vertex. If \(p=v\) is a path of length \(0\) then \(s(p)=t(p)=v\).
**Definition 2.9**.: Let \(\Gamma=(V,E)\) be a finite directed graph. The **oriented graph planar algebra** associated to \(\Gamma\), denoted \(oGPA(\Gamma)\), is an oriented planar algebra defined as follows. The objects of \(oGPA(\Gamma)\) are finite sequences \(\epsilon=(\epsilon_{1},\dots,\epsilon_{r})\) of \(+1\)'s and \(-1\)'s. Given two objects \(\epsilon\) and \(\delta\), define a vector space
\[\operatorname{Hom}_{oGPA(\Gamma)}(\epsilon,\delta):=\operatorname{span}\{(p, q):p\text{ is an }\epsilon\text{-path, }q\text{ is a }\delta\text{-path, }s(p)=s(q),\text{ and }t(p)=t(q)\}.\]
Composition is defined as follows: given \((p,q)\in\operatorname{Hom}_{oGPA(\Gamma)}(\epsilon,\delta)\) and \((p^{\prime},q^{\prime})\in\operatorname{Hom}_{oGPA(\Gamma)}(\delta,\gamma)\), the composition \((p^{\prime},q^{\prime})\circ(p,q)\in\operatorname{Hom}_{oGPA(\Gamma)}(\epsilon,\gamma)\) is defined as
\[(p^{\prime},q^{\prime})\circ(p,q)=\delta_{q,p^{\prime}}\,(p,q^{\prime}). \tag{1}\]
Extending this linearly makes \(oGPA(\Gamma)\) into a \(\mathbb{C}\)-linear category.
The tensor structure is defined as follows: given \((p,q)\in\operatorname{Hom}_{oGPA(\Gamma)}(\epsilon,\delta)\) and \((p^{\prime},q^{\prime})\in\operatorname{Hom}_{oGPA(\Gamma)}(\gamma,\beta)\), define
\[(p,q)\otimes(p^{\prime},q^{\prime})=\delta_{t(p),s(p^{\prime})}\delta_{t(q),s (q^{\prime})}(pp^{\prime},qq^{\prime}). \tag{2}\]
Here \(pp^{\prime}\) and \(qq^{\prime}\) denote the concatenation of paths. Extending linearly, this definition makes \(oGPA(\Gamma)\) into a strict monoidal category.
We define a dagger structure on \(oGPA(\Gamma)\) as the anti-linear extension of
\[(p,q)^{\dagger}=(q,p).\]
As the morphisms \((p,q)\) are a full basis of matrix units for \(\operatorname{End}_{oGPA(\Gamma)}(\delta)\), we have that \(oGPA(\Gamma)\) is semisimple. We also immediately see that these algebras are \(C^{*}\)-algebras.
To define a pivotal structure on \(oGPA(\Gamma)\), let \(\lambda=(\lambda_{1},\dots,\lambda_{k})\) be the positive Frobenius-Perron eigenvector of \(\Gamma\). It is uniquely defined up to multiplication by a positive real number. As \(oGPA(\Gamma)\) is an oriented planar algebra, we have that \(\epsilon^{*}=(\epsilon_{1},\cdots,\epsilon_{n})^{*}=(-\epsilon_{n},\cdots,- \epsilon_{1})\). We define
\[\operatorname{ev}_{(+,-)}:=\sum_{(e,\overline{e})\text{ a }(1,-1)\text{-path}}\sqrt{\frac{\lambda_{t(e)}}{\lambda_{s(e)}}}\,((e,\overline{e}),s(e)):(1,-1)\to\mathbb{1} \tag{3}\]
\[\operatorname{coev}_{(-,+)}:=\sum_{(\overline{e},e)\text{ a }(-1,1)\text{-path}}\sqrt{\frac{\lambda_{s(e)}}{\lambda_{t(e)}}}\,(t(e),(\overline{e},e)):\mathbb{1}\to(-1,1) \tag{4}\]
\[\operatorname{ev}_{(-,+)}:=\sum_{(\overline{e},e)\text{ a }(-1,1)\text{-path}}\sqrt{\frac{\lambda_{s(e)}}{\lambda_{t(e)}}}\,((\overline{e},e),t(e)):(-1,1)\to\mathbb{1} \tag{5}\]
\[\operatorname{coev}_{(+,-)}:=\sum_{(e,\overline{e})\text{ a }(1,-1)\text{-path}}\sqrt{\frac{\lambda_{t(e)}}{\lambda_{s(e)}}}\,(s(e),(e,\overline{e})):\mathbb{1}\to(1,-1) \tag{6}\]
Clearly these definitions do not change if \(\lambda\) is rescaled by a positive real number. A simple computation shows that these maps satisfy the zig-zag relations.
A direct computation shows that the identity map is a monoidal natural isomorphism \(**\to\operatorname{id}_{oGPA(\Gamma)}\). Hence we choose this as our pivotal structure. Note that \(\operatorname{ev}_{(-,+)}^{\dagger}=\operatorname{coev}_{(-,+)}\), and thus our chosen pivotal structure is a unitary pivotal structure in the sense of [10, Definition 3.11].
Finally, we verify that \(oGPA(\Gamma)\) is a unitary category. From the explicit basis of the hom spaces, the inner product coming from the \(\dagger\)-structure is easily seen to be positive definite. This then implies unitarity by [18, Lemma 3.51.].
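As a quick illustration (a minimal example of our own, not drawn from the cited literature), let \(\Gamma\) be the graph with two vertices \(u,v\) and edges \(e:u\to v\) and \(f:v\to u\), whose Frobenius-Perron eigenvector is \(\lambda=(1,1)\). The \((1)\)-paths are \(e\) and \(f\), and the \((\emptyset)\)-paths are the vertices \(u\) and \(v\), so

\[\operatorname{End}_{oGPA(\Gamma)}(\mathbb{1})=\operatorname{span}\{(u,u),(v,v)\}\cong\mathbb{C}\oplus\mathbb{C},\qquad\operatorname{End}_{oGPA(\Gamma)}((1))=\operatorname{span}\{(e,e),(f,f)\},\]

and, for instance, \((e,e)\otimes(f,f)=(ef,ef)\in\operatorname{End}_{oGPA(\Gamma)}((1,1))\) since \(t(e)=v=s(f)\). In particular the unit object is not simple, so \(oGPA(\Gamma)\) is genuinely a multi-tensor category, in accordance with Theorem 2.6.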
### The multi-tensor category \(M_{k}(\text{Vec})\)
In this subsection we introduce the multi-tensor category \(M_{k}(\text{Vec})\). As we will see in Section 4 (following ideas of [1]), there is a close connection between \(M_{k}(\text{Vec})\) and the graph planar algebra for \(\Gamma\).
The category \(M_{k}(\text{Vec})\) is a semisimple multi-tensor category which is a categorification of the ring \(M_{k}(\mathbb{N})\). Informally, we replace natural numbers by vector spaces, addition of natural numbers by direct sum, and multiplication of natural numbers by tensor product. The category is recognizable as the category of endomorphisms in the 2-category \(2\,\text{Vec}\) (specifically, the 2-category \(2\,\text{Vec}_{c}\) in [11, 12]). Equivalently, \(M_{k}(\text{Vec})\) is monoidally equivalent to \(\operatorname{End}(\mathcal{M})\) where \(\mathcal{M}\) is the unique semisimple category of rank \(k\). More formally, the category is defined as follows.
The objects are \(k\times k\) matrices whose entries are (finite-dimensional) Hilbert spaces. The morphisms are \(k\times k\) matrices of linear transformations. The composition of morphisms is given by entry-wise composition of linear transformations. This category is \(\mathbb{C}\)-linear and semisimple, with the direct sum of objects given by entry-wise direct sum of vector spaces. Every simple object is isomorphic to an object with a copy of \(\mathbb{C}\) in one entry of the matrix, and the 0 vector space in all other entries. The simple object whose non-zero entry occurs in the \((i,j)\)-th entry is denoted \(E_{ij}\).
The tensor structure on \(M_{k}(\text{Vec})\) is defined as follows. Given two objects, say
\[A=\begin{bmatrix}A_{11}&\dots&A_{1k}\\ \vdots&\ddots&\vdots\\ A_{k1}&\dots&A_{kk}\end{bmatrix},\quad B=\begin{bmatrix}B_{11}&\dots&B_{1k}\\ \vdots&\ddots&\vdots\\ B_{k1}&\dots&B_{kk}\end{bmatrix}\]
then the object \(A\otimes B\) is defined by
\[A\otimes B=[(A\otimes B)_{ij}]:=\left[\bigoplus_{l=1}^{k}A_{il} \otimes B_{lj}\right]_{ij}.\]
Similarly, given two morphisms, say \(f=[f_{ij}]_{i,j}\) and \(g=[g_{ij}]_{i,j}\) (where \(f_{ij}\) and \(g_{ij}\) denote linear transformations), define
\[f\otimes g=[(f\otimes g)_{ij}]_{i,j}:=\left[\bigoplus_{l=1}^{k}f_{i,l}\otimes g _{l,j}\right]_{i,j}.\]
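The following short sketch (ours, not the paper's) records this categorified matrix multiplication at the level of dimension matrices, where it literally becomes multiplication in \(M_{k}(\mathbb{N})\); in particular \(E_{ij}\otimes E_{kl}\cong\delta_{jk}E_{il}\).

```python
# The tensor product in M_k(Vec) at the level of dimension matrices:
# (A x B)_{ij} = direct sum over l of A_{il} x B_{lj}, so dim(A x B) is the
# matrix product dim(A) . dim(B).
import numpy as np

def E(i, j, k):
    """Dimension matrix of the simple object E_ij in M_k(Vec)."""
    m = np.zeros((k, k), dtype=int)
    m[i, j] = 1
    return m

k = 3
# E_01 x E_12 is E_02, while E_01 x E_01 is 0 (the delta_{jk} above):
assert np.array_equal(E(0, 1, k) @ E(1, 2, k), E(0, 2, k))
assert np.array_equal(E(0, 1, k) @ E(0, 1, k), np.zeros((k, k), dtype=int))
# The unit E_00 + ... + E_kk has the identity as its dimension matrix:
unit = sum(E(i, i, k) for i in range(k))
assert np.array_equal(unit @ E(0, 1, k), E(0, 1, k))
```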
The unit for the category is given by \(\mathbb{1}=E_{11}\oplus\dots\oplus E_{kk}\) (i.e. the identity matrix, with a copy of \(\mathbb{C}\) in each diagonal entry). The tensor structure is not strict, but has standard associators and unitors coming from the standard associativity and distributivity isomorphisms in Vec.
The category is rigid. If \(A=[A_{ij}]_{ij}\) is an object, then the dual object is obtained by transposing the matrix for \(A\) and applying the duality functor in Vec to every entry:
\[A^{*}:=[A_{ji}^{*}]_{ij}.\]
The category \(M_{k}(\text{Vec})\) has a dagger structure which makes it a unitary category. It is defined by
\[\begin{bmatrix}f_{11}&\dots&f_{1k}\\ \vdots&\ddots&\vdots\\ f_{k1}&\dots&f_{kk}\end{bmatrix}^{\dagger}=\begin{bmatrix}f_{11}^{\dagger}& \dots&f_{1k}^{\dagger}\\ \vdots&\ddots&\vdots\\ f_{k1}^{\dagger}&\dots&f_{kk}^{\dagger}\end{bmatrix}\]
where \(f_{ij}^{\dagger}\) denotes the usual adjoint (conjugate transpose) of the linear map \(f_{ij}\). It is easily checked this gives \(M_{k}(\text{Vec})\) the structure of a unitary category.
The category admits a pivotal structure. We fix explicit standard (left) duality morphisms. Given an object \(A=[A_{ij}]_{ij}\), define
\[\text{ev}_{A}^{std}:=\begin{bmatrix}\bigoplus_{l=1}^{k}\text{ev}_{A_{l1}}&& \\ &\ddots&\\ &&\bigoplus_{l=1}^{k}\text{ev}_{A_{lk}}\end{bmatrix}:A^{*}\otimes A\to \mathbb{1}\,,\]
\[\mathrm{coev}_{A}^{std}:=\begin{bmatrix}\bigoplus_{l=1}^{k}\mathrm{coev}_{A_{1l}}&&\\ &\ddots&\\ &&\bigoplus_{l=1}^{k}\mathrm{coev}_{A_{kl}}\end{bmatrix}:\mathbb{1}\to A\otimes A^{*},\]
where \(\mathrm{ev}_{A_{ij}}:A_{ij}^{*}\otimes A_{ij}\to\mathbb{C}\) and \(\mathrm{coev}_{A_{ij}}:\mathbb{C}\to A_{ij}\otimes A_{ij}^{*}\) denote the standard left duality morphisms in Vec. The pivotal structure \(A\to A^{**}\) is inherited from the usual natural isomorphism between a vector space and its double dual. It is straightforward to check this choice of pivotal structure is spherical, and every simple object has dimension \(\mathrm{id}_{1}\in\mathrm{End}_{M_{k}(\mathrm{Vec})}(\mathbb{1})\). The right duality maps corresponding to this pivotal structure are given by
\[\widetilde{\mathrm{ev}}_{A}^{std} :=\begin{bmatrix}\bigoplus_{l=1}^{k}\widetilde{\mathrm{ev}}_{A_{1l}}&&\\ &\ddots&\\ &&\bigoplus_{l=1}^{k}\widetilde{\mathrm{ev}}_{A_{kl}}\end{bmatrix}:A\otimes A^{*}\to\mathbb{1},\]
\[\widetilde{\mathrm{coev}}_{A}^{std} :=\begin{bmatrix}\bigoplus_{l=1}^{k}\widetilde{\mathrm{coev}}_{A_{l1}}&&\\ &\ddots&\\ &&\bigoplus_{l=1}^{k}\widetilde{\mathrm{coev}}_{A_{lk}}\end{bmatrix}:\mathbb{1}\to A^{*}\otimes A,\]
where \(\widetilde{\mathrm{ev}}_{A_{ij}}:A_{ij}\otimes A_{ij}^{*}\to\mathbb{C}\) and \(\widetilde{\mathrm{coev}}_{A_{ij}}:\mathbb{C}\to A_{ij}^{*}\otimes A_{ij}\) are the standard right duality morphisms in Vec.
The pivotal structure chosen above is not unique. Given any vector \(\lambda=(\lambda_{1},\dots,\lambda_{k})\) of non-zero complex numbers, we may define new left and right duality morphisms by
\[\mathrm{ev}_{A}^{\lambda} :=\begin{bmatrix}\bigoplus_{l=1}^{k}\sqrt{\frac{\lambda_{1}}{\lambda_{l}}}\,\mathrm{ev}_{A_{l1}}&&\\ &\ddots&\\ &&\bigoplus_{l=1}^{k}\sqrt{\frac{\lambda_{k}}{\lambda_{l}}}\,\mathrm{ev}_{A_{lk}}\end{bmatrix}:A^{*}\otimes A\to\mathbb{1}, \tag{7}\]
\[\widetilde{\mathrm{ev}}_{A}^{\lambda} :=\begin{bmatrix}\bigoplus_{l=1}^{k}\sqrt{\frac{\lambda_{l}}{\lambda_{1}}}\,\widetilde{\mathrm{ev}}_{A_{1l}}&&\\ &\ddots&\\ &&\bigoplus_{l=1}^{k}\sqrt{\frac{\lambda_{l}}{\lambda_{k}}}\,\widetilde{\mathrm{ev}}_{A_{kl}}\end{bmatrix}:A\otimes A^{*}\to\mathbb{1}. \tag{8}\]
The modified coevaluation maps are fixed by requiring the zig-zag relations hold. The choices of square roots are required to satisfy \(\sqrt{\frac{\lambda_{i}}{\lambda_{j}}}\sqrt{\frac{\lambda_{j}}{\lambda_{i}}}=1\). If \(\lambda\) consists of all positive real numbers, we pick the positive square roots. These duality maps only depend on \(\lambda\) up to multiplication by a scalar.
In general, this pivotal structure is not spherical. We denote the pivotal category with this choice of pivotal structure \((M_{k}(\mathrm{Vec}),\lambda)\).
### Kazhdan-Wenzl Skein Theory
In this subsection we describe (a slight modification of) the Kazhdan-Wenzl presentation [10] for the tensor category \(\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\).
Footnote 5: The results of [10] give a presentation only allowing upwards pointing strands. We present a slight generalisation here which allows for strands in any orientation. As such, we have to give slight extensions to the results of [10] throughout this section. These extensions are all routine.
We begin by presenting the pre-semisimplified version of this category. Let \(N\in\mathbb{N}_{\geq 2}\), \(q\in\mathbb{C}\), and \(\omega\) an \(N\)-th root of unity, and let \(\Lambda_{1}\in\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}\) be the "vector representation". Our first goal is to describe the oriented \(\dagger\)-planar algebra \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega};\Lambda_{1}}\). The results of [10] give generators for this planar algebra.
**Lemma 2.10**.: _[_10_]_ _The oriented \(\dagger\)-planar algebra \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega};\Lambda_{1}}\) is generated by the projection_
[diagram omitted]
_onto \(\Lambda_{2}\), along with an element (unique up to scalar)_
Proof.: It is shown in [13, Theorem 4.1 and Proposition 2.2] that these morphisms generate all the spaces \(\operatorname{Hom}_{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))\vdash}( \Lambda_{1}^{\otimes n}\to\Lambda_{1}^{\otimes m})\) with \(n,m\geq 0\). This result then extends to all spaces in \(\mathcal{P}_{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))\dashv;\Lambda_{1}}\) (which recall allows homs from arbitrary strings of \(\Lambda_{1}\) and \(\Lambda_{1}^{*}\)) using the rigidity maps, and the element
[diagram omitted], which is a braiding on the subcategory generated by the upwards pointing strand.
We will write [diagram omitted] for its \(\dagger\)-adjoint. To simplify our relations, we use the rescaled generator [diagram omitted] in our presentation instead of the projection. These generators satisfy the following relations:
[diagrammatic relations (R1), (R2), (R3), (Hecke), (Over Braid), (Anti-Sym 1), and (Anti-Sym 2) omitted]
Here \(Y\) is the invertible element
[diagram omitted]
From the above isomorphism of groups, we get an element \(\ell^{\prime}\in\mathbb{Z}_{\mathrm{LCM}(2(N+k),N)}^{\times}\) such that \(q^{\ell^{\prime}}=e^{2\pi i\frac{1}{2(N+k)}}\). Hence we can Galois conjugate \(\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) by \(\ell^{\prime}\) to obtain the category \(\overline{\mathrm{Rep}(U_{e^{2\pi i\frac{1}{2(N+k)}}}(\mathfrak{sl}_{N}))^{ \omega^{\prime}}}\) where \(\omega^{\prime}=\omega^{\ell^{\prime}}\).
**Remark 2.15**.: Note that the above lemma implies that the inner product coming from the \(\dagger\) structure on \(\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) is non-degenerate for all \(q\).
In the unitary setting (and more generally when the inner product is non-degenerate), we obtain the relation (Anti-Sym 2) for free, and also that all negligibles are \(0\). This gives the following result, which is essentially shown in [10].
**Lemma 2.16**.: _[_10_]_ _Let \(\mathcal{P}\) be an oriented unitary planar algebra generated by morphisms_
_satisfying relations (R1), (R2), (R3), (Hecke), (Over Braid), and (Anti-Sym 1). Then_
\[\mathcal{P}\cong\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}};\Lambda_{1}}.\]
Proof.: This is a standard technique. First note that (Anti-Sym 2) holds in \(\mathcal{P}\): as \(p_{\Lambda_{N+1}}^{\dagger}=p_{\Lambda_{N+1}}\), we have \(\langle p_{\Lambda_{N+1}},p_{\Lambda_{N+1}}\rangle=\operatorname{tr}(p_{\Lambda_{N+1}})=0\), and so \(p_{\Lambda_{N+1}}=0\) as the inner product is positive definite. Similarly any negligible element \(f\in\mathcal{P}\) has \(\langle f,f\rangle=0\), and hence is \(0\). We thus have a surjective map
\[\mathcal{P}\to\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}};\Lambda_{1}}.\]
Applying [1, Proposition 3.5], the kernel of this map is a sub-ideal of the negligible ideal of \(\mathcal{P}\) as \(\mathcal{P}_{\emptyset}\) is \(1\)-dimensional. Hence the kernel is trivial. Thus the map is an isomorphism.
## 3. A Refinement of Kazhdan-Wenzl
There are two scary relations (from a GPA point of view) in the Kazhdan-Wenzl presentation for \(\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}, \Lambda_{1}}}\) described in Subsection 2.4. These are (Anti-Sym 1) and (Over Braid). This is due to the fact that to solve for (Anti-Sym 1) in the graph planar algebra of \(\Gamma\), we have to consider loops of length \(2N\) in \(\Gamma\), as well as summing over all internal configurations of the morphism \(p_{\Lambda_{N}}\). From a computational point of view this is impractical for large \(N\). The relation (Over Braid) is not as bad, but still requires solving a degree \(N\) polynomial. Again this will scale badly with \(N\).
The goal of this section is to replace the two relations (Anti-Sym 1) and (Over Braid) with simpler relations (from a GPA point of view). In our setting this means relations with fewer external boundary edges, and fewer internal faces. The main result of this section shows that we can achieve this with the three relations below.
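To give a feel for the scaling (a back-of-the-envelope count of ours, on a hypothetical graph): the relation (Anti-Sym 1) lives on \(N\) strands, so the loops of length \(2N\) it touches can be counted as pairs of \(+^{N}\)-paths with matching endpoints, i.e. \(\operatorname{tr}(A^{N}(A^{N})^{T})\) for the adjacency matrix \(A\); this grows rapidly with \(N\).

```python
# Count the loops of length 2N relevant to (Anti-Sym 1): pairs of +^N-paths
# with the same source and target, i.e. tr(A^N (A^N)^T).
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])  # a hypothetical 3-vertex graph

for N in range(2, 7):
    paths_N = np.linalg.matrix_power(A, N)
    loops = np.trace(paths_N @ paths_N.T)
    print(f"N = {N}: {loops} loops of length {2 * N}")
```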
**Lemma 3.1**.: _We have the following relations in \(\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega};\Lambda _{1}}}\) for all \(q\):_
_(Braid Absorption)_, _(Rotational Invariance)_, _(Norm)_: [diagrammatic relations omitted]
Proof.: The relation (Norm) comes from taking the categorical trace of relation (Anti-Sym 1). Then by gluing a [diagram omitted] to the bottom of relation (Anti-Sym 1), we obtain
As sticking a [diagram omitted] on top of \(p_{\Lambda_{N}}\) gives \([2]_{q}p_{\Lambda_{N}}\)[13, Theorem 4] (their crossing is \(-q^{2}Y\) in our basis), we get (Braid Absorption).
To get (Rotational Invariance), we take the left partial trace of (Over Braid) to get
The left hand side simplifies to [diagram omitted] using relations (R1) and (Braid Absorption).
With some work, we can show that in the non-classical case, (Over Braid) follows from these new relations.
**Lemma 3.2**.: _Let \(\mathcal{P}\) be an oriented planar algebra satisfying relations (R1), (R2), (R3), (Hecke), (Anti-Sym 1), (Anti-Sym 2), (Braid Absorption), (Rotational Invariance), and (Norm). Suppose \(q^{2}\neq 1\), then \(\mathcal{P}\) satisfies (Over Braid)._
Proof.: From (Hecke) we get that \(Y\) is invertible, with inverse
We then have the equation
From [11, Corollary 2.1]7 we have that
Footnote 7: We have to be extremely careful here, as [11] assumes the oriented Reidemeister move (oR2) that we do not have without the assumption of (Over Braid). However, carefully working through the details of their proof shows that only the relations we have listed are required. Morally, the reason we don't require (oR2) is because this relation takes place in the Kazhdan-Wenzl subcategory where all strands are upwards pointing.
It follows from (Rotational Invariance) and (Braid Absorption) that [diagram omitted] absorbs a \(Y\) in any position at the cost of \(\frac{1}{q^{2}}\). Using the recursive formula for \(p_{\Lambda_{N}}\) this implies the relations
We can now compute
We also compute
\[=(-1)^{N+1}\overline{\omega}\]
Here the first equality follows from (Rotational Invariance), the second is rigid isotopy, the third is from (Anti-Sym 1), and the fourth follows from (R1) and the recursive definition of \(p_{\Lambda_{N}}\).
We now use the relation
to compute
where
\[X_{i}\in\left\{q^{-2}\,Y^{-1},\ (q^{-2}-1)\,\mathrm{id}\right\}\]
and the sum is taken over the \(2^{N}-1\) possibilities for \(X_{i}\) where not all \(X_{i}=q^{-2}Y^{-1}\). As each term in the above sum contains at least one \(X_{i}\) with identity strands, we can use (Braid Absorption), along with the earlier relations, to simplify the right hand side to obtain
We can simplify the last term as follows
\[\frac{(-1)^{N+1}\overline{\omega}}{[N]_{q}}\sum_{i=0}^{N-1}(q^{-2}-1)^{N-i}q^{-2i}q^{2i}\binom{N}{i}=\frac{(-1)^{N+1}\overline{\omega}}{[N]_{q}}(q^{-2}-1)^{N}\sum_{i=0}^{N-1}\left(\frac{1}{q^{-2}-1}\right)^{i}\binom{N}{i}\]
\[=\frac{(-1)^{N+1}\overline{\omega}}{[N]_{q}}(q^{-2}-1)^{N}\left(\left(1+\frac{1}{q^{-2}-1}\right)^{N}-\left(\frac{1}{q^{-2}-1}\right)^{N}\right)\]
\[=(-1)^{N+1}\overline{\omega}\,\frac{\left(q^{-N-1}-q^{N-1}\right)\left(q^{2}-1\right)}{q^{2N}-1}.\]
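For the reader's convenience, the final equality can be verified directly (a routine check we spell out): setting \(u=\frac{1}{q^{-2}-1}\), we have
\[(q^{-2}-1)^{N}(1+u)^{N}=q^{-2N},\qquad(q^{-2}-1)^{N}u^{N}=1,\]
so the bracketed factor contributes \(q^{-2N}-1\), and
\[\frac{q^{-2N}-1}{[N]_{q}}=\frac{(q^{-2N}-1)(q-q^{-1})}{q^{N}-q^{-N}}=\frac{(q^{-N}-q^{N})(q-q^{-1})}{q^{2N}-1}=\frac{(q^{-N-1}-q^{N-1})(q^{2}-1)}{q^{2N}-1}.\]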
With this simplification, we can rearrange the original equation to obtain
[diagrammatic equation omitted; it begins \((1-q^{-2})(\cdots)\)]
Finally we get the desired
which is exactly (Over Braid) up to rearranging.
If we assume that our planar algebra is unitary (which will come for free in our setting, where our planar algebra is realised as a \(\dagger\)-planar subalgebra of the unitary \(oGPA(\Gamma)\)), it is fairly easy to show that (Anti-Sym 1) is a consequence of these three new relations.
**Lemma 3.3**.: _Let \(\mathcal{P}\) be an oriented unitary planar algebra satisfying relations (R1), (R2), (R3), (Hecke), (Braid Absorption), (Rotational Invariance), and (Norm). Then \(\mathcal{P}\) satisfies (Anti-Sym 1)._
Proof.: To show that (Anti-Sym 1) holds, we use the standard inner product trick (see [1] for an example). Let
From Remark 2.12 we know \(p_{\Lambda_{N}}^{\dagger}=p_{\Lambda_{N}}\), and so \(f^{\dagger}=f\). Not assuming (Anti-Sym 1) (nor (Over Braid)), we compute
Note that (Braid Absorption) with (Rotational Invariance) imply that [diagram omitted] absorbs a \(Y\) in any position at the cost of \(q^{-2}\). With this we compute
[diagrammatic computation omitted; each step carries the prefactor \(\frac{1}{\sum_{j=0}^{i-1}q^{-2j}}\)]
This recursively shows that
As \(\mathcal{P}\) is unitary, we have that \(f=0\), and thus
Summarising the results of this section, we have the following:
**Theorem 3.4**.: _Let \(\mathcal{P}\) be an oriented unitary planar algebra, generated by morphisms_
_satisfying relations (R1), (R2), (R3), (Hecke), (Braid Absorption), (Rotational Invariance), and (Norm). If \(q^{2}\neq 1\) then_
\[\mathcal{P}\cong\mathcal{P}_{\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}};\Lambda_{1}}.\]
Proof.: The results of this section show that (Over Braid) and (Anti-Sym 1) hold in this setting. The result then follows from Lemma 2.16.
## 4. The oriented module embedding theorem
The goal of this section is to prove the equivalence between \(\mathcal{C}\)-module categories and embeddings of a planar algebra of \(\mathcal{C}\) in \(oGPA(\Gamma)\). We do this by following the methods of [1]. Let us briefly outline this strategy.
A \(\mathcal{C}\)-module category \(\mathcal{M}\) with \(k\) distinct simples is described by a monoidal functor (not necessarily strict)
\[F:\mathcal{C}\rightarrow\operatorname{End}(\mathcal{M}).\]
Recall from Subsection 2.3 that \(\operatorname{End}(\mathcal{M})\) is equivalent to \(M_{k}(\operatorname{Vec})\) as a multi-tensor category. Suppose \(X\) is a \(\otimes\)-generator for \(\mathcal{C}\), and \(\mathcal{P}_{X}\) the associated oriented planar algebra. Then \(X\) will map to \(F(X):=\Gamma\in\operatorname{End}(\mathcal{M})\), and \(\mathcal{P}_{X}\) will embed into \(\mathcal{P}_{\Gamma}\). The main substance of this section is then showing that \(\mathcal{P}_{\Gamma}\) is equivalent to \(oGPA(\Gamma)\). This shows that embeddings \(\mathcal{P}_{X}\to oGPA(\Gamma)\) give \(\mathcal{C}\)-module categories, and _vice versa_. With the high level argument in mind, let us now proceed with the details.
Let \(\Gamma\) be a directed graph and let \(d_{ij}\) denote the number of edges from \(i\) to \(j\). Abusing notation, we also let \(\Gamma\) denote the following object of \(M_{k}(\operatorname{Vec})\):
\[\Gamma=\begin{bmatrix}V_{11}&\dots&V_{1k}\\ \vdots&\ddots&\vdots\\ V_{k1}&\dots&V_{kk}\end{bmatrix},\]
where each \(V_{ij}\) is an arbitrary, but fixed, vector space of dimension \(d_{ij}\).
Let \(\lambda=(\lambda_{1},\dots,\lambda_{k})\) be the positive eigenvector for \(\Gamma\). Using the construction in Definition 2.4 we have the oriented planar algebra \(\mathcal{P}_{\Gamma}\) in \((M_{k}(\operatorname{Vec}),\lambda)\).
**Theorem 4.1**.: _There is a \(\dagger\)-isomorphism of \(\dagger\)-planar algebras_
\[\mathcal{P}_{\Gamma}\cong oGPA(\Gamma).\]
Proof.: We construct an isomorphism explicitly, following the same strategy as [GMP\({}^{+}\)18] which covered the unoriented case. This is essentially identical to their proof, with minor adjustments to account for non self-dual objects.
Suppose \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{r})\) and \(\delta=(\delta_{1},\ldots,\delta_{s})\) are two sequences of \(\pm 1\)'s (possibly with \(r\) or \(s\) equal to \(0\)). We describe linear bijections
\[\operatorname{Hom}_{oGPA(\Gamma)}(\epsilon,\delta)\to\operatorname{Hom}_{ \mathcal{P}_{\Gamma}}(\epsilon,\delta)\]
and check these give an oriented planar algebra isomorphism. First, we establish notation.
Let \((V_{ij})^{-1}:=V_{ji}^{*}\) and \(\Gamma^{-1}=\Gamma^{*}\). In this notation,
\[\Gamma^{-1}=\begin{bmatrix}V_{11}^{-1}&\ldots&V_{1k}^{-1}\\ \vdots&\ddots&\vdots\\ V_{k1}^{-1}&\ldots&V_{kk}^{-1}\end{bmatrix}.\]
Given \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{r})\), we have \(\Gamma^{\epsilon}=\Gamma^{\epsilon_{1}}\otimes\cdots\otimes\Gamma^{\epsilon_ {r}}\). Hence the \((i,j)\)-th entry of the object \(\Gamma^{\epsilon}\) may be written
\[(\Gamma^{\epsilon})_{ij}=\bigoplus_{l_{1},\ldots,l_{r-1}=1}^{k}V_{il_{1}}^{ \epsilon_{1}}\otimes V_{l_{1}l_{2}}^{\epsilon_{2}}\otimes\cdots\otimes V_{l_{ r-1}j}^{\epsilon_{r}}. \tag{9}\]
If \(r=0\), then \(\Gamma^{\epsilon}\) is the unit in \(M_{k}(\operatorname{Vec})\), so \((\Gamma^{\epsilon})_{ij}=(\mathbb{1})_{ij}=\delta_{ij}\,\mathbb{C}\). For each \(V_{ij}\), let
\[\{v_{i,j}^{l}\in V_{ij}\ |\ l=1,\ldots,d_{ij}\}\]
denote an arbitrary, but fixed, basis of \(V_{ij}\). We assume the edges in \(\Gamma\) from \(i\) to \(j\) are labeled by \(\{1,\ldots,d_{ij}\}\), so there is a natural bijection between these edges and the basis of \(V_{ij}\) chosen above. With these bases chosen let
\[\{\overline{v}_{i,j}^{l}\in V_{ij}^{*}\ |\ l=1,\ldots,d_{ij}\}\]
denote the corresponding dual bases of \(V_{ij}^{*}\). The vectors we have chosen index the edges in \(\Gamma\cup\overline{\Gamma}\). Explicitly, there is a bijection
\[\psi:\{\text{edges in }\Gamma\cup\overline{\Gamma}\}\to\bigsqcup_{i,j,l} \{v_{i,j}^{l},\overline{v}_{i,j}^{l}\}.\]
This map \(\psi\) extends to all paths in \(\Gamma\cup\overline{\Gamma}\): given a path \(p=(p_{1},\ldots,p_{r})\) of type \(\epsilon\) from \(i\) to \(j\), define
\[\psi(p):=\begin{cases}\psi(p_{1})\otimes\cdots\otimes\psi(p_{r})\in(\Gamma^{\epsilon})_{ij}&r>0\\ 1\in(\mathbb{1})_{ii}&r=0.\end{cases}\]
By Eq. (9), the set
\[\{\psi(p)\ |\ p\text{ is an }\epsilon\text{-path from }i\text{ to }j\}\]
is a basis of the vector space \((\Gamma^{\epsilon})_{ij}\).
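A quick computational restatement of Eq. (9) (an illustration of ours; the adjacency matrix below is hypothetical): the number of \(\epsilon\)-paths from \(i\) to \(j\), and hence \(\dim(\Gamma^{\epsilon})_{ij}\), is the \((i,j)\)-entry of the corresponding product of \(A\) (for a \(+1\)) and \(A^{T}\) (for a \(-1\), since reversed edges from \(a\) to \(b\) are edges from \(b\) to \(a\)).

```python
# dim (Gamma^eps)_{ij} = number of eps-paths from i to j
#                      = (i,j)-entry of the product of A's and A^T's.
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 2],
              [1, 0, 0]])  # d_ij = number of edges i -> j

def dim_hom_component(eps, i, j):
    M = np.eye(len(A), dtype=int)
    for sign in eps:
        M = M @ (A if sign == +1 else A.T)
    return M[i, j]

print(dim_hom_component((+1, +1), 0, 2))  # 2 paths of type (1,1) from 0 to 2
print(dim_hom_component((+1, -1), 0, 0))  # 1 path of type (1,-1) from 0 to 0
```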
We are ready to define an isomorphism
\[oGPA(\Gamma)\xrightarrow{F}\mathcal{P}_{\Gamma}.\]
Given \((p,q)\in\operatorname{Hom}_{oGPA(\Gamma)}(\epsilon,\delta)\) with \(s(p)=s(q)=i\) and \(t(p)=t(q)=j\), define the linear map \(F_{pq}:(\Gamma^{\epsilon})_{ij}\to(\Gamma^{\delta})_{ij}\) which acts on basis elements by
\[F_{p,q}(\psi(p^{\prime}))=\begin{cases}\psi(q)&\text{ if }p^{\prime}=p\\ 0&\text{ otherwise.}\end{cases}\]
Finally, define \(F((p,q))\) by
\[F((p,q)):=\big[\delta_{a,i}\,\delta_{b,j}\,F_{p,q}\big]_{a,b}\in\operatorname{Hom}_{\mathcal{P}_{\Gamma}}(\epsilon,\delta),\]
i.e. the matrix whose \((i,j)\)-entry is \(F_{p,q}\), with the zero map in every other entry.
Extending this definition linearly defines \(F\) on all of \(oGPA(\Gamma)\). It is clearly a linear bijection on hom-spaces. We must check that it provides an oriented planar algebra isomorphism, or in other words a pivotal strict monoidal functor. The proof that \(F\) is a strict monoidal functor is identical to [GMP\({}^{+}\)18, Section 3] and omitted.
We check that \(F\) is strictly pivotal. It suffices to check \(F\) preserves cups, i.e. \(F(\operatorname{ev}_{(-,+)})=\operatorname{ev}_{\Gamma}^{\lambda}\in\operatorname{Hom}_{\mathcal{P}_{\Gamma}}((-1,1),\mathbb{1})\), and similarly for the right evaluation morphisms. Comparing Eqs. (5) and (7), it suffices to prove
\[F\left(\sum_{e:\ s(e)=i,\ t(e)=j}((\overline{e},e),t(e))\right)=\operatorname{ev}_{V_{ij}}.\]
By the definition of \(F_{p,q}\), if \(e\) is an edge in \(\Gamma\), then \(F_{(\overline{e},e),t(e)}\) is the linear map \(V_{s(e),t(e)}^{*}\otimes V_{s(e),t(e)}\to\mathbb{C}\) which sends \(\psi(\overline{e})\otimes\psi(e)\) to \(\psi(t(e))=1\). Therefore
\[\sum_{e:\ s(e)=i,\ t(e)=j}F_{(\overline{e},e),t(e)}=\operatorname{ev}_{V_{ij}}. \tag{10}\]
This proves that \(F\) preserves the left evaluation morphisms. The argument for the right evaluation morphisms is similar.
Finally from the explicit description of the \(\dagger\) structure on both \(oGPA(\Gamma)\) and \(M_{k}(\operatorname{Vec})\), we see this isomorphism preserves these dagger structures.
With the above theorem in hand, we now obtain the oriented version of the graph planar algebra embedding theorem.
**Theorem 4.2**.: _Let \(\mathcal{C}\) be a (unitary) pivotal fusion category, and \(X\in\mathcal{C}\) a \(\otimes\)-generator. Then there is a bijective correspondence between_
1. _semisimple pivotal (_\(C^{*}\)_-)module categories_ \(\mathcal{M}\) _over_ \(\mathcal{C}\) _whose module fusion graph for_ \(X\) _is_ \(\Gamma\)_, and_
2. _embeddings of oriented (unitary) planar algebras_ \(\mathcal{P}_{X}\to oGPA(\Gamma)\)_._
_The equivalence relation on 1) is (unitary) equivalence of module categories, and the equivalence relation on 2) is (unitary) natural isomorphism of planar algebra morphisms._
Proof.: By Theorem 2.6 and [GMP\({}^{+}\)18, Corollary 3.53], a (\(C^{*}\)-)module category as in 1) is equivalent to a (\(\dagger\)-)planar algebra morphism
\[\mathcal{P}_{X}\to\mathcal{P}_{\Gamma}.\]
We then have from Theorem 4.1 the (\(\dagger\)) isomorphism
\[\mathcal{P}_{\Gamma}\cong oGPA(\Gamma).\]
Hence the module category of 1) is equivalent to a (\(\dagger\)) morphism
\[\mathcal{P}_{X}\to oGPA(\Gamma).\]
As \(\mathcal{C}\) is semisimple and fusion, this morphism must be injective. Hence we have the data of 2).
The equivalence relation is obtained by pushing the definition of module category equivalence through the above chain of isomorphisms and equivalences.
Before we end this section, we would like to prove one more general result regarding GPA embeddings. This result is well-known to experts8, however we could not find a proof in the literature. This lemma is useful, as it allows categorical data to be deduced from combinatorial data. In the reverse direction, this lemma allows the module fusion graphs for every object of \(\mathcal{C}\) to be determined from the GPA embedding.
Footnote 8: We thank Hans Wenzl for informing us of the following lemma.
**Lemma 4.3**.: _Let \(X\in\mathcal{C}\), and let \(p_{Z}\in\operatorname{End}_{\mathcal{C}}(X^{\otimes n})\) be a minimal projection onto \(Z\in\mathcal{C}\). Let \(\mathcal{M}\) be a \(\mathcal{C}\)-module, and \(\Gamma_{Y}\) the module fusion graph for action by \(Y\in\mathcal{C}\). Let_
\[\phi:\mathcal{P}_{X}\to oGPA(\Gamma_{X})\]
_be an embedding corresponding to the \(\mathcal{C}\)-module \(\mathcal{M}\) under the bijection of Theorem 4.2. Let \(M_{1},M_{2}\) be simple objects of \(\mathcal{M}\). Then we have_
\[\sum_{(q,q):\quad q\text{ is a }+^{n}\text{-path},s(q)=M_{1},t(q)=M_{2}}\phi(p_{Z}) [(q,q)]=(\Gamma_{Z})_{M_{1}\to M_{2}}\,.\]
Proof.: Consider the following commutative diagram of semisimple algebras: [diagram omitted]
The topmost arrow is exactly the map \(\phi\). The downward arrow is the restriction of a functional to basis elements \((p,q)\) where both \(p\) and \(q\) begin at the vertex \(M_{1}\) (implicitly using the isomorphism from Theorem 4.1). The bottom inclusion is the natural embedding \(f\mapsto f\otimes\operatorname{id}_{M_{1}}\). This diagram commutes due to the bijection between \(\mathcal{C}\)-module categories and monoidal functors \(\mathcal{C}\to\operatorname{End}(\mathcal{M})\).
Consider the projection \(p_{Z}\in\operatorname{End}_{\mathcal{C}}(X^{\otimes n})\) from the statement of the lemma. By definition of the module fusion rules, we have that \(p_{Z}\) maps to \(\sum_{M_{j}\in\mathcal{M}}\sum_{1\leq\ell\leq(\Gamma_{Z})_{M_{1}\to M_{j}}}p_{M_{j}}^{\ell}\), where the \(p_{M_{j}}^{\ell}\) are minimal projections onto \(M_{j}\) in \(\operatorname{End}_{\mathcal{M}}(X^{\otimes n}\otimes M_{1})\). Restricting to the block corresponding to the subobject \(M_{2}\) hence gives \(\sum_{1\leq\ell\leq(\Gamma_{Z})_{M_{1}\to M_{2}}}p_{M_{2}}^{\ell}\). The trace of this morphism (calculated w.r.t. the fixed basis of matrix units) is thus \((\Gamma_{Z})_{M_{1}\to M_{2}}\).
On the other hand, the projection \(p_{Z}\) maps to \(\phi(p_{Z})\in oGPA(\Gamma)\), and restricting this morphism to the block of \(\operatorname{End}_{\operatorname{End}(\mathcal{M})}(\Gamma_{X}^{\otimes n}[M_{1}])\) corresponding to the summand \(M_{2}\) (again by the equivalence between \(\mathcal{C}\)-module categories and monoidal functors \(\mathcal{C}\to\operatorname{End}(\mathcal{M})\)) gives the restriction of \(\phi(p_{Z})\) to the space of loops \((p,q)\) with source \(M_{1}\) and target \(M_{2}\). Taking the trace of this restriction (w.r.t. the basis of matrix units \((p,q)\)) gives
\[\sum_{(q,q):\quad q\text{ is a }+^{n}\text{-path},s(q)=M_{1},t(q)=M_{2}}\phi(p_{Z} )[(q,q)].\]
From the commutativity of the diagram at the start of the proof, our two expressions for the trace are equal. This gives the statement of the lemma.
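Schematically, the lemma gives a simple recipe for reading module fusion graphs off an embedding. The following sketch is ours, with assumed data structures (`phi_pZ` a dictionary of coefficients of \(\phi(p_{Z})\) on loop pairs, `paths` the \(+^{n}\)-paths); it implements the left-hand side of the formula above.

```python
# Sum the diagonal coefficients phi(p_Z)[(q, q)] over +^n-paths q from M1 to
# M2; by Lemma 4.3 this equals the fusion graph entry (Gamma_Z)_{M1 -> M2}.
from collections import namedtuple

P = namedtuple("P", ["source", "target", "edges"])

def fusion_graph_entry(phi_pZ, paths, M1, M2):
    return sum(
        phi_pZ.get((q, q), 0)
        for q in paths
        if q.source == M1 and q.target == M2
    )

# Toy data (hypothetical): two paths from M1 to M2 with diagonal coefficients.
q1, q2 = P("M1", "M2", ("e1", "e2")), P("M1", "M2", ("e3", "e4"))
phi = {(q1, q1): 1, (q2, q2): 0}
print(fusion_graph_entry(phi, [q1, q2], "M1", "M2"))  # 1
```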
## 5. Kazhdan-Wenzl Cells
In this section we introduce the definition of a KW cell system on a graph \(\Gamma\). This is a polynomial system of equations depending on parameters: a natural number \(N\geq 2\), a root of unity \(q\), and an \(N\)-th root of unity \(\omega\).
When \(q=e^{2\pi i\frac{1}{2(N+k)}}\) for some \(k\geq 0\), the data of a KW cell system is (by definition, and from the results of the previous section) an embedding
\[\mathcal{P}_{\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}};\Lambda_{1}}\to oGPA(\Gamma).\]
Hence by Theorem 4.2, a KW cell system on \(\Gamma\) gives a module category over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) whose module fusion graph for \(\Lambda_{1}\) is \(\Gamma\). We also define the notion of equivalence of KW cell systems, which is defined to be the pull-back of equivalence of module categories. Hence solutions to KW cell systems (up to equivalence) with \(q=e^{2\pi i\frac{1}{2(N+k)}}\) classify module categories (up to equivalence) over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\). This is all expanded on in the proof of Theorem 1.1 at the end of this section.
**Remark 5.1**.: Our definition of KW cell system also makes sense for roots of unity \(q\) which are not of the form \(e^{2\pi i\frac{1}{2(N+k)}}\). However, there are two issues which stop us from obtaining module categories for these \(q\) values. The first is that the implicit image of the cups and caps in \(oGPA(\Gamma)\) no longer satisfies the correct loop parameter to give a homomorphism from \(\mathcal{P}_{\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}};\Lambda_{1}}\). This can be fixed by choosing a different pivotal structure on \(oGPA(\Gamma)\). The more serious issue is that if we change the pivotal structure on \(oGPA(\Gamma)\), then it is no longer a unitary pivotal structure. In particular, we can't assume the image of the elements specified by a KW cell system forms a unitary subcategory. Hence we do not have that (Anti-Sym 1) holds for free.
To show existence of module categories over these non-unitary \(q\)'s, one would have to verify that (Anti-Sym 1) holds in the graph planar algebra manually. We have done this computation for several examples, and unfortunately the relation (Anti-Sym 1) can take weeks of computer time to verify.
Our definition of a KW cell system is as follows.
**Definition 5.2**.: Let \(N\in\mathbb{N}_{\geq 2}\), \(q\) a root of unity, and \(\omega\) an \(N\)-th root of unity. Further, let \(\Gamma\) be a graph with norm \([N]_{q}\), and let \(\lambda\) be the positive Frobenius-Perron eigenvector of \(\Gamma\).
A _Kazhdan-Wenzl cell system_ with parameters \((N,q,\omega)\) on the graph \(\Gamma\) is a map \(KW\) which assigns a complex scalar to every loop of length \(4\) in \(\Gamma\), and a complex scalar to every loop of length \(N\) in \(\Gamma\).
These scalars satisfy the following conditions:
[The defining conditions are the relations (R1), (R2), (R3), (Hecke), (Rotational Invariance), (Braid Absorption), and (Norm) of Section 3, written out as polynomial equations in the cells; the diagrammatic displays are omitted.] Note that once the 4-path cells are determined, the equation (BA) is also linear. The solution for the \(N\)-path cells is then obtained as a solution to the linear system (RI) and (BA) (with (N) used to normalise the solution). We refer to the 4-path cells as the _U-cells_, and the \(N\)-path cells as the _B-cells_. We refer the reader to Section 6, where several solutions are determined by this 2-step procedure.
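As a small illustration of the unknowns involved (our own sketch, with parallel edges ignored for simplicity; cf. the file "CellLibrary.nb" mentioned in Remark 6.1 for the authors' actual implementation), the variables of a KW cell system are indexed by the loops of length \(4\) and length \(N\) in \(\Gamma\), which can be enumerated directly:

```python
# Enumerate the loops in Gamma indexing the unknowns of a KW cell system:
# one U-cell per loop of length 4 and one B-cell per loop of length N.
import numpy as np

def loops(A, length):
    """All vertex sequences (v_0, ..., v_length) with v_length = v_0 tracing
    a closed walk in the graph with adjacency matrix A."""
    n, result = len(A), []
    def extend(path):
        if len(path) == length + 1:
            if path[-1] == path[0]:
                result.append(tuple(path))
            return
        for w in range(n):
            if A[path[-1], w]:
                extend(path + [w])
    for v in range(n):
        extend([v])
    return result

A = np.array([[0, 1], [1, 0]])  # a hypothetical 2-vertex graph
U_cells = loops(A, 4)           # unknowns of the polynomial system
B_cells = loops(A, 3)           # here N = 3, say
print(len(U_cells), len(B_cells))
```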
**Remark 5.4**.: To present a solution to a KW cell system, it can be convenient to use matrix notation. For the cells corresponding to loops of the form \(\raisebox{-15.0pt}{\includegraphics[]{figures/1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -
\[KW^{1}\left(\raisebox{-14.226378pt}{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{\includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ 
\includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{ \includegraphics[height=14.226378pt]{includegraphics[height=14.226378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.226378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.226378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.226378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.226378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.226378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[ height=14.26378pt]{includegraphics[height=14.26378pt]{includegraphics[ height=14.
Conversely, a KW cell system with parameters \((N,q,\omega)\) on a graph \(\Gamma\) is by definition a pair of elements in \(oGPA(\Gamma)\) satisfying the relations (R1), (R2), (R3), (Hecke), (Rotational Invariance), (Braid Absorption), and (Norm) of Section 3. As the \(\dagger\)-structure on \(oGPA(\Gamma)\) is unitary, it restricts to a unitary \(\dagger\)-structure on the \(\dagger\)-subcategory generated by the two elements specified by the KW cell system solution. We then apply Theorem 3.4 to see this subcategory is isomorphic to \(\mathcal{P}_{\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}};\Lambda_{1}}\). We thus have an embedding
\[\mathcal{P}_{\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}} ;\Lambda_{1}}\to oGPA(\Gamma).\]
Hence Theorem 4.2 gives us a module category over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) such that the module fusion graph for \(\Lambda_{1}\) is \(\Gamma\).
The same argument as in the converse case (in reverse), shows that equivalent KW cell systems give rise to equivalent module categories over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\).
In order to find a solution to a KW cell system on a given graph, we often need additional equations. In the situation where we know the action graphs for \(\Lambda_{2}\) and \(\Lambda_{3}\), we have the following additional equations. These equations first appeared in [10] as a conjecture. With our technical machinery, we can easily prove the equations always hold.
**Lemma 5.6**.: _Suppose \(\mathcal{M}\) is a module category for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\), with fusion graph for the action of \(X\) given by \(\Gamma_{X}\). Then any cell system corresponding to the module \(\mathcal{M}\) satisfies the following relations:_
\[\text{(Tr}(U_{1})):\quad\sum_{i,j}KW\left(\,\text{[loop diagram omitted]}\,\right)=(\Gamma_{\Lambda_{2}})_{s(i),t(j)}\cdot[2]_{q}\]
\[\text{(Tr}(U_{1}U_{2})):\quad\sum_{i,j,k}KW\left(\,\text{[loop diagram omitted]}\,\right)=(\Gamma_{\Lambda_{3}})_{s(i),t(k)}\cdot[2]_{q}^{2}+(\Gamma_{\Lambda_{1}}\cdot\Gamma_{\Lambda_{2}}-\Gamma_{\Lambda_{3}})_{s(i),t(k)}\,.\]
Proof.: These are a consequence of Lemma 4.3, which determines the trace of the embedding of an idempotent in terms of the module fusion rules. The first equation holds as we have
[diagrammatic computation omitted]
The second holds as
[diagrammatic computation omitted]
along with the fusion rule \(\Lambda_{1}\otimes\Lambda_{2}\cong\Lambda_{3}\oplus(\Lambda_{2}+\Lambda_{1})\).
Certainly many more equations of this form can be derived using Lemma 4.3. For the examples considered in Section 6, these two equations were sufficient (and incredibly useful).
## 6. Examples
In this section we compute several solutions to KW cell systems on a variety of graphs. We restrict our attention to the \(\mathfrak{sl}_{4}\) case, as this is the first case which has not been solved before. Solutions for the \(\mathfrak{sl}_{3}\) case can be found in [1].
The graphs in the \(\mathfrak{sl}_{4}\) case we take from the work of Ocneanu [12]. We can also find the graphs for action by \(\Lambda_{2}\) in this work, allowing us to apply the equations from Lemma 5.6. Note that the results of this section are not dependent on the correctness of [12]9. However, our results in this section verify that Ocneanu's claims were correct.
Footnote 9: No proofs were given in this paper.
We assume that Ocneanu found these graphs by solving the modular splitting equation [12, 13] for the \(SU(4)\) modular invariants. If one wanted to extend the results of this section to higher \(SU(N)\), the modular splitting equation should allow one to obtain the graphs corresponding to the higher \(SU(N)\) modular invariants. Discussion on this problem can be found in [1, Section 6] and [1, Section 8].
From [1], we have Theorem 1.3, which abstractly classifies irreducible module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\). To remind the reader, the number of irreducible module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\) up to equivalence is:
\begin{tabular}{|c|c c c c c c c|} \hline \(k\) & 1 & 2 & 4 & 6 & 8 & \(k>1\) odd & \(k>8\) even \\ \hline \# of Modules & 2 & 3 & 7 & 8 & 9 & 4 & 6 \\ \hline \end{tabular}
In this section we construct all of these abstractly classified module categories of \(\mathcal{C}(\mathfrak{sl}_{4},k)\). Hence we upgrade the abstract classification to a concrete classification.
**Remark 6.1**.: We will use the matrix notation from Remark 5.4 to express our solutions. Recall that we will refer to the entries of the \(U\) matrices as \(U\)-cells, and the entries of the \(B\) matrices as \(B\)-cells. In the interest of space we do not include the solutions to the B-cells for the exceptional solutions. These are easily obtained by solving the linear system (RI) + (BA) once the U-cells have been determined. The full solutions to the KW cell systems on the exceptional graphs can be found in Mathematica notebooks attached to the arXiv submission of this paper. We also include the Mathematica files "CellLibrary.nb" and "CellLibrary-MultiEdge.nb", which contain general functions that take as input a graph (or multi-edged graph) and return the polynomial equations for a KW system on that graph.
For the 6 exceptional modules, we also construct KW cell system solutions with \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). That is, exceptional module categories for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{4}))^{\omega}}\) at the appropriate \(q\) values. As far as we know, these modules over the twisted categories cannot be constructed by conformal inclusions, and this paper is the first time they have been constructed in any capacity. _A priori_ we should expect to have to find a new solution to both the U and B cells for each value of \(\omega\). We are fortunate in the sense that for each exceptional graph, the four solutions for each value of \(\omega\) share the same U-cell solution. Hence once the U-cell solution is found, the four B-cell solutions can be found by simply solving the linear system (RI) + (BA). In slightly different language, this means that all four \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{4}))^{\omega}}\) modules restrict to the same \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{gl}_{4}))}\) module.
For several of our solutions, we use the notion of _orbifolding_ to help us obtain a solution. We expect this to be equivariantisation for module categories (see [14]). The concept of orbifolding has been explored in the physics context [15] and in the mathematical context [1]. In the latter paper, the orbifold procedure is made rigorous for relations occurring in the endomorphism algebras of the graph planar algebra. For us this encompasses relations (R2), (R3), and (Hecke). For relations occurring in more general spaces, to the best of our knowledge, the orbifold procedure remains un-rigorous. Still, we can use this un-rigorous procedure to help determine a solution to a KW cell system. To make everything rigorous again, we simply explicitly verify all the relations for the solution. We use this orbifolding procedure to help find solutions in Subsections 6.1, 6.3, and 6.5.
### An Exceptional Module for \(SU(4)\) at Level 4
In this section we construct a cell system with parameters \(q=e^{2\pi i\frac{1}{16}}\) on the following graph \(\Gamma^{4,4,\subset}_{\Lambda_{1}}\): [graph omitted]
The unique positive eigenvector is
\[\lambda=\left\{1,1,[4]_{q},[4]_{q},\frac{[3]_{q}[4]_{q}}{[2]_{q}},[3]_{q},[3]_{q },[2]_{q},[2]_{q},[2]_{q},[2]_{q},\frac{[4]_{q}}{[2]_{q}}\right\}.\]
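Numerically (a quick check of ours), the quantum integers \([n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}}\) appearing here are all positive real numbers at \(q=e^{2\pi i\frac{1}{16}}\):

```python
# Evaluate the quantum integers and the eigenvector entries at q = e^{2 pi i/16}.
import numpy as np

q = np.exp(2j * np.pi / 16)

def qint(n):
    return ((q**n - q**-n) / (q - q**-1)).real  # imaginary parts cancel

lam = [1, 1, qint(4), qint(4), qint(3) * qint(4) / qint(2),
       qint(3), qint(3), qint(2), qint(2), qint(2), qint(2), qint(4) / qint(2)]
print([round(x, 4) for x in lam])
```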
We assume that the graph for action by \(\Lambda_{2}\) is [graph omitted]
The above data for this graph can also be found in the Mathematica file "k=4/Conformal Inclusion/Data.nb".
To solve for the U-cells on this graph, we first observe the following \(\mathbb{Z}_{2}\) symmetry on \(\Gamma^{4,4,\subset}_{\Lambda_{1}}\):
\[1\longleftrightarrow 2\qquad 6\longleftrightarrow 7\qquad 8\longleftrightarrow 10 \qquad 9\longleftrightarrow 11.\]
The orbifold graph of \(\Gamma^{4,4,\subset}_{\Lambda_{1}}\) with respect to this symmetry is again \(\Gamma^{4,4,\subset}_{\Lambda_{1}}\). This suggests that there is a U-cell solution on \(\Gamma^{4,4,\subset}_{\Lambda_{1}}\) which comes from orbifolding a U-cell solution invariant under the \(\mathbb{Z}_{2}\) symmetry. Hence we have two avenues of attack. We can either assume that our solution looks like an orbifold solution (and hence contains many \(0\)'s), or that the solution is invariant under the \(\mathbb{Z}_{2}\) symmetry.
We first attempted the approach of assuming the \(\mathbb{Z}_{2}\) symmetry. However, we were unable to solve the system in this setting. By assuming our solution comes from an orbifold, we see that the \(4\times 4\) block \(U^{3}\ _{4}\) is of the form
\[U^{3}\ _{4}=\left[\begin{array}{cccc}x&0&0&y\\ 0&z&w&0\\ 0&\overline{w}&[2]_{q}-z&0\\ \overline{y}&0&0&[2]_{q}-x\end{array}\right]\]
for some complex scalars \(x,y,z,w\in\mathbb{C}\). To determine these scalars, we hand-pick a collection of equations from \(\operatorname{Tr}(U_{1}U_{2})\) and (R3) which contain these four variables (along with several other coefficients), and numerically approximate a solution. From this numerical approximation, we can guess (up to the graph automorphisms exchanging \(\alpha_{1}\leftrightarrow\alpha_{2}\) and \(\beta_{1}\leftrightarrow\beta_{2}\)) that \(x=\frac{[3]_{q}}{[4]_{q}}+\frac{\sqrt{[3]_{q}}}{[2]_{q}}\) and \(z=\frac{[3]_{q}}{[4]_{q}}\). We then use (Hecke) to pin down \(w\) and \(y\) up to phases.
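To illustrate how (Hecke) pins down the off-diagonal entries (a routine check under the standard reading of (Hecke) here, namely that each such self-adjoint block of \(U\) is \([2]_{q}\) times an orthogonal projection): for the middle \(2\times 2\) sub-block,
\[\begin{bmatrix}z&w\\ \overline{w}&[2]_{q}-z\end{bmatrix}^{2}=[2]_{q}\begin{bmatrix}z&w\\ \overline{w}&[2]_{q}-z\end{bmatrix}\iff|w|^{2}=z([2]_{q}-z),\]
and similarly \(|y|^{2}=x([2]_{q}-x)\), so \(w\) and \(y\) are determined up to phases once \(x\) and \(z\) are known.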
With this seed information, we can then solve \(\operatorname{Tr}(U_{1}U_{2})\) completely. This gives the diagonal elements of all of our matrices. We can then use (Hecke) to solve the \(2\times 2\) blocks up to phases. At this point, there are enough linear and quadratic equations in (R3) to pin down the five remaining \(4\times 4\) blocks (making a couple of arbitrary choices). After fixing a concrete gauge, we arrive at the following solution10:
Footnote 10: We should point out that the initial solution we found for the U-cells did not admit a solution for the B-cells. This is because we had actually found the embedding of \(p_{2\Lambda_{1}}\), instead of \(p_{\Lambda_{2}}\). Due to level rank duality these morphisms are indistinguishable with respect to the relations (R1), (R2), (R3), and (Hecke). To obtain the correct solution, we used the formula \(p_{\Lambda_{2}}=\operatorname{id}_{\Lambda_{1}\otimes\Lambda_{1}}-p_{2 \Lambda_{1}}\).
[explicit \(U\)-cell matrices omitted; the solution consists of the blocks \(U^{3}\ _{4}\), \(U^{4}\ _{3}\), \(U^{5}\ _{6}=U^{6}\ _{5}=U^{5}\ _{7}\), \(U^{7}\ _{5}\), \(U^{10}\ _{11}\), \(U^{1}\ _{5}=U^{4}\ _{9}=U^{5}\ _{1}=U^{7}\ _{12}=U^{8}\ _{3}=U^{10}\ _{3}\), \(U^{2}\ _{5}=U^{5}\ _{2}\), \(U^{9}\ _{8}=U^{9}\ _{10}=U^{11}\ _{8}\), \(U^{3}\ _{10}=U^{11}\ _{4}\), \(U^{11}\ _{10}\), and \(U^{10}\ _{9}\), among others; the full matrices can be found in the Mathematica file referenced below]
The unlabeled row/column orderings are
\[\begin{array}{ccccc}U^{6}&{}_{5}:\{3^{\alpha_{1}},3^{\alpha_{2}},9,11\}&U^{5 }&{}_{7}&:\{10,8,5^{\beta_{1}},5^{\beta_{2}}\}&U^{5}&{}_{1}&:\{{}^{\beta_{1}} 4,{}^{\beta_{2}}4\}&U^{10}&{}_{3}&:\{6,7\}\\ U^{8}&{}_{3}:\{6,7\}&U^{4}&{}_{9}&:\{7,6\}&U^{7}&{}_{12}&:\{11,9\}&U^{4}&{}_{11 }&:\{7,6\}\\ U^{6}&{}_{12}:\{11,9\}&U^{9}&{}_{4}&:\{5^{\beta_{1}},5^{\beta_{2}}\}&U^{12}&{}_ {6}&:\{10,8\}&U^{12}&{}_{7}&:\{8,10\}\\ U^{5}&{}_{2}:\{{}^{\beta_{1}}4,{}^{\beta_{2}}4\}&U^{9}&{}_{10}&:\{12,5\}&U^{11 }&{}_{8}&:\{12,5\}&U^{11}&{}_{4}&:\{5^{\beta_{1}},5^{\beta_{2}}\}\end{array}\]
This solution can be found in the Mathematica file "k=4/Conformal Inclusion/Solutions/w=1/Solution.nb". A computer verifies relations (R1), (R2), (R3), and (Hecke) in just under 3 minutes. A record of this verification can be found in "k=4/Conformal Inclusion/Solutions/w=1/Verification.nb".
Solving the linear equations (RI) and (BA) gives a 1-dimensional solution space for the B-cells, which we normalise to satisfy (N). This solution for the B-cells can also be found in the Mathematica file "k=4/Conformal Inclusion/Solutions/w=1/Solution.nb". A computer verifies relations (RI), (BA), and (N) for this solution in 15 seconds. A record of this can be found in "k=4/Conformal Inclusion/Solutions/w=1/Verification.nb". As we have a solution to the KW cell system on \(\Gamma^{4,4,\mathbb{C}}_{\Lambda_{1}}\), we have the following theorem.
**Theorem 6.2**.: _There exists a rank 12 module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},4)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,4,\mathbb{C}}_{\Lambda_{1}}\)._
We also find KW cell system solutions on the graph \(\Gamma^{4,4,\mathbb{C}}_{\Lambda_{1}}\) when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). The solutions and verification of these solutions can be found in the folder "k=4/Conformal Inclusion/Solutions".
**Theorem 6.3**.: _For each \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\) there exists a rank 12 module category \(\mathcal{M}\) for \(\overline{\mathrm{Rep}(U_{e^{2\pi i\frac{1}{16}}}\left(\mathfrak{sl}_{4}\right))^{\omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,4,\mathbb{C}}_{\Lambda_{1}}\)._
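All of the matrix entries above are ratios of (square roots of) quantum integers, so they are straightforward to evaluate numerically when spot-checking a solution. The following minimal Python sketch is our own (the verifications themselves are done in the Mathematica notebooks), and the helper name `qint` is ours:

```python
import cmath

def qint(n: int, q: complex) -> complex:
    """Quantum integer [n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - q**(-1))

# For C(sl_4, k) the deformation parameter is q = e^{2 pi i / (2(4+k))}; here k = 4.
q = cmath.exp(2j * cmath.pi / 16)

# Evaluate a few of the entries appearing in the U-cell matrices above.
print((qint(3, q) / qint(4, q)).real)              # [3]_q / [4]_q
print((1 / qint(4, q)).real)                       # 1 / [4]_q
print((cmath.sqrt(qint(3, q)) / qint(2, q)).real)  # sqrt([3]_q) / [2]_q
```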
### An Exceptional Module for \(Su(4)\) at Level 6
We will construct a cell system on the following graph for \(N=4\) and \(q=e^{2\pi i\frac{1}{20}}\) (i.e. a module category for \(\mathcal{C}(\mathfrak{sl}_{4},6)\)).
**Remark 6.4**.: For ease of notation, we will assume the subscripts of the \(a,b,\) and \(c\) vertices are taken mod \(10\) (so that \(a_{11}=a_{1}\) etc.), and the subscripts of the \(d\) vertices are taken mod \(2\) (so that \(d_{3}=d_{1}\)).
This graph has the positive eigenvector \(\lambda\) (with eigenvalue \([4]_{q}\)):
\[\lambda_{a_{i}}=1,\qquad\lambda_{b_{i}}=[4]_{q},\qquad\lambda_{c_{i}}=\frac{[ 3]_{q}[4]_{q}}{[2]_{q}},\qquad\lambda_{d_{i}}=\frac{[2]_{q}[4]_{q}^{2}}{[3]_{q}}.\]
We assume the graph for action by \(\Lambda_{2}\) to be
The above data for this graph can also be found in the Mathematica file "k=6/Conformal Inclusion/Data.nb".
To try to find a cell system, we begin by assuming that the \(U\) cells are invariant under the \(\mathbb{Z}_{10}\) symmetry of the graph, and that the coefficients of the \(U\) cells are all real. These assumptions reduce the complexity to a level at which a computer can quickly find the following solution (hence justifying our assumptions):
\[U^{a_{i}}_{\phantom{a_{i+1}}a_{i+1}} =U^{a_{i}}_{\phantom{a_{i+2}}c_{i+2}}=0, U^{a_{i}}_{\phantom{a_{i}}c_{i}} =[2]_{q}\] \[U^{b_{i}}_{\phantom{b_{i+1}}b_{i+1}} =\frac{1}{[2]_{q}^{\frac{3}{2}}}\begin{bmatrix}\sqrt{[2]_{q}[3]_{ q}}&\sqrt{[4]_{q}}&\sqrt{[4]_{q}}\\ \sqrt{[4]_{q}}&\sqrt{[2]_{q}}&\sqrt{[2]_{q}}\\ \sqrt{[4]_{q}}&\sqrt{[2]_{q}}&\sqrt{[2]_{q}}\end{bmatrix} U^{b_{i}}_{\phantom{b_{i+3}}b_{i+3}} =0\] \[U^{b_{i}}_{\phantom{b_{i}}d_{i}} =\frac{1}{[2]_{q}}\begin{bmatrix}c_{i}&c_{i+2}\\ 1&\sqrt{[3]_{q}}&[3]_{q}\end{bmatrix} U^{b_{i}}_{\phantom{b_{i-1}}b_{i-1}} =[2]_{q} U^{c_{i}}_{\phantom{c_{i}}a_{i}} =[2]_{q}\] \[U^{c_{i}}_{\phantom{c_{i+1}}c_{i+1}} =\frac{1}{[3]_{q}^{\frac{3}{2}}}\begin{bmatrix}[4]_{q}&-[4]_{q}&- \sqrt{[2]_{q}[3]_{q}}\\ -\sqrt{[2]_{q}[3]_{q}}&\sqrt{[2]_{q}[3]_{q}}&\sqrt{[2]_{q}[3]_{q}}\\ -\sqrt{[2]_{q}[3]_{q}}&\sqrt{[2]_{q}[3]_{q}}&\end{bmatrix} U^{c_{i}}_{\phantom{c_{i}}a_{i+2}} =U^{c_{i}}_{\phantom{c_{i+5}}c_{i+5}} =U^{c_{i}}_{\phantom{c_{i-3}}a_{i-3}} =0\] \[U^{c_{i}}_{\phantom{c_{i+1}}c_{i+3}} =\frac{b_{i+1}}{[3]_{q}}\begin{bmatrix}[4]_{q}&\sqrt{[2]_{q}[4]_{q} }\\ \sqrt{[2]_{q}[4]_{q}}&[2]_{q}\end{bmatrix} U^{c_{i}}_{\phantom{c_{i-1}}c_{i-1}} =\frac{1}{[3]_{q}}\begin{bmatrix}[2]_{q}&\sqrt{[2]_{q}[4]_{q}}\\ \sqrt{[2]_{q}[4]_{q}}&[4]_{q}\end{bmatrix}\] \[U^{d_{i}}_{\phantom{d_{i+1}}d_{i+1}} =\frac{1}{[2]_{q}^{2}[4]_{q}}\begin{bmatrix}[3]_{q}&[3]_{q}&-[3]_{ q}^{2}&-[3]_{q}^{2}\\ -[3]_{q}^{2}&[3]_{q}&[3]_{q}&-[3]_{q}^{2}&-[3]_{q}^{2}\\ -[3]_{q}^{2}&[3]_{q}&[3]_{q}[5]_{q}&[3]_{q}&-[3]_{q}^{2}\\ -[3]_{q}^{2}&-[3]_{q}^{2}&[3]_{q}&[3]_{q}[5]_{q}\end{bmatrix} U^{d_{i}}_{\phantom{c_{j-1}}b_{j}} =\frac{1}{[2]_{q}}\begin{bmatrix}[3]_{q}&-\sqrt{[3]_{q}}\\ -\sqrt{[3]_{q}}&1\end{bmatrix}\]
This solution can be found in the Mathematica file "k=6/Conformal Inclusion/Solutions/w=1/Solution.nb". A computer verifies relations (R1), (R2), (R3), and (Hecke) in just over 3 minutes. A record of this verification can be found in "k=6/Conformal Inclusion/Solutions/w=1/Verification.nb".
We can now solve the linear system given by equations (RI) and (BA) to determine the \(B\) cells up to a single scalar. Solving relation (N) determines the norm of this scalar. Hence we have a solution for the \(B\) cells, up to a single phase. Fixing a natural choice for this phase gives a concrete solution to a KW cell system on \(\Gamma^{4,6,\mathbb{C}}_{\Lambda_{1}}\). This solution can be found in "k=6/Conformal Inclusion/Solutions/w=1/Solution.nb".
**Remark 6.5**.: Note that these \(B\) coefficients have projective \(\mathbb{Z}_{10}\) symmetry, with factor \(-1\) for a one-click rotation.
A computer verifies relations (RI), (BA), and (N) for our solution in under half a minute. As a consequence of our computations, we have the following result.
**Theorem 6.6**.: _There exists a rank 32 module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},6)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,6,\mathbb{C}}_{\Lambda_{1}}\)._
From [1, 1], there is a unique rank 32 module category over \(\mathcal{C}(\mathfrak{sl}_{4},6)\), and it has the additional structure of a fusion category. Hence the rank 32 module category we have constructed must be this fusion category. In particular, the adjoint subcategory of this fusion category is an interesting \(3^{\mathbb{Z}_{5}}\) quadratic category.
We also find KW cell system solutions on the graph \(\Gamma^{4,6,\subset}_{\Lambda_{1}}\) when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). The solutions and verification of these solutions can be found in the folder "k=6/Conformal Inclusion/Solutions".
**Theorem 6.7**.: _For each \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\) there exists a rank 32 module category \(\mathcal{M}\) for \(\overline{\operatorname{Rep}(U_{e^{2\pi i\frac{1}{20}}}(\mathfrak{sl}_{4}))^{\omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,6,\subset}_{\Lambda_{1}}\)._
### A Second Exceptional Module for \(Su(4)\) at Level \(6\)
We will construct a cell system on the following graph where \(N=4\) and \(q=e^{2\pi i\frac{1}{20}}\).
This graph has the positive eigenvector \(\lambda\) (with eigenvalue \([4]_{q}\)):
\[\lambda_{a_{odd}}=\lambda_{a_{even}}=1,\quad\lambda_{b_{odd}}=\lambda_{b_{even }}=[4]_{q},\quad\lambda_{c_{odd}}=\lambda_{c_{even}}=\frac{[3]_{q}[4]_{q}}{[2] _{q}},\quad\lambda_{(d_{odd})_{i}}=\lambda_{(d_{even})_{i}}=\frac{[3]_{q}}{[2] _{q}}.\]
We assume the graph for action by \(\Lambda_{2}\) to be
The above data for this graph can also be found in the Mathematica file "k=6/Conjugate/Data.nb".
Recall the \(\mathbb{Z}_{5}\) symmetry on \(\Gamma^{4,6,\subset}_{\Lambda_{1}}\) from Subsection 6.2. By making the natural orbifold identifications \(a_{odd}\leftrightarrow\{a_{i}\mid i\text{ odd}\}\), etc., with the \(\mathbb{Z}_{5}\) orbifold of \(\Gamma^{4,6,\subset}_{\Lambda_{1}}\), along with
\[\begin{array}{l}\alpha_{1}\leftrightarrow\{b_{i}\to c_{i}\mid i\text{ odd}\}\quad\alpha_{2}\leftrightarrow\{b_{i}\to c_{i+2}\mid i\text{ odd}\}\quad\beta_{1}\leftrightarrow\{b_{i}\to c_{i}\mid i\text{ even}\}\quad\beta_{2}\leftrightarrow\{b_{i}\to c_{i+2}\mid i\text{ even}\}\\ \gamma_{1}\leftrightarrow\{c_{i}\to b_{i-1}\mid i\text{ even}\}\quad\gamma_{2}\leftrightarrow\{c_{i}\to b_{i+1}\mid i\text{ even}\}\quad\lambda_{1}\leftrightarrow\{c_{i}\to b_{i-1}\mid i\text{ odd}\}\quad\lambda_{2}\leftrightarrow\{c_{i}\to b_{i+1}\mid i\text{ odd}\}\end{array}\]
we can determine the U-cells for any pair of paths not passing through either \((d_{odd})_{i}\) or \((d_{even})_{i}\) by taking representatives. For example, the entry of \(U^{a_{odd}}_{\ \ c_{odd}}\) indexed by the pair \((b_{odd}^{\alpha_{1}},b_{odd}^{\alpha_{1}})\) is equal to \(U^{a_{1},b_{1}}_{b_{1},c_{1}}=[2]_{q}\), whereas the entry indexed by \((b_{odd}^{\alpha_{2}},b_{odd}^{\alpha_{2}})\) is equal to \(U^{a_{1},b_{1}}_{b_{1},c_{3}}=0\). Note that the representatives of \(\alpha_{1}\) and \(\alpha_{2}\) do not connect in \(\Gamma^{4,6,\subset}_{\Lambda_{1}}\), which implies that the off-diagonal entries also vanish. Together we deduce
\[\begin{array}{c}\begin{matrix}b_{odd}^{\alpha_{1}}&b_{odd}^{\alpha_{2}}\end{matrix}\\ U^{a_{odd}}_{\ \ c_{odd}}=\begin{bmatrix}[2]_{q}&0\\ 0&0\end{bmatrix}.\end{array}\]
This non-rigorous procedure gives us the U-cells for any paths which do not pass through either \((d_{odd})_{i}\) or \((d_{even})_{i}\). Recall that the U-cells for the graph \(\Gamma^{4,6,\subset}_{\Lambda_{1}}\) satisfied \(\mathbb{Z}_{10}\) symmetry. As we only used \(\mathbb{Z}_{5}\) symmetry for the orbifold, we naively assume that the U-cells for \(\Gamma^{4,6,\subset\mathbb{Z}_{5}}_{\Lambda_{1}}\) have \(\mathbb{Z}_{2}\) symmetry under the following graph automorphism:
\[a_{odd}\leftrightarrow a_{even},\quad b_{odd}\leftrightarrow b_{even},\quad c _{odd}\leftrightarrow c_{even},\quad(d_{odd})_{i}\leftrightarrow(d_{even})_{i}, \quad\alpha_{i}\leftrightarrow\beta_{i},\quad\gamma_{i}\leftrightarrow\lambda_{ i}.\]
This gives us enough of a seed to solve the remaining U-cells, with the help of the additional equation \(\operatorname{Tr}(U_{1})\) (for this case, the equation \(\operatorname{Tr}(U_{1}U_{2})\) gives no additional information). The remaining cells (up to the above symmetry) are:
\[\begin{array}{c}(d_{odd})_{0}&(d_{odd})_{1}&(d_{odd})_{2}&(d_{odd})_{3}&(d_ {odd})_{4}&\lambda_{1}b_{even}^{\beta_{1}}&\lambda_{1}b_{even}^{\beta_{2}}& \lambda_{2}b_{even}^{\beta_{1}}&\lambda_{2}b_{even}^{\beta_{2}}\\ \left[\begin{array}{cccccc}\frac{\sqrt{|3|}_{q}}{|2|_{q}}&\frac{-1}{|2|_{q} \sqrt{|3|}_{q}}&\frac{-\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}}&\frac{-\zeta_{5}^{-1} }{|2|_{q}\sqrt{|3|}_{q}}&\frac{1}{|2|_{q}\sqrt{|3|}_{q}}&\frac{1}{|2|_{q}\sqrt {|3|}_{q}}&\frac{-1}{|2|_{q}\sqrt{|3|}_{q}|_{q}}&\frac{1}{\sqrt{|2|_{q}\sqrt{| 3|}_{q}|_{q}|_{q}}}&\frac{1}{\sqrt{|2|_{q}\sqrt{|3|}_{q}|_{q}|_{q}}}\\ \frac{-1}{|2|_{q}\sqrt{|3|}_{q}}&\frac{\sqrt{|3|}_{q}}{|2|_{q}}&\frac{-1}{|2_{q }\sqrt{|3|}_{q}}&\frac{-\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}}&\frac{-\zeta_{5}^{-1} }{|2|_{q}\sqrt{|3|}_{q}}&\frac{\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}|_{q}}&\frac{ \zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}|_{q}}&\frac{\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}| _{q}}\\ \frac{-\zeta_{5}^{-1}}{|2|_{q}\sqrt{|3|}_{q}}&\frac{-1}{|2_{q}\sqrt{|3|}_{q}}& \frac{\sqrt{|3|}_{q}}{|2_{q}\sqrt{|3|}_{q}}&\frac{-1}{|2_{q}\sqrt{|3|}_{q}}& \frac{-\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}}&\frac{\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q} }&\frac{\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}|_{q}}&\frac{\zeta_{5}}{|2|_{q}\sqrt{|3|} _{q}|_{q}}&\frac{\zeta_{5}}{|2|_{q}\sqrt{|3|}_{q}|_{q}}\\ \frac{-\zeta_{5}^{-1}}{|2|_{q}\sqrt{|3|}_{q}}&\frac{-\zeta_{5}^{-1}}{|2_{q}\sqrt {|3|}_{q}}&\frac{\sqrt{|3|}_{q}}{|2_{q}\sqrt{|3|}_{q}}&\frac{-1}{|2_{q}\sqrt {|3|}_{q}}&\frac{\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}}&\frac{\zeta_{5}}{|2_{q}\sqrt{| 3|}_{q}|_{q}}&\frac{\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}|_{q}}&\frac{\zeta_{5}}{ |2_{q}\sqrt{|3|}_{q}|_{q}}&\frac{\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}|_{q}}\\ \frac{-\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}}&\frac{-\zeta_{5}^{-1}}{|2_{q}\sqrt{|3|} _{q}}&\frac{-1}{|2_{q}\sqrt{|3|}_{q}}&\frac{\sqrt{|3|}_{q}}{|2_{q}\sqrt{|3|}_{q} }&\frac{\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2|}_{q}|_{q} \sqrt{|3|}_{q}}&\frac{\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}|_{q}}&\frac{\zeta_{5}}{ |2_{q}\sqrt{|3|}_{q}|_{q}}&\frac{\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}|_{q}}&\frac{ \zeta_{5}}{|2_{q}\sqrt{|3|}_{q}|_{q}}\\ \frac{-1}{|2_{q}\sqrt{|3|}_{q}}&\frac{-\zeta_{5}}{|2_{q}\sqrt{|3|}_{q}}&\frac{- \zeta_{5}^{-1}}{|2_{q}\sqrt{|3|}_{q}}&\frac{-1}{|2_{q}\sqrt{|3|}_{q}}&\frac{ \sqrt{|3|}_{q}}{|2_{q}\sqrt{|3|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2|}_{q}|_{q}}& \frac{\zeta_{5}}{\sqrt{|2|}_{q}|_{q}}&\frac{\zeta_{5}}{\sqrt{|2|}_{q}|_{q}}& \frac{\zeta_{5}}{\sqrt{|2|}_{q}|_{q}}\\ \frac{-1}{|2_{q}\sqrt{|3|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}|_{q}}& \frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}|_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q} }&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}|_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q} \sqrt{|3|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q} }_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q} }_{q}|_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q} }_{q}|_{q}}\\ \frac{-1}{\sqrt{|2_{|q}|}_{q}|_{q}}&\frac{-\zeta_{5}}{\sqrt{|2_{|q}|}_{q}|_{q}} &\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}|_{q}}&\frac{-1}{\sqrt{|2_{|q}|}_{q}|_{q}}& \frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}|_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}}_{q}|_{q} }&\frac{\zeta_{5}}{\sqrt{|2_{|q}}_{q}|_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q} }&\frac{\zeta_{5}}{\sqrt{|2_{|q}}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}}& \frac{\zeta_{5}}{\sqrt{|2_{|q}|}_{q}}&\frac{\zeta_{5}}{\sqrt{|2_{|q}}_{q}}& \frac{\
**Theorem 6.8**.: _There exists a rank 16 module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},6)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,6,\subset\mathsf{Z}_{5}}_{\Lambda_{1}}\)._
We also find KW cell system solutions on the graph \(\Gamma^{4,6,\subset\mathsf{Z}_{5}}_{\Lambda_{1}}\) when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). The solutions and verification of these solutions can be found in the folder "k=6/Orbifold/Solutions".
**Theorem 6.9**.: _For each \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\) there exists a rank 16 module category \(\mathcal{M}\) for \(\overline{\operatorname{Rep}(U_{e^{2\pi i\frac{1}{20}}}(\mathfrak{sl}_{4}))^{ \omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,6,\subset\mathsf{Z}_{5}}_{\Lambda_{1}}\)._
Note that the U-cells for these KW cell systems solutions respect the \(\mathbb{Z}_{2}\) symmetry from the start of this subsection. However the B-cells only respect the symmetry when \(\omega=\pm\mathbf{i}\). This suggests that when \(\omega=\pm\mathbf{i}\) we can orbifold the KW cell system solutions to obtain KW cell system solutions on the following \(\mathbb{Z}_{2}\) orbifold graph of \(\Gamma^{4,6,\subset\mathsf{Z}_{5}}_{\Lambda_{1}}\).
Note that by classification [1] there can't be a module associated to this graph when \(\omega=1\). This example is interesting as it suggests that the classification of exceptional modules over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) is richer when \(\omega\neq 1\).
As expected, we find solutions to the KW cell system on \(\Gamma^{4,6,\subset\mathsf{Z}_{10}}_{\Lambda_{1}}\) when \(\omega=\pm\mathbf{i}\). These solutions and their verifications can be found in "k=6/Orbifold2/Solutions". There appears to be no solution when \(\omega=-1\).
**Theorem 6.10**.: _For each \(\omega\in\{\mathbf{i},-\mathbf{i}\}\) there exists a rank 8 module category \(\mathcal{M}\) for \(\overline{\operatorname{Rep}(U_{e^{2\pi i\frac{1}{20}}}(\mathfrak{sl}_{4}))^{ \omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,6,\subset\mathsf{Z}_{10}}_{\Lambda_{1}}\)._
### An Exceptional Module for \(Su(4)\) at Level 8
We will construct a cell system on the following graph for \(N=4\) and \(q=e^{2\pi i\frac{1}{24}}\) (i.e. a module category for \(\mathcal{C}(\mathfrak{sl}_{4},8)\)).
This graph has the positive eigenvector \(\lambda\) (with eigenvalue \([4]_{q}\)):
\[\lambda_{v_{1}}=\lambda_{v_{24}}=1, \lambda_{v_{2}}=\lambda_{v_{3}}=\lambda_{v_{22}}=\lambda_{v_{23}} =[4]_{q},\]
\[\lambda_{v_{4}}=\lambda_{v_{6}}=\lambda_{v_{19}}=\lambda_{v_{21}}= \frac{[4]_{q}[5]_{q}}{[2]_{q}[3]_{q}}, \lambda_{v_{5}}=\lambda_{v_{20}} =\frac{[4]_{q}[5]_{q}}{[2]_{q}}\] \[\lambda_{v_{7}}=\lambda_{v_{8}}=\lambda_{v_{17}}=\lambda_{v_{18}}= \frac{[2]_{q}[3]_{q}[5]_{q}}{[6]_{q}} \lambda_{v_{9}}=\lambda_{v_{10}}=\lambda_{v_{11}}=\lambda_{v_{12}}= \lambda_{v_{13}}=\lambda_{v_{14}}=\lambda_{v_{15}}=\lambda_{v_{16}} =\frac{[3]_{q}[4]_{q}[5]_{q}}{[2]_{q}[6]_{q}}\]
We assume the graph for action by \(\Lambda_{2}\) to be:
To try to find a cell system on \(\Gamma^{4,8,\subset\mathbb{Z}_{2}}_{\Lambda_{1}}\), we begin by assuming that the coefficients of the \(U\) cells are all real. We initially tried to find a solution invariant under the rotational 180 degree symmetry. However, it appears that a solution with such symmetry does not exist.
To obtain a solution for the \(U\) cells, we solve the linear systems (R1), (R2), and (\(\operatorname{Tr}(U_{1})\)), and then numerically approximate (\(\operatorname{Tr}(U_{1}U_{2})\)). This yields two solutions to a subset of the \(U\) cells. We pick one of these solutions, and guess exact values for the coefficients (as products of half-integer powers of quantum integers). We then solve (Hecke) to determine nearly all of the remaining coefficients up to sign. Taking a subset of the equations from (R3) allows us to pin down the remaining coefficients, and to choose signs for our coefficients. The solution we obtain is too large (568 coefficients) to include here, so we include it in the Mathematica notebook "k=8/Orbifold/Solutions/w=1/Solution.nb", attached to the arXiv submission of this article.
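The step of guessing exact values as products of half-integer powers of quantum integers can be mechanised as a brute-force search. The following Python sketch is our own illustration of the idea, not the code used for the paper (that lives in the Mathematica notebooks); note that at a root of unity such product representations need not be unique:

```python
import cmath
from itertools import product

def qint(n, q):
    """Quantum integer [n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - q**(-1))

q = cmath.exp(2j * cmath.pi / 24)  # level k = 8, so q = e^{2 pi i / 24}

def guess(target, ns=(2, 3, 4, 5, 6), emax=2, tol=1e-8):
    """Search for |target| as a product of half-integer powers of the quantum
    integers [n]_q, n in ns. Returns the first matching exponent vector."""
    half_steps = [e / 2 for e in range(-2 * emax, 2 * emax + 1)]
    for combo in product(half_steps, repeat=len(ns)):
        val = 1.0
        for n, e in zip(ns, combo):
            val *= abs(qint(n, q)) ** e
        if abs(val - abs(target)) < tol:
            return dict(zip(ns, combo))
    return None

# Recover a synthetic 'numerically found' coefficient [5]_q / sqrt([2]_q):
print(guess(abs(qint(5, q)) / abs(qint(2, q)) ** 0.5))
```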
A computer verifies relations (R1), (R2), (Hecke), and (R3) in just under 5 hours for our solution. A record of this verification can be found in "k=8/Orbifold/Solutions/w=1/Verification.nb".
Solving the linear systems (RI) and (BA) gives a unique solution for the \(B\) cells up to scalar, and we solve (N) to pin down the norm of this scalar. Fixing a natural gauge gives us a solution, which can be found in "k=8/Orbifold/Solutions/w=1/Solution.nb".
A computer verifies (RI), (BA), and (N) for this solution in under 30 seconds. A record of this verification can be found in "k=8/Orbifold/Solutions/w=1/Verification.nb".
**Theorem 6.11**.: _There exists a rank 24 module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},8)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,8,\subset\mathbb{Z}_{2}}_{\Lambda_{1}}\)._
We also find KW cell system solutions on the graph \(\Gamma^{4,8,\subset\mathbb{Z}_{2}}_{\Lambda_{1}}\) when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). The solutions and verification of these solutions can be found in the folder "k=8/Orbifold/Solutions".
**Theorem 6.12**.: _For each \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\) there exists a rank 24 module category \(\mathcal{M}\) for \(\overline{\operatorname{Rep}(U_{e^{2\pi i\frac{1}{24}}}(\mathfrak{sl}_{4}))^{\omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,8,\subset\mathbb{Z}_{2}}_{\Lambda_{1}}\)._
### A Second Exceptional Module for \(Su(4)\) at Level \(8\)
In this subsection we construct a cell system with parameters \(N=4\) and \(q=e^{2\pi i\frac{1}{24}}\) on the following graph:
and hence an exceptional module category over \(\overline{\operatorname{Rep}\left(U_{e^{2\pi i\frac{1}{24}}}(\mathfrak{sl}_{4})\right)}\). This module will correspond to the conformal inclusion \((SU(4))_{8}\subset(SO(20))_{1}\).
The positive eigenvector for \(\Gamma^{4,8,\subset}_{\Lambda_{1}}\) is
\[\lambda_{v_{1}^{\pm}}=\lambda_{v_{24}^{\pm}} =1 \lambda_{v_{2}^{\pm}}=\lambda_{v_{3}^{\pm}}=\lambda_{v_{22}^{\pm} }=\lambda_{v_{23}^{\pm}} =[4]_{q}\] \[\lambda_{v_{5}^{\pm}}=\lambda_{v_{20}^{\pm}} =\frac{[4]_{q}[5]_{q}}{[2]_{q}} \lambda_{\{v_{9},v_{12}\}}=\lambda_{\{v_{10},v_{11}\}}=\lambda_{ \{v_{13},v_{15}\}}=\lambda_{\{v_{14},v_{16}\}} =\frac{[4]_{q}[5]_{q}[6]_{q}}{[2]_{q}[3]_{q}}\] \[\lambda_{\{v_{4},v_{6}\}}=\lambda_{\{v_{19},v_{21}\}} =\frac{[4]_{q}[6]_{q}}{[3]_{q}} \lambda_{\{v_{7},v_{8}\}}=\lambda_{\{v_{17},v_{18}\}} =[3]_{q}[5]_{q}\]
We assume the graph for action by \(\Lambda_{2}\) is
To assist with constructing a cell system on this graph, we observe that the KW-cell solution on \(\Gamma^{4,8,\subset\mathbb{Z}_{2}}_{\Lambda_{1}}\) of Subsection 6.4 is gauge equivalent to a solution which is invariant under the following symmetry11:
Footnote 11: We failed to find this symmetric solution when initially solving the system, as it requires making several coefficients non-real.
\[v_{4}\leftrightarrow v_{6}\quad v_{7}\leftrightarrow v_{8}\quad v_{9} \leftrightarrow v_{12}\quad v_{10}\leftrightarrow v_{11}\quad v_{13} \leftrightarrow v_{15}\quad v_{14}\leftrightarrow v_{16}\quad v_{17} \leftrightarrow v_{18}\quad v_{19}\leftrightarrow v_{21}.\]
We can then identify \(\Gamma^{4,8,\subset}_{\Lambda_{1}}\) with the orbifold of \(\Gamma^{4,8,\subset\mathbb{Z}_{2}}_{\Lambda_{1}}\) under this symmetry (hence the suggestive vertex labels of \(\Gamma^{4,8,\subset}_{\Lambda_{1}}\)). Namely, we have the natural identifications suggested by the labelling of the vertices. We have the following identifications for the edges with multiplicity:
\[\begin{aligned}\alpha_{1}&\leftrightarrow\{v_{9}\to v_{7},v_{12}\to v_{8}\}&\alpha_{2}&\leftrightarrow\{v_{9}\to v_{8},v_{12}\to v_{7}\}\\ \beta_{1}&\leftrightarrow\{v_{7}\to v_{10},v_{8}\to v_{11}\}&\beta_{2}&\leftrightarrow\{v_{7}\to v_{11},v_{8}\to v_{10}\}\\ \gamma_{1}&\leftrightarrow\{v_{14}\to v_{17},v_{16}\to v_{18}\}&\gamma_{2}&\leftrightarrow\{v_{14}\to v_{18},v_{16}\to v_{17}\}\\ \lambda_{1}&\leftrightarrow\{v_{17}\to v_{13},v_{18}\to v_{15}\}&\lambda_{2}&\leftrightarrow\{v_{17}\to v_{15},v_{18}\to v_{13}\}.\end{aligned}\]
We can then use the (non-rigorous) orbifold procedure to deduce all the cells which only pass through the vertices
\[\{v_{4},v_{6}\},\quad\{v_{7},v_{8}\},\quad\{v_{9},v_{12}\},\quad\{v_{10},v_{11}\},\quad\{v_{13},v_{15}\},\quad\{v_{14},v_{16}\},\quad\{v_{17},v_{18}\},\quad\{v_{19},v_{21}\}.\]
In particular this determines the following two \(5\times 5\) blocks:
\[U^{\{v_{9},v_{12}\}}_{\{v_{10},v_{11}\}}=\begin{bmatrix}\frac{[3]_{q}[5]_{q}}{ [2]_{q}[4]_{q}[6]_{q}}&0&0&\frac{[3]_{q}[5]_{q}}{[2]_{q}[4]_{q}[6]_{q}}&-\frac {[3]_{q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}\\ 0&\frac{[5]_{q}}{[6]_{q}}-\frac{\sqrt{[4]_{q}[5]_{q}^{2}}}{\sqrt{[2]_{q}^{2}[3 ]_{q}^{2}[6]_{q}}}&-\frac{[5]_{q}}{[2]_{q}[3]_{q}}&0&0\\ 0&-\frac{[5]_{q}}{[2]_{q}[3]_{q}}&\sqrt{\frac{[3]_{q}[5]_{q}}{[2]_{q}[6]_{q}}} +\frac{\sqrt{[4]_{q}[5]_{q}^{2}}}{[2]_{q}[6]_{q}^{2}}&0&0\\ \frac{[3]_{q}[5]_{q}}{[2]_{q}[4]_{q}[6]_{q}}&0&0&\frac{[3]_{q}[5]_{q}}{[2]_{q}[ 4]_{q}[6]_{q}}&-\frac{[3]_{q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}\\ -\frac{[3]_{q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}&0&0&-\frac{[3]_{ q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}&\frac{[3]_{q}[5]_{q}}{\sqrt{[2]_{q}[4]_{q}^{ 2}[6]_{q}}}\end{bmatrix}\]
\[U^{\{v_{14},v_{16}\}}_{\{v_{13},v_{15}\}}=\begin{bmatrix}\frac{[3]_{q}[5]_{q}}{ [2]_{q}[4]_{q}[6]_{q}}&0&0&\frac{[3]_{q}[5]_{q}}{[2]_{q}[4]_{q}[6]_{q}}&\frac{[ 3]_{q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}\\ 0&\frac{[5]_{q}}{[6]_{q}}-\frac{\sqrt{[4]_{q}[5]_{q}^{2}}}{\sqrt{[2]_{q}^{2}[3 ]_{q}^{2}[6]_{q}}}&\frac{[5]_{q}}{[2]_{q}[3]_{q}}&0&0\\ 0&\frac{[5]_{q}}{[2]_{q}[3]_{q}}&\sqrt{\frac{[3]_{q}[5]_{q}}{[2]_{q}[6]_{q}}+ \frac{\sqrt{[4]_{q}[5]_{q}^{2}}}{[2]_{q}[6]_{q}}}&0&0\\ \frac{[3]_{q}[5]_{q}}{[2]_{q}[4]_{q}[6]_{q}}&0&0&\frac{[3]_{q}[5]_{q}}{[2]_{q}[ 4]_{q}[6]_{q}}&\frac{[3]_{q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}\\ \frac{[3]_{q}[5]_{q}^{2}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}&0&0&\frac{[3]_{q}[5]_{ q}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}}&\frac{[3]_{q}[5]_{q}}{\sqrt{[2]_{q}[4]_{q}^{2}[6]_{q}}} \end{bmatrix}\]
With this seed information, we can solve for the remaining U-cells. Solving the linear relations (R1), (R2), and (\(\text{Tr}(U_{1})\)) determines all but \(\approx 100\) coefficients. We can then fully solve (\(\text{Tr}(U_{1}U_{2})\)), which gives the diagonal entries of every \(U\) matrix. From (Hecke), we can determine all of the \(2\times 2\) and \(3\times 3\) blocks up to some phase choices. Finally, at this point (R3) contains enough linear equations to pin down the remaining coefficients, and to pin down the earlier phases.
In the interest of space, we only present the remaining two \(5\times 5\) blocks.
\[U^{\{v_{10},v_{11}\}}_{\{v_{9},v_{12}\}}=\begin{bmatrix}\{v_{4},v_{6}\}&v_{5}^{+}&v _{5}^{-}&v_{20}^{+}&v_{20}^{-}\\ \hline[\frac{6]_{q}}{[5]_{q}}&-\frac{1}{\sqrt{[3]_{q}[5]_{q}}}&-\frac{1}{\sqrt{ [3]_{q}[5]_{q}}}&\frac{-[3]_{q}+\mathbf{i}\sqrt{[4]_{q}[5]_{q}[6]_{q}}}{\sqrt{ [2]_{q}^{2}[3]_{q}[6]_{q}^{2}}}&\frac{-[3]_{q}+\mathbf{i}\sqrt{[4]_{q}[5]_{q}[6] _{q}}}{\sqrt{[2]_{q}^{2}[3]_{q}[6]_{q}^{2}}}\\ -\frac{1}{\sqrt{[3]_{q}[5]_{q}}}&\frac{[4]_{q}}{[2]_{q}[6]_{q}}&-\frac{[5]_{q}} {[2]_{q}[6]_{q}}&-\frac{1+\frac{1}{\sqrt{[4]_{q}}}{\sqrt{[6]_{q}}}}{[3]_{q}}& \frac{[5]_{q}}{[2]_{q}[6]_{q}}&\frac{[5]_{q}}{[2]_{q}[6]_{q}}\\ \frac{-[3]_{q}-\mathbf{i}\sqrt{[4]_{q}[5]_{q}[6]_{q}}}{\sqrt{[2]_{q}^{2}[3]_{q }[6]_{q}^{2}}}&-\frac{1-\frac{1}{\sqrt{[4]_{q}}}{\sqrt{[6]_{q}}}}{[3]_{q}}& \frac{[5]_{q}}{[2]_{q}[6]_{q}}&\frac{[5]_{q}}{[6]_{q}}&-\frac{1}{[6]_{q}}\\ \frac{-[3]_{q}-\mathbf{i}\sqrt{[4]_{q}[5]_{q}[6]_{q}}}{\sqrt{[2]_{q}^{2}[3]_{ q}[6]_{q}^{2}}}&\frac{[5]_{q}}{[2]_{q}[6]_{q}}&-\frac{1-\frac{\mathbf{i} \sqrt{[4]_{q}}}{\sqrt{[6]_{q}}}}{[3]_{q}}&-\frac{1}{[6]_{q}}&\frac{[5]_{q}}{[6] _{q}}\end{bmatrix}\]
\[U^{\{v_{13},v_{15}\}}_{\{v_{14},v_{16}\}}=\begin{bmatrix}v_{5}^{+}&v_{5}^{-}&\{v _{19},v_{21}\}&v_{20}^{+}&v_{20}^{-}\\ \hline[\frac{[5]_{q}}{[6]_{q}}&-\frac{1}{[6]_{q}}&\frac{-[3]_{q}-\mathbf{i} \sqrt{[4]_{q}[5]_{q}[6]_{q}}}{\sqrt{[2]_{q}^{2}[3]_{q}[6]_{q}^{2}}}&\frac{[5]_ {q}}{[2]_{q}[6]_{q}}&-\frac{1-\frac{1}{\sqrt{[4]_{q}}}{\sqrt{[6]_{q}}}}{[3]_{q }}\\ -\frac{1}{[6]_{q}}&\frac{[5]_{q}}{[6]_{q}}&\frac{-[3]_{q}-\mathbf{i}\sqrt{[4]_ {q}[5]_{q}[6]_{q}}}{\sqrt{[2]_{q}^{2}[3]_{q}[6]_{q}^{2}}}&\frac{-1-\frac{1}{ \sqrt{[4]_{q}}}{\sqrt{[6]_{q}}}}{[3]_{q}}&\frac{[5]_{q}}{[2]_{q}[6]_{q}}\\ \frac{-[3]_{q}+\mathbf{i}\sqrt{[4]_{q}[5]_{q}[6]_{q}}}{\sqrt{[2]_{q}^{2}[3]_{q }[6]_{q}^{2}}}&\frac{-[3]_{q}+\mathbf{i}\sqrt{[4]_{q}[5]_{q}[6]_{q}}}{\sqrt{[2] _{q}^{2}[3]_{q}[6]_{q}^{2}}}&\frac{[6]_{q}}{[5]_{q}}&-\frac{1}{\sqrt{[3]_{q}[5] _{q}}}&-\frac{1}{\sqrt{[3]_{q}[5]_{q}}}\\ \frac{[5]_{q}}{[2]_{q}[6]_{q}}&-\frac{1+\frac{1}{\sqrt{[4]_{q}}}{\sqrt{[6]_{q} }}}{[3]_{q}}&-\frac{1}{\sqrt{[3]_{q}[5]_{q}}}&\frac{[4]_{q}}{[2]_{q}[6]_{q}}& -\frac{[5]_{q}}{[2]_{q}[6]_{q}}\\ -\frac{1+\frac{1}{\sqrt{[4]_{q}}}{\sqrt{[6]_{q}}}}{[3]_{q}}&\frac{[5]_{q}}{[2]_ {q}[6]_{q}}&-\frac{1}{\sqrt{[3]_{q}[5]_{q}}}&-\frac{[5]_{q}}{[2]_{q}[6]_{q}}& \frac{[4]_{q}}{[2]_{q}[6]_{q}}\end{bmatrix}\]
The full solution for the U-cells can be found in the Mathematica file "k=8/Conformal Inclusion/Solution.nb". In this file we use the ordering:
\[\{v_{1}^{+},v_{1}^{-},v_{2}^{+},v_{2}^{-},v_{3}^{+},v_{3}^{-},\{v_{4},v_{6}\},v_{5}^{+},v_{5}^{-},\{v_{7},v_{8}\},\{v_{9},v_{12}\},\{v_{10},v_{11}\},\{v_{13},v_{15}\},\{v_{14},v_{16}\},\{v_{17},v_{18}\},\{v_{19},v_{21}\},v_{20}^{+},v_{20}^{-},v_{22}^{+},v_{22}^{-},v_{23}^{+},v_{23}^{-},v_{24}^{+},v_{24}^{-}\}.\]
A computer verifies relations (R1), (R2), (R3), and (Hecke) in 2 hours. A record of this verification can be found in "k=8/Conformal Inclusion/Verification.nb". Note that this verification removes any dependency we had on using the non-rigorous orbifold procedure to obtain certain values for our U-cells.
We now solve (RI) and (BA) to obtain a 1-dimensional solution space for our B-cells. The equations (N) determine this solution up to a choice of phase, which we pick a natural choice for. The solution for the B-cells can be found in "k=8/Conformal Inclusion/Solution.nb". A computer verifies relations (RI), (BA), and (N) in just over 3 minutes for this solution. A record of this verification can be found in "k=8/Conformal Inclusion/Verification.nb".
**Theorem 6.13**.: _There exists a rank 24 module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},8)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,8,\subset}_{\Lambda_{1}}\)._
We also find KW cell system solutions on the graph \(\Gamma^{4,8,\subset}_{\Lambda_{1}}\) when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). The solutions and verification of these solutions can be found in the folder "k=8/Conformal Inclusion/Solutions".
**Theorem 6.14**.: _For each \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\) there exists a rank 24 module category \(\mathcal{M}\) for \(\overline{\mathrm{Rep}(U_{e^{2\pi i\frac{1}{24}}}(\mathfrak{sl}_{4}))^{\omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,8,\subset}_{\Lambda_{1}}\)._
### A Third Exceptional Module for \(Su(4)\) at Level \(8\)
In this subsection we construct a cell system with parameters \(N=4\) and \(q=e^{2\pi i\frac{1}{24}}\) on the following graph
and hence an exceptional module category over \(\overline{\mathrm{Rep}\left(U_{e^{2\pi i\frac{1}{24}}}(\mathfrak{sl}_{4})\right)}\). This module corresponds to the exceptional triple found in [1], constructed from the exceptional braided auto-equivalence of \(\mathcal{C}(\mathfrak{sl}_{4},8)^{0}_{\mathrm{Rep}(\mathbb{Z}_{4})}\).
The Frobenius-Perron eigenvector of this graph is
\[\lambda=\{1,[4]_{q},\frac{[4]_{q}}{[3]_{q}},\frac{[4]_{q}^{2}[6]_{ q}}{[3]_{q}^{2}},\frac{[3]_{q}[4]_{q}[5]_{q}}{[2]_{q}[6]_{q}},[4]_{q}^{2}[3]_{q}, \frac{[4]_{q}[6]_{q}}{[2]_{q}[6]_{q}},\frac{[3]_{q}[4]_{q}[5]_{q}}{[2]_{q}[6]_{ q}},\frac{[4]_{q}[5]_{q}}{[2]_{q}[3]_{q}},\frac{[4]_{q}}{[3]_{q}},\] \[\frac{[4]_{q}}{[3]_{q}},\frac{[4]_{q}[5]_{q}}{[2]_{q}[3]_{q}}, \frac{[3]_{q}[4]_{q}[5]_{q}}{[2]_{q}[6]_{q}},\frac{[4]_{q}[6]_{q}}{[2]_{q}}, \frac{[4]_{q}^{2}[5]_{q}}{[3]_{q}},\frac{[4]_{q}}{[2]_{q}[6]_{q}},\frac{[4]_{q} ^{2}[6]_{q}}{[3]_{q}},\frac{[4]_{q}}{[3]_{q}},[4]_{q},\frac{[5]_{q}[6]_{q}}{[2] _{q}[3]_{q}},\frac{[4]_{q}^{2}}{[3]_{q}},\] \[\frac{[2]_{q}[3]_{q}[5]_{q}}{[6]_{q}},\frac{[4]_{q}[6]_{q}}{[3]_{q}}, \frac{[4]_{q}[5]_{q}}{[2]_{q}},\frac{[5]_{q}[6]_{q}}{[2]_{q}},\frac{[4]_{q}[6] _{q}}{[3]_{q}},\frac{[3]_{q}[5]_{q}}{[2]_{q}[6]_{q}},\frac{[3]_{q}[5]_{q}}{[2] _{q}[6]_{q}},\frac{[5]_{q}[6]_{q}}{[2]_{q}[3]_{q}},\frac{[4]_{q}}{[2]_{q}}\}\]
We assume the graph for action by \(\Lambda_{2}\) is
\[\Gamma^{4,8,\mathrm{Twist}}_{\Lambda_{2}}=\raisebox{-14.226378pt}{\includegraphics[]{14.eps}}\]
The data of these graphs can be found in "k=8/AmbichiralTwist/Data.nb".
To find a solution to the U-cells on \(\Gamma^{4,8,\text{Twist}}_{\Lambda_{1}}\), we make the naive assumption that all our coefficients are real, and hence our \(U\) matrices are symmetric by (R2). Solving the linear equations (R1) and (\(\text{Tr}(U_{1})\)) determines 417 of the 616 coefficients.
The quadratic equation (\(\text{Tr}(U_{1}U_{2})\)) is especially useful for this example. Several of the equations coming from (\(\text{Tr}(U_{1}U_{2})\)) are in fact linear. Solving these linear equations, and substituting the values back into (\(\text{Tr}(U_{1}U_{2})\)), then turns other equations into linear ones. This cascades, and allows us to completely solve (\(\text{Tr}(U_{1}U_{2})\)). This determines the diagonals of all the blocks in our solution, which gives 35 of the remaining coefficients.
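This cascading strategy is easy to phrase generically: repeatedly solve whatever equations have become linear in a single unknown, substitute, and loop. A toy Python/sympy sketch of the idea (our own code, not the notebook implementation; we assume polynomial equations in the unknown coefficients):

```python
import sympy as sp

def cascade_solve(equations, unknowns):
    """Repeatedly solve any equation that has become linear in exactly one
    remaining unknown, substitute the value everywhere, and loop until
    nothing new is learned. Returns the solved values and leftover equations."""
    eqs, sol = [sp.expand(e) for e in equations], {}
    progress = True
    while progress:
        progress, remaining = False, []
        for eq in eqs:
            eq = sp.expand(eq.subs(sol))
            if eq == 0:
                continue
            free = [u for u in unknowns if eq.has(u)]
            if len(free) == 1 and sp.degree(eq, free[0]) == 1:
                sol[free[0]] = sp.solve(eq, free[0])[0]
                progress = True
            else:
                remaining.append(eq)
        eqs = remaining
    return sol, eqs

# Toy cascade: the first equation is linear, and substituting its solution
# turns the quadratic x*y - 6 into a linear equation for y.
x, y = sp.symbols('x y')
print(cascade_solve([x - 2, x * y - 6], [x, y]))  # ({x: 2, y: 3}, [])
```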
We then use (Hecke) to determine the remaining \(2\times 2\) and \(3\times 3\) blocks, up to some sign ambiguities which we will resolve later. This leaves one \(5\times 5\) block and five \(4\times 4\) blocks to determine, i.e. 40 coefficients. At this point, many of the equations from (R3) are now linear. Solving these determines the \(4\times 4\) and \(5\times 5\) blocks. We also use (R3) to resolve the sign ambiguities from earlier. Finally we choose an arbitrary real gauge to get a concrete solution.
The solution for the U-cells consists of 616 complex numbers. We present just the \(5\times 5\) block here in the interest of space, as this is the largest (and hence most difficult to determine) block in our solution. The rest of the solution can be found in the Mathematica notebook "k=8/AmbichiralTwist/Solutions/w=1/Solution.nb".
A computer verifies (R1), (R2), (R3), and (Hecke) for our solution in under two minutes. A record of this verification can be found in "k=8/AmbichiralTwist/Solutions/w=1/Verification.nb".
Solving the linear equations (RI) and (BA) determines the B-cells up to a single scalar, and (N) determines this scalar up to a phase. We pick a natural choice for this phase to obtain a concrete solution for our B-cells. This solution consists of 536 complex scalars. This solution can also be found in "k=8/AmbichiralTwist/Solutions/w=1/Solution.nb". A computer verifies relations (BA), (RI), and (N) in under 30 seconds for our solution. A record of this verification can also be found in "k=8/AmbichiralTwist/Solutions/w=1/Verification.nb".
**Theorem 6.15**.: _There exists a rank 32 module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},8)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,8,\text{Twist}}_{\Lambda_{1}}\)._
Note that this theorem gives a construction of the exceptional braided auto-equivalence of \(\mathcal{C}(\mathfrak{sl}_{4},8)^{0}_{\text{Rep}(\mathbb{Z}_{4})}\), independent from the construction in [1].
We also find KW cell system solutions on the graph \(\Gamma^{4,8,\text{Twist}}_{\Lambda_{1}}\) when \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\). The solutions and verification of these solutions can be found in the folder "k=8/AmbichiralTwist/Solutions".
**Theorem 6.16**.: _For each \(\omega\in\{-1,\mathbf{i},-\mathbf{i}\}\) there exists a rank 32 module category \(\mathcal{M}\) for \(\overline{\text{Rep}(U_{e^{2\pi i\frac{1}{24}}}(\mathfrak{sl}_{4}))^{\omega}}\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,8,\text{Twist}}_{\Lambda_{1}}\)._
### Charge-Conjugation Modules for \(Su(4)\) at Level \(k\)
In this subsection, we will construct a family of KW cell systems on the graphs \(\Gamma^{4,k,*}_{\Lambda_{1}}\), where \(N=4\) and \(q=e^{2\pi i\frac{1}{2(4+k)}}\) for all \(k\in\mathbb{N}\). These will construct the _charge conjugation_ module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\) for all \(k\).
Following [1] we define the graph \(\Gamma^{4,k,*}_{\Lambda_{1}}\) as follows. The vertices of \(\Gamma^{4,k,*}_{\Lambda_{1}}\) are \(a=(i,j)\) such that \(i+2j\leq k+3\). There is an edge in \(\Gamma^{4,k,*}_{\Lambda_{1}}\) from \(a\to b\) if
\[a-b\in\{\varepsilon_{-2}:=(1,-1),\quad\varepsilon_{-1}:=(-1,0),\quad \varepsilon_{1}:=(1,0),\quad\varepsilon_{2}:=(-1,1)\}.\]
When \(k\) is odd, the graph \(\Gamma_{\Lambda_{1}}^{4,k,*}\) is of the form:
When \(k\) is even, the graph \(\Gamma_{\Lambda_{1}}^{4,k,*}\) is of the form:
For \(a\in\Gamma_{\Lambda_{1}}^{4,k,*}\) and \(j\in\{1,2\}\), define \(\overline{a}_{j}:=\sum_{i=j}^{2}a_{i}\), and \(\overline{a}_{-j}=-\overline{a}_{j}\). The positive eigenvector \(\lambda\) is given by the following formula:
\[\lambda_{a}=[\overline{a}_{1}][\overline{a}_{2}][\overline{a}_{1}-\overline{ a}_{2}][\overline{a}_{1}+\overline{a}_{2}].\]
The following values will be useful
\[\lambda_{a\pm\epsilon_{1}} =[\overline{a}_{1}\pm 1][\overline{a}_{2}][\overline{a}_{1}- \overline{a}_{2}\pm 1][\overline{a}_{1}+\overline{a}_{2}\pm 1]\] \[\lambda_{a\pm\epsilon_{2}} =[\overline{a}_{1}][\overline{a}_{2}\pm 1][\overline{a}_{1}-\overline{a}_{2} \mp 1][\overline{a}_{1}+\overline{a}_{2}\pm 1]\] \[\lambda_{a\pm(\epsilon_{1}+\epsilon_{2})} =[\overline{a}_{1}\pm 1][\overline{a}_{2}\pm 1][\overline{a}_{1}- \overline{a}_{2}][\overline{a}_{1}+\overline{a}_{2}\pm 2]\] \[\lambda_{a\pm(\epsilon_{1}-\epsilon_{2})} =[\overline{a}_{1}\pm 1][\overline{a}_{2}\mp 1][\overline{a}_{1}- \overline{a}_{2}\pm 2][\overline{a}_{1}+\overline{a}_{2}].\]
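Since \(\lambda\) and its shifts are used repeatedly below, a direct numerical transcription may be helpful. The following Python sketch (our own helper names) spot-checks the second of the listed identities at \(k=5\):

```python
import cmath

def qint(n, q):
    """Quantum integer [n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - q**(-1))

def lam(a, q):
    """lambda_a = [abar_1][abar_2][abar_1 - abar_2][abar_1 + abar_2],
    with abar_1 = a_1 + a_2 and abar_2 = a_2."""
    a1, a2 = a
    b1, b2 = a1 + a2, a2
    return qint(b1, q) * qint(b2, q) * qint(b1 - b2, q) * qint(b1 + b2, q)

k = 5
q = cmath.exp(2j * cmath.pi / (2 * (4 + k)))

# Spot-check lambda_{a + eps_2} for a = (2, 2), where a + eps_2 = (a_1 - 1, a_2 + 1)
# and (abar_1, abar_2) = (4, 2), against the stated closed form.
a1, a2 = 2, 2
b1, b2 = a1 + a2, a2
closed_form = (qint(b1, q) * qint(b2 + 1, q)
               * qint(b1 - b2 - 1, q) * qint(b1 + b2 + 1, q))
print(abs(lam((a1 - 1, a2 + 1), q) - closed_form) < 1e-9)  # True
```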
A solution for the U-cells on \(\Gamma_{\Lambda_{1}}^{4,k,*}\) is computed in [1]. They find
\[U^{a}_{\ \ a+\varepsilon_{i}+\varepsilon_{j}}=\frac{1}{[\overline{a}_{i}-\overline{a}_{j}]}\begin{bmatrix}[\overline{a}_{i}-\overline{a}_{j}+1]&\sqrt{[\overline{a}_{i}-\overline{a}_{j}+1][\overline{a}_{i}-\overline{a}_{j}-1]}\\ \sqrt{[\overline{a}_{i}-\overline{a}_{j}+1][\overline{a}_{i}-\overline{a}_{j}-1]}&[\overline{a}_{i}-\overline{a}_{j}-1]\end{bmatrix}\qquad\text{if }i\neq\pm j,\]

with rows and columns indexed by the middle vertices \(a+\varepsilon_{i}\) and \(a+\varepsilon_{j}\), together with the diagonal cells \(U^{a}_{\ \ a}\), whose entries involve the quantities \(w_{a,i}\),
where
\[w_{a,i}:=\begin{cases}\frac{\lambda_{a}[2\overline{a}_{i}+2]+\lambda_{a+\varepsilon_ {i}}}{[2\overline{a}_{i}+1]}&\text{if }[2\overline{a}_{i}+1]\neq 0\\ \frac{\lambda_{a}[2\overline{a}_{i}-2]-\sum_{j\neq i}\overline{[\overline{a}_ {i}+\overline{a}_{j}-3]}\lambda_{a+\varepsilon_{j}}}{[2\overline{a}_{i}-3]}& \text{if }[2\overline{a}_{i}+1]=0\end{cases}\]
This solution satisfies (R2), (R3), and (Hecke). We can directly verify that it also satisfies (R1).
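For readers wishing to evaluate \(w_{a,i}\) numerically, the first branch transcribes directly into Python as follows (our own code and naming; the shifts of \((\overline{a}_{1},\overline{a}_{2})\) under the \(\varepsilon_{i}\) are read off from the 'useful values' above, and the sample point is chosen so that \([2\overline{a}_{i}+1]\neq 0\)):

```python
import cmath

def qint(n, q):
    """Quantum integer [n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - q**(-1))

def lam_bars(b1, b2, q):
    """lambda in terms of (abar_1, abar_2) = (b1, b2)."""
    return qint(b1, q) * qint(b2, q) * qint(b1 - b2, q) * qint(b1 + b2, q)

# Shifts of (abar_1, abar_2) under eps_{+-1} and eps_{+-2}.
SHIFT = {1: (1, 0), -1: (-1, 0), 2: (0, 1), -2: (0, -1)}

def w(b1, b2, i, q):
    """First branch of w_{a,i}; valid whenever [2*abar_i + 1] != 0."""
    bi = b1 if abs(i) == 1 else b2
    bi = bi if i > 0 else -bi  # abar_{-j} = -abar_j
    s1, s2 = SHIFT[i]
    num = lam_bars(b1, b2, q) * qint(2 * bi + 2, q) + lam_bars(b1 + s1, b2 + s2, q)
    return num / qint(2 * bi + 1, q)

q = cmath.exp(2j * cmath.pi / 18)  # k = 5
print(w(3, 1, 1, q))               # w_{a,1} at the vertex with (abar_1, abar_2) = (3, 1)
```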
**Lemma 6.17**.: _The above solution for the U-cells on \(\Gamma^{4,k,*}_{\Lambda_{1}}\) satisfies (R1)._
Proof.: Let \(a\) be any vertex in \(\Gamma^{4,k,*}_{\Lambda_{1}}\). We have to show for all \(i\in\{-2,-1,1,2\}\) that
\[\frac{w_{a,i}}{\lambda_{a}}+\sum_{j\neq\pm i}\frac{[\overline{a}_{i}-\overline{a}_{j}+1]}{[\overline{a}_{i}-\overline{a}_{j}]}\frac{\lambda_{a+\varepsilon_{i}+\varepsilon_{j}}}{\lambda_{a+\varepsilon_{i}}}=[3].\]
In all four cases, this equality can be verified for generic \(q\) by a computer12 for both forms of \(w_{a,i}\), by expanding out the expression in terms of the variable \(q\). For the first form of \(w_{a,i}\), we have to simplify \(\frac{[2\overline{a}_{i}+1]}{[2\overline{a}_{i}+1]}\), and for the second form we have to simplify \(\frac{[2\overline{a}_{i}-3]}{[2\overline{a}_{i}-3]}\). Hence this equality holds for all \(q\).
Footnote 12: This could be done by hand, but a computer is faster, and more accurate.
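The technique of expanding out the expression in terms of the variable \(q\) can be reproduced in any computer algebra system by treating \(q^{\overline{a}_{i}}\) as a formal symbol in its own right. As a minimal Python/sympy illustration of the technique, we verify the standard quantum-integer recursion \([2][n]=[n+1]+[n-1]\) for formal \(n\) (the full (R1) identity is checked the same way, just with more terms):

```python
import sympy as sp

q, x = sp.symbols('q x', positive=True)  # x plays the role of q**n for a formal n

def qi(p):
    """Quantum integer [m] written multiplicatively: pass q**m
    (possibly involving x = q**n), get (q**m - q**{-m}) / (q - q**{-1})."""
    return (p - 1 / p) / (q - 1 / q)

lhs = qi(q**2) * qi(x)        # [2] * [n]   (note [2] = qi(q**2))
rhs = qi(x * q) + qi(x / q)   # [n+1] + [n-1]
print(sp.simplify(lhs - rhs))  # prints 0, so the identity holds for formal n
```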
To obtain a solution for the B-cells, we solve the linear system from (BA) and (RI), and normalise with (N). This gives the following:
\[B_{a,\_,a+2\varepsilon_{i},\_}=\begin{bmatrix}0\end{bmatrix}\]

(indexed by the single middle vertex \(a+\varepsilon_{i}\)),
\[B_{a,\_,a+\varepsilon_{i}+\varepsilon_{j},\_}=\frac{\operatorname{sign}(i)\operatorname{sign}(j)}{\sqrt{[4]!}}\begin{bmatrix}\frac{[\overline{a}_{j}-\overline{a}_{i}-1]}{[\overline{a}_{j}-\overline{a}_{i}]}\sqrt{\frac{\lambda_{a+\varepsilon_{i}+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{[\overline{a}_{i}+\overline{a}_{j}][\overline{a}_{i}+\overline{a}_{j}+2]}{[\overline{a}_{i}+\overline{a}_{j}+1]^{2}}}\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+\varepsilon_{j}}}{\lambda_{a}^{2}}}\\ \sqrt{\frac{[\overline{a}_{i}+\overline{a}_{j}][\overline{a}_{i}+\overline{a}_{j}+2]}{[\overline{a}_{i}+\overline{a}_{j}+1]^{2}}}\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+\varepsilon_{j}}}{\lambda_{a}^{2}}}&\frac{[\overline{a}_{i}-\overline{a}_{j}-1]}{[\overline{a}_{i}-\overline{a}_{j}]}\sqrt{\frac{\lambda_{a+\varepsilon_{i}+\varepsilon_{j}}}{\lambda_{a}}}\end{bmatrix}\]

with rows and columns indexed by the middle vertices \(a+\varepsilon_{i}\) and \(a+\varepsilon_{j}\), and
\[B_{a\_a,a\_}=\begin{smallmatrix}\frac{1}{\lambda_{a}[\overline{a}_{i}+1][ \overline{a}_{i}-\overline{a}_{j}-\overline{a}_{i}-1][\overline{a}_{i}+ \overline{a}_{j}]}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{[\overline{a}_{i}+\overline{a}_{ j}][\overline{a}_{i}+\overline{a}_{j}+2]}{[\overline{a}_{i}+\overline{a}_{j}+1]^{2}}} \sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+\varepsilon_{j}}}{\lambda_ {a}^{2}}}&a+\varepsilon_{i}\\ \sqrt{\frac{[\overline{a}_{i}+\overline{a}_{j}][\overline{a}_{i}+\overline{a} _{j}+2]}{[\overline{a}_{i}+\overline{a}_{j}+1]^{2}}}\sqrt{\frac{\lambda_{a+ \varepsilon_{i}}\lambda_{a+\varepsilon_{j}}}{\lambda_{a}^{2}}}&\frac{[ \overline{a}_{i}-\overline{a}_{j}-1]}{[\overline{a}_{i}-\overline{a}_{j}]} \sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+\varepsilon_{j}}}{\lambda_ {a}}}\\ \end{smallmatrix}a+\varepsilon_{i}\]
\[B_{a\_a,a\_}=\begin{smallmatrix}\frac{1}{\lambda_{a}[\overline{a}_{i}+1][ \overline{a}_{i}-\overline{a}_{j}-\overline{a}_{i}-1][\overline{a}_{i}+ \overline{a}_{j}]}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a +\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{a+ \varepsilon_{j}}}{\lambda_{a}}}&\sqrt{\frac{\lambda_{a+\varepsilon_{i}}\lambda_{ a+\varepsilon_{j}}}{\lambda_{a}}}&\sqrt{
To verify that \(B_{a,\_,a,\_}\) is an eigenmatrix for \(U^{a}_{\ \ a}\), we enlist the help of a computer (as in Lemma 6.17) to express both \(U^{a}_{\ \ a}\cdot\overline{B_{a,\_,a,\_}}\) and \([2]B_{a,\_,a,\_}\) in terms of a formal variable \(q\). We find equality by comparing terms. As our expression for \(U^{a}_{\ \ a}\) (in particular the \(w_{a,i}\) term) changes depending on the value of \([2\overline{a}_{i}+1]\), we have to consider both cases here. In both cases we perform a division, which is valid only if \([2\overline{a}_{i}+1]\neq 0\) in the first case, and \([2\overline{a}_{i}-3]\neq 0\) in the second case.
The relation (RI) can easily be verified by hand. We compute one case as an example. We let \(b:=a+\epsilon_{2}\), so that \(\overline{b}_{1}=\overline{a}_{1}\) and \(\overline{b}_{2}=\overline{a}_{2}+1\).
\[\begin{aligned}\frac{\lambda_{a+\epsilon_{2}}}{\lambda_{a}}B_{a+\epsilon_{2},a,a+\epsilon_{1},a}&=\frac{\lambda_{a+\epsilon_{2}}}{\lambda_{a}}B_{b,b+\epsilon_{-2},b+\epsilon_{1}+\epsilon_{-2},b+\epsilon_{-2}}\\ &=\frac{-1}{\sqrt{[4]!}}\frac{\lambda_{a+\epsilon_{2}}}{\lambda_{a}}\frac{[\overline{b}_{1}-\overline{b}_{-2}-1]}{[\overline{b}_{1}-\overline{b}_{-2}]}\sqrt{\frac{\lambda_{b+\epsilon_{1}+\epsilon_{-2}}}{\lambda_{b}}}\\ &=\frac{-1}{\sqrt{[4]!}}\frac{\lambda_{a+\epsilon_{2}}}{\lambda_{a}}\frac{[\overline{a}_{1}+\overline{a}_{2}]}{[\overline{a}_{1}+\overline{a}_{2}+1]}\sqrt{\frac{\lambda_{a+\epsilon_{1}}}{\lambda_{a+\epsilon_{2}}}}\\ &=\frac{-1}{\sqrt{[4]!}}\frac{1}{\lambda_{a}}\frac{[\overline{a}_{1}+\overline{a}_{2}]}{[\overline{a}_{1}+\overline{a}_{2}+1]}\sqrt{\lambda_{a+\epsilon_{1}}\lambda_{a+\epsilon_{2}}}\\ &=(-1)^{4+1}\cdot 1\cdot B_{a,a+\epsilon_{2},a,a+\epsilon_{1}}.\end{aligned}\]
The remaining cases all follow the same procedure.
We thus have a KW-cell system on \(\Gamma^{4,k,*}_{\Lambda_{1}}\) for all \(k\geq 1\). This gives the following theorem.
**Theorem 6.19**.: _Let \(k\in\mathbb{N}_{\geq 1}\). Then there exists a module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},k)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma^{4,k,*}_{\Lambda_{1}}\)._
### Unfolded Charge-Conjugation Modules for \(Su(4)\) at Level \(k\)
In this subsection, we will construct a family of KW cell systems on the graphs \(\Gamma^{4,k,\overline{Z}_{2}}_{\Lambda_{1}}\), where \(N=4\) and \(q=e^{2\pi i\frac{1}{2(4+k)}}\) for all \(k\in\mathbb{N}\). These will construct the _unfolded charge conjugation_ module categories over \(\mathcal{C}(\mathfrak{sl}_{4},k)\) for all \(k\).
We define the graph \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) as follows. The vertices of \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) are \(a=(i,j,\delta)\) such that \(i+2j\leq k+3\), and \(\delta=\pm\). There is an edge in \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) from \((a,\delta_{1})\to(b,\delta_{2})\) if
\[a-b\in\{\varepsilon_{-2}:=(1,-1),\quad\varepsilon_{-1}:=(-1,0),\quad \varepsilon_{1}:=(1,0),\quad\varepsilon_{2}:=(-1,1)\},\]
and
\[\delta_{1}\delta_{2}=\begin{cases}+&\text{ if }\overline{a}_{1}-\overline{a}_{2} \text{ is odd}\\ -&\text{ if }\overline{a}_{1}-\overline{a}_{2}\text{ is even}\end{cases}\]
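The combinatorics of \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) are easy to generate by machine. A short Python sketch (our own; we assume the coordinates satisfy \(i,j\geq 1\) as for \(\Gamma^{4,k,*}_{\Lambda_{1}}\), and we read the parity condition off the source vertex):

```python
from itertools import product

# The four edge vectors epsilon_{-2}, epsilon_{-1}, epsilon_1, epsilon_2 above.
EPS = {-2: (1, -1), -1: (-1, 0), 1: (1, 0), 2: (-1, 1)}

def vertices(k):
    # assumption: i, j >= 1, matching the vertex set of Gamma^{4,k,*}
    return [(i, j, d) for i in range(1, k + 3) for j in range(1, k + 3)
            for d in (+1, -1) if i + 2 * j <= k + 3]

def target_sign(a):
    # delta_1 * delta_2 = + iff abar_1 - abar_2 = i is odd (read off the source a)
    i, j = a
    return +1 if i % 2 == 1 else -1

def edges(k):
    vs = vertices(k)
    return [((i1, j1, d1), (i2, j2, d2))
            for (i1, j1, d1), (i2, j2, d2) in product(vs, vs)
            if (i1 - i2, j1 - j2) in EPS.values()      # a - b is some epsilon
            and d1 * d2 == target_sign((i1, j1))]      # sign rule above

print(len(vertices(3)), len(edges(3)))  # vertex/edge counts of Gamma^{4,3,Z_2^*}
```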
When \(k\) is odd, the graph \(\Gamma^{4,k,\overline{Z}_{2}^{*}}_{\Lambda_{1}}\) is of the form:
When \(k\) is even, the graph \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) is of the form:
From the definition of the graph \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) we have that
is a valid path in \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) if and only if
is a valid path in \(\Gamma^{4,k,*}_{\Lambda_{1}}\). Furthermore, for each valid path in \(\Gamma^{4,k,*}_{\Lambda_{1}}\) of the above form, there are exactly two valid paths in \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) of the above form (corresponding to the choice of \(\delta_{1}=\pm\)).
The positive eigenvector \(\lambda\) for \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) is essentially the same as for the graph \(\Gamma^{4,k,*}_{\Lambda_{1}}\).
\[\lambda_{a,\delta}=[\overline{a}_{1}][\overline{a}_{2}][\overline{a}_{1}- \overline{a}_{2}][\overline{a}_{1}+\overline{a}_{2}].\]
The graph \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) inherits the cell system from the graph \(\Gamma^{4,k,*}_{\Lambda_{1}}\). More precisely, we have
\[U^{(a,\delta_{1}),(b,\delta_{2})}_{(c,\delta_{3}),(d,\delta_{4})}:=U^{a,b}_{c, d}\qquad B_{(e,\delta_{5}),(f,\delta_{6}),(g,\delta_{7}),(h,\delta_{8})}:=B_{e,f,g,h}\]
where \(U^{a,b}_{c,d}\) and \(B_{e,f,g,h}\) are the cell system solution on \(\Gamma^{4,k,*}_{\Lambda_{1}}\) from Subsection 6.7.
**Lemma 6.20**.: _The KW cell system on \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) above satisfies (R1), (R2), (R3), (Hecke), (BA) and (RI) with \(\omega=1\)._
Proof.: This follows by construction of \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\), along with the fact that the positive eigenvector for \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) is essentially the same as for \(\Gamma^{4,k,*}_{\Lambda_{1}}\). For an example, we will show (R1) holds. The other relations follow in the same manner.
Let \((a,\delta_{1}),(b,\delta_{2})\in\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\), we compute
\[=\sum_{p:b\to p}\frac{\lambda_{p}}{\lambda_{b}}KW\left(\,\cdots\,\right)=\cdots\]

The diagrams in this computation are omitted; since the cells and the positive eigenvector of \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) are inherited from \(\Gamma^{4,k,*}_{\Lambda_{1}}\), the computation reduces term by term to the corresponding computation for \(\Gamma^{4,k,*}_{\Lambda_{1}}\), which establishes (R1).
The Frobenius-Perron eigenvector for \(\Gamma^{4,k,Z_{4}^{*}}_{\Lambda_{1}}\) is easily described in terms of the eigenvector for \(\Gamma^{4,k,*}_{\Lambda_{1}}\). Note that we will overload the symbol \(\lambda\) for both these eigenvectors, as the indexing resolves the ambiguity. We have
\[\lambda_{(a,\delta)} =\lambda_{a}=[\overline{a}_{1}][\overline{a}_{2}][\overline{a}_{ 1}-\overline{a}_{2}][\overline{a}_{1}+\overline{a}_{2}]\] \[\lambda_{(a,\delta,\chi_{\ell})} =\frac{\lambda_{a}}{2}=\frac{[\overline{a}_{1}][\overline{a}_{2} ][\overline{a}_{1}-\overline{a}_{2}][\overline{a}_{1}+\overline{a}_{2}]}{2}.\]
To build a cell system solution on \(\Gamma^{4,k,Z_{4}^{*}}_{\Lambda_{1}}\), we observe that the U-cells for our cell system solution on \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) respect the horizontal flip symmetry (however the B-cells do not). This means we can orbifold the U-cell solution on \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\) to get a potential U-cell solution on \(\Gamma^{4,k,Z_{4}^{*}}_{\Lambda_{1}}\). From the results of [1] we get that this solution will satisfy (R2), (R3), and (Hecke). Following their construction gives the following.
For any path that doesn't pass through a split vertex, the corresponding U-cell is directly inherited from the solution on \(\Gamma^{4,k,Z_{2}^{*}}_{\Lambda_{1}}\). Let \(a=(i,j)\). For the remaining U-cells we obtain:
If \(\overline{a}_{1}=\frac{k}{2}\), then
\[U^{(a,\delta)}_{(a+\epsilon_{1},\delta^{\prime})} =\frac{(a+\varepsilon_{1},\delta^{\prime})}{[\overline{a}_{1}- \overline{a}_{2}]}\bigg{[}\begin{matrix}[\overline{a}_{1}\mp\overline{a}_{2}+1 ]&\sqrt{[\overline{a}_{1}\mp\overline{a}_{2}+1][\overline{a}_{1}\mp\overline{a }_{2}-1]}\\ \sqrt{[\overline{a}_{1}\mp\overline{a}_{2}+1][\overline{a}_{1}\mp\overline{a}_{ 2}-1]}&[\overline{a}_{1}\mp\overline{a}_{2}-1]\end{matrix}\]
If \(\overline{a}_{1}=\frac{k+2}{2}\), then
\[U^{(a,\delta)}_{\ \ (a+\epsilon_{1},\delta^{\prime})}=\frac{1}{\lambda_{a}}\begin{bmatrix}[\overline{a}_{1}\mp\overline{a}_{2}+1]&\sqrt{[\overline{a}_{1}\mp\overline{a}_{2}+1][\overline{a}_{1}\mp\overline{a}_{2}-1]}\\ \sqrt{[\overline{a}_{1}\mp\overline{a}_{2}+1][\overline{a}_{1}\mp\overline{a}_{2}-1]}&[\overline{a}_{1}\mp\overline{a}_{2}-1]\end{bmatrix}\qquad U^{(a,\delta)}_{\ \ (a,-\delta)}=\frac{1}{\lambda_{a}}\begin{bmatrix}[\overline{a}_{1}\mp\overline{a}_{2}+1]&\sqrt{[\overline{a}_{1}\mp\overline{a}_{2}+1][\overline{a}_{1}\mp\overline{a}_{2}-1]}\\ \sqrt{[\overline{a}_{1}\mp\overline{a}_{2}+1][\overline{a}_{1}\mp\overline{a}_{2}-1]}&[\overline{a}_{1}\mp\overline{a}_{2}-1]\end{bmatrix}\]
\[U^{(a,\delta,\chi_{l})}_{(a-\varepsilon_{1}\pm\varepsilon_{2},-\delta)}=\frac{1}{[\overline{a}_{-1}\mp\overline{a}_{2}]}\begin{bmatrix}[\overline{a}_{-1}\mp\overline{a}_{2}+1]&\sqrt{[\overline{a}_{-1}\mp\overline{a}_{2}+1][\overline{a}_{-1}\mp\overline{a}_{2}-1]}\\ \sqrt{[\overline{a}_{-1}\mp\overline{a}_{2}+1][\overline{a}_{-1}\mp\overline{a}_{2}-1]}&[\overline{a}_{-1}\mp\overline{a}_{2}-1]\end{bmatrix},\]
with rows and columns indexed by the vertices \((a+\varepsilon_{-1},\delta^{\prime\prime})\) and \((a\pm\varepsilon_{2},\delta^{\prime\prime},\chi_{l^{\prime}})\).
As the results of [1] only establish that their construction gives an embedding of path algebras (and not of the entire graph planar algebra), we still have to verify relation (R1), which occurs outside of the path algebra.
**Lemma 6.22**.: _The above U-cells for \(\Gamma^{4,k,\mathcal{Z}^{*}_{4}}_{\Lambda_{1}}\) satisfy (R1)._
Proof.: This is a direct computation as done in Lemma 6.17. As the U-cell solution is the same for paths that do not pass through a split vertex, we only have to consider vertices at most distance \(2\) away from a split vertex. This has to be done in several cases. Let us demonstrate the case where \(s(i)=(a,1,\chi_{1})\), and \(r(i)=(a+\varepsilon_{2},1,\chi_{1})\). Note that \(\overline{a}_{1}-\overline{a}_{2}\) is odd for this edge to exist. We also have that \(\overline{a}_{1}=\frac{k+4}{2}\). We have to show that
\[\frac{\lambda_{(a,-1,\chi_{1})}}{\lambda_{(a+\varepsilon_{2},1,\chi_{1})}}U^{(a,1,\chi_{1}),(a+\varepsilon_{2},1,\chi_{1})}_{(a+\varepsilon_{2},1,\chi_{1}),(a,-1,\chi_{1})}+\frac{\lambda_{(a+2\varepsilon_{2},-1,\chi_{1})}}{\lambda_{(a+\varepsilon_{2},1,\chi_{1})}}U^{(a,1,\chi_{1}),(a+\varepsilon_{2},1,\chi_{1})}_{(a+\varepsilon_{2},1,\chi_{1}),(a+2\varepsilon_{2},-1,\chi_{1})}+\frac{\lambda_{(a-\varepsilon_{1}+\varepsilon_{2},-1)}}{\lambda_{(a+\varepsilon_{2},1,\chi_{1})}}U^{(a,1,\chi_{1}),(a+\varepsilon_{2},1,\chi_{1})}_{(a+\varepsilon_{2},1,\chi_{1}),(a-\varepsilon_{1}+\varepsilon_{2},-1)}=[3].\]
Expanding out the left hand side gives
\[\frac{[\overline{a}_{1}][\overline{a}_{2}][\overline{a}_{1}-\overline{a}_{2} ][\overline{a}_{1}+\overline{a}_{2}]}{[\overline{a}_{1}][\overline{a}_{2}+1][ \overline{a}_{1}-\overline{a}_{2}-1][\overline{a}_{1}+\overline{a}_{2}+1]}\frac {w_{a,2}}{\lambda_{a}}+\frac{2[\overline{a}_{1}-1][\overline{a}_{2}+1][ \overline{a}_{1}-\overline{a}_{2}-2][\overline{a}_{1}+\overline{a}_{2}]}{[ \overline{a}_{1}][\overline{a}_{2}+1][\overline{a}_{1}-\overline{a}_{2}-1][ \overline{a}_{1}+\overline{a}_{2}+1]}\cdot\frac{[\overline{a}_{-1}-\overline{a }_{2}-1]}{[\overline{a}_{-1}-\overline{a}_{2}]}.\]
Note that \(w_{a,2}\) must be of the first form in this situation, as \([2\overline{a}_{2}+1]\neq 0\). We can then use a computer to expand this in the formal variable \(q\) (and using that \(\overline{a}_{1}=\frac{k+4}{2}\)) to obtain the uninspiring
\[\frac{\frac{\left(q^{2}-1\right)\left(q^{k+2}-1\right)q^{k+4}}{q^{2k}-q^{k+2}}- \frac{\left(q^{2}-1\right)\left(q^{k+6}-1\right)}{q^{k+2}+2q^{k+6}-2q^{4}-1}}{q ^{2}\left(q^{k+4}-1\right)}.\]
However recalling that \(q=e^{2\pi i\frac{1}{2(4+k)}}\), we have that \(q^{4+k}=-1\). This simplifies the above expression to give \(1+q^{2}+q^{-2}=[3]\) as desired.
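This last step is easy to sanity-check numerically. The following minimal Python sketch (the even level \(k=2\) is an arbitrary choice of ours) verifies that \(q^{4+k}=-1\) and that the simplified value agrees with the quantum integer \([3]=\frac{q^{3}-q^{-3}}{q-q^{-1}}\):

```python
import cmath

k = 2  # any even level; chosen arbitrarily for this check
q = cmath.exp(2j * cmath.pi / (2 * (4 + k)))

def qint(n):
    """Quantum integer [n] = (q^n - q^-n) / (q - q^-1)."""
    return (q**n - q**(-n)) / (q - q**(-1))

# q is a primitive 2(4+k)-th root of unity, hence q^(4+k) = -1 and [4+k] = 0
assert abs(q**(4 + k) + 1) < 1e-12
assert abs(qint(4 + k)) < 1e-12

# the simplified left-hand side indeed equals [3]
assert abs((1 + q**2 + q**(-2)) - qint(3)) < 1e-12
print("all checks passed")
```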
All the remaining cases are resolved in the same manner.
Now that we have a solution for our U-cells, we can solve the linear system (R1) + (BA) to obtain the following solution for the B-cells. For any path not passing through a split vertex, the B-cell value is directly inherited from the corresponding B-cell value on \(\Gamma^{4,k,\mathcal{Z}^{*}_{2}}_{\Lambda_{1}}\). For the remaining cells, we have:
If \(\overline{a}_{1}=\frac{k}{2}\), then
\[B_{(a,\delta),\_\,,\,(a+2\varepsilon_{1},-\delta,\chi_{\pm 1}),\_}=\cdots,\]
a matrix with rows and columns indexed by the vertices \((a+\varepsilon_{1},\delta^{\prime})\).
If \(\overline{a}_{1}=\frac{k+4}{2}\), then
\[B_{(a,\delta,\chi_{l}),\_\,,\,(a,-\delta,\chi_{l^{\prime}}),\_}=\frac{1}{\sqrt{|q|}\,\sqrt{2}}\sqrt{\lambda_{a+\varepsilon_{-1}}\lambda_{a+\varepsilon_{-2}}\frac{[\overline{a}_{-1}+\overline{a}_{-2}]}{[\overline{a}_{-1}+\overline{a}_{-2}+1]}}\,\sqrt{\frac{\lambda_{a+\varepsilon_{-1}}\lambda_{a+\varepsilon_{-2}}}{\lambda_{a}^{2}}},\]
with rows and columns indexed by the vertices \((a+\varepsilon_{-1},\delta^{\prime})\).
**Theorem 6.24**.: _Let \(k\in\mathbb{N}_{\geq 1}\) be even. Then there exists a module category \(\mathcal{M}\) for \(\mathcal{C}(\mathfrak{sl}_{4},k)\) such that the fusion graph for action by \(\Lambda_{1}\) is \(\Gamma_{\Lambda_{1}}^{4,k,\mathcal{Z}_{4}^{*}}\)._
## 7. Subfactors
One of the main goals of this paper is to explicitly construct subfactors from the interesting module categories of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\). Assuming \(q=e^{2\pi i\frac{1}{2(N+k)}}\) for some \(k\geq 1\), then \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) is unitary, and a solution to a \(\operatorname{KW}\) cell system on a graph gives a unitary module category by [20, 1]. It then follows from [21] that we get a subfactor of the hyperfinite type \(\operatorname{II}_{1}\) factor for each simple object of the module category.
The first class of subfactors is constructed from \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) acting on \(\mathcal{M}\). We get an irreducible subfactor of the hyperfinite type \(\operatorname{II}_{1}\) factor for each simple object \(M\in\mathcal{M}\). We label this subfactor \(\rho_{M}\).
Note that for this subfactor, the category \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) is the even part of the subfactor. This consists of certain \(N-N\) bimodules. The odd part, which consists of certain \(M-M\) bimodules, is the dual module category. See [15] for more details.
The principal graph \(\Gamma^{M}\) for \(\rho_{M}\) has even vertices the simple objects of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\), and odd vertices the simple objects of \(\mathcal{M}\). The number of edges from \(V\in\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) to \(M^{\prime}\in\mathcal{M}\) is given by
\[\Gamma_{V\to M^{\prime}}^{M}=\dim\operatorname{Hom}_{\mathcal{M}}(V\otimes M \to M^{\prime}).\]
We can get an explicit formula for these multiplicities from the \(\operatorname{KW}\)-cell solutions.
**Lemma 7.1**.: _Let \(p_{V}\) be a projection onto \(V\) in the planar algebra \(\overline{\mathcal{P}_{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega},\Lambda_{1}}}\), and \(KW(p_{V})\) the image of this projection in the associated graph planar algebra. Then_
\[\dim\operatorname{Hom}_{\mathcal{M}}(V\otimes M\to M^{\prime})=\operatorname{Tr}\left(KW(p_{V})|_{M\to M^{\prime}}\right)\]
_where \(\operatorname{Tr}\left(KW(p_{V})|_{M\to M^{\prime}}\right)\) is the trace of the linear operator \(KW(p_{V})\in oGPA(\Gamma)\) restricted to loops beginning at \(M\) and ending at \(M^{\prime}\)._
Proof.: This is precisely Lemma 4.3.
In practice, one just needs to determine the multiplicities \(\dim\operatorname{Hom}_{\mathcal{M}}(V\otimes M\to M^{\prime})\) where \(V\) runs over the fundamental representations of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\). From this information, fusion ring arguments can determine the remaining multiplicities. Explicitly, let \(G_{\Lambda_{i}}\) be the matrix for the action of \(\Lambda_{i}\) on \(\mathcal{M}\), and let \(V_{\lambda}\) be a simple object of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\) indexed by a partition \(\lambda\). Then the matrix for the action by \(V_{\lambda}\) on \(\mathcal{M}\) is given by the formula [14, Equation 3.5]:
\[G_{V_{\lambda}}=\det\left[G_{\Lambda_{(\lambda_{i}^{\mathrm{tr}}-i+j)}} \right]_{1\leq i,j\leq l(\lambda^{\mathrm{tr}})}\]
Formulas for the projections onto \(p_{\Lambda_{i}}\) are given in Subsection 2.4, and hence the matrices \(G_{\Lambda_{i}}\) can be explicitly computed in all examples using Lemma 7.1.
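As an illustration, the determinant formula is straightforward to implement once the fundamental matrices are known. The following sketch is our own (here `G_fund` is a hypothetical dictionary mapping \(m\mapsto G_{\Lambda_{m}}\), with \(G_{\Lambda_{0}}\) the identity and missing indices treated as zero); expanding the matrix-valued determinant over permutations is legitimate because the matrices \(G_{\Lambda_{m}}\) pairwise commute, the action factoring through the commutative fusion ring.

```python
from itertools import permutations
import numpy as np

def G_V(lam, G_fund):
    """Action matrix of V_lambda via G_V = det[ G_{Lambda_(lam^tr_i - i + j)} ].

    lam: partition as a non-increasing list of positive integers.
    G_fund: dict m -> numpy array G_{Lambda_m}; G_fund[0] must be the identity,
    and indices missing from the dict are treated as the zero matrix.
    """
    tr = [sum(1 for p in lam if p > i) for i in range(lam[0])]  # conjugate partition
    n, d = len(tr), G_fund[0].shape[0]
    zero = np.zeros((d, d))

    def entry(i, j):  # G_{Lambda_(tr_i - i + j)} with 1-based indices i, j
        return G_fund.get(tr[i - 1] - i + j, zero)

    result = np.zeros((d, d))
    for perm in permutations(range(1, n + 1)):
        # sign of the permutation via its inversion count
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        term = np.eye(d)
        for i in range(1, n + 1):
            term = term @ entry(i, perm[i - 1])
        result += (-1) ** inversions * term
    return result
```

For instance, for \(\lambda=(2)\) this returns \(G_{\Lambda_{1}}^{2}-G_{\Lambda_{2}}\), matching the decomposition \(\Lambda_{1}\otimes\Lambda_{1}=V_{(2)}\oplus V_{(1,1)}\).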
In practice, these subfactors are huge. We can obtain smaller subfactors with the following observation. Consider the graded subcategory \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}^{ad}\subset\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}\). Over this subcategory, the module \(\mathcal{M}\) may no longer be simple. Let us write \(\bigoplus_{i}\mathcal{M}_{i}\) for the simple decomposition of \(\mathcal{M}\) as a \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}^{ad}\) module. We then get a subfactor of \(\mathcal{R}\) for each \(i\) and simple object \(M\in\mathcal{M}_{i}\). We label this subfactor \(\rho_{\mathcal{M}_{i},M}\).
The principal graph has even vertices the simple objects of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))^{\omega}}^{ad}\), and odd vertices the simple objects of \(\mathcal{M}_{i}\). The number of edges from \(V\) to \(M^{\prime}\) is then equal to
\[\dim\operatorname{Hom}_{\mathcal{M}_{i}}(V\otimes M\to M^{\prime})=\dim \operatorname{Hom}_{\mathcal{M}}(V\otimes M\to M^{\prime}).\]
Hence we can determine the principal graph using Lemma 7.1.
**Remark 7.2**.: We wish to point out that the explicit structure of this second class of subfactors has been worked out for the charge-conjugation modules by Wenzl [20].
As an example, we determine the structure of one of the subfactors corresponding to the second exceptional module of \(\mathcal{C}(\mathfrak{sl}_{4},6)\).
**Example 7.3**.: Let \(\mathcal{M}\) be the module constructed in Theorem 6.8. The object \(M\) is chosen as \(a_{odd}\). The above construction yields a subfactor with the following principal graph:
Note that directly building a flat connection on this graph would be a computational nightmare due to the high multiplicity.
|
2307.01820 | Failure of the curvature-dimension condition in sub-Finsler manifolds | The Lott-Sturm-Villani curvature-dimension condition $\mathsf{CD}(K,N)$
provides a synthetic notion for a metric measure space to have curvature
bounded from below by $K$ and dimension bounded from above by $N$. It has been
recently proved that this condition does not hold in sub-Riemannian geometry
for every choice of the parameters $K$ and $N$. In this paper, we extend this
result to the context of sub-Finsler geometry, showing that the $\mathsf{CD}(K,N)$
condition is not well-suited to characterize curvature in this setting.
Firstly, we show that this condition fails in (strict) sub-Finsler manifolds
equipped with a smooth strictly convex norm and with a positive smooth measure.
Secondly, we focus on the sub-Finsler Heisenberg group, proving that
curvature-dimension bounds cannot hold even when the reference norm is less
regular, in particular when it is of class $C^{1,1}$. The strategy for proving
these results is a non-trivial adaptation of the work of Juillet [Rev. Mat.
Iberoam., 37(1):177-188, 2021], and it requires the introduction of new tools
and ideas of independent interest. Finally, we demonstrate the failure of the
(weaker) measure contraction property $\mathsf{MCP}(K,N)$ in the sub-Finsler
Heisenberg group, equipped with a singular strictly convex norm and with a
positive smooth measure. This result contrasts with what happens in the sub-Riemannian
Heisenberg group, which instead satisfies $\mathsf{MCP}(0,5)$. | Mattia Magnabosco, Tommaso Rossi | 2023-07-04T16:49:49Z | http://arxiv.org/abs/2307.01820v2 | # Failure of the curvature-dimension condition in sub-Finsler manifolds
###### Abstract
The Lott-Sturm-Villani curvature-dimension condition \(\mathsf{CD}(K,N)\) provides a synthetic notion for a metric measure space to have curvature bounded from below by \(K\) and dimension bounded from above by \(N\). It has been recently proved that this condition does not hold in sub-Riemannian geometry for every choice of the parameters \(K\) and \(N\). In this paper, we extend this result to the context of sub-Finsler geometry, showing that the \(\mathsf{CD}(K,N)\) condition is not well-suited to characterize curvature in this setting. Firstly, we show that this condition fails in (strict) sub-Finsler manifolds equipped with a smooth strictly convex norm and with a positive smooth measure. Secondly, we focus on the sub-Finsler Heisenberg group, proving that curvature-dimension bounds cannot hold even when the reference norm is less regular, in particular when it is of class \(C^{1,1}\). The strategy for proving these results is a non-trivial adaptation of the work of Juillet [13], and it requires the introduction of new tools and ideas of independent interest. Finally, we demonstrate the failure of the (weaker) measure contraction property \(\mathsf{MCP}(K,N)\) in the sub-Finsler Heisenberg group, equipped with a singular strictly convex norm and with a positive smooth measure. This result contrasts with what happens in the sub-Riemannian Heisenberg group, which instead satisfies \(\mathsf{MCP}(0,5)\).
###### Contents
* 1 Introduction
* 1.1 Curvature-dimension conditions
* 1.2 The curvature-dimension condition in sub-Riemannian geometry
* 1.3 Other curvature-dimension bounds in sub-Riemannian geometry
* 1.4 Sub-Finsler manifolds and Carnot groups
* 1.5 Main results
* 2 Preliminaries
* 2.1 The \(\mathsf{CD}(K,N)\) condition
* 2.2 Sub-Finsler structures
* 3 The geometry of smooth sub-Finsler manifolds
* 3.1 The energy functional and the optimal control problem
* 3.2 Characterization of extremals and the exponential map
* 3.3 The end-point map
* 4 Failure of the \(\mathsf{CD}(K,N)\) condition in smooth sub-Finsler manifolds
* 4.1 Construction of a geodesic without abnormal sub-segments
* 4.2 Regularity of the distance function
* 4.3 Volume contraction rate along geodesics
* 4.4 Proof of Theorem 1.5
* 5 Failure of the \(\mathsf{CD}(K,N)\) condition in the sub-Finsler Heisenberg group
* 5.1 Convex trigonometry
* 5.2 Geodesics in the Heisenberg group
* 5.3 Failure of the \(\mathsf{CD}(K,N)\) condition for \(C^{1,1}\)-norms
* 5.4 Failure of the \(\mathsf{MCP}(K,N)\) condition for singular norms
## 1 Introduction
In the present paper, we address the validity of the Lott-Sturm-Villani curvature-dimension (in short \(\mathsf{CD}(K,N)\)) condition in the setting of sub-Finsler geometry. In particular, we prove that this condition can not hold in a large class of sub-Finsler manifolds. Thus, on the one hand, this work shows that the \(\mathsf{CD}(K,N)\) condition is not well-suited to characterize curvature in sub-Finsler geometry. On the other hand, we discuss how our results could provide remarkable insights about the geometry of \(\mathsf{CD}(K,N)\) spaces.
### Curvature-dimension conditions
In their groundbreaking works, Sturm [14, 15] and Lott-Villani [13] introduced independently a synthetic notion of curvature-dimension bounds for non-smooth spaces, using Optimal Transport. Their theory stems from the crucial observation that, in the Riemannian setting, having a uniform lower bound on the Ricci curvature and an upper bound on the dimension can be equivalently characterized in terms of a convexity property of suitable entropy functionals in the Wasserstein space. In particular, it was already observed in [12] that the Ricci bound \(\mathrm{Ric}\geq K\cdot g\) holds if and only if the Boltzmann-Shannon entropy functional is \(K\)-convex in the Wasserstein space. More generally, let \((M,g)\) be a complete Riemannian manifold, equipped with a measure of the form \(\mathfrak{m}=e^{-V}\mathrm{vol}_{g}\), where \(\mathrm{vol}_{g}\) is the Riemannian volume and \(V\in C^{2}(M)\). Given \(K\in\mathbb{R}\) and \(N\in(n,+\infty]\), Sturm [14] proved that the (generalized) Ricci lower bound
\[\mathrm{Ric}_{N,V}:=\mathrm{Ric}+\nabla^{2}V-\frac{\nabla V\otimes\nabla V}{N- n}\geq K\cdot g, \tag{1}\]
holds if and only if a \((K,N)\)-convexity inequality holds for Renyi entropy functionals, defined with respect to the reference measure \(\mathfrak{m}\). While (1) involves a differential object, the Ricci tensor, entropy convexity can be formulated relying solely upon a reference distance and a reference measure, without the need of the underlying smooth structure of the Riemannian manifold. Therefore, it can be introduced in the non-smooth setting of metric measure spaces and taken as the definition of a curvature-dimension bound. This condition is called \(\mathsf{CD}(K,N)\) and represents a synthetic lower bound on the (Ricci) curvature by \(K\in\mathbb{R}\) and a synthetic upper bound on the dimension by \(N\in(1,\infty]\), see Definition 2.2. In this sense, according to the discussion above, the \(\mathsf{CD}(K,N)\) condition is coherent with the Riemannian setting. Moreover, it was proved by Ohta [15] that the relation between curvature and the \(\mathsf{CD}(K,N)\) condition holds also in the context of Finsler manifolds.
Remarkably, \(\mathsf{CD}(K,N)\) spaces (i.e. spaces satisfying the \(\mathsf{CD}(K,N)\) condition) enjoy several geometric properties which hold in the smooth setting. Some of them are expected (and in a way
necessary) for a reasonable curvature-dimension bound, such as the scaling [23], tensorization [17] and globalization [18] properties or the monotonicity with respect to the parameters [22], i.e.
\[\mathsf{CD}(K^{\prime},N^{\prime})\implies\mathsf{CD}(K,N)\qquad\text{if $K^{ \prime}\geq K$ and $N^{\prime}\leq N.$}\]
Others are completely non-trivial and highlight some notable geometric features. Among them, we mention the Bonnet-Myers diameter bound and the Bishop-Gromov inequality, that provides an estimate on the volume growth of concentric balls. Particularly interesting in the context of this work is the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\), which, given two sets \(A\) and \(B\) in the reference metric measure space \((\mathsf{X},\mathsf{d},\mathfrak{m})\), provides a lower estimate on the measure of the set of \(t\)-midpoints
\[M_{t}(A,B)=\left\{x\in\mathsf{X}\,:\,\mathsf{d}(a,x)=t\mathsf{d}(a,b),\, \mathsf{d}(x,b)=(1-t)\mathsf{d}(a,b)\,\text{ for some $a\in A,b\in B$}\right\},\]
in terms of \(\mathfrak{m}(A)\) and \(\mathfrak{m}(B)\), for every \(t\in[0,1]\), cf. (6). The notable feature of the \(\mathsf{BM}(K,N)\) inequality is that its formulation does not invoke optimal transport or Wasserstein interpolation and, because of that, it is easier to handle than the \(\mathsf{CD}(K,N)\) condition. Nonetheless, it contains strong information about the curvature of the underlying space, to the extent that it is equivalent to the \(\mathsf{CD}(K,N)\) condition in the Riemannian setting, cf. [13]. In particular, in the proof of Theorem 1.5 and Theorem 1.7, we show the failure of the \(\mathsf{CD}(K,N)\) condition by contradicting the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\).
Finally, another fundamental property of the \(\mathsf{CD}(K,N)\) condition is its stability with respect to the (pointed) measured Gromov-Hausdorff convergence [23, 24, 25]. This notion of convergence for metric measure spaces essentially combines the Hausdorff convergence for the metric side and the weak convergence for the reference measures. Since, in a metric measure space, the tangent spaces at a point are identified through a measured Gromov-Hausdorff limit procedure of suitable rescalings of the original space, the stability of the curvature-dimension condition implies that the metric measure tangents of a \(\mathsf{CD}(K,N)\) space are \(\mathsf{CD}(0,N)\) spaces.
In the setting of metric measure spaces, it is possible to define other curvature-dimension bounds, such as the so-called measure contraction property (in short \(\mathsf{MCP}(K,N)\)), introduced by Ohta in [19]. In broad terms, the \(\mathsf{MCP}(K,N)\) condition can be interpreted as the Brunn-Minkowski inequality where one of the two sets degenerates to a point. In particular, it is implied by (and strictly weaker than) the \(\mathsf{BM}(K,N)\) inequality, and therefore it is also a consequence of the \(\mathsf{CD}(K,N)\) condition.
### The curvature-dimension condition in sub-Riemannian geometry
While the \(\mathsf{CD}(K,N)\) condition is equivalent to having bounded geometry in the Riemannian setting, a similar result does not hold in the sub-Riemannian setting. Sub-Riemannian geometry is a broad generalization of Riemannian geometry where, given a smooth manifold \(M\), we define a smoothly varying scalar product only on a subset of _horizontal_ directions \(\mathcal{D}_{p}\subset T_{p}M\) (called distribution) at each point \(p\in M\). Under the so-called Hormander condition, \(M\) is horizontally-path connected, and the usual length-minimization procedure yields a well-defined distance \(\mathsf{d}_{SR}\). In particular, differently from what happens in Riemannian geometry, the rank of the distribution \(r(p):=\dim\mathcal{D}_{p}\) may be strictly less than the dimension of the manifold and may vary with the point. This may influence the behavior of geodesics, emphasizing singularities of the distance \(\mathsf{d}_{SR}\). For this reason, we can not expect the \(\mathsf{CD}(K,N)\) condition to hold for _truly_ sub-Riemannian manifolds. This statement is confirmed by a series of papers, most notably [16, 17, 18], that contributed to the proof of the following result.
**Theorem 1.1**.: Let \(M\) be a complete sub-Riemannian manifold, equipped with a positive smooth measure \(\mathfrak{m}\). Then, the metric measure space \((M,\mathsf{d}_{SR},\mathfrak{m})\) does not satisfy the \(\mathsf{CD}(K,N)\) condition, for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
In [10], Juillet proved Theorem 1.1 for sub-Riemannian manifolds where the rank of the distribution \(r(p)\) is strictly smaller than the topological dimension \(n:=\dim M\), for every \(p\in M\). His strategy relies on the construction of two Borel subsets for which the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\) does not hold. Namely, for all \(R,\varepsilon>0\), one can find \(A,B\subset M\) such that \(\operatorname{diam}(A\cup B)<R\), \(\mathfrak{m}(A)\approx\mathfrak{m}(B)\), and such that there exists \(t\in(0,1)\) for which
\[\mathfrak{m}(M_{t}(A,B))\leq\frac{1}{2^{\mathcal{N}-n}}\,\mathfrak{m}(B)(1+\varepsilon), \tag{2}\]
where \(\mathcal{N}\) is the so-called _geodesic dimension_ of \(M\), see [1, Def. 5.47] for a precise definition. The sets \(A\) and \(B\) are metric balls of small radius, centered at the endpoints of a short segment of an _ample geodesic_, see [1] for details. The inequality (2) allows to contradict the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\) if and only if the geodesic dimension \(\mathcal{N}\) is strictly greater than \(n\), which is the case if \(r(p)<n\), for every \(p\in M\).
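In sketch, the mechanism is the following: since \(\tau^{(s)}_{K,N}(\theta)\to s\) as \(\theta\to 0\) (cf. Remark 2.1 below), if the Brunn-Minkowski inequality (6) held, then for balls of sufficiently small radius one would obtain

\[\mathfrak{m}\big{(}M_{t}(A,B)\big{)}^{\frac{1}{N}}\geq\tau^{(1-t)}_{K,N}(\Theta(A,B))\,\mathfrak{m}(A)^{\frac{1}{N}}+\tau^{(t)}_{K,N}(\Theta(A,B))\,\mathfrak{m}(B)^{\frac{1}{N}}\approx\big{(}(1-t)+t\big{)}\,\mathfrak{m}(B)^{\frac{1}{N}}=\mathfrak{m}(B)^{\frac{1}{N}},\]

using that \(\mathfrak{m}(A)\approx\mathfrak{m}(B)\); this is incompatible with (2) as soon as \(2^{\mathcal{N}-n}>1+\varepsilon\), which can be arranged precisely when \(\mathcal{N}>n\).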
While Juillet's result is quite general, it does not include almost-Riemannian geometry. Roughly speaking, an almost-Riemannian manifold is a sub-Riemannian manifold where the rank of the distribution coincides with the dimension of \(M\), at almost every point. In [11], we addressed this issue, proposing a new strategy for proving Theorem 1.1 in this setting. Our idea is to exploit the following one-dimensional characterization of the \(\mathsf{CD}(K,N)\) condition:
\[\mathsf{CD}(K,N)\quad\Rightarrow\quad\mathsf{CD}^{1}(K,N), \tag{3}\]
proved by Cavalletti and Mondino in [13], and contradict the \(\mathsf{CD}^{1}(K,N)\) condition. On a metric measure space \((\mathsf{X},\mathsf{d},\mathfrak{m})\), given a \(1\)-Lipschitz function \(u\in\mathsf{Lip}(\mathsf{X})\), it is possible to partition \(\mathsf{X}\) into one-dimensional transport rays, associated with \(u\), and disintegrate the measure \(\mathfrak{m}\) accordingly. Then, the \(\mathsf{CD}^{1}(K,N)\) condition asks for the validity of the \(\mathsf{CD}(K,N)\) condition along the transport rays of the disintegration associated with \(u\), for any choice of \(u\in\mathsf{Lip}(\mathsf{X})\). In [11, Thm. 1.2], when \(M\) is either strongly regular or \(\dim M=2\), we are able to explicitly build a \(1\)-Lipschitz function, and compute the associated disintegration, showing that the \(\mathsf{CD}(K,N)\) condition along the rays does not hold for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
Most recently, Rizzi and Stefani [14] proposed yet another strategy to prove Theorem 1.1. Differently from the strategies presented above, they pursue the "Eulerian" approach to curvature-dimension bounds, based on a suitable Gamma calculus, see [1] for details. This approach can be adopted for metric measure spaces that satisfy the _infinitesimal Hilbertian_ condition (cf. [1, 2]) which forces the space to be _Riemannian-like_ and ensures the linearity of the heat flow. According to [1], an infinitesimally Hilbertian \(\mathsf{CD}(K,N)\) space supports the so-called Bakry-Emery inequality \(\mathsf{BE}(K,\infty)\), which, in the sub-Riemannian setting reads as
\[\frac{1}{2}\Delta\left(\|\nabla f\|^{2}\right)\geq g(\nabla f,\nabla\Delta f)+ K\|\nabla f\|^{2},\qquad\forall\,f\in C_{c}^{\infty}(M), \tag{4}\]
where \(\nabla\) is the horizontal gradient and \(\Delta\) is the sub-Laplacian. In [14], the authors show that (4) implies the existence of enough isometries on the metric tangent to force it to be Euclidean at _each point_, proving Theorem 1.1 (including also the case \(N=\infty\)).
### Other curvature-dimension bounds in sub-Riemannian geometry
Given that the \(\mathsf{CD}(K,N)\) condition does not hold in sub-Riemannian geometry, considerable efforts have been undertaken to explore potential curvature-dimension bounds that may hold in this class. A first observation in this direction is that the weaker \(\mathsf{MCP}(K,N)\) condition does hold in many examples of sub-Riemannian manifolds. In particular, it was proved by Juillet [10] that the sub-Riemannian Heisenberg group satisfies the \(\mathsf{MCP}(0,5)\) condition, where the curvature-dimension parameters can not be improved. Moreover, in [1] it was observed
that the optimal dimensional parameter for the measure contraction property coincides with the geodesic dimension of the sub-Riemannian Heisenberg group (i.e. \(\mathcal{N}=5\)). This result has been subsequently extended to a large class of sub-Riemannian manifolds, including ideal Carnot groups [10], corank-1 Carnot groups [10], generalised H-type Carnot groups [11] and two-step analytic sub-Riemannian structures [11]. In all these cases, the \(\mathsf{MCP}(0,N)\) condition holds with the dimensional parameter \(N\) greater than or equal to the geodesic dimension \(\mathcal{N}\).
Another attempt is due to Milman [12], who introduced the quasi curvature-dimension condition, inspired by the interpolation inequalities along Wasserstein geodesics in ideal sub-Riemannian manifolds, proved by Barilari and Rizzi [11]. Finally, these efforts culminated in the recent work by Barilari, Mondino and Rizzi [1], where the authors propose a unification of Riemannian and sub-Riemannian geometries in a comprehensive theory of synthetic Ricci curvature lower bounds. In the setting of gauge metric measure spaces, they introduce the \(\mathsf{CD}(\beta,n)\) condition, encoding in the distortion coefficient \(\beta\) finer geometrical information of the underlying structure. Moreover they prove that the \(\mathsf{CD}(\beta,n)\) condition holds for compact fat sub-Riemannian manifolds, thus substantiating the definition.
### Sub-Finsler manifolds and Carnot groups
In the present paper, we focus on sub-Finsler manifolds, which widely generalize both sub-Riemannian and Finsler geometry. Indeed, in this setting, given a smooth manifold \(M\), we prescribe a smoothly varying _norm_ (which need not be induced by a scalar product) on the distribution \(\mathcal{D}_{p}\subset T_{p}M\), at each point \(p\in M\). As in the sub-Riemannian setting, \(\mathcal{D}\) must satisfy the Hormander condition, and consequently the length-minimization procedure among admissible curves gives a well-defined distance \(\mathsf{d}_{SF}\). Note that, on the one hand, if \(\mathcal{D}_{p}=T_{p}M\) for every \(p\in M\), we recover the classical Finsler geometry. On the other hand, if the norm on \(\mathcal{D}_{p}\) is induced by a scalar product for every \(p\in M\), we fall back into sub-Riemannian geometry.
Replacing the scalar product with a (possibly singular) norm is not merely a technical choice, as the metric structure of a sub-Finsler manifold reflects the singularities of the reference norm. Indeed, even though sub-Finsler manifolds can still be investigated by means of classical control theory [1], deducing finer geometrical properties is more delicate compared to what happens in the sub-Riemannian setting, as the Hamiltonian function has a low regularity, cf. Section 3. In this regard, sub-Finsler manifolds provide an interesting example of smooth structures which present both the typical sub-Riemannian and Finsler singular behavior. A particularly relevant class of sub-Finsler manifolds is the one of _sub-Finsler Carnot groups_.
**Definition 1.2** (Carnot group).: A Carnot group is a connected, simply connected Lie group \(G\) with nilpotent Lie algebra \(\mathfrak{g}\), admitting a stratification
\[\mathfrak{g}=\mathfrak{g}_{1}\oplus\cdots\oplus\mathfrak{g}_{k},\]
where \(\mathfrak{g}_{i+1}=[\mathfrak{g}_{1},\mathfrak{g}_{i}]\), for every \(i=1,\ldots,k-1\), and \([\mathfrak{g}_{1},\mathfrak{g}_{k}]=\{0\}\).
Given a Carnot group \(G\), if we equip the first layer \(\mathfrak{g}_{1}\) of its Lie algebra with a norm, we naturally obtain a left-invariant sub-Finsler structure on \(G\). We refer to the resulting manifold as a sub-Finsler Carnot group.
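For instance (the simplest non-abelian example, anticipated here for concreteness), the Heisenberg Lie algebra \(\mathfrak{h}=\mathrm{span}\{X,Y,Z\}\), whose only non-trivial bracket is \([X,Y]=Z\), carries the stratification

\[\mathfrak{h}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\qquad\mathfrak{g}_{1}=\mathrm{span}\{X,Y\},\quad\mathfrak{g}_{2}=\mathrm{span}\{Z\},\]

and any norm on \(\mathfrak{g}_{1}\) induces a left-invariant sub-Finsler structure on the corresponding group \(\mathbb{H}\), studied in Section 5.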
Motivated by the results presented in the previous section, cf. Theorem 1.1, and especially by the ones obtained in the present work (see Section 1.5), we formulate the following conjecture.
**Conjecture 1.3**.: Let \(G\) be a sub-Finsler Carnot group, endowed with a positive smooth measure \(\mathfrak{m}\). Then, the metric measure space \((G,\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the \(\mathsf{CD}(K,N)\) condition for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
Our interest in Carnot groups stems from the fact that they are the only metric spaces that are locally compact, geodesic, isometrically homogeneous and self-similar (i.e. admitting a dilation) [10]. According to this property, sub-Finsler Carnot groups naturally arise as metric tangents of metric measure spaces.
**Theorem 1.4** (Le Donne [10]).: Let \((\mathsf{X},\mathsf{d},\mathfrak{m})\) be a geodesic metric measure space, equipped with a doubling measure \(\mathfrak{m}\). Assume that, for \(\mathfrak{m}\)-almost every \(x\in\mathsf{X}\), the set \(\operatorname{Tan}(\mathsf{X},x)\) of all metric tangent spaces at \(x\) contains only one element. Then, for \(\mathfrak{m}\)-almost every \(x\in\mathsf{X}\), the element in \(\operatorname{Tan}(\mathsf{X},x)\) is a sub-Finsler Carnot group \(G\).
In particular, this result applies to \(\mathsf{CD}(K,N)\) spaces, where the validity of the doubling property is guaranteed by the Bishop-Gromov inequality. Moreover, as already mentioned, the metric measure tangents of a \(\mathsf{CD}(K,N)\) space are \(\mathsf{CD}(0,N)\). Therefore, the study of the \(\mathsf{CD}(K,N)\) condition in sub-Finsler Carnot groups, and especially the validity of Conjecture 1.3, has the potential to provide deep insights on the structure of tangents of \(\mathsf{CD}(K,N)\) spaces. This could be of significant interest, particularly in connection with Bate's recent work [1], which establishes a criterion for rectifiability in metric measure spaces, based on the structure of metric tangents.
### Main results
The aim of this paper is to show the failure of the \(\mathsf{CD}(K,N)\) condition in the sub-Finsler setting, with a particular attention to Conjecture 1.3. Our results offer an advance in two different directions: on the one hand, we deal with general sub-Finsler structures, where the norm is smooth, cf. Theorem 1.5 and Theorem 1.6, and, on the other hand, we deal with the sub-Finsler Heisenberg group, equipped with more general norms, cf. Theorem 1.7 and Theorem 1.8.
In order to extend the sub-Riemannian result of Theorem 1.1 to the sub-Finsler setting, one can attempt to adapt the strategies discussed in Section 1.2; however, this can present major difficulties. Specifically, the argument developed in [14] has little hope of being generalized, because the infinitesimal Hilbertianity assumption does not hold in Finsler-like spaces, see [13]. It is important to note that this is not solely a "regularity" issue, in the sense that it also occurs when the norm generating the sub-Finsler structure is smooth, but not induced by a scalar product. Instead, the approach proposed in [15] could potentially be applied to sub-Finsler manifolds as it relies on tools developed in the non-smooth setting, see (3). However, adapting the sub-Riemannian computations that led to a contradiction of the \(\mathsf{CD}^{1}(K,N)\) condition seems non-trivial already when the reference norm is smooth. Finally, the strategy illustrated in [16] hinges upon geometrical constructions and seems to be well-suited to generalizations to the sub-Finsler setting. In this paper, we build upon this observation and adapt the latter strategy to prove our main theorems.
Our first result is about the failure of the \(\mathsf{CD}(K,N)\) condition in _smooth_ sub-Finsler manifolds, cf. Theorem 4.26.
**Theorem 1.5**.: Let \(M\) be a complete sub-Finsler manifold with \(r(p)<n:=\dim M\) for every \(p\in M\), equipped with a smooth, strictly convex norm \(\|\cdot\|\) and with a positive smooth measure \(\mathfrak{m}\). Then, the metric measure space \((M,\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the \(\mathsf{CD}(K,N)\) condition, for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
This result is the sub-Finsler analogue of [17, Cor. 1.2]. Although the strategy of its proof follows the blueprint [17], the adaptation to our setting is non-trivial and requires many intermediate results of independent interest. First of all, we establish the existence of geodesics _without abnormal sub-segments_, cf. Theorem 4.11, proposing a construction that is new even in the sub-Riemannian framework and relies on the regularity properties of the distance function from the boundary of an open set. Note that, while these properties are well-known in the sub-Riemannian context (cf. [12, Prop. 3.1]), inferring them in the sub-Finsler setting becomes more challenging due to the low regularity of the Hamiltonian, which affects the regularity of the normal exponential map. Nonetheless, we settle a weaker regularity result that is enough for our purposes, cf. Theorem 4.8. Second of all, we prove an analogue of the sub-Riemannian theorem, indicating that the volume contraction along ample geodesics is governed by the geodesic dimension, see [1, Thm. D]. Indeed, in a smooth sub-Finsler manifold, we establish that the volume contraction rate along geodesics without abnormal sub-segments is bigger than \(\dim M+1\), cf. Theorem 4.22. Finally, we mention that these technical challenges lead us to a simplification of Juillet's argument (cf. Theorem 4.26), which revealed itself to be useful also in the proof of Theorem 1.7.
Observe that, since sub-Finsler Carnot groups are equiregular (and thus \(r(p)<n\), for every \(p\in G\)) and complete, we immediately obtain the following consequence of Theorem 1.5, which constitutes a significant step forward towards the proof of Conjecture 1.3.
**Theorem 1.6**.: Let \(G\) be a sub-Finsler Carnot group, equipped with a smooth, strictly convex norm \(\left\|\cdot\right\|\) and with a positive smooth measure \(\mathfrak{m}\). Then, the metric measure space \((G,\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the \(\mathsf{CD}(K,N)\) condition, for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
In the proof of Theorem 1.5, the smoothness of the norm plays a pivotal role in establishing the correct volume contraction rate along geodesics. When the norm is less regular, it is not clear how to achieve an analogous behavior in full generality. Nonetheless, we are able to recover such a result in the context of the _sub-Finsler Heisenberg group_ \(\mathbb{H}\), equipped with a possibly singular norm (see Section 5). Working in this setting is advantageous since, assuming strict convexity of the norm, the geodesics and the cut locus are completely described [1] and there exists an explicit expression for them in terms of convex trigonometric functions [13] (see also [1] for an example of the non-strictly convex case).
For the sub-Finsler Heisenberg group, we prove two different results, with the first addressing the case of \(C^{1,1}\) reference norms and thus substantially relaxing the smoothness assumption of Theorem 1.5, cf. Theorem 5.24.
**Theorem 1.7**.: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex and \(C^{1,1}\) norm and with a positive smooth measure \(\mathfrak{m}\). Then, the metric measure space \((\mathbb{H},\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the \(\mathsf{CD}(K,N)\) condition, for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
The proof of this statement follows the same lines of [14, Cor. 1.2]. However, the low regularity of the norm, and thus of geodesics, prevents us from exploiting the same differential tools developed for Theorem 1.5. Nonetheless, using the explicit expression of geodesics and of the exponential map, we can still recover an analogous result. In particular, guided by the intuition that a contraction rate along geodesics, similar to the one appearing in the smooth case, should still hold, we thoroughly study the Jacobian determinant of the exponential map. Building upon a fine analysis of convex trigonometric functions, cf. Section 5.1 and Proposition 5.13, we obtain an estimate on the contraction rate of the Jacobian determinant of the exponential map, but only for a large (in a measure-theoretic sense) set of covectors in the cotangent space. This poses additional challenges that we are able to overcome with a delicate density-type argument, together with an extensive use of the left-translations of the group, cf. Theorem 5.24 and also Remark 5.25. Remarkably, for every \(C^{1,1}\) reference norm, we obtain the exact same contraction rate, equal to the geodesic dimension \(\mathcal{N}=5\), that characterizes the sub-Riemannian Heisenberg group.
Our second result in the sub-Finsler Heisenberg group deals with the case of singular (i.e. non-\(C^{1}\)) reference norms, cf. Theorem 5.26.
**Theorem 1.8**.: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex norm \(\|\cdot\|\) which is not \(C^{1}\), and let \(\mathfrak{m}\) be a positive smooth measure on \(\mathbb{H}\). Then, the metric measure space \((\mathbb{H},\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the measure contraction property \(\mathsf{MCP}(K,N)\) for any \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
Observe that this theorem also shows the failure of the \(\mathsf{CD}(K,N)\) condition, which is stronger than the measure contraction property \(\mathsf{MCP}(K,N)\). However, Theorem 1.8 has an interest that goes beyond this consequence, as it reveals a phenomenon that stands in contrast to what typically happens in the sub-Riemannian setting. In fact, as already mentioned in section 1.3, the \(\mathsf{MCP}(K,N)\) condition holds in many sub-Riemannian manifolds, and, in the particular case of the sub-Riemannian Heisenberg group, holds with parameters \(K=0\) and \(N=5\). Therefore, Theorem 1.8 shows that a singularity of the reference norm can cause the failure of the measure contraction property \(\mathsf{MCP}(K,N)\). A similar phenomenon is highlighted in the recent paper by Borza and Tashiro [1], where the authors prove that the Heisenberg group equipped with the \(l^{p}\)-norm cannot satisfy the \(\mathsf{MCP}(K,N)\) condition if \(p>2\).
Our strategy to show Theorem 1.8 consists in finding a set \(A\subset\mathbb{H}\), having positive \(\mathfrak{m}\)-measure, such that the set of \(t\)-midpoints \(M_{t}(\{\mathrm{e}\},A)\) (where \(\mathrm{e}\) denotes the identity in \(\mathbb{H}\)) is \(\mathfrak{m}\)-null for every \(t\) sufficiently small. This construction is based on a remarkable geometric property of the space \((\mathbb{H},\mathsf{d}_{SF},\mathfrak{m})\), where geodesics can branch, even though they are unique. This has independent interest, as examples of branching spaces usually occur when geodesics are not unique.
We conclude this section by highlighting that the combination of Theorem 1.7 and Theorem 1.8 proves Conjecture 1.3 for a large class of sub-Finsler Heisenberg groups. This is particularly interesting as the sub-Finsler Heisenberg groups are the only sub-Finsler Carnot groups with Hausdorff dimension less than 5 (or with topological dimension less than or equal to 3), up to isometries.
## Structure of the paper
In Section 2 we introduce all the necessary preliminaries. In particular, we present the precise definition of the \(\mathsf{CD}(K,N)\) condition with some of its consequences, and we introduce the notion of sub-Finsler structure on a manifold. Section 3 is devoted to the study of the geometry of sub-Finsler manifolds. For the sake of completeness, we include generalizations of various sub-Riemannian results, especially regarding the characterizations of normal and abnormal extremals and the exponential map. In Section 4, we present the proof of Theorem 1.5. We start by developing the building blocks for it, namely the existence of a geodesic without abnormal sub-segments and the regularity of the distance function. Then, we estimate the volume contraction rate along the previously selected geodesic. Finally, in Section 4.4, we adapt Juillet's strategy to obtain our first main theorem. Section 5 collects our results about the failure of the \(\mathsf{CD}(K,N)\) condition in the sub-Finsler Heisenberg group. After having introduced the convex trigonometric functions in Section 5.1, we use them to provide the explicit expression of geodesics, cf. Section 5.2. We conclude by proving Theorem 1.7 in Section 5.3 and Theorem 1.8 in Section 5.4.
## Acknowledgments
T.R. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the collaborative research centre "The mathematics of emerging effects" (CRC 1060, Project-ID 211504053). The authors wish to thank Luca Rizzi for stimulating discussions regarding the regularity of the distance function in sub-Finsler manifolds and Lorenzo Portinale for his careful reading of the introduction.
## 2 Preliminaries
### The \(\mathsf{CD}(K,N)\) condition
A metric measure space is a triple \((\mathsf{X},\mathsf{d},\mathfrak{m})\) where \((\mathsf{X},\mathsf{d})\) is a complete and separable metric space and \(\mathfrak{m}\) is a locally finite Borel measure on it. In the following \(C([0,1],\mathsf{X})\) will stand for the space of continuous curves from \([0,1]\) to \(\mathsf{X}\). A curve \(\gamma\in C([0,1],\mathsf{X})\) is called _geodesic_ if
\[\mathsf{d}(\gamma(s),\gamma(t))=|t-s|\cdot\mathsf{d}(\gamma(0),\gamma(1))\quad \text{for every $s,t\in[0,1]$},\]
and we denote by \(\mathrm{Geo}(\mathsf{X})\) the space of geodesics on \(\mathsf{X}\). The metric space \((\mathsf{X},\mathsf{d})\) is said to be geodesic if every pair of points \(x,y\in\mathsf{X}\) can be connected with a curve \(\gamma\in\mathrm{Geo}(\mathsf{X})\). For any \(t\in[0,1]\) we define the evaluation map \(e_{t}\colon C([0,1],\mathsf{X})\to\mathsf{X}\) by setting \(e_{t}(\gamma):=\gamma(t)\). We denote by \(\mathscr{P}(\mathsf{X})\) the set of Borel probability measures on \(\mathsf{X}\) and by \(\mathscr{P}_{2}(\mathsf{X})\subset\mathscr{P}(\mathsf{X})\) the set of those having finite second moment. We endow the space \(\mathscr{P}_{2}(\mathsf{X})\) with the Wasserstein distance \(W_{2}\), defined by
\[W_{2}^{2}(\mu_{0},\mu_{1}):=\inf_{\pi\in\mathsf{Adm}(\mu_{0},\mu_{1})}\int \mathsf{d}^{2}(x,y)\ \mathrm{d}\pi(x,y),\]
where \(\mathsf{Adm}(\mu_{0},\mu_{1})\) is the set of all the admissible transport plans between \(\mu_{0}\) and \(\mu_{1}\), namely all the measures in \(\mathscr{P}(\mathsf{X}\times\mathsf{X})\) such that \((\mathsf{p}_{1})_{\sharp}\pi=\mu_{0}\) and \((\mathsf{p}_{2})_{\sharp}\pi=\mu_{1}\). The metric space \((\mathscr{P}_{2}(\mathsf{X}),W_{2})\) is itself complete and separable, moreover, if \((\mathsf{X},\mathsf{d})\) is geodesic, then \((\mathscr{P}_{2}(\mathsf{X}),W_{2})\) is geodesic as well. In particular, every geodesic \((\mu_{t})_{t\in[0,1]}\) in \((\mathscr{P}_{2}(\mathsf{X}),W_{2})\) can be represented with a measure \(\eta\in\mathscr{P}(\mathrm{Geo}(\mathsf{X}))\), meaning that \(\mu_{t}=(e_{t})_{\sharp}\eta\) for every \(t\in[0,1]\).
We are now ready to introduce the \(\mathsf{CD}(K,N)\) condition, pioneered by Sturm and Lott-Villani [23, 24, 25]. As already mentioned, this condition aims to generalize, to the context of metric measure spaces, the notion of having Ricci curvature bounded from below by \(K\in\mathbb{R}\) and dimension bounded above by \(N>1\). In order to define the \(\mathsf{CD}(K,N)\) condition, let us introduce the following distortion coefficients: for every \(K\in\mathbb{R}\) and \(N\in(1,\infty)\),
\[\tau_{K,N}^{(t)}(\theta):=t^{\frac{1}{N}}\left[\sigma_{K,N-1}^{(t)}(\theta) \right]^{1-\frac{1}{N}}, \tag{5}\]
where
\[\sigma_{K,N}^{(t)}(\theta):=\begin{cases}\frac{\sin(t\theta\sqrt{K/N})}{\sin( \theta\sqrt{K/N})}&\text{if $N\pi^{2}>K\theta^{2}>0$},\\ t&\text{if $K=0$},\\ \frac{\sinh(t\theta\sqrt{-K/N})}{\sinh(\theta\sqrt{-K/N})}&\text{if $K<0$}.\end{cases}\]
_Remark 2.1_.: Observe that for every \(K\in\mathbb{R}\), \(N\in(1,\infty)\) and \(t\in[0,1]\) we have
\[\lim_{\theta\to 0}\sigma_{K,N}^{(t)}(\theta)=t\qquad\text{and}\qquad\lim_{ \theta\to 0}\tau_{K,N}^{(t)}(\theta)=t.\]
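For concreteness, the distortion coefficients are easy to implement; the following minimal Python sketch (the convention \(\sigma_{K,N}^{(t)}(\theta)=+\infty\) for \(K\theta^{2}\geq N\pi^{2}\) is an assumption we adopt here, as is customary) also checks the limits of Remark 2.1 numerically.

```python
import math

def sigma(K, N, t, theta):
    """sigma_{K,N}^{(t)}(theta); returns +inf when K*theta^2 >= N*pi^2."""
    if K > 0 and K * theta**2 >= N * math.pi**2:
        return math.inf
    if K == 0 or theta == 0:
        return t
    x = theta * math.sqrt(abs(K) / N)
    return math.sin(t * x) / math.sin(x) if K > 0 else math.sinh(t * x) / math.sinh(x)

def tau(K, N, t, theta):
    """tau_{K,N}^{(t)}(theta) = t^(1/N) * sigma_{K,N-1}^{(t)}(theta)^(1 - 1/N)."""
    return t ** (1 / N) * sigma(K, N - 1, t, theta) ** (1 - 1 / N)

# limits of Remark 2.1: both coefficients tend to t as theta -> 0
assert abs(sigma(-1.0, 2.0, 0.3, 1e-8) - 0.3) < 1e-9
assert abs(tau(1.0, 2.0, 0.3, 1e-8) - 0.3) < 1e-9
```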
**Definition 2.2**.: A metric measure space \((\mathsf{X},\mathsf{d},\mathfrak{m})\) is said to be a \(\mathsf{CD}(K,N)\) space (or to satisfy the \(\mathsf{CD}(K,N)\) condition) if for every pair of measures \(\mu_{0}=\rho_{0}\mathfrak{m},\mu_{1}=\rho_{1}\mathfrak{m}\in\mathscr{P}_{2}(\mathsf{X})\), absolutely continuous with respect to \(\mathfrak{m}\), there exists a \(W_{2}\)-geodesic \((\mu_{t})_{t\in[0,1]}\) connecting them and induced by \(\eta\in\mathscr{P}(\mathrm{Geo}(\mathsf{X}))\), such that, for every \(t\in[0,1]\), \(\mu_{t}=\rho_{t}\mathfrak{m}\ll\mathfrak{m}\) and the following inequality holds for every \(N^{\prime}\geq N\)
\[\int_{\mathsf{X}}\rho_{t}^{1-\frac{1}{N^{\prime}}}\,\mathrm{d}\mathfrak{m}\geq \int_{\mathsf{X}\times\mathsf{X}}\left[\tau_{K,N^{\prime}}^{(1-t)}\big{(} \mathsf{d}(x,y)\big{)}\rho_{0}(x)^{-\frac{1}{N^{\prime}}}+\tau_{K,N^{\prime}}^ {(t)}\big{(}\mathsf{d}(x,y)\big{)}\rho_{1}(y)^{-\frac{1}{N^{\prime}}}\right] \mathrm{d}\pi(x,y),\]
where \(\pi=(e_{0},e_{1})_{\#}\eta\).
One of the most important merits of the \(\mathsf{CD}(K,N)\) condition is that it is sufficient to deduce geometric and functional inequalities that hold in the smooth setting. An example which is particularly relevant for this work is the so-called Brunn-Minkowski inequality, whose definition in the metric measure setting requires the following notion.
**Definition 2.3**.: Let \((\mathsf{X},\mathsf{d})\) be a metric space and let \(A,B\subset\mathsf{X}\) be two Borel subsets. Then, for \(t\in(0,1)\), we define the set of \(t\)-_midpoints_ between \(A\) and \(B\) as
\[M_{t}(A,B):=\left\{x\in\mathsf{X}\,:\,x=\gamma(t)\,,\,\gamma\in \mathrm{Geo}(\mathsf{X})\,,\,\gamma(0)\in A\,,\text{ and }\gamma(1)\in B\right\}.\]
We can now introduce the metric measure version of the Brunn-Minkowski inequality, whose formulation is stated in terms of the distortion coefficients (5).
**Definition 2.4**.: Given \(K\in\mathbb{R}\) and \(N\in(1,\infty)\), we say that a metric measure space \((\mathsf{X},\mathsf{d},\mathfrak{m})\) satisfies the _Brunn-Minkowski inequality_\(\mathsf{BM}(K,N)\) if, for every nonempty \(A,B\subset\mathrm{spt}(\mathfrak{m})\) Borel subsets, \(t\in(0,1)\), we have
\[\mathfrak{m}\big{(}M_{t}(A,B)\big{)}^{\frac{1}{N}}\geq\tau^{(1-t)}_{K,N}(\Theta(A,B))\cdot\mathfrak{m}(A)^{\frac{1}{N}}+\tau^{(t)}_{K,N}(\Theta(A,B))\cdot\mathfrak{m}(B)^{\frac{1}{N}}\,, \tag{6}\]
where
\[\Theta(A,B):=\left\{\begin{array}{ll}\inf_{x\in A,\,y\in B} \mathsf{d}(x,y)&\text{ if }K\geq 0\,,\\ \sup_{x\in A,\,y\in B}\mathsf{d}(x,y)&\text{ if }K<0\,.\end{array}\right.\]
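As a quick sanity check (our own toy example), consider the real line \((\mathbb{R},|\cdot|,\mathscr{L}^{1})\), which satisfies \(\mathsf{CD}(0,N)\) for every \(N>1\): for intervals, \(M_{t}(A,B)=(1-t)A+tB\), and since \(\tau_{0,N}^{(t)}\equiv t\), inequality (6) reduces to the concavity of \(s\mapsto s^{1/N}\).

```python
# toy verification of (6) for K = 0 on (R, |.|, Leb), where tau_{0,N}^{(t)} = t
t, N = 0.3, 2.0
lenA, lenB = 1.0, 2.0                      # |A| and |B| for two intervals
lenMt = (1 - t) * lenA + t * lenB          # |M_t(A,B)| = |(1-t)A + tB|
assert lenMt ** (1 / N) >= (1 - t) * lenA ** (1 / N) + t * lenB ** (1 / N)
```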
As already mentioned, the Brunn-Minkowski inequality is a consequence of the \(\mathsf{CD}(K,N)\) condition, in particular we have that
\[\mathsf{CD}(K,N)\implies\mathsf{BM}(K,N),\]
for every \(K\in\mathbb{R}\) and every \(N\in(1,\infty)\). In Sections 4 and 5, we are going to disprove the \(\mathsf{CD}(K,N)\) condition for every choice of the parameters \(K\in\mathbb{R}\) and \(N\in(1,\infty)\), by contradicting the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\). A priori, this is a stronger result than the ones stated in Theorem 1.5 and Theorem 1.7, since the Brunn-Minkowski inequality is (in principle) weaker than the \(\mathsf{CD}(K,N)\) condition. However, recent developments (cf. [12, 13]) suggest that the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\) could be equivalent to the \(\mathsf{CD}(K,N)\) condition in a wide class of metric measure spaces.
Another curvature-dimension bound, which can be defined for metric measure spaces, is the so-called measure contraction property (in short \(\mathsf{MCP}(K,N)\)), that was introduced by Ohta in [10]. The idea behind it is basically to require the \(\mathsf{CD}(K,N)\) condition to hold when the first marginal degenerates to \(\delta_{x}\), a delta-measure at \(x\in\mathrm{spt}(\mathfrak{m})\), and the second marginal is \(\frac{\mathfrak{m}|_{A}}{\mathfrak{m}(A)}\), for some Borel set \(A\subset\mathsf{X}\) with \(0<\mathfrak{m}(A)<\infty\).
**Definition 2.5** (\(\mathsf{MCP}(K,N)\) condition).: Given \(K\in\mathbb{R}\) and \(N\in(1,\infty)\), a metric measure space \((\mathsf{X},\mathsf{d},\mathfrak{m})\) is said to satisfy the _measure contraction property_ \(\mathsf{MCP}(K,N)\) if for every \(x\in\mathrm{spt}(\mathfrak{m})\) and every Borel set \(A\subset\mathsf{X}\) with \(0<\mathfrak{m}(A)<\infty\), there exists a Wasserstein geodesic induced by \(\eta\in\mathscr{P}(\mathrm{Geo}(\mathsf{X}))\) connecting \(\delta_{x}\) and \(\frac{\mathfrak{m}|_{A}}{\mathfrak{m}(A)}\) such that, for every \(t\in[0,1]\),
\[\frac{1}{\mathfrak{m}(A)}\mathfrak{m}\geq(e_{t})_{\#}\Big{(}\tau^{(t)}_{K,N} \big{(}\mathsf{d}(\gamma(0),\gamma(1))\big{)}^{N}\eta(\mathrm{d}\gamma)\Big{)}. \tag{7}\]
_Remark 2.6_.: For our purposes, we will use an equivalent formulation of the inequality (7), which holds whenever geodesics are unique, cf. [10, Lemma 2.3] for further details. More precisely,
let \(x\in\operatorname{spt}(\mathfrak{m})\) and let \(A\subset\mathsf{X}\) be a Borel set with \(0<\mathfrak{m}(A)<\infty\). Assume that for every \(y\in A\), there exists a unique geodesic \(\gamma_{x,y}:[0,1]\to\mathsf{X}\) joining \(x\) and \(y\). Then, (7) is verified for the measures \(\delta_{x}\) and \(\frac{\mathfrak{m}|_{A}}{\mathfrak{m}(A)}\) if and only if
\[\mathfrak{m}\big{(}M_{t}(\{x\},A^{\prime}))\big{)}\geq\int_{A^{\prime}}\tau_{ K,N}^{(t)}(\mathsf{d}(x,y))^{N}\,\mathrm{d}\mathfrak{m}(y),\qquad\text{for any Borel $A^{\prime}\subset A$}. \tag{8}\]
The \(\mathsf{MCP}(K,N)\) condition is weaker than the \(\mathsf{CD}(K,N)\) one, i.e.
\[\mathsf{CD}(K,N)\,\Longrightarrow\,\mathsf{MCP}(K,N),\]
for every \(K\in\mathbb{R}\) and every \(N\in(1,\infty)\). In Theorem 4.26 (for the case of non-ample geodesics) and in Theorem 5.26 (cf. Theorem 1.8), which concerns the Heisenberg group equipped with singular norms, we contradict the \(\mathsf{MCP}(K,N)\) condition. More precisely, we find a counterexample to (8).
### Sub-Finsler structures
Let \(M\) be a smooth manifold of dimension \(n\) and let \(k\in\mathbb{N}\). A _sub-Finsler structure_ on \(M\) is a couple \((\xi,\|\!\cdot\!\|)\) where \(\|\!\cdot\!\|:\mathbb{R}^{k}\to\mathbb{R}_{+}\) is a strictly convex norm on \(\mathbb{R}^{k}\) and \(\xi:M\times\mathbb{R}^{k}\to TM\) is a morphism of vector bundles such that:
1. each fiber of the (trivial) bundle \(M\times\mathbb{R}^{k}\) is equipped with the norm \(\|\!\cdot\!\|\);
2. The set of horizontal vector fields, defined as \[\mathcal{D}:=\big{\{}\xi\circ\sigma\,:\,\sigma\in\Gamma(M\times\mathbb{R}^{k}) \big{\}}\subset\Gamma(TM),\] is a _bracket-generating_ family of vector fields (or it satisfies the Hormander condtion), namely setting \[\operatorname{Lie}_{q}(\mathcal{D}):=\big{\{}X(q)\,:\,X\in\operatorname{ span}\{[X_{1},\ldots,[X_{j-1},X_{j}]]\,:\,X_{i}\in\mathcal{D},j\in\mathbb{N}\} \big{\}},\qquad\forall\,q\in M,\] we assume that \(\operatorname{Lie}_{q}(\mathcal{D})=T_{q}M\), for every \(q\in M\).
We say that \(M\) is a _smooth sub-Finsler manifold_, if the norm of the sub-Finsler structure \((\xi,\|\!\cdot\!\|)\) is smooth, namely \(\|\!\cdot\!\|\in C^{\infty}(\mathbb{R}^{k}\setminus\{0\})\).
_Remark 2.7_.: Although this definition is not completely general in the sub-Finsler context, since it does not allow the norm to vary on the fibers of \(M\times\mathbb{R}^{k}\), it includes sub-Riemannian geometry (where \(\|\!\cdot\!\|\) is induced by a scalar product), as every sub-Riemannian structure is equivalent to a free one, cf. [1, Sec. 3.1.4].
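The bracket-generating condition can be tested directly in coordinates. As an illustration (a sketch of ours, using the standard frame of the Heisenberg group on \(\mathbb{R}^{3}\), cf. Section 5), two horizontal fields together with a single bracket already span every tangent space:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# Horizontal frame: X1 = d/dx - (y/2) d/dz, X2 = d/dy + (x/2) d/dz,
# viewed as first-order differential operators acting on a test function.
def X1(g): return sp.diff(g, x) - y / 2 * sp.diff(g, z)
def X2(g): return sp.diff(g, y) + x / 2 * sp.diff(g, z)

def bracket(A, B, g):
    """Lie bracket [A, B] applied to the test function g."""
    return sp.simplify(A(B(g)) - B(A(g)))

print(bracket(X1, X2, f))  # Derivative(f(x, y, z), z), i.e. [X1, X2] = d/dz
# span{X1, X2, [X1, X2]} = T_q R^3 at every q: the Hormander condition holds.
```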
At every point \(q\in M\) we define the _distribution_ at \(q\) as
\[\mathcal{D}_{q}:=\big{\{}\xi(q,w)\,:\,w\in\mathbb{R}^{k}\big{\}}=\big{\{}X(q) \,:\,X\in\mathcal{D}\big{\}}\subset T_{q}M. \tag{9}\]
This is a vector subspace of \(T_{q}M\) whose dimension is called _rank_ (of the distribution) and denoted by \(r(q):=\dim\mathcal{D}_{q}\leq n\). Moreover, the distribution is described by a family of horizontal vector fields. Indeed, letting \(\{e_{i}\}_{i=1,\ldots,k}\) be the standard basis of \(\mathbb{R}^{k}\), the _generating frame_ is the family \(\{X_{i}\}_{i=1,\ldots,k}\), where
\[X_{i}(q):=\xi(q,e_{i})\qquad\forall\,q\in M,\quad\text{for $i=1,\ldots,k$}.\]
Then, according to (9), \(\mathcal{D}_{q}=\operatorname{span}\{X_{1}(q),\ldots,X_{k}(q)\}\). On the distribution we define the _induced norm_ as
\[\|v\|_{q}:=\inf\big{\{}\,\|w\|\,:\,v=\xi(q,w)\big{\}}\qquad\text{for every $v\in\mathcal{D}_{q}$}.\]
Since the infimum is actually a minimum, the function \(\left\|\cdot\right\|_{q}\) is a norm on \(\mathcal{D}_{q}\), so that \(\left(\mathcal{D}_{q},\left\|\cdot\right\|_{q}\right)\) is a normed space. Moreover, the norm depends smoothly on the base point \(q\in M\). A curve \(\gamma:[0,1]\to M\) is _admissible_ if its velocity \(\dot{\gamma}(t)\) exists almost everywhere and there exists a function \(u=(u_{1},\ldots,u_{k})\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\left\|\cdot\right\| )\big{)}\) such that
\[\dot{\gamma}(t)=\sum_{i=1}^{k}u_{i}(t)X_{i}(\gamma(t)),\qquad\text{for a.e. }t\in[0,1]. \tag{10}\]
The function \(u\) is called _control_. Furthermore, given an admissible curve \(\gamma\), there exists \(\bar{u}=(\bar{u}_{1},\ldots,\bar{u}_{k}):[0,1]\to\mathbb{R}^{k}\) such that
\[\dot{\gamma}(t)=\sum_{i=1}^{k}\bar{u}_{i}(t)X_{i}(\gamma(t)),\qquad\text{and} \qquad\left\|\dot{\gamma}(t)\right\|_{\gamma(t)}=\left\|\bar{u}(t)\right\|, \qquad\text{for a.e. }t\in[0,1]. \tag{11}\]
The function \(\bar{u}\) is called _minimal control_, and it belongs to \(L^{2}\big{(}[0,1];(\mathbb{R}^{k},\left\|\cdot\right\|)\big{)}\), cf. [1, Lem. 3.12]. We define the _length_ of an admissible curve:
\[\ell(\gamma):=\int_{0}^{1}\left\|\dot{\gamma}(t)\right\|_{\gamma(t)}\,\mathrm{ d}t\in[0,\infty).\]
We can rewrite the length of a curve as the \(L^{1}\)-norm of the associated minimal control; indeed, by (11),
\[\ell(\gamma)=\int_{0}^{1}\left\|\bar{u}(t)\right\|\,\mathrm{d}t=\left\|\bar{u }\right\|_{L^{1}([0,1];(\mathbb{R}^{k},\left\|\cdot\right\|))}. \tag{12}\]
For every couple of points \(q_{0},q_{1}\in M\), define the _sub-Finsler distance_ between them as
\[\mathsf{d}_{SF}(q_{0},q_{1})=\inf\left\{\ell(\gamma)\,:\,\gamma\text{ admissible, }\gamma(0)=q_{0}\text{ and }\gamma(1)=q_{1}\right\}.\]
Since every norm on \(\mathbb{R}^{k}\) is equivalent to the one induced by the standard scalar product on \(\mathbb{R}^{k}\), it follows that the sub-Riemannian structure on \(M\) given by \((\xi,\langle\cdot,\cdot\rangle)\) induces an equivalent distance. Namely, denoting by \(\mathsf{d}_{SR}\) the induced sub-Riemannian distance, there exist constants \(C>c>0\) such that
\[c\,\mathsf{d}_{SR}\leq\mathsf{d}_{SF}\leq C\mathsf{d}_{SR},\qquad\text{on }M \times M. \tag{13}\]
Thus, as a consequence of the classical Chow-Rashevskii Theorem in sub-Riemannian geometry, we obtain the following.
**Proposition 2.8** (Chow-Rashevskii).: Let \(M\) be a sub-Finsler manifold. The sub-Finsler distance is finite, continuous on \(M\times M\) and the induced topology is the manifold one.
From this proposition, we get that \((M,\mathsf{d}_{SF})\) is a locally compact metric space. The local existence of minimizers of the length functional can be obtained as in the sub-Riemannian setting; in particular, one can repeat the proof of [1, Thm. 3.43]. Finally, if \((M,\mathsf{d}_{SF})\) is complete, then it is also a geodesic metric space.
## 3 The geometry of smooth sub-Finsler manifolds
### The energy functional and the optimal control problem
Let \(\gamma:[0,1]\to M\) be an admissible curve. Then, we define the _energy_ of \(\gamma\) as
\[J(\gamma)=\frac{1}{2}\int_{0}^{1}\left\|\dot{\gamma}(t)\right\|_{\gamma(t)}^{ 2}\,\mathrm{d}t.\]
By definition of admissible curve, \(J(\gamma)<+\infty\). In addition, a standard argument shows that \(\gamma:[0,1]\to M\) is a minimum for the energy functional if and only if it is a minimum of the length functional with constant speed.
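For completeness, the standard argument is based on the Cauchy-Schwarz inequality:

\[\ell(\gamma)^{2}=\bigg{(}\int_{0}^{1}\left\|\dot{\gamma}(t)\right\|_{\gamma(t)}\,\mathrm{d}t\bigg{)}^{2}\leq\int_{0}^{1}\left\|\dot{\gamma}(t)\right\|_{\gamma(t)}^{2}\,\mathrm{d}t=2J(\gamma),\]

with equality if and only if \(t\mapsto\left\|\dot{\gamma}(t)\right\|_{\gamma(t)}\) is a.e. constant. Hence, minimizing \(J\) selects, among the minimizers of \(\ell\), exactly those parametrized with constant speed.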
_Remark 3.1_.: The energy functional \(J\) is not invariant under reparametrization of \(\gamma\), so one needs to fix the interval where the curve is defined. Here and below, we choose \([0,1]\).
The problem of finding geodesics between two points \(q_{0},q_{1}\in M\) can be formulated using the energy functional as the following constrained minimization problem:
\[\begin{cases}\gamma:[0,1]\to M,\quad\text{admissible},\\ \gamma(0)=q_{0}\text{ and }\gamma(1)=q_{1},\\ J(\gamma)=\frac{1}{2}\int_{0}^{1}\|\dot{\gamma}(t)\|_{\gamma(t)}^{2}\,\, \mathrm{d}t\to\min.\end{cases}\] (P)
The problem (P) can be recast as an optimal control problem. First of all, a curve is admissible if and only if there exists a control in \(L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\) satisfying (10). Secondly, we can consider the energy as a functional on the space of controls. Indeed, as in (12), given an admissible curve \(\gamma:[0,1]\to M\), we let \(\bar{u}\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\) be its minimal control, as in (11). Then, we have
\[J(\gamma)=\frac{1}{2}\int_{0}^{1}\|\bar{u}(t)\|^{2}\,\,\mathrm{d}t=\frac{1}{2} \,\|\bar{u}\|_{L^{2}([0,1];(\mathbb{R}^{k},\|\cdot\|))}^{2}. \tag{14}\]
Hence, we regard the energy as a functional on \(L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\), namely
\[J:L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\to\mathbb{R}_{+};\qquad J (u):=\frac{1}{2}\,\|u\|_{L^{2}([0,1];(\mathbb{R}^{k},\|\cdot\|))}^{2},\]
and we look for a constrained minimum of it. Thus, the problem (P) becomes:
\[\begin{cases}\dot{\gamma}(t)=\sum_{i=1}^{k}u_{i}(t)X_{i}(\gamma(t)),\\ \gamma(0)=q_{0}\text{ and }\gamma(1)=q_{1},\\ J(u)=\frac{1}{2}\int_{0}^{1}\|u(t)\|^{2}\,\,\mathrm{d}t\to\min.\end{cases}\] (P \[{}^{\prime}\] )
Note that, by (14), the solutions of (P) and (P\({}^{\prime}\)) coincide. An application of the Pontryagin Maximum Principle (see [1, Thm. 12.10]) yields necessary conditions for optimality. For every \(u\in\mathbb{R}^{k}\) and \(\nu\in\mathbb{R}\), introduce the following Hamiltonian:
\[h_{u}^{\nu}(\lambda):=\langle\lambda,\xi(\pi(\lambda),u)\rangle+\frac{\nu}{2} \,\|u\|^{2}\,,\qquad\forall\lambda\in T^{*}M. \tag{15}\]
Recall that for \(h\in C^{1}(T^{*}M)\), its Hamiltonian vector field \(\vec{h}\in\mathrm{Vec}(T^{*}M)\) is defined as the unique vector field in \(T^{*}M\) satisfying
\[d_{\lambda}h=\sigma(\cdot,\vec{h}(\lambda)),\qquad\forall\,\lambda\in T^{*}M,\]
where \(\sigma\) is the canonical symplectic form on \(T^{*}M\).
**Theorem 3.2** (Pontryagin Maximum Principle).: Let \(M\) be a sub-Finsler manifold and let \((\gamma,\bar{u})\) be a solution of (P\({}^{\prime}\)). Then, there exists \((\nu,\lambda_{t})\neq 0\), where \(\nu\in\mathbb{R}\) and \(\lambda_{t}\in T^{*}_{\gamma(t)}M\) for every \(t\in[0,1]\), such that
\[\begin{cases}\dot{\lambda}_{t}=\vec{h}_{\bar{u}(t)}^{\nu}(\lambda_{t})&\text{ for a.e. }t\in[0,1],\\ h_{\bar{u}(t)}^{\nu}(\lambda_{t})=\max_{v\in\mathbb{R}^{k}}h_{v}^{\nu}(\lambda_{t})& \text{for a.e. }t\in[0,1],\\ \nu\leq 0.\end{cases}\] (H)
**Definition 3.3**.: If \(\nu<0\) in (H), \((\lambda_{t})_{t\in[0,1]}\) is called _normal extremal_. If \(\nu=0\) in (H), \((\lambda_{t})_{t\in[0,1]}\) is called _abnormal extremal_.
_Remark 3.4_.: By homogeneity of the Hamiltonian system, if \(\nu\neq 0\) in (H) we can fix \(\nu=-1\).
### Characterization of extremals and the exponential map
In this section, we recall some characterizations of normal and abnormal extremals, which are well-known in sub-Riemannian geometry. We include the proofs in our setting, for the sake of completeness.
Recall that the annihilator \(\operatorname{Ann}(\mathcal{D})\subset T^{*}M\) is defined by
\[\operatorname{Ann}(\mathcal{D})_{q}:=\{\lambda\in T^{*}_{q}M\,:\,\langle \lambda,w\rangle=0,\,\forall\,w\in\mathcal{D}_{q}\},\qquad\forall\,q\in M.\]
**Lemma 3.5**.: Let \(M\) be a sub-Finsler manifold. Let \((\gamma,\bar{u})\) be a non-trivial solution to (P\({}^{\prime}\)) and let \((\lambda_{t})_{t\in[0,1]}\) be its lift. Then, \((\lambda_{t})_{t\in[0,1]}\) is an abnormal extremal if and only if \(\lambda_{t}\neq 0\) and \(\lambda_{t}\in\operatorname{Ann}(\mathcal{D})_{\gamma(t)}\) for every \(t\in[0,1]\), where \(\gamma(t):=\pi(\lambda_{t})\).
Proof.: The claim is an easy consequence of the maximization property of the Hamiltonian along the dynamic. More precisely, by (H), we have
\[h^{\nu}_{\bar{u}(t)}(\lambda_{t})=\max_{v\in\mathbb{R}^{k}}h^{\nu}_{v}(\lambda _{t}),\qquad\text{for a.e. }t\in[0,1], \tag{16}\]
where the function \(h^{\nu}_{u}\) is defined in (15). Assume that \(\nu=0\), then (16) reads as
\[\langle\lambda_{t},\xi(\gamma(t),\bar{u}(t))\rangle=\max_{v\in\mathbb{R}^{k}}\langle\lambda_{t},\xi(\gamma(t),v)\rangle,\qquad\text{for a.e. }t\in[0,1],\]
with \(\lambda_{t}\neq 0\) for every \(t\in[0,1]\). Now, since \(\xi\) is linear in the controls, the right-hand side is \(+\infty\) unless \(\lambda_{t}\in\operatorname{Ann}(\mathcal{D})_{\gamma(t)}\) for a.e. \(t\in[0,1]\). By continuity of \(t\mapsto\lambda_{t}\), this is true for every \(t\in[0,1]\). Conversely, assume that \(\lambda_{t}\in\operatorname{Ann}(\mathcal{D})_{\gamma(t)}\), then the maximization condition (16) becomes
\[\frac{\nu}{2}\|\bar{u}(t)\|^{2}=\max_{v\in\mathbb{R}^{k}}\frac{\nu}{2}\|v\|^{2}.\]
Since \(\nu\leq 0\), we may distinguish two cases: either \(\nu=0\) and the extremal is abnormal, or \(\nu=-1\) and the extremal is normal. In the second case, the optimal control must be \(0\), so that \(\dot{\gamma}\equiv 0\) and the curve \(\gamma\) is constant, contradicting the non-triviality of \((\gamma,\bar{u})\). Hence \(\nu=0\) and the extremal is abnormal.
The _sub-Finsler (or maximized) Hamiltonian_ is defined as
\[H(\lambda):=\max_{u\in\mathbb{R}^{k}}h^{-1}_{u}(\lambda)=\max_{u\in \mathbb{R}^{k}}\Bigg{(}\sum_{i=1}^{k}\langle\lambda,u_{i}X_{i}(\pi(\lambda)) \rangle-\frac{\|u\|^{2}}{2}\Bigg{)}. \tag{17}\]
The sub-Finsler Hamiltonian can be explicitly characterized in terms of the norm \(\left\|\cdot\right\|_{*}\), which denotes the dual norm of \(\left\|\cdot\right\|\). To this aim, we prove the following lemma describing the dual element of \(v\in(\mathbb{R}^{k},\left\|\cdot\right\|)\), when \(\left\|\cdot\right\|\) is a \(C^{1}\) norm. Recall that its dual vector \(v^{*}\in(\mathbb{R}^{k},\left\|\cdot\right\|)^{*}\) is uniquely characterized by
\[\left\|v^{*}\right\|_{*}=\left\|v\right\|\qquad\text{and}\qquad\langle v^{*},v\rangle=\left\|v\right\|^{2}, \tag{18}\]
where \(\langle\cdot,\cdot\rangle\) is the dual coupling.
**Lemma 3.6**.: Let \((\mathbb{R}^{k},\left\lVert\cdot\right\rVert)\) be a normed space and assume \(\left\lVert\cdot\right\rVert:\mathbb{R}^{k}\to\mathbb{R}_{+}\) is a strictly convex \(C^{1}\) norm, i.e. \(\left\lVert\cdot\right\rVert\in C^{1}(\mathbb{R}^{k}\setminus\{0\})\). Then, for every non-zero vector \(v\in\mathbb{R}^{k}\), it holds that
\[v^{*}=\left\lVert v\right\rVert\cdot d_{v}\left\lVert\cdot\right\rVert.\]
Proof.: Set \(\lambda:=d_{v}\|\cdot\|\in(\mathbb{R}^{k},\left\lVert\cdot\right\rVert)^{*}\), where we recall that
\[d_{v}\|\cdot\|(u):=\lim_{t\to 0}\frac{\left\lVert v+tu\right\rVert-\left\lVert v \right\rVert}{t},\qquad\forall\,u,v\in\mathbb{R}^{k}.\]
Then, on the one hand, it holds that
\[\left\langle\lambda,v\right\rangle=d_{v}\|\cdot\|(v)=\lim_{t\to 0}\frac{\left\lVert v+tv \right\rVert-\left\lVert v\right\rVert}{t}=\left\lVert v\right\rVert.\]
On the other hand, we have
\[\left\lVert\lambda\right\rVert_{*}=\sup_{u\in B_{1}}\left\langle\lambda,u \right\rangle=\sup_{u\in B_{1}}d_{v}\|\cdot\|(u)=\sup_{u\in B_{1}}\lim_{t\to 0} \frac{1}{t}\big{(}\left\lVert v+tu\right\rVert-\left\lVert v\right\rVert \big{)}\leq\sup_{u\in B_{1}}\left\lVert u\right\rVert=1,\]
where \(B_{1}:=B_{1}^{\left\lVert\cdot\right\rVert}(0)\subset\mathbb{R}^{k}\) is the ball of radius \(1\) and centered at \(0\), with respect to the norm \(\left\lVert\cdot\right\rVert\). The converse inequality can be obtained by taking \(u=\frac{v}{\left\lVert v\right\rVert}\). Finally, the conclusion follows by homogeneity of the dual norm.
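As an explicit illustration of Lemma 3.6 (not needed in the proofs below), take \(\left\lVert\cdot\right\rVert=\left\lVert\cdot\right\rVert_{p}\) on \(\mathbb{R}^{k}\) with \(p\in(1,\infty)\), which is strictly convex and \(C^{1}\) away from the origin. A direct computation gives

\[(v^{*})_{i}=\left\lVert v\right\rVert_{p}^{2-p}|v_{i}|^{p-2}v_{i},\qquad i=1,\ldots,k,\]

with the convention \(|v_{i}|^{p-2}v_{i}:=0\) if \(v_{i}=0\). One checks directly that \(\langle v^{*},v\rangle=\left\lVert v\right\rVert_{p}^{2}\) and \(\left\lVert v^{*}\right\rVert_{q}=\left\lVert v\right\rVert_{p}\), where \(\frac{1}{p}+\frac{1}{q}=1\); in particular, the dual norm of \(\left\lVert\cdot\right\rVert_{p}\) is \(\left\lVert\cdot\right\rVert_{q}\), as expected.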
**Lemma 3.7**.: Let \(M\) be a smooth sub-Finsler manifold. Given \(\lambda\in T^{*}M\), define \(\hat{\lambda}=(\hat{\lambda}_{i})_{i=1,\ldots,k}\) where \(\hat{\lambda}_{i}:=\left\langle\lambda,X_{i}(\pi(\lambda))\right\rangle\) for every \(i=1,\ldots,k\). Then, \(H^{-1}(0)=\operatorname{Ann}(\mathcal{D})\) and
\[H(\lambda)=\frac{1}{2}\big{\|}\hat{\lambda}\big{\|}_{*}^{2},\qquad\forall \lambda\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D}),\]
where \(\left\lVert\cdot\right\rVert_{*}\) is the dual norm to \(\left\lVert\cdot\right\rVert\) in \(\mathbb{R}^{k}\). Moreover, \(H\in C^{\infty}(T^{*}M\setminus\operatorname{Ann}(\mathcal{D}))\cap C^{1}(T^ {*}M)\).
Proof.: Let \(q\in M\). Assume that \(\lambda\in T^{*}_{q}M\setminus\operatorname{Ann}(\mathcal{D})_{q}\) and set
\[F(u):=\left\langle\hat{\lambda},u\right\rangle-\frac{\left\lVert u\right\rVert ^{2}}{2},\qquad\forall u\in\mathbb{R}^{k}.\]
Since \(\left\lVert\cdot\right\rVert\) is smooth, its square is a \(C^{1}\)-function, thus \(F\in C^{1}(\mathbb{R}^{k})\). Moreover, by homogeneity of the norm, \(F(u)\to-\infty\) as \(\left\lVert u\right\rVert\to\infty\), hence \(F\) admits a maximum. We compute its differential:
\[d_{u}F=\hat{\lambda}-\left\lVert u\right\rVert\cdot d_{u}\|\cdot\|=\hat{ \lambda}-u^{*}, \tag{19}\]
according to Lemma 3.6. Therefore, \(F\) has a unique critical point (which is also the unique point of maximum), characterized by \(u^{*}=\hat{\lambda}\), that is \(u=\hat{\lambda}^{*}\). Finally, using also (18), this implies that
\[H(\lambda)=\max_{u\in\mathbb{R}^{k}}F(u)=F(\hat{\lambda}^{*})=\left\langle \hat{\lambda},\hat{\lambda}^{*}\right\rangle-\frac{1}{2}\big{\|}\hat{\lambda} \big{\|}_{*}^{2}=\frac{1}{2}\big{\|}\hat{\lambda}\big{\|}_{*}^{2}.\]
To conclude, observe that if \(\lambda\in\operatorname{Ann}(\mathcal{D})_{q}\), then \(\hat{\lambda}=0\) and
\[H(\lambda)=\max_{u\in\mathbb{R}^{k}}\left(-\frac{\left\lVert u\right\rVert^{2 }}{2}\right)=0.\]
Conversely, if \(H(\lambda)=0\) we must have \(\lambda\in\operatorname{Ann}(\mathcal{D})\). Indeed, if this is not the case, \(\hat{\lambda}\neq 0\) and hence \(\big{\|}\hat{\lambda}\big{\|}_{*}\neq 0\), giving a contradiction. This proves that \(H^{-1}(0)=\operatorname{Ann}(\mathcal{D})\).
Finally, we prove the regularity of \(H\). Note that \(\|\cdot\|_{*}\) is a smooth norm itself. Indeed, as \(\|\cdot\|\) is smooth and strictly convex, the dual map of Lemma 3.6, which is
\[v^{*}=\|v\|\,d_{v}\|\cdot\|=\frac{1}{2}d_{v}\big{(}\|\cdot\|^{2}\big{)}=:N(v),\]
is smooth on \(v\neq 0\), invertible and with invertible differential on \(v\neq 0\). Thus, by the inverse function theorem, \(N^{-1}\in C^{\infty}(\mathbb{R}^{k}\setminus\{0\})\). But now the dual norm satisfies (18), hence
\[\|\hat{\lambda}\|_{*}=\|N^{-1}(\hat{\lambda})\|,\]
and the claim follows. Therefore, we deduce that \(H\in C^{\infty}(T^{*}M\setminus H^{-1}(0))\cap C^{1}(T^{*}M)=C^{\infty}(T^{*}M \setminus\operatorname{Ann}(\mathcal{D}))\cap C^{1}(T^{*}M)\).
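For instance, in the \(\ell^{p}\) example following Lemma 3.6, the dual norm is \(\left\lVert\cdot\right\rVert_{q}\), with \(\frac{1}{p}+\frac{1}{q}=1\), so the formula above reads

\[H(\lambda)=\frac{1}{2}\bigg{(}\sum_{i=1}^{k}|\hat{\lambda}_{i}|^{q}\bigg{)}^{2/q}.\]

Note that the identity \(H(\lambda)=\frac{1}{2}\big{\|}\hat{\lambda}\big{\|}_{*}^{2}\) only uses that \(\left\lVert\cdot\right\rVert\) is \(C^{1}\) and strictly convex; the smoothness assumption enters only in the final regularity statement of Lemma 3.7.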
**Corollary 3.8**.: Let \((\lambda_{t})_{t\in[0,1]}\) be a normal extremal for the problem (H), then the associated control is given by
\[\bar{u}(t)=\hat{\lambda}_{t}^{*},\qquad\text{for a.e. }t\in[0,1]. \tag{20}\]
Proof.: This is again a consequence of the maximality condition in (H), together with the characterization of the sub-Finsler Hamiltonian. In particular, we must have
\[h_{\bar{u}(t)}^{-1}(\lambda_{t})=\max_{u\in\mathbb{R}^{k}}h_{u}^{-1}(\lambda_ {t})=H(\lambda_{t}),\qquad\forall\,t\in[0,1].\]
From this and (19), we deduce that the control associated with \((\lambda_{t})_{t\in[0,1]}\) satisfies the identity (20).
The next result relates the system (H) with the Hamiltonian system associated with the sub-Finsler Hamiltonian (17). A similar statement can be found in [1, Prop. 12.3]. The main difference is the regularity of the Hamiltonian function, which in the classical statement is assumed to be smooth outside the zero section.
**Proposition 3.9**.: Let \(M\) be a smooth sub-Finsler manifold. Let \(H\in C^{\infty}(T^{*}M\setminus\operatorname{Ann}(\mathcal{D}))\cap C^{1}(T^{ *}M)\) be the sub-Finsler Hamiltonian defined in (17). If \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal, and \(\lambda_{0}\in\operatorname{Ann}(\mathcal{D})\), then \(\lambda_{t}\equiv\lambda_{0}\). If \(\lambda_{0}\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\), then
\[\dot{\lambda}_{t}=\vec{H}(\lambda_{t}). \tag{21}\]
Conversely, if \((\lambda_{t})_{t\in[0,1]}\) is a solution of (21) with initial condition \(\lambda_{0}\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\), then there exists \(\bar{u}\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\) such that \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal with control \(\bar{u}\) (i.e. the pair \((-1,(\lambda_{t}))\) is a solution of (H)).
Proof.: If \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal, there exists an optimal control \(\bar{u}\) such that the pair \((-1,(\lambda_{t}))\) is a solution to (H). Now, if the initial covector \(\lambda_{0}\in\operatorname{Ann}(\mathcal{D})\), then \(\lambda_{t}\in\operatorname{Ann}(\mathcal{D})\) for all \(t\in[0,1]\), as the sub-Finsler Hamiltonian is constant along the motion and \(H(\lambda_{0})=0\). Using Corollary 3.8, this implies that the control \(\bar{u}\equiv 0\) and that \(\lambda_{t}\equiv\lambda_{0}\) as claimed. If \(\lambda_{0}\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\), we follow the blueprint of [1, Prop. 12.3]. By the definition of sub-Finsler Hamiltonian (17), we have
\[H(\lambda)\geq h_{\bar{u}(t)}(\lambda),\qquad\forall\,\lambda\in T^{*}M,\,t \in[0,1],\]
with equality along the dynamic \(t\mapsto\lambda_{t}\). This means that the function \(T^{*}M\ni\lambda\mapsto H(\lambda)-h_{\bar{u}(t)}(\lambda)\) has a maximum at \(\lambda_{t}\). Therefore, using that \(H\in C^{1}(T^{*}M)\), we deduce that
\[d_{\lambda_{t}}H=d_{\lambda_{t}}h_{\bar{u}(t)},\qquad\forall\,t\in[0,1].\]
Such an equality immediately implies that the Hamiltonian vector fields are equal along the dynamic, namely
\[\vec{H}(\lambda_{t})=\vec{h}_{\bar{u}(t)}(\lambda_{t}),\qquad\forall\,t\in[0,1].\]
For the converse implication, recall that the Hamiltonian is constant along the motion. So, if \((\lambda_{t})_{t\in[0,1]}\) is a solution to (21) with initial condition \(\lambda_{0}\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\), then \(H(\lambda_{0})=H(\lambda_{t})\) and \(\lambda_{t}\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\), for every \(t\in[0,1]\). Since \(H\) is smooth outside the annihilator bundle of \(\mathcal{D}\), we deduce that \((\lambda_{t})_{t\in[0,1]}\) is uniquely determined by \(\lambda_{0}\) and, repeating verbatim the argument of [1, Prop. 12.3], we conclude the proof.
Fix \(t\in\mathbb{R}\) and consider the _(reduced) flow of_\(\vec{H}\) on \(T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\):
\[e_{\mathrm{r}}^{t\vec{H}}:\mathscr{A}_{t}\to T^{*}M\setminus\operatorname{ Ann}(\mathcal{D}),\]
where \(\mathscr{A}_{t}\subset T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\) is the set of covectors such that the associated maximal solution \((\lambda_{s})_{s\in I}\), with \(I\subset\mathbb{R}\) such that \(0\in I\), is defined up to time \(t\). Under the assumption of completeness of \((M,\mathsf{d}_{SF})\), \(\vec{H}\) is complete as a vector field on \(T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\) (and thus \(\mathscr{A}_{t}=T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\)). We state below this result without proof as the latter is analogous to the classical sub-Riemannian proof, cf. [1, Prop. 8.38], in view of Lemma 3.6 and Lemma 3.7.
**Proposition 3.10**.: Let \(M\) be a smooth complete sub-Finsler manifold. Then any normal extremal \(t\mapsto\lambda_{t}=e_{\mathrm{r}}^{t\vec{H}}(\lambda_{0})\), with \(\lambda_{0}\in T^{*}M\setminus\operatorname{Ann}(\mathcal{D})\), is extendable to \(\mathbb{R}\).
**Definition 3.11** (Sub-Finsler exponential map).: Let \((M,\mathsf{d}_{SF})\) be a complete smooth sub-Finsler manifold and let \(q\in M\). Then, the _sub-Finsler exponential map_ at \(q\) is defined as
\[\exp_{q}(\lambda):=\begin{cases}\pi\circ e_{\mathrm{r}}^{\vec{H}}(\lambda)& \text{if }\lambda\in T_{q}^{*}M\setminus\operatorname{Ann}(\mathcal{D})_{q},\\ q&\text{if }\lambda\in\operatorname{Ann}(\mathcal{D})_{q}.\end{cases}\]
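As a minimal sanity check of this definition, assume \(M=\mathbb{R}^{n}\), \(k=n\) and \(\xi(q,u)=u\), so that \(\mathcal{D}_{q}=\mathbb{R}^{n}\) and \(\operatorname{Ann}(\mathcal{D})\) is the zero section (this is a normed, i.e. Finsler, space). Then \(H(\lambda)=\frac{1}{2}\|\lambda\|_{*}^{2}\) does not depend on the base point, the Hamiltonian system yields \(\lambda_{t}\equiv\lambda_{0}\) and straight-line trajectories, and

\[\exp_{q}(\lambda)=q+\lambda^{*},\]

where \(\lambda^{*}\in\mathbb{R}^{n}\) is the dual vector of \(\lambda\) with respect to \(\left\lVert\cdot\right\rVert_{*}\), as in (18). In particular, \(\mathsf{d}_{SF}(q_{0},q_{1})=\|q_{1}-q_{0}\|\), as expected.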
_Remark 3.12_.: The exponential map is smooth in \(T_{q}^{*}M\setminus\operatorname{Ann}(\mathcal{D})_{q}\) and, by homogeneity, we also have \(\exp_{q}\in C^{1}(T_{q}^{*}M)\). However, note that we do not have spatial regularity. More precisely, setting \(\mathcal{E}:T^{*}M\to M\) to be \(\mathcal{E}(q,\lambda)=\exp_{q}(\lambda)\), we have
\[\mathcal{E}\in C(T^{*}M)\cap C^{\infty}(T^{*}M\setminus\operatorname{Ann}( \mathcal{D})),\]
and we cannot expect better spatial regularity, as the vector field \(\vec{H}\) is only continuous on the annihilator bundle.
### The end-point map
Let \(M\) be a sub-Finsler manifold, consider \(u\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\) and fix \(t_{0}\in[0,1]\). Define the non-autonomous vector field
\[\xi_{u(t)}(q):=\xi(q,u(t))=\sum_{i=1}^{k}u_{i}(t)X_{i}(q),\qquad\forall\,q\in M,t\in[0,1],\]
and denote by \(P_{t_{0},t}^{u}:M\to M\) its flow. This means that, for \(q_{0}\in M\), the curve \(t\mapsto P_{t_{0},t}^{u}(q_{0})\) is the unique maximal solution to the Cauchy problem
\[\begin{cases}\dot{\gamma}(t)=\sum_{i=1}^{k}u_{i}(t)X_{i}(\gamma(t)),\\ \gamma(t_{0})=q_{0}.\end{cases}\]
Moreover, we denote by \(\gamma_{u}:I\to M\), the trajectory starting at \(q_{0}\) and corresponding to \(u\), namely \(\gamma_{u}(t):=P_{0,t}^{u}(q_{0})\), for every \(t\in I\), which is the maximal interval of definition of \(\gamma_{u}\).
**Definition 3.13** (End-point map).: Let \(M\) be a sub-Finsler manifold and let \(q_{0}\in M\). We call \(\mathcal{U}_{q_{0}}\subset L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\) the open set of controls for which the corresponding trajectory \(\gamma_{u}\) is defined on the interval \([0,1]\). We define the _end-point map_ based at \(q_{0}\) as
\[E_{q_{0}}:\mathcal{U}_{q_{0}}\to M,\qquad E_{q_{0}}(u)=\gamma_{u}(1).\]
**Lemma 3.14** (Differential of the end-point map, [1, Prop. 8.5]).: Let \(M\) be a sub-Finsler manifold. The end-point map is smooth on \(\mathcal{U}_{q_{0}}\). Moreover, for every \(u\in\mathcal{U}_{q_{0}}\) the differential \(d_{u}E_{q_{0}}:L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\to T_{E_{q_ {0}}(u)}M\) has the expression
\[d_{u}E_{q_{0}}(v)=\int_{0}^{1}\big{(}P^{u}_{t,1}\big{)}_{*}\xi_{v(t)}(E_{q_{0} }(u))\,\mathrm{d}t,\qquad\text{for every $v\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|) \big{)}$.}\]
Using the explicit expression for the differential of the end-point map we deduce the following characterization for normal and abnormal extremals in sub-Finsler geometry, see [1, Prop. 8.9] for the analogous sub-Riemannian result.
**Proposition 3.15**.: Let \(M\) be a smooth sub-Finsler manifold and let \((\gamma,u)\) be a non-trivial solution to (P\({}^{\prime}\)). Then, there exists \(\lambda_{1}\in T^{*}_{q_{1}}M\), where \(q_{1}=E_{q_{0}}(u)\), such that the curve \((\lambda_{t})_{t\in[0,1]}\), with
\[\lambda_{t}:=(P^{u}_{t,1})^{*}\lambda_{1}\in T^{*}_{\gamma(t)}M,\qquad\forall \,t\in[0,1], \tag{22}\]
is a solution to (H). Moreover, one of the following conditions is satisfied:
1. \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal if and only if \(u\) satisfies \[\langle\lambda_{1},d_{u}E_{q_{0}}(v)\rangle=\langle u^{*},v\rangle\qquad \text{for every $v\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}$;}\] (23)
2. \((\lambda_{t})_{t\in[0,1]}\) is an abnormal extremal if and only if \(u\) satisfies \[\langle\lambda_{1},d_{u}E_{q_{0}}(v)\rangle=0\qquad\text{for every $v\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}$.}\]
Proof.: Firstly, (22) is a well-known consequence of the Pontryagin maximum principle. We prove (i), the proof of (ii) is analogous. For every \(v\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\), using Lemma 3.14 we deduce that
\[\begin{split}\langle\lambda_{1},d_{u}E_{q_{0}}(v)\rangle& =\int_{0}^{1}\langle\lambda_{1},(P^{u}_{t,1})_{*}\xi_{v(t)}(E_{q_ {0}}(u))\rangle\,\mathrm{d}t=\int_{0}^{1}\langle(P^{u}_{t,1})^{*}\lambda_{1}, \xi_{v(t)}(\gamma(t))\rangle\,\mathrm{d}t\\ &=\int_{0}^{1}\langle\lambda_{t},\xi_{v(t)}(\gamma(t))\rangle \,\mathrm{d}t=\int_{0}^{1}\sum_{i=1}^{k}\langle\lambda_{t},X_{i}(\gamma(t)) \rangle v_{i}(t)\,\mathrm{d}t.\end{split} \tag{24}\]
Assume \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal, then, by Corollary 3.8, the associated optimal control \(u\) satisfies \(\langle\lambda_{t},X_{i}(\gamma(t))\rangle=u_{i}^{*}(t)\) for \(i=1,\ldots,k\). Therefore, using (24) we deduce that for every \(v\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\) it holds
\[\langle\lambda_{1},d_{u}E_{q_{0}}(v)\rangle=\int_{0}^{1}\sum_{i=1}^{k}\langle \lambda_{t},X_{i}(\gamma(t))\rangle v_{i}(t)\,\mathrm{d}t=\int_{0}^{1} \langle u^{*}(t),v(t)\rangle\,\mathrm{d}t=\langle u^{*},v\rangle.\]
This proves (23).
Conversely, assume that the control \(u\) satisfies (23). We are going to prove that \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal. Using (23) and (24) we deduce that for every \(v\in L^{2}\big{(}[0,1];(\mathbb{R}^{k},\|\cdot\|)\big{)}\)
\[\langle u^{*},v\rangle_{L^{2}}=\langle\lambda_{1},d_{u}E_{q_{0}}(v)\rangle= \int_{0}^{1}\sum_{i=1}^{k}\langle\lambda_{t},X_{i}(\gamma(t))\rangle v_{i}(t )\,\mathrm{d}t.\]
As a consequence, since \(v\) is arbitrary, we conclude that \(\langle\lambda_{t},X_{i}(\gamma(t))\rangle=u_{i}^{*}(t)\) for \(i=1,\ldots,k\), that is, \((\lambda_{t})_{t\in[0,1]}\) is a normal extremal, according to Corollary 3.8.
## 4 Failure of the \(\mathsf{CD}(K,N)\) condition in smooth sub-Finsler manifolds
In this section, we prove our main result regarding smooth sub-Finsler manifolds, cf. Theorem 1.5. One of the crucial ingredients for the proof is the construction of a geodesic, enjoying good regularity properties, cf. Theorem 4.11.
### Construction of a geodesic without abnormal sub-segments
This section is devoted to the construction of a geodesic without abnormal sub-segments, in smooth sub-Finsler manifolds. The main idea is to choose a short segment of a normal geodesic that minimizes the distance from a hypersurface without characteristic points. We recall the definition of strongly normal geodesic and of geodesic without abnormal sub-segments.
**Definition 4.1**.: Let \(M\) be a sub-Finsler manifold and let \(\gamma:[0,1]\to M\) be a normal geodesic. Then, we say that \(\gamma\) is
1. _left strongly normal_, if for all \(s\in[0,1]\), the restriction \(\gamma|_{[0,s]}\) is not abnormal;
2. _right strongly normal_, if for all \(s\in[0,1]\), the restriction \(\gamma|_{[s,1]}\) is not abnormal;
3. _strongly normal_, if \(\gamma\) is left and right strongly normal.
Finally, we say that \(\gamma\) does not admit abnormal sub-segments if any restriction of \(\gamma\) is strongly normal.
Let \(\Sigma\subset M\) be a hypersurface and let \(\gamma\colon[0,T]\to M\) be a horizontal curve, parameterized with constant speed, such that \(\gamma(0)\in\Sigma\), \(\gamma(T)=p\in M\setminus\Sigma\). Assume \(\gamma\) is a minimizer for \(\mathsf{d}_{SF}(\cdot,\Sigma)\), that is \(\ell(\gamma)=\mathsf{d}_{SF}(p,\Sigma)\). Then, \(\gamma\) is a geodesic and any corresponding normal or abnormal lift, say \(\lambda:[0,T]\to T^{*}M\), must satisfy the transversality conditions, cf. [1, Thm 12.13],
\[\langle\lambda_{0},w\rangle=0,\qquad\forall\,w\in T_{\gamma(0)}\Sigma. \tag{25}\]
Equivalently, the initial covector \(\lambda_{0}\) must belong to the annihilator bundle \(\operatorname{Ann}(\Sigma)\) of \(\Sigma\) with fiber \(\operatorname{Ann}(\Sigma)_{q}=\{\lambda\in T_{q}^{*}M\mid\langle\lambda,T_{q }\Sigma\rangle=0\}\), for any \(q\in\Sigma\).
_Remark 4.2_.: In the sub-Riemannian setting, the normal exponential map \(E\), defined as the restriction of the exponential map to the annihilator bundle of \(\Sigma\), allows one to build (locally) a smooth tubular neighborhood around non-characteristic points, cf. [1, Prop. 3.1]. This may fail in the sub-Finsler setting, as \(E\) is not regular at \(\Sigma\), cf. Remark 3.12. Nonetheless, we are able to deduce a weaker result that is enough for our construction, see Theorem 4.8.
Recall that \(q\in\Sigma\) is a _characteristic point_, and we write \(q\in C(\Sigma)\), if \(\mathcal{D}_{q}\subset T_{q}\Sigma\). As it happens in the sub-Riemannian case, also in the sub-Finsler setting, minimizers of \(\mathsf{d}_{SF}(\cdot,\Sigma)\) whose initial point is a non-characteristic point cannot be abnormal geodesics.
**Lemma 4.3**.: Let \(M\) be a smooth sub-Finsler manifold. Let \(p\in M\setminus\Sigma\) and let \(\gamma:[0,1]\to M\) be a horizontal curve such that
\[\gamma(0)\in\Sigma,\quad\gamma(1)=p\quad\text{and}\quad\ell(\gamma)=\mathsf{d }_{SF}(p,\Sigma).\]
Then, \(\gamma(0)\in C(\Sigma)\) if and only if \(\gamma\) is an abnormal geodesic.
Proof.: The proof is a straightforward adaptation of the analogous result in the sub-Riemannian setting. We sketch here the argument for completeness. By the Pontryagin maximum principle, cf. Theorem 3.2, there exists a lift \(\lambda:[0,1]\to T^{*}M\) verifying the system (H) with the additional condition (25). Using the characterization of Lemma 3.5, \(\lambda\) is an abnormal lift if and only if \(\lambda_{0}\in\operatorname{Ann}(\mathcal{D})_{\gamma(0)}\). The latter, combined with the transversality condition, concludes the proof.
From now on, we assume that \(\Sigma\) is the boundary of an open set \(\Omega\subset M\). As our results are local in nature, this assumption is not necessary; however, it makes the presentation easier. Let \(\Omega\subset M\) be a non-characteristic domain in \(M\), so that \(\partial\Omega\) is compact and without characteristic points. Then, there exists a never-vanishing smooth section of \(\operatorname{Ann}(\partial\Omega)\), i.e. a smooth map \(\lambda^{+}:\partial\Omega\to\operatorname{Ann}(\partial\Omega)\) such that
\[\lambda^{+}(q)\in\operatorname{Ann}_{q}(\partial\Omega)\qquad\text{and} \qquad 2H(\lambda^{+})=1, \tag{26}\]
which is uniquely determined, up to a sign. Define the _normal exponential map_ as the restriction of the sub-Finsler exponential map to the annihilator bundle, namely
\[E:D\to M,\qquad E(q,\lambda)=\exp_{q}(\lambda),\]
where \(D\subset\operatorname{Ann}(\partial\Omega)\) is the largest open sub-bundle where \(E\) is defined. Furthermore, we define the _distance function from_\(\partial\Omega\) as
\[\delta:M\to[0,\infty),\qquad\delta(p):=\mathsf{d}_{SF}(p,\partial\Omega).\]
**Lemma 4.4**.: Let \(M\) be a smooth sub-Finsler manifold. There exists \(\epsilon>0\) such that on the sub-bundle
\[D_{\epsilon}:=\{(q,\lambda)\in\operatorname{Ann}(\partial\Omega):E(q, \lambda)\in\Omega\ \text{ and }\ 0<\sqrt{2H(\lambda)}<\epsilon\}\subset D\]
the map \(E|_{D_{\epsilon}}\) is injective and \(E(D_{\epsilon})=\{0<\delta<\epsilon\}\cap\Omega\).
Proof.: Without loss of generality, we assume that \(M\) is complete, so that \(D=\operatorname{Ann}(\partial\Omega)\). We may proceed by contradiction and assume that there does not exist a choice of \(\epsilon>0\) so that \(E|_{D_{\epsilon}}\) is injective. Hence, we can find sequences \(\{(q_{n},\lambda_{n})\},\{(q^{\prime}_{n},\lambda^{\prime}_{n})\}\subset \operatorname{Ann}(\partial\Omega)\) such that
\[(q_{n},\lambda_{n})\neq(q^{\prime}_{n},\lambda^{\prime}_{n}),\qquad E(q_{n}, \lambda_{n})=E(q^{\prime}_{n},\lambda^{\prime}_{n}),\qquad\text{and}\qquad H (\lambda_{n}),\,H(\lambda^{\prime}_{n})\to 0. \tag{27}\]
Note that, as \(\partial\Omega\) has no characteristic points, the sub-Finsler Hamiltonian is a norm on the fibers of \(\operatorname{Ann}(\partial\Omega)\). Therefore, by compactness, \((q_{n},\lambda_{n})\to(q,0)\) and \((q^{\prime}_{n},\lambda^{\prime}_{n})\to(q^{\prime},0)\), up to subsequences. Thus, recalling that \(E\) is continuous on \(D\), passing to the limit in (27), we get that \(E(q,0)=E(q^{\prime},0)\), meaning that \(q=q^{\prime}\). As a consequence \(\lambda_{n},\lambda^{\prime}_{n}\in\operatorname{Ann}(\partial\Omega)_{q}\) so they are multiple of the section defined in (26), namely
\[\lambda_{n}=t_{n}\lambda^{+}(q),\qquad\lambda^{\prime}_{n}=t^{\prime}_{n} \lambda^{+}(q),\]
where \(t_{n},t^{\prime}_{n}\to 0\) and their signs agree. Finally, recall that the length of the normal curve \([0,1]\ni t\mapsto E(q,t\lambda)\) is exactly \(\sqrt{2H(\lambda)}\). This forces \(t_{n}=t^{\prime}_{n}\), contradicting (27). We are left to prove the last part of the statement. Fix \(\epsilon>0\) so that \(E|_{D_{\epsilon}}\) is injective, then \(E(D_{\epsilon})\subset\{0<\delta<\epsilon\}\cap\Omega\). For the converse inclusion, pick \(p\in\{0<\delta<\epsilon\}\cap\Omega\) and let \(\gamma:[0,1]\to M\) be a geodesic joining \(\gamma(0)\in\partial\Omega\) and \(p=\gamma(1)\) such that \(\ell(\gamma)=\delta(p)\). Then, \(\gamma(0)\) is a non-characteristic point, therefore \(\gamma\) is a normal geodesic, whose lift satisfies (25), according to Lemma 4.3. Hence, there exists \(0\neq\lambda\in\operatorname{Ann}(\partial\Omega)_{\gamma(0)}\) such that \(\gamma(t)=E(\gamma(0),t\lambda)\) and \(\ell(\gamma)=\sqrt{2H(\lambda)}<\epsilon\). Thus, \((\gamma(0),\lambda)\in D_{\epsilon}\) with \(E(\gamma(0),\lambda)=p\), concluding the proof.
We state here a useful lemma regarding the regularity of the distance function from a boundary. Recall that a function \(f:M\to\mathbb{R}\) is said to be _locally semiconcave_ if, for every \(p\in M\), there exist a coordinate chart \(\varphi:U\subset M\to\mathbb{R}^{n}\), with \(p\in U\), and a constant \(C\in\mathbb{R}\) such that
\[F:\mathbb{R}^{n}\to\mathbb{R};\qquad F(x):=f\circ\varphi^{-1}(x)-C\frac{|x|^{2 }}{2},\]
is concave, where \(|\cdot|\) denotes the Euclidean norm.
**Lemma 4.5**.: Let \(M\) be a smooth sub-Finsler manifold. Let \(\Omega\subset M\) be an open and bounded subset. Assume that \(\partial\Omega\) is smooth and without characteristic points. Then, the distance function \(\delta\) from \(\partial\Omega\) is locally semiconcave in \(\Omega\).
Proof.: We do not report here a complete proof, since it follows the same arguments as [1], with the obvious modifications for the sub-Finsler case. In particular, applying Lemma 4.3, we deduce that there are no abnormal geodesics joining points of \(\Omega\) to its boundary and realizing \(\delta\). Thus, the proof of [1, Thm. 3.2] shows that \(\delta\) is locally Lipschitz in coordinates, meaning that the function \(\delta\) written in coordinates is Lipschitz with respect to the Euclidean distance. Then, using the implication \((3)\Rightarrow(2)\) of [1, Thm. 4.1], we conclude.
Since \(\delta\) is locally semiconcave, Alexandrov's theorem ensures that \(\delta\) is twice differentiable \(\mathscr{L}^{n}\)-a.e. (in coordinates) and, letting \(\mathcal{U}\subset\Omega\) be the set where \(\delta\) is differentiable, the function \(d\delta:\mathcal{U}\to T^{*}M\) is differentiable \(\mathscr{L}^{n}\)-a.e., cf. [1, Thm. 6.4] for the precise statement of Alexandrov's theorem. This observation, combined with Lemma 4.6 below, gives us an alternative description of geodesics joining \(\partial\Omega\) and differentiability points of \(\delta\) in \(\{0<\delta<\epsilon\}\cap\Omega\).
**Lemma 4.6**.: Let \(M\) be a smooth sub-Finsler manifold. Let \(p,q\in M\) be distinct points and assume there is a function \(\phi:M\to\mathbb{R}\) differentiable at \(p\) and such that
\[\phi(p)=\frac{1}{2}\mathsf{d}_{SF}^{2}(p,q)\qquad\text{and}\qquad\frac{1}{2} \mathsf{d}_{SF}^{2}(z,q)\geq\phi(z),\quad\forall\,z\in M. \tag{28}\]
Then, the geodesic joining \(p\) and \(q\) is unique, has a normal lift and is given by \(\gamma:[0,1]\to M\); \(\gamma(t)=\exp_{p}(-td_{p}\phi)\).
Proof.: This is a well-known result in sub-Riemannian geometry, cf. [16, Lem. 2.15]. The same proof can be carried out without substantial modifications in the setting of sub-Finsler manifolds, in light of Proposition 3.15.
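For instance, in \(\mathbb{R}^{n}\) with the Euclidean distance (identifying covectors and vectors), the function \(\phi(z)=\frac{1}{2}|z-q|^{2}\) satisfies (28) with equality everywhere, \(d_{p}\phi=p-q\) and

\[\gamma(t)=\exp_{p}(-td_{p}\phi)=p+t(q-p),\]

which is indeed the unique geodesic joining \(p\) and \(q\).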
**Corollary 4.7**.: Let \(M\) be a smooth sub-Finsler manifold and let \(\Omega\subset M\) be an open and bounded subset. Assume that \(\partial\Omega\) is smooth and without characteristic points. Let \(p\in\{0<\delta<\epsilon\}\cap\Omega\) be a differentiability point of \(\delta\). Then, the unique geodesic \(\gamma:[0,1]\to M\) joining \(p\) and \(\partial\Omega\) and such that \(\delta(p)=\ell(\gamma)\) is defined by \(\gamma(t)=\exp_{p}\big{(}-\frac{t}{2}d_{p}\delta^{2}\big{)}\).
Proof.: Since \(p\in\{0<\delta<\epsilon\}\cap\Omega\), from Lemma 4.4 we know that the geodesic joining \(p\) and \(\partial\Omega\) and realizing \(\delta\) is normal and unique. Let \(q\in\partial\Omega\) be its endpoint and define
\[\phi:M\to\mathbb{R};\qquad\phi(z):=\frac{1}{2}\delta^{2}(z).\]
Note that \(\phi\) is differentiable at the point \(p\), \(\phi(p)=\frac{1}{2}\ell(\gamma)^{2}=\frac{1}{2}\mathrm{d}_{SF}^{2}(p,q)\) and, since \(q\in\partial\Omega\), it also satisfies the inequality in (28). Thus, we may apply Lemma 4.6 and conclude the proof.
Collecting all the previous results, we are in position to prove the following theorem concerning the regularity of the normal exponential map.
**Theorem 4.8**.: Let \(M\) be a smooth sub-Finsler manifold. The restriction of the sub-Finsler normal exponential map to \(D_{\epsilon}\), namely \(E|_{D_{\epsilon}}:D_{\epsilon}\to\{0<\delta<\epsilon\}\cap\Omega\), defines a diffeomorphism on an open and dense subset \(\mathcal{O}\subset D_{\epsilon}\). Moreover, \(\delta\) is smooth on \(E(\mathcal{O})\subset\{0<\delta<\epsilon\}\cap\Omega\), which is open and has full measure.
Proof.: We are going to show that \(d_{(q,\lambda)}E\) is invertible for every \((q,\lambda)\) in a suitable subset of \(D_{\epsilon}\). By Corollary 4.7, letting \(U\subset\{0<\delta<\epsilon\}\cap\Omega\) be the set where \(\delta\) is twice-differentiable, the map
\[\Phi:U\to\mathrm{Ann}(\partial\Omega);\qquad\Phi(p)=e^{-\vec{H}}\big{(} \tfrac{1}{2}d_{p}\delta^{2}\big{)}\]
is a right-inverse for the normal exponential map, namely \(E\circ\Phi=\mathrm{id}_{U}\). Note that \(\Phi(U)\subset\mathrm{Ann}(\partial\Omega)\) by Corollary 4.7, in combination with the transversality condition (25). Moreover, recalling that the Hamiltonian is constant along the motion, we also have:
\[\sqrt{2H(\Phi(p))}=\ell(\gamma)=\delta(p)\in(0,\epsilon),\]
so that \(\Phi(U)\subset D_{\epsilon}\). But now, by the choice of the set \(U\), \(\delta\) is twice-differentiable on this set and it has a Taylor expansion up to order \(2\). Thus, expanding the identity \(E\circ\Phi=\mathrm{id}_{U}\) at a point \(p=E(q,\lambda)\), we deduce that \(d_{(q,\lambda)}E\) must be invertible for every \((q,\lambda)\in\Phi(U)\subset D_{\epsilon}\), and thus \(E\) is a local diffeomorphism around every point in \(\Phi(U)\). Furthermore, observing that \(\Phi(U)\) is dense in \(D_{\epsilon}\), we see that \(E\) is a local diffeomorphism on an open and dense subset \(\mathcal{O}\subset D_{\epsilon}\), containing \(\Phi(U)\). Hence, we conclude that \(E|_{\mathcal{O}}\) is a diffeomorphism onto its image, being a local diffeomorphism that is also invertible, thanks to Lemma 4.4. Finally, in order to prove that \(\delta\) is smooth on \(E(\mathcal{O})\subset\{0<\delta<\epsilon\}\cap\Omega\), it is enough to observe that, by construction,
\[\delta(E(q,\lambda))=\sqrt{2H(\lambda)},\qquad\forall\,(q,\lambda)\in D_{ \epsilon}.\]
On \(D_{\epsilon}\), \(H\) is smooth, hence we conclude that \(\delta\) is smooth on \(E(\mathcal{O})\). Now \(U\subset E(\mathcal{O})\), so that \(E(\mathcal{O})\) is open, dense and has full measure in \(\{0<\delta<\epsilon\}\cap\Omega\).
An immediate consequence of the previous theorem is the existence of many geodesics that are strongly normal in the sense of Definition 4.1.
**Corollary 4.9**.: Let \(M\) be a smooth sub-Finsler manifold and let \(\Omega\subset M\) be an open and bounded subset. Assume that \(\partial\Omega\) is smooth and without characteristic points. Let \(p\in E(\mathcal{O})\subset\{0<\delta<\epsilon\}\cap\Omega\) and let \(\gamma:[0,1]\to M\) be the unique geodesic joining \(p\) and \(\partial\Omega\) and realizing \(\delta\). Then, \(\gamma\) is strongly normal.
Proof.: Let \(q:=\gamma(0)\in\partial\Omega\) the endpoint of \(\gamma\) on the boundary of \(\Omega\). Then, since \(q\) is non-characteristic point, Lemma 4.3 ensures that \(\gamma|_{[0,s]}\) can not have an abnormal lift. Hence, \(\gamma\) is left strongly normal. In order to prove that \(\gamma\) is also right strongly normal, we reason in a similar way but with \(\{\delta=\delta(p)\}\) in place of \(\partial\Omega\). Indeed, since \(\delta\) is smooth on the open set \(E(\mathcal{O})\) by Theorem 4.8 and \(d_{\bar{p}}\delta\) is not vanishing for every \(\bar{p}\in E(\mathcal{O})\) as a consequence of Corollary 4.7, the set \(\Sigma:=\{\delta=\delta(p)\}\) defines a smooth hypersurface in a neighborhood of the point \(p\). In addition, \(\delta_{\Sigma}(q):=\mathsf{d}_{SF}(q,\Sigma)=\delta(p)\) and \(\gamma\) is the unique geodesic realizing \(\delta_{\Sigma}\). Finally, applying once again Lemma 4.3, we also deduce that \(p\notin C(\Sigma)\), so repeating the argument we did before, we conclude that \(\gamma\) must be also right strongly normal. This concludes the proof.
_Remark 4.10_.: Since \(E(\mathcal{O})\) has full measure in \(\{0<\delta<\epsilon\}\cap\Omega\), we can find \((q,\lambda)\in\mathcal{O}\) such that, denoting by \(\gamma:[0,1]\to M\) the corresponding geodesic minimizing \(\delta\), we have that \(\gamma(t)\in E(\mathcal{O})\) for \(\mathscr{L}^{1}\)-a.e. \(t\in[0,1]\). This means that \(\mathscr{L}^{1}\)-almost every level set defines locally a hypersurface and, recalling that restrictions of abnormal geodesics are still abnormal, the proof of Corollary 4.9 can be repeated to show that the curve \(\gamma\) does not contain abnormal sub-segments.
**Theorem 4.11** (Existence of strongly normal geodesics without abnormal sub-segments).: Let \(M\) be a smooth sub-Finsler manifold. Then, there exists a strongly normal geodesic \(\gamma:[0,1]\to M\), which does not contain abnormal sub-segments.
Proof.: Note that Theorem 4.8 was stated for a hypersurface that is the boundary of non-characteristic domain \(\Omega\). However, without substantial modifications, one can prove that an analogous result holds locally around a non-characteristic point of a given smooth hypersurface \(\Sigma\subset M\). In particular, letting \(q\in\Sigma\setminus C(\Sigma)\), there exists \(V_{q}\subset\Sigma\) open neighborhood of \(q\), and \(\epsilon>0\) such that, denoting by
\[\tilde{D}_{\epsilon}:=\{(\bar{q},\lambda):\bar{q}\in V_{q},\,0<\sqrt{2H( \lambda)}<\epsilon\},\]
the map \(E|_{\tilde{D}_{\epsilon}}:\tilde{D}_{\epsilon}\to E(\tilde{D}_{\epsilon})\subset\{0<\delta_{\Sigma}<\epsilon\}\) is a diffeomorphism on an open and dense subset \(\mathcal{O}\subset\tilde{D}_{\epsilon}\) and \(\delta_{\Sigma}\) is smooth on \(E(\mathcal{O})\). Now, Corollary 4.9 shows that there exists a point \(p\in E(\mathcal{O})\) such that the unique geodesic \(\gamma:[0,1]\to M\) minimizing \(\delta_{\Sigma}\) is strongly normal and, also according to Remark 4.10, it does not contain abnormal sub-segments. In order to conclude, we need to show that there exists a hypersurface \(\Sigma\) with \(\Sigma\setminus C(\Sigma)\neq\emptyset\). But this is a consequence of the Hörmander condition: indeed, if \(\mathcal{D}_{q}\subset T_{q}\Sigma\) for every \(q\in\Sigma\), then Frobenius' theorem would ensure that \(\mathcal{D}\) is involutive, and thus it would not be bracket-generating.
### Regularity of the distance function
We state below the definition of conjugate and cut loci in a sub-Finsler manifold, following the blueprint of the sub-Riemannian setting, cf. [1, Chap. 11] or [1].
**Definition 4.12** (Conjugate point).: Let \(M\) be a smooth sub-Finsler manifold and let \(\gamma:[0,1]\to M\) be a normal geodesic with initial covector \(\lambda\in T_{p}^{*}M\), that is \(\gamma(t)=\exp_{p}(t\lambda)\). We say that \(q=\exp_{p}(\bar{t}\lambda)\) is a _conjugate point_ to \(p\) along \(\gamma\) if \(\bar{t}\lambda\) is a critical point for \(\exp_{p}\).
**Definition 4.13** (Cut locus).: Let \(M\) be a smooth sub-Finsler manifold and let \(p\in M\). We say that \(q\in M\) is a _smooth point_ with respect to \(p\), and write \(q\in\Sigma_{p}\), if there exists a unique geodesic \(\gamma:[0,1]\to M\) joining \(p\) and \(q\), which is not abnormal and such that \(q\) is not conjugate to \(p\) along \(\gamma\). Define the _cut locus_ of \(p\in M\) as \(\operatorname{Cut}(p):=M\setminus\Sigma_{p}\). Finally, the cut locus of \(M\) is the set
\[\operatorname{Cut}(M):=\{(p,q)\in M\times M:q\in\operatorname{Cut}(p)\} \subset M\times M.\]
_Remark 4.14_.: In the sub-Riemannian setting, according to [1], the set of smooth points with respect to \(p\) is open and dense. However, it is an open question to understand whether its complement, that is the cut locus, is negligible.
Outside the cut locus of a sub-Finsler manifold, we can define the _\(t\)-midpoint map_, for \(t\in[0,1]\), as the map \(\phi_{t}:M\times M\setminus\operatorname{Cut}(M)\to M\) assigning to \((p,q)\) the \(t\)-midpoint of the (unique) geodesic \(\gamma_{p,q}\) joining \(p\) and \(q\). More precisely, for every \((p,q)\in M\times M\setminus\operatorname{Cut}(M)\),
\[\phi_{t}(p,q):=e_{t}(\gamma_{p,q})=\exp_{q}((t-1)\lambda_{p}),\qquad\text{ where }\lambda_{p}\in T_{q}^{*}M\text{ such that }p=\exp_{q}(-\lambda_{p}). \tag{29}\]
Note that, by definition of cut locus, the \(t\)-midpoint map is well-defined since the geodesic joining \(p\) and \(q\) for \(q\notin\operatorname{Cut}(p)\) is unique and strictly normal, i.e. without abnormal lifts.
We report a useful result relating the regularity of the squared distance function on a sub-Finsler manifold \(M\) with the cut locus. Such a result can be proved by repeating verbatim the proof of [1, Prop. 11.4], in light of Proposition 3.15 and Lemma 4.6. For every \(p\in M\), let \(\mathfrak{f}_{p}:=\frac{1}{2}\mathsf{d}_{SF}^{2}(\cdot,p)\).
**Proposition 4.15**.: Let \(M\) be a smooth sub-Finsler manifold and let \(p,q\in M\). Assume there exists an open neighborhood \(\mathcal{O}_{q}\subset M\) of \(q\) such that \(\mathfrak{f}_{p}\) is smooth. Then, \(\mathcal{O}_{q}\subset\Sigma_{p}\) and
\[\phi_{t}(p,z)=\exp_{z}((t-1)d_{z}\mathfrak{f}_{p}),\qquad\forall\,z\in \mathcal{O}_{q}.\]
Thanks to Proposition 4.15, the regularity of the squared distance ensures uniqueness of geodesics and smoothness of the \(t\)-midpoint map. Thus, it is desirable to understand where the squared distance is smooth. In this regard, [1, Thm. 2.19] proves the regularity of the squared distance function along left strongly normal geodesics. We refer to [1, App. A] for further details.
**Theorem 4.16** ([1, Thm. 2.19]).: Let \(M\) be a smooth sub-Finsler manifold and let \(\gamma:[0,1]\to M\) be a left strongly normal geodesic. Then there exists \(\epsilon>0\) and an open neighborhood \(U\subset M\times M\) such that:
1. \((\gamma(0),\gamma(t))\in U\) for all \(t\in(0,\epsilon)\);
2. For any \((p,q)\in U\) there exists a unique (normal) geodesic joining \(p\) and \(q\), shorter than \(\epsilon\);
3. The squared distance function \((p,q)\mapsto\mathsf{d}_{SF}^{2}(p,q)\) is smooth on \(U\).
The regularity of the squared distance function can be "propagated" along geodesics that do not admit abnormal sub-segments, by applying the previous theorem to every sub-segment.
**Corollary 4.17**.: Let \(M\) be a smooth sub-Finsler manifold and let \(\gamma:[0,1]\to M\) be a geodesic that does not admit abnormal sub-segments. Then, for every \(s\in[0,1]\), there exists \(\epsilon>0\) and an open neighborhood \(U\subset M\times M\) such that:
1. \((\gamma(s),\gamma(t))\in U\) for all \(t\in[0,1]\) such that \(0<|t-s|<\epsilon\);
2. For any \((p,q)\in U\) there exists a unique (normal) geodesic joining \(p\) and \(q\), shorter than \(\epsilon\);
3. The squared distance function \((p,q)\mapsto\mathsf{d}_{SF}^{2}(p,q)\) is smooth on \(U\).
### Volume contraction rate along geodesics
Our goal is to quantify the contraction rate of small volumes along geodesics. To do this, we combine the smoothness of the \(t\)-midpoint map with a lower bound on the so-called geodesic dimension. The latter has been introduced in [1] for sub-Riemannian manifolds and in [11, Def. 5.47] for general metric measure spaces. We recall below the definition.
Let \(M\) be a smooth sub-Finsler manifold. Given a point \(p\in M\) and a Borel set \(\Omega\subset M\setminus\operatorname{Cut}(p)\), we define the _geodesic homothety_ of \(\Omega\) with center \(p\) and ratio \(t\in[0,1]\) as
\[\Omega_{t}^{p}:=\{\phi_{t}(p,q):q\in\Omega\}.\]
In the sequel, we say that \(\mathfrak{m}\) is a _smooth measure_ if, in coordinates, it is absolutely continuous with respect to the Lebesgue measure of the chart, with a smooth and positive density. We will consider the metric measure space \((M,\mathsf{d}_{SF},\mathfrak{m})\).
**Definition 4.18**.: Let \(M\) be a smooth sub-Finsler manifold, equipped with a smooth measure \(\mathfrak{m}\). For any \(p\in M\) and \(s>0\), define
\[C_{s}(p):=\sup\left\{\limsup_{t\to 0}\frac{1}{t^{s}}\frac{\mathfrak{m}( \Omega_{t}^{p})}{\mathfrak{m}(\Omega)}:\Omega\subset M\setminus\operatorname{ Cut}(p)\text{ Borel, bounded and }\mathfrak{m}(\Omega)\in(0,+\infty)\right\}. \tag{30}\]
We define the _geodesic dimension_ of \((M,\mathsf{d}_{SF},\mathfrak{m})\) at \(p\in M\) as the non-negative real number
\[\mathcal{N}(p):=\inf\{s>0:C_{s}(p)=+\infty\}=\sup\{s>0:C_{s}(p)=0\},\]
with the conventions \(\inf\emptyset=+\infty\) and \(\sup\emptyset=0\).
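As a basic illustration of this definition, in \((\mathbb{R}^{n},|\cdot|,\mathscr{L}^{n})\) one has \(\operatorname{Cut}(p)=\emptyset\) and \(\Omega_{t}^{p}=(1-t)p+t\Omega\), hence

\[\frac{\mathscr{L}^{n}(\Omega_{t}^{p})}{\mathscr{L}^{n}(\Omega)}=t^{n}\qquad\text{and}\qquad C_{s}(p)=\begin{cases}0&\text{if }s<n,\\ 1&\text{if }s=n,\\ +\infty&\text{if }s>n,\end{cases}\]

so that \(\mathcal{N}(p)=n\) at every point: the geodesic dimension coincides with the topological one.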
_Remark 4.19_.: In [11], the definition of geodesic dimension is given for metric measure spaces with _negligible cut loci_. In this work, we adapted the definition by taking the supremum (30) over sets \(\Omega\) which are outside the cut locus \(\operatorname{Cut}(p)\).
We now prove a fundamental theorem which relates the geodesic and topological dimensions of a sub-Finsler manifold \(M\). This result is a suitable adaptation of [11, Thm. 4] to our setting.
**Proposition 4.20**.: Let \(M\) be a smooth sub-Finsler manifold, equipped with a smooth measure \(\mathfrak{m}\). Assume that \(r(p)<n:=\dim M\) for every \(p\in M\). Then,
\[\mathcal{N}(p)\geq n+1,\qquad\forall\,p\in M.\]
Proof.: Let \(\mathsf{d}_{SR}\) be a sub-Riemannian distance on the manifold \(M\), equivalent to \(\mathsf{d}_{SF}\) (see (13)). The Ball-Box theorem, cf. [13, Cor. 2.1], ensures that for every \(p\in M\) there exist \(n_{p}\geq n+1\) and a positive constant \(C_{p}\) such that
\[\mathfrak{m}\big{(}B_{r}^{SR}(p)\big{)}\leq C_{p}\cdot r^{n_{p}}\qquad\text{ for $r$ sufficiently small.} \tag{31}\]
Since \(\mathsf{d}_{SF}\) and \(\mathsf{d}_{SR}\) are equivalent, up to changing the constant, the same estimate holds for sub-Finsler balls, in particular
\[\limsup_{r\to 0}\frac{\mathfrak{m}\big{(}B_{r}^{SF}(p)\big{)}}{r^{k}}=0 \tag{32}\]
for every \(k<n+1\). Take any \(\Omega\subset M\setminus\operatorname{Cut}(p)\) Borel, bounded and with \(\mathfrak{m}(\Omega)\in(0,+\infty)\) and consider \(R>0\) such that \(\Omega\subset B_{R}^{SF}(p)\). Note that \(\Omega_{t}^{p}\subset B_{tR}^{SF}(p)\) and thus for every \(k<n+1\) we have that
\[\limsup_{t\to 0}\frac{\mathfrak{m}(\Omega_{t}^{p})}{t^{k}\mathfrak{m}( \Omega)}\leq\limsup_{t\to 0}\frac{\mathfrak{m}\big{(}B_{tR}^{SF}(p)\big{)}}{t^{k} \mathfrak{m}(\Omega)}=\limsup_{t\to 0}\frac{\mathfrak{m}\big{(}B_{tR}^{SF}(p) \big{)}}{(tR)^{k}}\cdot\frac{R^{k}}{\mathfrak{m}(\Omega)}=0,\]
where we used (32) for the last equality. Since \(\Omega\) was arbitrary, we deduce that \(C_{k}(p)=0\) for every \(k<n+1\) and then \(\mathcal{N}(p)\geq n+1\).
_Remark 4.21_.: For an equiregular sub-Finsler manifold, with the same proof, it is possible to improve the estimate of Proposition 4.20. In fact, in this case the Ball-Box theorem provides the estimate (31) with \(n_{p}\) equal to the Hausdorff dimension \(\dim_{H}(M)\), for every \(p\), and consequently \(\mathcal{N}(p)\geq\dim_{H}(M)\), cf. [1, Prop. 5.49].
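For instance, in the sub-Finsler Heisenberg group (cf. the example after Remark 2.7), which is equiregular with \(n=3\) and \(r\equiv 2\), the Hausdorff dimension equals \(4\) with respect to any of the equivalent distances in (13); hence, both Proposition 4.20 and Remark 4.21 give \(\mathcal{N}(p)\geq 4=n+1\) at every point.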
By construction, the geodesic dimension controls the contraction rate of volumes along geodesics. This information can be transferred to the \(t\)-midpoint map, provided that it is smooth. By invoking Theorem 4.16, we can always guarantee the smoothness of the \(t\)-midpoint map for a sufficiently short segment of a geodesic without abnormal sub-segments.
**Theorem 4.22**.: Let \(M\) be a smooth sub-Finsler manifold equipped with a smooth measure \(\mathfrak{m}\) and such that \(r(p)<n:=\dim M\) for every \(p\in M\). Let \(\gamma:[0,1]\to M\) be a geodesic that does not admit abnormal sub-segments, with endpoints \(p\) and \(q\). Assume that \((p,q)\) belongs to the open set \(U\), found in Theorem 4.16. Then, either \(|\det\big{(}d_{q}\phi_{t}(p,\cdot)\big{)}|\) has infinite order at \(t=0\) or
\[|\det\big{(}d_{q}\phi_{t}(p,\cdot)\big{)}|\sim t^{m_{p}},\qquad\text{as $t\to 0$} \tag{33}\]
for some integer \(m_{p}\geq\mathcal{N}(p)\geq n+1\).
Proof.: Since, by assumption \((p,q)\in U\), we can apply item (iii) of Theorem 4.16, deducing the regularity of the distance function. Combining this with Proposition 4.15 and the homogeneity of the Hamiltonian flow, there exists an open neighborhood \(V\subset M\) of \(q\), such that the function
\[[0,1)\times V\ni(t,z)\mapsto d_{z}\phi_{t}(p,\cdot)=d_{z}\left(\exp_{z}((t-1)d _{z}\mathfrak{f}_{p})\right)\]
is smooth. Thus, we can compute the Taylor expansion of its determinant in the \(t\)-variable at order \(N:=[\mathcal{N}(p)]-1<\mathcal{N}(p)\), obtaining:
\[\det\big{(}d_{z}\phi_{t}(p,\cdot)\big{)}=\sum_{i=0}^{N}a_{i}(z)t^{i}+t^{N+1}R_ {N}(t,z),\qquad\forall\,z\in V,\]
where the functions \(a_{i}\) and \(R_{N}\) are smooth. Arguing by contradiction, we assume that there exists \(j\leq N\) such that \(a_{j}(q)\neq 0\) and define
\[m:=\min\{i\leq N\,:\,\exists\,z\in V\text{ such that }a_{i}(z)\neq 0\}.\]
Note that \(m\leq j\) since \(a_{j}(q)\neq 0\) and thus \(m\leq N\). Without loss of generality, we can assume that \(V\) and \(p\) are contained in the same coordinate chart and that \(a_{m}>0\) on an open subset \(\tilde{V}\subset V\) with positive measure. Then, in charts, it holds that
\[\mathscr{L}^{n}\big{(}\tilde{V}_{t}^{p}\big{)}=\int_{\tilde{V}}\big{|}\det \big{(}d_{z}\phi_{t}(p,\cdot)\big{)}\big{|}\,\mathrm{d}z=\int_{\tilde{V}}a_{m} (z)\,\mathrm{d}z\cdot t^{m}+o(t^{m})\qquad\text{as $t\to 0$.}\]
Therefore, recalling that \(\mathfrak{m}\) is a smooth measure, there exists a constant \(a>0\) such that
\[\mathfrak{m}\big{(}\tilde{V}_{t}^{p}\big{)}\geq a\cdot t^{m},\]
for every \(t\) sufficiently small. As a consequence, taking any \(s\in(N,\mathcal{N}(p))\) we have that
\[\limsup_{t\to 0}\frac{1}{t^{s}}\frac{\mathfrak{m}(\tilde{V}_{t}^{p})}{ \mathfrak{m}(\tilde{V})}\geq\limsup_{t\to 0}\frac{1}{\mathfrak{m}(\tilde{V})} \frac{a\cdot t^{m}}{t^{s}}=+\infty,\]
and therefore we deduce \(C_{s}(p)=+\infty\), which in turn implies \(\mathcal{N}(p)\leq s\), giving a contradiction.
Theorem 4.22 motivates the following definition.
**Definition 4.23** (Ample geodesic).: Let \(M\) be a smooth sub-Finsler manifold and let \(\gamma:[0,1]\to M\) be a strictly normal geodesic not admitting abnormal sub-segments. We say that \(\gamma\) is _ample_ if, for every couple of distinct points \(p,q\in\gamma([0,1])\), \(|\det\big{(}d_{q}\phi_{t}(p,\cdot)\big{)}|\) exists and has finite order at \(t=0\).
_Remark 4.24_.: The concept of ample geodesic in the sub-Riemannian setting has been introduced in [1] and it differs from Definition 4.23. However, we remark that, in sub-Riemannian manifolds, for ample geodesics in the sense of [1], \(|\det\big{(}d_{q}\phi_{t}(p,\cdot)\big{)}|\) has finite order equal to the geodesic dimension at \(p\), cf. [1, Lem. 6.27]. Thus, our definition is weaker, but enough for our purposes.
### Proof of Theorem 1.5
Let \(M\) be a smooth sub-Finsler manifold and let \(\phi_{t}\) be the \(t\)-midpoint map, defined as in (29). For ease of notation, set
\[\mathcal{M}(p,q):=\phi_{1/2}(p,q),\qquad\forall\,(p,q)\in M\times M\setminus \mathrm{Cut}(M), \tag{34}\]
which is the \(1/2\)-midpoint map, or simply the midpoint map. Reasoning as in [11, Prop. 3.1], we obtain the following result as a consequence of Corollary 4.17 and Theorem 4.22. This argument hinges upon Theorem 4.11, which establishes the existence of a geodesic without abnormal sub-segments in a sub-Finsler manifold.
**Proposition 4.25**.: Let \(M\) be a smooth sub-Finsler manifold equipped with a smooth measure \(\mathfrak{m}\) and such that \(r(p)<n:=\dim M\) for every \(p\in M\). Let \(\gamma:[0,1]\to M\) be the geodesic identified in Theorem 4.11 and let \(\varepsilon>0\). Then, there exist \(0\leq a<b\leq 1\) such that, letting \(\bar{p}:=\gamma(a)\), \(\bar{q}:=\gamma(b)\), the following statements hold:
1. \(\bar{p}\notin\mathrm{Cut}(\bar{q})\), \(\bar{q}\notin\mathrm{Cut}(\bar{p})\) and, for every \(t\in(a,b)\), we have \(\bar{p},\bar{q}\notin\mathrm{Cut}(\gamma(t))\). Moreover, for every \(t\in(a,b)\), \(\mathfrak{f}_{\gamma(t)}\) is smooth in a neighborhood of \(\bar{p}\) and in a neighborhood of \(\bar{q}\).
2. If, in addition, \(\gamma\) is ample, the midpoint map satisfies \[|\det d_{\bar{q}}\mathcal{M}(\bar{p},\cdot)|\leq(1+\varepsilon)2^{-m_{\bar{q}}},\qquad|\det d_{\bar{p}}\mathcal{M}(\cdot,\bar{q})|\leq(1+\varepsilon)2^{-m_{\bar{p}}}\] (35) where \(m_{\bar{p}}\) and \(m_{\bar{q}}\) are defined by (33) and \(m_{\bar{p}},m_{\bar{q}}\geq n+1\).
Given \(z\in M\), define the _inverse geodesic map_\(\mathcal{I}_{z}:M\setminus\operatorname{Cut}(z)\to M\) as
\[\mathcal{I}_{z}(p)=\exp_{z}(-\lambda)\qquad\text{where $\lambda\in T_{z}^{*}M$ such that $p=\exp_{z}(\lambda)$.} \tag{36}\]
We may interpret this map as the one associating to \(p\) the point \(\mathcal{I}_{z}(p)\) such that \(z\) is the midpoint of \(p\) and \(\mathcal{I}_{z}(p)\).
We prove now the main theorem of this section, which also implies Theorem 1.5. Our strategy is an adaptation to the sub-Finsler setting of the one proposed in [10].
**Theorem 4.26**.: Let \(M\) be a complete smooth sub-Finsler manifold equipped with a smooth measure \(\mathfrak{m}\) and such that \(r(p)<n:=\dim M\) for every \(p\in M\). Then, the metric measure space \((M,\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\), for every \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
Proof.: Fix \(\varepsilon>0\), \(K\in\mathbb{R}\) and \(N\in(1,\infty)\). Let \(\gamma:[0,1]\to M\) be the geodesic identified by Theorem 4.11 and assume it is contained in a coordinate chart with (sub-Finsler) diameter \(D>0\). Up to restricting the domain of the chart and the geodesic, we can also assume that
\[(1-\varepsilon)\mathscr{L}^{n}\leq\mathfrak{m}\leq(1+\varepsilon)\mathscr{L}^ {n}\qquad\text{and}\qquad\tau_{K,N}^{(1/2)}(\theta)\geq\frac{1}{2}-\varepsilon,\quad\forall\theta\leq D, \tag{37}\]
where the second inequality can be fulfilled, according to Remark 2.1. Moreover, let \(0\leq a<b\leq 1\) be as in Proposition 4.25. We proceed by contradiction and assume that \((M,\mathsf{d}_{SF},\mathfrak{m})\) satisfies the \(\mathsf{BM}(K,N)\).
First of all, suppose that \(\gamma\) is not ample. According to [12, Prop. 5.3], the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\) implies the \(\mathsf{MCP}(K,N)\) condition (in [12] this is proved for essentially non-branching metric measure spaces; however, all that is needed is a measurable selection of geodesics, which we have locally around the curve \(\gamma\)). Therefore, \((M,\mathsf{d}_{SF},\mathfrak{m})\) satisfies the \(\mathsf{MCP}(K,N)\) condition and, for the moment, assume \(K=0\). Set \(\bar{p}:=\gamma(a)\) and \(\bar{q}:=\gamma(b)\) and let \(\Omega_{\varrho}:=B_{\varrho}(\bar{q})\) for \(\varrho>0\). From the \(\mathsf{MCP}(0,N)\) condition we get
\[\mathfrak{m}\big{(}\Omega_{\varrho,t}^{\bar{p}}\big{)}\geq t^{N}\mathfrak{m}(\Omega_{\varrho}),\qquad\forall\,t\in[0,1],\,\varrho>0. \tag{38}\]
If \(\varrho\) is sufficiently small, then \(\Omega_{\varrho,t}^{\bar{p}}=\phi_{t}(\bar{p},\Omega_{\varrho})\) for \(t\in[0,1)\), therefore, employing the first estimate in (37), the inequality (38) can be reformulated as follows:
\[\frac{1+\varepsilon}{1-\varepsilon}\fint_{\Omega_{\varrho}}|\det\big{(}d_{z} \phi_{t}(\bar{p},\cdot)\big{)}|\,\mathrm{d}z\geq\frac{\mathfrak{m}(\Omega_{ \varrho,t}^{\bar{p}})}{\mathfrak{m}(\Omega_{\varrho})}\geq t^{N},\qquad\forall \,t\in[0,1),\,\varrho>0.\]
Taking the limit as \(\varrho\to 0\), and then the limit as \(t\to 0\), we find that the order of \(\big{|}\det\big{(}d_{\bar{q}}\phi_{t}(\bar{p},\cdot)\big{)}\big{|}\) should be smaller than or equal to \(N\), giving a contradiction. Finally, if \(K\neq 0\), observe that the behavior of the distortion coefficients, as \(t\to 0\), is comparable with \(t\), namely there exists a constant \(C=C(K,N,\mathsf{d}(\bar{p},\bar{q}))>0\) such that
\[\tau_{K,N}^{(t)}(\theta)\geq Ct,\qquad\text{as $t\to 0$},\qquad\forall\, \theta\in(\mathsf{d}(\bar{p},\bar{q})-\varrho,\mathsf{d}(\bar{p},\bar{q})+ \varrho).\]
Therefore, repeating the same argument that we did for the case \(K=0\), we obtain the sought contradiction.
Suppose instead that the geodesic \(\gamma\) is ample and let \(m\) be the unique midpoint between \(\bar{p}=\gamma(a)\) and \(\bar{q}=\gamma(b)\). According to item (i) of Proposition 4.25, the map \(\mathcal{I}_{m}\) is well-defined and smooth in a neighborhood of \(\bar{p}\) and \(\bar{q}\), moreover by definition \(\mathcal{I}_{m}(\bar{q})=\bar{p}\) and \(\mathcal{I}_{m}(\bar{p})=\bar{q}\). Note that \(\mathcal{I}_{m}\circ\mathcal{I}_{m}=\mathrm{id}\) (where defined), thus
\[|\det(d_{\bar{p}}\mathcal{I}_{m})|\cdot|\det(d_{\bar{q}}\mathcal{I}_{m})|= \big{|}\det\big{(}d_{\bar{q}}(\mathcal{I}_{m}\circ\mathcal{I}_{m})\big{)} \big{|}=1.\]
Therefore, at least one between \(|\det(d_{\bar{q}}\mathcal{I}_{m})|\) and \(|\det(d_{\bar{p}}\mathcal{I}_{m})|\) is greater than or equal to \(1\), without loss of generality we assume
\[|\det(d_{\bar{q}}\mathcal{I}_{m})|\geq 1.\]
Let \(B_{\varrho}:=B_{\varrho}^{eu}(\bar{q})\) be the (Euclidean) ball of radius \(\varrho>0\) centered at \(\bar{q}\). Introduce the function \(F:B_{\varrho}\times B_{\varrho}\to M\), defined as
\[B_{\varrho}\times B_{\varrho}\ni(x,y)\mapsto F(x,y):=\mathcal{M}(\mathcal{I}_ {m}(x),y).\]
Observe that, for \(\varrho\) small enough, \(F\) is well-defined and by construction \(F(x,x)=m\) for every \(x\in B_{\varrho}\). Therefore, we deduce that for every vector \(v\in T_{\bar{q}}M\cong\mathbb{R}^{n}\), the following holds:
\[0=d_{(\bar{q},\bar{q})}F(v,v)=\left(d_{\bar{p}}\mathcal{M}(\cdot,\bar{q}) \circ d_{\bar{q}}\mathcal{I}_{m}\right)v+d_{\bar{q}}\mathcal{M}(\bar{p},\cdot )\,v.\]
Since the former identity is true for every vector \(v\in\mathbb{R}^{n}\), we can conclude that
\[d_{\bar{p}}\mathcal{M}(\cdot,\bar{q})\circ d_{\bar{q}}\mathcal{I}_{m}+d_{ \bar{q}}\mathcal{M}(\bar{p},\cdot)=0,\]
and consequently, for every \(v,w\in\mathbb{R}^{n}\), we have
\[d_{(\bar{q},\bar{q})}F(v,w)=\left(d_{\bar{p}}\mathcal{M}(\cdot,\bar{q})\circ d _{\bar{q}}\mathcal{I}_{m}\right)v+d_{\bar{q}}\mathcal{M}(\bar{p},\cdot)\,w=d_ {\bar{q}}\mathcal{M}(\bar{p},\cdot)\,(w-v).\]
In particular, we obtain a Taylor expansion of the function \(F\) at the point \((\bar{q},\bar{q})\) that in coordinates takes the form:
\[\left\|F(\bar{q}+v,\bar{q}+w)-m-d_{\bar{q}}\mathcal{M}(\bar{p},\cdot)\left(w- v\right)\right\|_{eu}=o(\left\|v\right\|_{eu}+\left\|w\right\|_{eu}),\qquad \text{as $v,w\to 0$.}\]
Then, as \(v\) and \(w\) vary in \(B_{\varrho}^{eu}(0)\), \(v-w\) varies in \(B_{2\varrho}^{eu}(0)\), and we obtain that
\[F(B_{\varrho},B_{\varrho})\subseteq m+d_{\bar{q}}\mathcal{M}(\bar{p},\cdot) \left(B_{2\varrho}^{eu}(0)\right)+B_{\omega(\varrho)}^{eu}(0), \tag{39}\]
where \(\omega:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is such that \(\omega(r)=o(r)\) when \(r\to 0^{+}\). Now, consider \(A_{\varrho}:=\mathcal{I}_{m}(B_{\varrho})\) and note that by definition \(M_{1/2}(A_{\varrho},B_{\varrho})=F(B_{\varrho},B_{\varrho})\), then using (39) we conclude that, as \(\varrho\to 0\),
\[\mathscr{L}^{n} \big{(}M_{1/2}(A_{\varrho},B_{\varrho})\big{)}=\mathscr{L}^{n} \big{(}F(B_{\varrho},B_{\varrho})\big{)}\leq\mathscr{L}^{n}\Big{(}d_{\bar{q}} \mathcal{M}(\bar{p},\cdot)\left(B_{2\varrho}^{eu}(0)\right)\Big{)}+o(\varrho^{n})\] \[=|\det(d_{\bar{q}}\mathcal{M}(\bar{p},\cdot))|\cdot\omega_{n}2^{n} \varrho^{n}+o(\varrho^{n})\leq(1+\varepsilon)2^{n-m_{\bar{q}}}\omega_{n} \varrho^{n}+o(\varrho^{n})\leq\frac{1}{2}(1+\varepsilon)\omega_{n}\varrho^{n} +o(\varrho^{n})\]
where \(\omega_{n}=\mathscr{L}^{n}(B_{1}^{eu}(0))\) and the two last inequalities follow from (35) and \(m_{\bar{q}}\geq n+1\). On the other hand, it holds that \(\mathscr{L}^{n}(B_{\varrho})=\omega_{n}\varrho^{n}\) and, as \(\varrho\to 0\),
\[\mathscr{L}^{n}(A_{\varrho})=\mathscr{L}^{n}(\mathcal{I}_{m}(B_{\varrho}))= \left(|\det(d_{\bar{q}}\mathcal{I}_{m})|+O(\varrho)\right)\mathscr{L}^{n}(B_{ \varrho})\geq\omega_{n}\varrho^{n}+o(\varrho^{n}).\]
Taking into account the first estimate of (37), we deduce the following inequalities for the measure \(\mathfrak{m}\), as \(\varrho\to 0\):
\[\mathfrak{m}\big{(}M_{1/2}(A_{\varrho},B_{\varrho})\big{)}\leq\frac{1}{2}(1+\varepsilon)^{2}\omega_{n}\varrho^{n}+o(\varrho^{n}),\qquad\mathfrak{m}(A_{\varrho}),\,\mathfrak{m}(B_{\varrho})\geq(1-\varepsilon)\omega_{n}\varrho^{n}+o(\varrho^{n}).\]
Finally, if \(\varepsilon\) is small enough we can find \(\varrho\) sufficiently small such that
\[\mathfrak{m}\big{(}M_{1/2}(A_{\varrho},B_{\varrho})\big{)}^{\frac {1}{N}} <\left(\frac{1}{2}-\varepsilon\right)\mathfrak{m}(A_{\varrho})^{ \frac{1}{N}}+\left(\frac{1}{2}-\varepsilon\right)\mathfrak{m}(B_{\varrho})^{ \frac{1}{N}}\] \[\leq\tau_{K,N}^{(1/2)}\big{(}\Theta(A_{\varrho},B_{\varrho}) \big{)}\,\mathfrak{m}(A_{\varrho})^{\frac{1}{N}}+\tau_{K,N}^{(1/2)}\big{(} \Theta(A_{\varrho},B_{\varrho})\big{)}\,\mathfrak{m}(B_{\varrho})^{\frac{1}{N}},\]
which contradicts the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\).
_Remark 4.27_.: Observe that the argument presented in this section is local, around the geodesic without abnormal sub-segments. Thus, repeating the same proof, we can extend Theorem 4.26 if the assumption on the rank holds on an open set \(V\subset M\), namely \(r(p)<n\) for every \(p\in V\).
## 5 Failure of the \(\mathsf{CD}(K,N)\) condition in the sub-Finsler Heisenberg group
In this section, we disprove the curvature-dimension condition in the sub-Finsler Heisenberg group, cf. Theorem 1.8. Our strategy relies on the explicit expression of geodesics in terms of convex trigonometric functions, found in [10].
### 5.1 Convex trigonometry
In this section, we recall the definition and main properties of the convex trigonometric functions, first introduced in [10]. Let \(\Omega\subset\mathbb{R}^{2}\) be a convex, compact set such that \(O:=(0,0)\in\mathrm{Int}(\Omega)\), and denote by \(\mathbb{S}\) its surface area.
**Definition 5.1**.: Let \(\theta\in\mathbb{R}\) denote a generalized angle. If \(0\leq\theta<2\mathbb{S}\) define \(P_{\theta}\) as the point on the boundary of \(\Omega\), such that the area of the sector of \(\Omega\) between the rays \(Ox\) and \(OP_{\theta}\) is \(\frac{1}{2}\theta\) (see Figure 2). Moreover, define \(\cos_{\Omega}(\theta)\) and \(\sin_{\Omega}(\theta)\) as the coordinates of the point \(P_{\theta}\), i.e.
\[P_{\theta}=\big{(}\cos_{\Omega}(\theta),\sin_{\Omega}(\theta)\big{)}.\]
Finally, extend these trigonometric functions outside the interval \([0,2\mathbb{S})\) by periodicity (of period \(2\mathbb{S}\)), so that, for every \(k\in\mathbb{Z}\),
\[\cos_{\Omega}(\theta)=\cos_{\Omega}(\theta+2k\mathbb{S}),\quad\sin_{\Omega}( \theta)=\sin_{\Omega}(\theta+2k\mathbb{S})\quad\text{and}\quad P_{\theta}=P_{ \theta+2k\mathbb{S}}.\]
Observe that by definition \(\sin_{\Omega}(0)=0\) and that when \(\Omega\) is the Euclidean unit ball we recover the classical trigonometric functions.
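Definition 5.1 is effectively algorithmic: the generalized angle is twice the swept sector area, so \(P_{\theta}\) can be computed by inverting the area function along \(\partial\Omega\). The following minimal numerical sketch is our illustration (not part of the original text; it assumes NumPy and SciPy are available) for \(\Omega\) the unit ball of the \(\ell^{p}\) norm; for \(p=2\) it recovers the classical trigonometric functions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def convex_trig(p=4.0, n_grid=200001):
    """P(theta) = (cos_Omega(theta), sin_Omega(theta)) for the l^p unit ball."""
    alpha = np.linspace(0.0, 2*np.pi, n_grid)      # classical polar angle
    # radial profile of the l^p unit sphere: R(a)*(cos a, sin a) has norm 1
    R = (np.abs(np.cos(alpha))**p + np.abs(np.sin(alpha))**p)**(-1.0/p)
    # generalized angle: theta(alpha) = 2 * sector area = int_0^alpha R^2
    theta = cumulative_trapezoid(R**2, alpha, initial=0.0)
    two_S = theta[-1]                              # total = 2 * area(Omega)
    def P(th):
        th = np.mod(th, two_S)                     # periodic extension
        a = np.interp(th, theta, alpha)            # invert theta(alpha)
        return np.interp(a, alpha, R)*np.array([np.cos(a), np.sin(a)])
    return P, two_S

P2, two_S2 = convex_trig(p=2.0)                    # Euclidean disk: 2S = 2*pi
assert abs(two_S2 - 2*np.pi) < 1e-8
assert np.allclose(P2(1.0), [np.cos(1.0), np.sin(1.0)], atol=1e-4)
```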
Consider now the polar set:
\[\Omega^{\circ}:=\{(p,q)\in\mathbb{R}^{2}\,:\,px+qy\leq 1\text{ for every }(x,y)\in\Omega\},\]
which is itself a convex, compact set such that \(O\in\mathrm{Int}(\Omega^{\circ})\). Therefore, we can consider the trigonometric functions \(\sin_{\Omega^{\circ}}\) and \(\cos_{\Omega^{\circ}}\). Observe that, by definition of polar set, it holds that
\[\cos_{\Omega}(\theta)\cos_{\Omega^{\circ}}(\psi)+\sin_{\Omega}(\theta)\sin_{ \Omega^{\circ}}(\psi)\leq 1,\qquad\text{for every }\theta,\psi\in\mathbb{R}. \tag{40}\]
**Definition 5.2**.: We say that two angles \(\theta,\psi\in\mathbb{R}\)_correspond_ to each other and write \(\theta\stackrel{{\Omega}}{{\longleftrightarrow}}\psi\) if the vector \(Q_{\psi}:=(\cos_{\Omega^{\circ}}(\psi),\sin_{\Omega^{\circ}}(\psi))\) determines a half-plane containing \(\Omega\) (see Figure 2).
By the bipolar theorem [10, Thm. 14.5], it holds that \(\Omega^{\circ\circ}=\Omega\), and this allows us to prove the following symmetry property for the correspondence just defined.
**Proposition 5.3**.: Let \(\Omega\subset\mathbb{R}^{2}\) be a convex and compact set, with \(O\in\text{Int}(\Omega)\). Given two angles \(\theta,\psi\in\mathbb{R}\), \(\theta\stackrel{{\Omega}}{{\longleftrightarrow}}\psi\) if and only if \(\psi\stackrel{{\Omega^{\circ}}}{{\longleftrightarrow}}\theta\). Moreover, the following analogous of the Pythagorean equality holds:
\[\theta\stackrel{{\Omega}}{{\longleftrightarrow}}\psi\qquad \text{ if and only if}\qquad\cos_{\Omega}(\theta)\cos_{\Omega^{\circ}}(\psi)+\sin_{ \Omega}(\theta)\sin_{\Omega^{\circ}}(\psi)=1. \tag{41}\]
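Continuing the numerical sketch above (again our illustration, not the paper's): for the \(\ell^{p}\) ball the correspondence is explicit, since \(Q_{\psi}\) is the differential of the norm at \(P_{\theta}\) (cf. Lemma 3.6), so the Pythagorean identity (41) can be checked directly.

```python
p, q = 4.0, 4.0/3.0                       # dual exponents: 1/p + 1/q = 1
P, _ = convex_trig(p)                     # from the previous sketch
x, y = P(0.7)                             # P_theta on the l^p unit sphere
gx = np.abs(x)**(p-1)*np.sign(x)          # Q_psi = d_{P_theta} ||.||_p
gy = np.abs(y)**(p-1)*np.sign(y)
assert abs(x*gx + y*gy - 1.0) < 1e-3      # Pythagorean identity (41)
assert abs((abs(gx)**q + abs(gy)**q)**(1.0/q) - 1.0) < 1e-3  # Q_psi on bd(Omega deg)
```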
The correspondence \(\theta\stackrel{{\Omega}}{{\longleftrightarrow}}\psi\) is not one-to-one in general: in fact, if the boundary of \(\Omega\) has a corner at the point \(P_{\theta}\), the angle \(\theta\) corresponds to an interval of angles (in every period). Nonetheless, we can define a monotone multi-valued map \(C^{\circ}\) that maps an angle \(\theta\) to the maximal closed interval of angles corresponding to \(\theta\). This function has the following periodicity property:
\[C^{\circ}(\theta+2\mathbb{S}k)=C^{\circ}(\theta)+2\mathbb{S}^{\circ}k\qquad \text{ for every }k\in\mathbb{Z},\]
where \(\mathbb{S}^{\circ}\) denotes the surface area of \(\Omega^{\circ}\). If \(\Omega\) is strictly convex, then the map \(C^{\circ}\) is strictly monotone, while if the boundary of \(\Omega\) is \(C^{1}\), then \(C^{\circ}\) is a (single-valued) map from \(\mathbb{R}\) to \(\mathbb{R}\) and it is continuous. Analogously, we can define the map \(C_{\circ}\) associated to the correspondence \(\psi\stackrel{{\Omega^{\circ}}}{{\longleftrightarrow}}\theta\). Proposition 5.3 guarantees that \(C_{\circ}\circ C^{\circ}=C^{\circ}\circ C_{\circ}=\text{id}\).
**Proposition 5.4**.: Let \(\Omega\subset\mathbb{R}^{2}\) as above. The trigonometric functions \(\sin_{\Omega}\) and \(\cos_{\Omega}\) are Lipschitz and therefore differentiable almost everywhere. At every differentiability point \(\theta\) of both functions, there exists a unique angle \(\psi\) corresponding to \(\theta\) and it holds that
\[\sin_{\Omega}^{\prime}(\theta)=\cos_{\Omega^{\circ}}(\psi)\qquad\text{and} \qquad\cos_{\Omega}^{\prime}(\theta)=-\sin_{\Omega^{\circ}}(\psi).\]
Naturally, the analogous result holds for the trigonometric functions \(\sin_{\Omega^{\circ}}\) and \(\cos_{\Omega^{\circ}}\).
As a corollary of the previous proposition, we obtain the following convexity properties for the trigonometric functions.
**Corollary 5.5**.: The functions \(\sin_{\Omega}\) and \(\cos_{\Omega}\) are concave in every interval in which they are non-negative and convex in every interval in which they are non-positive.
These convexity properties of the trigonometric functions will play a small but fundamental role in Section 5.3, in the form of the following corollaries.
**Corollary 5.6**.: Given a non-null constant \(k\in\mathbb{R}\) and every angle \(\theta\), consider the function
\[g:\mathbb{R}\to\mathbb{R};\qquad g(t):=\sin_{\Omega}(\theta)\cos_{\Omega}( \theta+kt)-\cos_{\Omega}(\theta)\sin_{\Omega}(\theta+kt). \tag{42}\]
If \(k>0\) this function is convex for positive values of \(t\) and concave for negative values of \(t\), locally around \(0\). Vice versa, if \(k<0\) it is concave for positive values of \(t\) and convex for negative values of \(t\), locally around \(0\).
Proof.: The function \(g(t)\) can be seen as the cross product (a \(2\times 2\) determinant) of two vectors in \(\mathbb{R}^{2}\), therefore it is invariant under rotations. In particular, we consider the rotation that sends \(\theta\) to \(0\): this maps \(P_{\theta}\) to the positive \(x\)-axis and the set \(\Omega\) to a convex, compact set \(\tilde{\Omega}\subset\mathbb{R}^{2}\). Then, \(g(t)\) in (42) is equal to the function
\[t\mapsto-\cos_{\tilde{\Omega}}(0)\sin_{\tilde{\Omega}}(kt).\]
The conclusion immediately follows from Corollary 5.5.
**Corollary 5.7**.: Given a non-null constant \(k\in\mathbb{R}\) and every angle \(\psi\), the function
\[h:\mathbb{R}\to\mathbb{R};\qquad h(t):=1-\sin_{\Omega^{\circ}}(\psi)\sin_{ \Omega}\big{(}(\psi+kt)_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\psi)\cos_{\Omega }\big{(}(\psi+kt)_{\circ}\big{)}.\]
is non-decreasing for positive values of \(t\) and non-increasing for negative values of \(t\), locally around \(0\).
Proof.: Note that \(h\) is the derivative of the function
\[\mathbb{R}\ni t\mapsto kt+\sin_{\Omega^{\circ}}(\psi)\cos_{\Omega^{\circ}}( \psi+kt)-\cos_{\Omega^{\circ}}(\psi)\sin_{\Omega^{\circ}}(\psi+kt), \tag{43}\]
divided by \(k\). The thesis follows from Corollary 5.6, since the function (43) is the sum of a linear function and of a function of the type (42).
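For instance, in the Euclidean case (classical trigonometric functions, with \(C_{\circ}=\mathrm{id}\)) one computes directly
\[g(t)=\sin(\theta)\cos(\theta+kt)-\cos(\theta)\sin(\theta+kt)=-\sin(kt)\qquad\text{and}\qquad h(t)=1-\cos(kt),\]
which exhibit exactly the local convexity and monotonicity around \(t=0\) stated in Corollary 5.6 and Corollary 5.7.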
In the following we are going to consider the trigonometric functions associated to the unit ball of a strictly convex norm \(\|\cdot\|\) on \(\mathbb{R}^{2}\), i.e. \(\Omega:=B^{\|\cdot\|}_{1}(0)\). In this case, the polar set \(\Omega^{\circ}\) is the unit ball \(B^{\|\cdot\|_{*}}_{1}(0)\) of the dual norm \(\|\cdot\|_{*}\). Moreover, according to the Pythagorean identity (41), if \(\theta\stackrel{{\Omega}}{{\longleftrightarrow}}\psi\), then \(Q_{\psi}\) is a dual vector of \(P_{\theta}\). In particular, if \(\|\cdot\|\) is a \(C^{1}\) norm, Lemma 3.6 ensures that
\[(\cos_{\Omega^{\circ}}(\psi),\sin_{\Omega^{\circ}}(\psi))=Q_{\psi}=d_{P_{ \theta}}\|\cdot\|=d_{(\cos_{\Omega}(\theta),\sin_{\Omega}(\theta))}\|\cdot\|.\]
We conclude this section by recalling a well-known result on the relation between a norm \(\|\cdot\|\) and its dual \(\|\cdot\|_{*}\). This will be employed in the subsequent sections, as the geodesics of the sub-Finsler Heisenberg group, equipped with the norm \(\|\cdot\|\), follow the shape of the boundary of \(B^{\|\cdot\|_{*}}_{1}(0)\), cf. Theorem 5.9.
**Proposition 5.8**.: Let \(\|\cdot\|\) be a norm on \(\mathbb{R}^{2}\) and let \(\|\cdot\|_{*}\) be its dual norm. Then:
* \(\|\cdot\|_{*}\) is a strictly convex norm if and only if \(\|\cdot\|\) is a \(C^{1}\) norm;
* \(\|\cdot\|_{*}\) is a strongly convex norm if and only if \(\|\cdot\|\) is a \(C^{1,1}\) norm.
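A standard family illustrating both items (our example, not stated in the text) is given by the \(\ell^{p}\) norms \(\|(x,y)\|_{p}:=\big(|x|^{p}+|y|^{p}\big)^{1/p}\) on \(\mathbb{R}^{2}\), whose duals are the \(\ell^{q}\) norms with \(\frac{1}{p}+\frac{1}{q}=1\). For \(p\in(1,\infty)\), \(\|\cdot\|_{p}\) is \(C^{1}\) and strictly convex; it is \(C^{1,1}\) precisely when \(p\geq 2\) and strongly convex precisely when \(p\leq 2\) (the expansion \(\|(s,1)\|_{p}=1+\frac{|s|^{p}}{p}+o(|s|^{p})\) near \(s=0\) shows both thresholds). Since \(p\geq 2\) if and only if \(q\leq 2\), both items of Proposition 5.8 can be read off from this family.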
### 5.2 Geodesics in the Heisenberg group
We present here the sub-Finsler Heisenberg group and study its geodesics. Let us consider the Lie group \(M=\mathbb{R}^{3}\), equipped with the non-commutative group law, defined by
\[(x,y,z)\star(x^{\prime},y^{\prime},z^{\prime})=\bigg{(}x+x^{\prime},y+y^{ \prime},z+z^{\prime}+\frac{1}{2}(xy^{\prime}-x^{\prime}y)\bigg{)},\qquad \forall\,(x,y,z),(x^{\prime},y^{\prime},z^{\prime})\in\mathbb{R}^{3},\]
with identity element \(\mathrm{e}=(0,0,0)\). In the notation of Section 2.2, we define the following morphism of bundles
\[\xi:M\times\mathbb{R}^{2}\to TM,\qquad\xi(x,y,z;u_{1},u_{2})=\bigg{(}x,y,z;u_{ 1},u_{2},\frac{1}{2}(u_{2}x-u_{1}y)\bigg{)}.\]
The associated distribution of rank \(2\) is spanned by the following left-invariant vector fields:
\[X_{1}=\partial_{x}-\frac{y}{2}\partial_{z},\qquad X_{2}=\partial_{y}+\frac{x} {2}\partial_{z},\]
namely \(\mathcal{D}=\mathrm{span}\{X_{1},X_{2}\}\). It can be easily seen that \(\mathcal{D}\) is bracket-generating. Then, letting \(\|\cdot\|:\mathbb{R}^{2}\to\mathbb{R}_{+}\) be a norm, the _sub-Finsler Heisenberg group_\(\mathbb{H}\) is the Lie group \(M\) equipped with the sub-Finsler structure \((\xi,\|\cdot\|)\). By construction, also the resulting norm on the distribution is left-invariant, so that the left-translations defined by
\[L_{p}:\mathbb{H}\to\mathbb{H};\qquad L_{p}(q):=p\star q, \tag{44}\]
are isometries for every \(p\in\mathbb{H}\).
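As a quick symbolic sanity check of the bracket-generating claim, one can verify that \([X_{1},X_{2}]=\partial_{z}\), so that \(X_{1}\), \(X_{2}\), \([X_{1},X_{2}]\) span the whole tangent space at every point. A minimal sketch of ours, assuming SymPy is available:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)
X1 = lambda g: sp.diff(g, x) - y/2*sp.diff(g, z)   # X1 = d_x - (y/2) d_z
X2 = lambda g: sp.diff(g, y) + x/2*sp.diff(g, z)   # X2 = d_y + (x/2) d_z
bracket = X1(X2(f)) - X2(X1(f))                    # [X1, X2] applied to f
assert sp.simplify(bracket - sp.diff(f, z)) == 0   # i.e. [X1, X2] = d_z
```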
In this setting, the geodesics were originally studied in [10] and [1] for the three-dimensional case and in [11] for general left-invariant structures on higher-dimensional Heisenberg groups. We recall below the main results of [1] for strictly convex norms.
**Theorem 5.9** ([1, Thm. 1, Thm. 2]).: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex norm. Then, the following statements hold:
1. for any \(q\in\mathbb{H}\setminus\{x=y=0\}\), there exists a unique geodesic \(\gamma:[0,1]\to\mathbb{H}\) joining the origin and \(q\).
2. \(\gamma:[0,T]\to\mathbb{H}\) is a geodesic starting at the origin if and only if it satisfies the Pontryagin's maximum principle for the time-optimal control problem: \[\begin{cases}\dot{\gamma}(t)=u_{1}(t)X_{1}(\gamma(t))+u_{2}(t)X_{2}(\gamma(t) ),\\ u(t)\in B_{1}^{\|\cdot\|}(0),\quad\gamma(0)=q_{0},\quad\text{and}\quad\gamma(T )=q_{1},\\ T\to\min\,.\end{cases}\]
_Remark 5.10_.: Note that the geodesics in [1] are found solving the Pontryagin maximum principle of Theorem 3.2 for the time-optimal problem. The latter is an equivalent formulation of (P), however it produces arc-length parameterized geodesics.
The next step is to compute explicitly the exponential map. In [11], the author provides an explicit expression for geodesics starting at the origin, using the convex trigonometric functions presented in Section 5.1. Since therein the author solves the time-optimal problem, we prefer to solve explicitly the Hamiltonian system (H), in the case of the sub-Finsler Heisenberg group.
**Proposition 5.11** ([11, Thm. 4]).: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex norm. Let \(\gamma:[0,1]\to\mathbb{H}\) be the projection of a (non-trivial) normal extremal \((\lambda_{t})_{t\in[0,1]}\) starting at the origin. Then \(\gamma(t)=(x(t),y(t),z(t))\), with
\[\begin{cases}x(t)=\dfrac{r}{\omega}\left(\sin_{\Omega^{\circ}}(\phi+\omega t)-\sin_{\Omega^{\circ}}(\phi)\right),\\ y(t)=-\dfrac{r}{\omega}\left(\cos_{\Omega^{\circ}}(\phi+\omega t)-\cos_{\Omega^{\circ}}(\phi)\right),\\ z(t)=\dfrac{r^{2}}{2\omega^{2}}\left(\omega t+\cos_{\Omega^{\circ}}(\phi+\omega t)\sin_{\Omega^{\circ}}(\phi)-\sin_{\Omega^{\circ}}(\phi+\omega t)\cos_{\Omega^{\circ}}(\phi)\right),\end{cases}\]
for some \(\phi\in[0,2\mathbb{S}^{\circ})\), \(\omega\in\mathbb{R}\setminus\{0\}\) and \(r>0\). If \(\omega=0\), then
\[\begin{cases}x(t)=\left(r\cos_{\Omega}(\phi_{\circ})\right)t,\\ y(t)=\left(r\sin_{\Omega}(\phi_{\circ})\right)t,\\ z(t)=0.\end{cases} \tag{45}\]
Proof.: Firstly, we characterize the sub-Finsler Hamiltonian in the sub-Finsler Heisenberg group. Note that, without assuming additional regularity on \(\|\cdot\|\), we cannot directly apply Lemma 3.7. Nevertheless, we can still obtain an analogous result by means of convex trigonometry. Indeed, let \(h_{i}(\lambda):=\langle\lambda,X_{i}(\pi(\lambda))\rangle\) for \(i=1,2\), then
\[H(\lambda):=\max_{u\in\mathbb{R}^{2}}\left(\sum_{i=1}^{2}u_{i}h_{i}(\lambda)-\frac{\|u\|^{2}}{2}\right),\qquad\forall\,\lambda\in\ T^{*}\mathbb{H}.\]
We introduce polar coordinates on \(\mathbb{R}^{2}\) associated with \(\left\lVert\cdot\right\rVert\) and its dual norm \(\left\lVert\cdot\right\rVert_{*}\), namely \((u_{1},u_{2})\mapsto(\rho,\theta)\) and \((h_{1},h_{2})\mapsto(\zeta,\psi)\) where
\[\begin{cases}u_{1}=\rho\cos_{\Omega}(\theta),\\ u_{2}=\rho\sin_{\Omega}(\theta),\end{cases}\quad\quad\text{and}\quad\quad\begin{cases}h_{1}=\zeta\cos_{\Omega^{\circ}}(\psi),\\ h_{2}=\zeta\sin_{\Omega^{\circ}}(\psi).\end{cases} \tag{46}\]
Hence, the sub-Finsler Hamiltonian becomes
\[H(\lambda) =\max_{u\in\mathbb{R}^{2}}\left(\sum_{i=1}^{2}u_{i}h_{i}(\lambda)-\frac{\left\lVert u\right\rVert^{2}}{2}\right) \tag{47}\] \[=\max_{\begin{subarray}{c}\theta\in[0,2\mathbb{S}]\\ \rho>0\end{subarray}}\left(\rho\zeta\left(\cos_{\Omega}(\theta)\cos_{\Omega^{ \circ}}(\psi)+\sin_{\Omega}(\theta)\sin_{\Omega^{\circ}}(\psi)\right)-\frac{ \rho^{2}}{2}\right)\leq\max_{\rho>0}\left(\rho\zeta-\frac{\rho^{2}}{2}\right)= \frac{\zeta^{2}}{2},\]
where the last inequality is a consequence of (40). Moreover, we attain the equality in (47) if and only if \(\rho=\zeta\) and \(\psi=C^{\circ}(\theta)\). Therefore, since \(\zeta=\left\lVert\hat{\lambda}\right\rVert_{*}\) with \(\hat{\lambda}=(h_{1}(\lambda),h_{2}(\lambda))\), we conclude that
\[H(\lambda)=\frac{1}{2}\big{\lVert}\hat{\lambda}\big{\rVert}_{*}^{2},\qquad \forall\,\lambda\in T^{*}\mathbb{H}\setminus\operatorname{Ann}(\mathcal{D}),\]
and the maximum is attained at the control \(u=\hat{\lambda}^{*}\). Furthermore, \(H\in C^{1}(T^{*}M)\) by strict convexity of \(\left\lVert\cdot\right\rVert\), cf. Proposition 5.8. We write the system (21) in coordinates \((x,y,z;h_{1},h_{2},h_{3})\) for the cotangent bundle, where \(h_{3}(\lambda):=\langle\lambda,\partial_{z}\rangle\). The vertical part of (21) becomes
\[\begin{cases}\dot{h}_{1}(t)=\|\hat{\lambda}_{t}\|_{*}d_{\hat{ \lambda}_{t}}\|\cdot\|_{*}\cdot\left(0,-h_{3}(t)\right),\\ \dot{h}_{2}(t)=\|\hat{\lambda}_{t}\|_{*}d_{\hat{\lambda}_{t}}\|\cdot\|_{*} \cdot\left(h_{3}(t),0\right),\\ \dot{h}_{3}(t)=0.\end{cases} \tag{48}\]
Let \((\lambda_{t})_{t\in[0,1]}\) be a normal extremal with associated maximal control given by \(t\mapsto u(t)\), then we use Lemma 3.6 to deduce that \(\|\hat{\lambda}_{t}\|_{*}d_{\hat{\lambda}_{t}}\|\cdot\|_{*}=\hat{\lambda}_{t }^{*}=u(t)\). Therefore, letting \(h_{3}(t)\equiv\omega\in\mathbb{R}\), we may rewrite (48) as
\[\begin{cases}\dot{h}_{1}(t)=-\omega\,u_{2}(t),\\ \dot{h}_{2}(t)=\omega\,u_{1}(t).\end{cases} \tag{49}\]
To solve this system, we use the polar coordinates (46): letting \(t\mapsto(\rho(t),\psi(t))\) be the curve representing \(\hat{\lambda}_{t}=(h_{1}(t),h_{2}(t))\), we deduce that \(\rho(t)\) and \(\psi(t)\) are absolutely continuous and satisfy
\[\rho(t)=\big{\lVert}\hat{\lambda}_{t}\big{\rVert}_{*},\qquad\dot{\psi}(t)= \frac{h_{1}(t)\dot{h}_{2}(t)-\dot{h}_{1}(t)h_{2}(t)}{\rho^{2}(t)}.\]
We may compute explicitly \(\dot{\rho}(t)\) and \(\dot{\psi}(t)\), using once again Lemma 3.6, the system (49) and identity (41):
\[\dot{\rho}(t)=d_{\hat{\lambda}_{t}}\|\cdot\|_{*}\cdot(\dot{h}_{1}(t),\dot{h}_ {2}(t))=\frac{\omega}{\|\hat{\lambda}_{t}\|_{*}}u(t)\cdot\left(-u_{2}(t),u_{1 }(t)\right)=0,\qquad\dot{\psi}(t)=\omega.\]
Thus, integrating the above identities, we obtain \(\rho(t)\equiv r\) and \(\psi(t)=\omega t+\phi\) for some \(r>0\) and \(\phi\in[0,2\mathbb{S}^{\circ})\). Finally, we find an explicit expression for the maximal control:
\[u(t)=\left(r\cos_{\Omega}(C_{\circ}(\phi+\omega t)),r\sin_{\Omega}(C_{\circ}( \phi+\omega t))\right).\]
From this, we may explicitly integrate the horizontal part of the Hamiltonian system, obtaining the desired expression. In particular, if \(\omega=0\) we immediately obtain (45). If \(\omega\neq 0\), we may employ Proposition 5.4 to conclude.
As \((\mathbb{H},\mathsf{d}_{SF})\) is complete, normal extremals can be extended to \(\mathbb{R}\), according to Proposition 3.10. Thus, we may define the (extended) exponential map at the origin on the whole \(T_{0}^{*}\mathbb{H}\times\mathbb{R}\):
\[G:\big{(}[0,2\mathbb{S}^{\circ})\times\mathbb{R}\times[0,\infty) \big{)}\times\mathbb{R} \longrightarrow\mathbb{H}, \tag{50}\] \[(\phi,\omega,r;t) \longmapsto\big{(}x(\phi,\omega,r;t),y(\phi,\omega,r;t),z(\phi, \omega,r;t)\big{)},\]
where \((x(\phi,\omega,r;t),y(\phi,\omega,r;t),z(\phi,\omega,r;t))\) correspond to the curve \((x(t),y(t),z(t))\) defined by Proposition 5.11 with initial datum \((\phi,\omega,r)\) and with the understanding that \(G(\phi,\omega,0;t)\equiv 0\). By the properties of the convex trigonometric functions, \(G\) is a \(C^{1}\) map for \(\omega\neq 0\). Moreover, thanks to Theorem 5.9, for every initial datum \((\phi,\omega,r)\), the curve \(t\mapsto G(\phi,\omega,r;t)\) is a geodesic between its endpoints for sufficiently small times. More precisely, it is minimal for \(|t|<t^{*}=t^{*}(\phi,\omega,r)\), where \(t^{*}>0\) is the first positive time such that \(G(\phi,\omega,r;t^{*})\) lies on the \(z\)-axis. In particular, a direct computation shows that
\[t^{*}=\begin{cases}\frac{2\mathbb{S}^{\circ}}{|\omega|},&\text{if }\omega\neq 0, \\ \infty,&\text{if }\omega=0.\end{cases}\]
We conclude this section by highlighting a property of geodesics in the Heisenberg group that will be relevant in our analysis. For the sake of notation, denote by \(\Omega^{\circ}_{(\phi,\omega,r)}\) the following transformation of \(\Omega^{\circ}=B_{1}^{\|\cdot\|_{*}}(0)\):
\[\Omega^{\circ}_{(\phi,\omega,r)}:=R_{-\pi/2}\left[\frac{r}{\omega}(\Omega^{ \circ}-(\cos_{\Omega^{\circ}}(\phi),\sin_{\Omega^{\circ}}(\phi)))\right],\]
where \(R_{-\pi/2}\) is the rotation of the plane by the angle \(-\pi/2\).
**Proposition 5.12** ([1, Thm. 1]).: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex norm and let \(\gamma:[0,1]\to\mathbb{H}\) be a geodesic starting at the origin, with \(\gamma(t)=(x(t),y(t),z(t))\). Then, the curve \(t\mapsto(x(t),y(t))\) is either a straight line or belongs to the boundary of \(\Omega^{\circ}_{(\phi,\omega,r)}\). Moreover, for every \(t\in[0,1]\), \(z(t)\) equals the oriented area that is swept by the vector joining \((0,0)\) with \((x(s),y(s))\), for \(s\in[0,t]\).
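For the Euclidean norm (\(\Omega=\Omega^{\circ}\) the unit disk, classical trigonometric functions, \(\mathbb{S}^{\circ}=\pi\)), both statements can be checked numerically: the planar projection of the geodesic of Proposition 5.11 is a circular arc, closing up at \(\omega t=2\mathbb{S}^{\circ}\) in accordance with the cut time \(t^{*}\) above, and \(z(t)\) is the swept oriented area. A minimal sketch of ours, assuming NumPy:

```python
import numpy as np

phi, omega, r = 0.3, 2.0, 1.5
t = np.linspace(0.0, 1.0, 4001)
x = r/omega*(np.sin(phi + omega*t) - np.sin(phi))
y = -r/omega*(np.cos(phi + omega*t) - np.cos(phi))
z = r**2/(2*omega**2)*(omega*t + np.cos(phi + omega*t)*np.sin(phi)
                       - np.sin(phi + omega*t)*np.cos(phi))
# the projection lies on a circle of radius r/|omega| through the origin
cx, cy = -r/omega*np.sin(phi), r/omega*np.cos(phi)
assert np.allclose(np.hypot(x - cx, y - cy), r/abs(omega))
# z(t) equals the oriented area swept by the segment from (0,0) to (x,y)
swept = 0.5*np.cumsum(x[:-1]*np.diff(y) - y[:-1]*np.diff(x))
assert np.max(np.abs(swept - z[1:])) < 1e-5
```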
### 5.3 Failure of the \(\mathsf{CD}(K,N)\) condition for \(C^{1,1}\)-norms
In this section we contradict the validity of the \(\mathsf{CD}(K,N)\) condition in the sub-Finsler Heisenberg group, equipped with a strictly convex and \(C^{1,1}\) norm and with a smooth measure. The strategy follows the blueprint of the one presented in Section 4.4. The main issue we have to address here is the low regularity (cf. Remark 5.25) of the midpoint and inverse geodesic maps of (29) and (36). Nevertheless, using the explicit expression of geodesics presented in Proposition 5.11, we successfully overcome these challenges through a series of technical lemmas, culminating in Corollary 5.20, Proposition 5.23 and Theorem 5.24.
Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a \(C^{1,1}\) and strictly convex norm \(\|\cdot\|\). According to Proposition 5.8, the dual norm \(\|\cdot\|_{*}\) is \(C^{1}\) and strongly convex. Thus, in the notations of Section 5.1, the correspondences \(C^{\circ}\) and \(C_{\circ}\) are continuous functions. In order to ease the notation, in this section we sometimes use the shorthands:
\[\theta^{\circ}=C^{\circ}(\theta)\quad\text{and}\quad\psi_{\circ}=C_{\circ}( \psi),\qquad\forall\,\theta,\psi\in\mathbb{R}.\]
Alexandrov's theorem ensures that the dual norm \(\|\cdot\|_{*}\) admits a second derivative and a second-order Taylor expansion almost everywhere; we denote by \(D_{*}\subset\mathbb{R}^{2}\) its set of twice differentiability.
**Proposition 5.13**.: Let \(\psi\in[0,2\mathbb{S}^{\circ})\) be an angle such that \(Q_{\psi}\in D_{*}\), then the function \(C_{\circ}\) is differentiable at \(\psi\) with positive derivative.
Proof.: Consider a vector \(v\in\mathbb{R}^{2}\) orthogonal to \(d_{Q_{\psi}}\|\cdot\|_{*}\) such that \(\left\|v\right\|_{eu}=1\). Then, since \(Q_{\psi}\in D_{*}\), there exists a constant \(C\in\mathbb{R}\) such that
\[\left\|Q_{\psi}+sv\right\|_{*}=1+Cs^{2}+o(s^{2}),\qquad\text{as $s\to 0$.} \tag{51}\]
Observe that, since the norm \(\left\|\cdot\right\|_{*}\) is strongly convex, the constant \(C\) is strictly positive. Consider the curve
\[s\mapsto x(s):=\frac{Q_{\psi}+sv}{\left\|Q_{\psi}+sv\right\|_{*}},\]
which by definition is a parametrization of an arc of the unit sphere \(S_{1}^{\left\|\cdot\right\|_{*}}(0)=\partial\Omega^{\circ}\). Call \(A(s)\) the signed area of the sector of \(\Omega^{\circ}\) between the rays \(OQ_{\psi}\) and \(Ox(s)\) (see Figure 3). As a consequence of (51), we deduce that
\[A(s)=\frac{1}{2}ks+o(s^{2}),\qquad\text{as $s\to 0$,}\]
where \(k\) is the scalar product between \(Q_{\psi}\) and \(v^{\perp}\), that is the vector obtained by rotating \(v\) with an angle of \(-\frac{\pi}{2}\). In fact, the first-order term \(\frac{1}{2}ks\) is the area of the triangle of vertices \(O\), \(Q_{\psi}\) and \(Q_{\psi}+sv\), while the error term is controlled by the area of the triangle of vertices \(x(s)\), \(Q_{\psi}\) and \(Q_{\psi}+sv\). The latter is an \(o(s^{2})\) as \(s\to 0\), thanks to (51). In particular, letting \(\psi(s)\) be the angle such that \(x(s)=Q_{\psi(s)}\), by definition of generalized angles, it holds that
\[\psi(s)-\psi=2A(s)=ks+o(s^{2}),\qquad\text{as $s\to 0$.}\]
Up to substituting the vector \(v\) with \(-v\), we can assume \(k>0\). Then, in order to conclude, it is enough to prove that the function \(s\mapsto C_{\circ}(\psi(s))\) is differentiable in \(s=0\) with positive derivative.
First of all, by our choice of \(k>0\), \(s\mapsto C_{\circ}(\psi(s))\) is monotone non-decreasing close to \(s=0\), being a composition of monotone non-decreasing functions. Second of all, we can show that it has a first-order expansion. To this aim, note that the curve
\[s\mapsto y(s):=d_{x(s)}\|\cdot\|_{*}\]
is a parametrization of an arc of the sphere \(S_{1}^{\left\|\cdot\right\|}(0)=\partial\Omega\) (cf. Lemma 3.6). Moreover, recalling that \(Q_{\psi}\in D_{*}\) and using the homogeneity of the norm, we have that
\[y(s)=d_{Q_{\psi}+sv}\|\cdot\|_{*}=d_{Q_{\psi}}\|\cdot\|_{*}+a\,s+o(s),\qquad \text{as $s\to 0$,} \tag{52}\]
where \(a:=\operatorname{Hess}_{Q_{\psi}}(\left\|\cdot\right\|_{*})(v)\). Observe that \(a\neq 0\) because \(\left\|\cdot\right\|_{*}\) is strongly convex and \(\left\|Q_{\psi}\right\|_{*}=1\). Then, call \(B(s)\) the (signed) area of the sector of \(\Omega\) between the rays \(Oy(0)\) and \(Oy(s)\) (see Figure 4). Reasoning as we did for \(A(s)\), from (52) we deduce that
\[B(s)=\frac{1}{2}\langle y(0),a^{\perp}\rangle s+o(s^{2}),\qquad\text{as $s\to 0$.}\]
On the other hand, by definition
\[C_{\circ}(\psi(s))-C_{\circ}(\psi(0))=2B(s)=\langle y(0),a^{\perp}\rangle s+o(s^{ 2}),\qquad\text{as }s\to 0. \tag{53}\]
This shows that the function \(s\mapsto C_{\circ}(\psi(s))\) is differentiable in \(s=0\) with derivative \(\langle y(0),a^{\perp}\rangle\). In addition, since \(C_{\circ}\circ\psi\) is non-decreasing close to \(s=0\), (53) also implies that \(\langle y(0),a^{\perp}\rangle\geq 0\). We are left to show that \(\langle y(0),a^{\perp}\rangle\) is strictly positive. If \(\langle y(0),a^{\perp}\rangle=0\) then \(a\) is parallel to \(y(0)\), however, according to (52), the vector \(a\) is tangent to the sphere \(S_{1}^{\parallel\cdot\parallel}(0)\) at \(y(0)\) and therefore we obtain a contradiction.
_Remark 5.14_.: Since the norm is positively homogeneous, the set \(D_{*}\) is invariant under homotheties, thus the set of angles \(\psi\) such that \(Q_{\psi}\not\in D_{*}\) has null \(\mathscr{L}^{1}\)-measure. In particular, the function \(C_{\circ}\) is differentiable with positive derivative \(\mathscr{L}^{1}\)-almost everywhere, as a consequence of Proposition 5.13.
As already mentioned, the strategy to prove the main theorem of this section is the same of Section 4.4. In particular, it is fundamental to prove estimates on the volume contraction along geodesic homotheties. To this aim, we consider the Jacobian determinant of the exponential map (50):
\[J(\phi,\omega,r;t):=\left|\det\begin{pmatrix}\frac{\partial x}{\partial r}& \frac{\partial x}{\partial\phi}&\frac{\partial x}{\partial\omega}\\ \frac{\partial y}{\partial r}&\frac{\partial y}{\partial\phi}&\frac{\partial y }{\partial\omega}\\ \frac{\partial z}{\partial r}&\frac{\partial z}{\partial\phi}&\frac{\partial z }{\partial\omega}\end{pmatrix}(\phi,\omega,r;t)\right|\]
where we recall \(x(\phi,\omega,r;t)\), \(y(\phi,\omega,r;t)\), \(z(\phi,\omega,r;t)\) are defined in Proposition 5.11. In order to study this, we will use the following formulation:
\[J(\phi,\omega,r;t)=\left|\frac{\partial z}{\partial\omega}(\phi,\omega,r;t) \det(M_{1})-\frac{\partial z}{\partial\phi}(\phi,\omega,r;t)\det(M_{2})+\frac {\partial z}{\partial r}(\phi,\omega,r;t)\det(M_{3})\right| \tag{54}\]
where
\[M_{1}:=\begin{pmatrix}\frac{\partial x}{\partial r}&\frac{\partial x}{ \partial\phi}\\ \frac{\partial y}{\partial r}&\frac{\partial y}{\partial\phi}\end{pmatrix}( \phi,\omega,r;t),\quad M_{2}:=\begin{pmatrix}\frac{\partial x}{\partial r}& \frac{\partial x}{\partial\omega}\\ \frac{\partial y}{\partial r}&\frac{\partial y}{\partial\omega}\end{pmatrix}( \phi,\omega,r;t),\quad M_{3}:=\begin{pmatrix}\frac{\partial x}{\partial\phi}& \frac{\partial x}{\partial\omega}\\ \frac{\partial y}{\partial\phi}&\frac{\partial y}{\partial\omega}\end{pmatrix} (\phi,\omega,r;t).\]
We are particularly interested in studying the behaviour of \(J(\phi,\omega,r;t)\) as \(t\to 0\). In the following lemmas we estimate the behaviour of every term in (54) as \(t\to 0\).
_Notation 5.15_.: Let \(I\subset\mathbb{R}\) be an interval containing \(0\). Given a function \(f:I\to\mathbb{R}\) and \(n\in\mathbb{N}\), we write
\[f(t)\sim t^{n},\qquad\text{as }t\to 0,\]
if there exists a constant \(C\neq 0\) such that \(f(t)=Ct^{n}+o(t^{n})\), as \(t\to 0\).
**Lemma 5.16**.: Let \(\phi\in[0,2\mathbb{S}^{\circ})\) be a differentiability point for the map \(C_{\circ}\), \(r>0\) and \(\omega\neq 0\), then
\[\det\big{(}M_{1}(\phi,\omega,r;t)\big{)}\sim t^{2},\qquad\text{as }t\to 0, \tag{55}\]
while
\[\det\big{(}M_{2}(\phi,\omega,r;t)\big{)},\,\det\big{(}M_{3}(\phi,\omega,r;t) \big{)}=O(t^{3}),\qquad\text{as }t\to 0. \tag{56}\]
Proof.: Let us begin by proving (55). Firstly, since the function \(C_{\circ}\) is differentiable at \(\phi\), we can compute the following Taylor expansions as \(t\to 0\), using Proposition 5.4:
\[\cos_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)} =\cos_{\Omega}\big{(}\phi_{\circ}\big{)}-t\omega C_{\circ}^{\prime }(\phi)\sin_{\Omega^{\circ}}(\phi)+o(t),\] \[\sin_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)} =\sin_{\Omega}\big{(}\phi_{\circ}\big{)}+t\omega C_{\circ}^{\prime }(\phi)\cos_{\Omega^{\circ}}(\phi)+o(t).\]
Therefore, we may expand the entries of \(M_{1}\) as \(t\to 0\):
\[\frac{\partial x}{\partial r}(\phi,\omega,r;t) =\frac{1}{\omega}\left(\sin_{\Omega^{\circ}}(\phi+\omega t)-\sin_{ \Omega^{\circ}}(\phi)\right)=\cos_{\Omega}(\phi_{\circ})t+o(t), \tag{57}\] \[\frac{\partial y}{\partial r}(\phi,\omega,r;t) =-\frac{1}{\omega}\left(\cos_{\Omega^{\circ}}(\phi+\omega t)- \cos_{\Omega^{\circ}}(\phi)\right)=\sin_{\Omega}(\phi_{\circ})t+o(t),\] \[\frac{\partial x}{\partial\phi}(\phi,\omega,r;t) =\frac{r}{\omega}\left(\cos_{\Omega}\left((\phi+\omega t)_{\circ }\right)-\cos_{\Omega}\left(\phi_{\circ}\right)\right)=-trC^{\prime}_{\circ}( \phi)\sin_{\Omega^{\circ}}(\phi)+o(t),\] \[\frac{\partial y}{\partial\phi}(\phi,\omega,r;t) =\frac{r}{\omega}\left(\sin_{\Omega}\left((\phi+\omega t)_{\circ }\right)-\sin_{\Omega}\left(\phi_{\circ}\right)\right)=trC^{\prime}_{\circ}( \phi)\cos_{\Omega^{\circ}}(\phi)+o(t),\]
where we used once again Proposition 5.4. Finally, the determinant has the following Taylor expansion as \(t\to 0\):
\[\det(M_{1}) =\frac{\partial x}{\partial r}\frac{\partial y}{\partial\phi}- \frac{\partial y}{\partial r}\frac{\partial x}{\partial\phi}=t^{2}rC^{\prime}_ {\circ}(\phi)\big{(}\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega}\left(\phi_{\circ }\right)+\cos_{\Omega^{\circ}}(\phi)\cos_{\Omega}\left(\phi_{\circ}\right) \big{)}+o(t^{2})\] \[=t^{2}rC^{\prime}_{\circ}(\phi)+o(t^{2}),\]
where, in the last equality, we used Proposition 5.3. This proves (55), keeping in mind Proposition 5.13, which guarantees that \(C^{\prime}_{\circ}(\phi)>0\).
Now we prove (56) for \(\det(M_{2})\); the proof for \(\det(M_{3})\) is analogous. As a first step, reasoning as before, we can Taylor expand at second order the following quantities, as \(t\to 0\):
\[\cos_{\Omega^{\circ}}(\phi+\omega t) =\cos_{\Omega^{\circ}}(\phi)-\omega t\sin_{\Omega}(\phi_{\circ}) -\frac{1}{2}(\omega t)^{2}C^{\prime}_{\circ}(\phi)\cos_{\Omega^{\circ}}(\phi) +o(t^{2}),\] \[\sin_{\Omega^{\circ}}(\phi+\omega t) =\sin_{\Omega^{\circ}}(\phi)+\omega t\cos_{\Omega}(\phi_{\circ}) -\frac{1}{2}(\omega t)^{2}C^{\prime}_{\circ}(\phi)\sin_{\Omega^{\circ}}(\phi) +o(t^{2}).\]
Hence, we deduce the expansion for the derivative of \(x\) in the \(\omega\) direction, as \(t\to 0\):
\[\frac{\partial x}{\partial\omega}(\phi,\omega,r;t)=-\frac{r}{ \omega^{2}}\big{(}\sin_{\Omega^{\circ}}(\phi+\omega t)-\sin_{\Omega^{\circ}}( \phi)\big{)}+\frac{rt}{\omega}\cos_{\Omega}\big{(}(\phi+\omega t)_{\circ} \big{)}\\ =-\frac{r}{\omega}\big{(}t\cos_{\Omega}(\phi_{\circ})-\frac{1}{2} \omega t^{2}C^{\prime}_{\circ}(\phi)\sin_{\Omega^{\circ}}(\phi)\big{)}+\frac{ rt}{\omega}\big{(}\cos_{\Omega}\left(\phi_{\circ}\right)-t\omega C^{\prime}_{ \circ}(\phi)\sin_{\Omega^{\circ}}(\phi)\big{)}+o(t^{2})\\ =-\frac{1}{2}rt^{2}C^{\prime}_{\circ}(\phi)\sin_{\Omega^{\circ}}( \phi)+o(t^{2}). \tag{58}\]
An analogous computation shows that the derivative of \(y\) in \(\omega\) has the ensuing expansion as \(t\to 0\):
\[\frac{\partial y}{\partial\omega}(\phi,\omega,r;t)=\frac{1}{2}rt^{2}C^{\prime} _{\circ}(\phi)\cos_{\Omega^{\circ}}(\phi)+o(t^{2}). \tag{59}\]
Note that, on the one hand, (58) and (59) imply that
\[\frac{\partial x}{\partial\omega}=O(t^{2})\qquad\text{and}\qquad\frac{\partial y }{\partial\omega}=O(t^{2}), \tag{60}\]
as \(t\to 0\). On the other hand, by (57), we can deduce the following behavior, as \(t\to 0\):
\[\frac{\partial x}{\partial r}=O(t)\qquad\text{and}\qquad\frac{\partial y}{ \partial r}=O(t). \tag{61}\]
Thus, (60) and (61) prove the claimed behavior of \(\det(M_{2})\) as \(t\to 0\), since
\[\det(M_{2})=\frac{\partial x}{\partial r}\frac{\partial y}{\partial\omega}- \frac{\partial y}{\partial r}\frac{\partial x}{\partial\omega}.\]
In the next lemmas, we study the derivatives of \(z\). These are the most delicate to estimate, since the second-order Taylor polynomial of \(z\) is zero and higher-order derivatives may not exist.
_Notation 5.17_.: Let \(g:\mathbb{R}\to\mathbb{R}\) be a function. We write
\[g(t)=C(1+O(\varepsilon))f(t),\qquad\forall\,t\in[-\rho,\rho],\]
if there exist a constant \(K>0\) and a function \(f:\mathbb{R}\to\mathbb{R}\) such that, for every \(\varepsilon>0\), there exist positive constants \(C=C(\varepsilon),\rho=\rho(\varepsilon)>0\) for which the following holds
\[C(1-K\varepsilon)f(t)<g(t)<C(1+K\varepsilon)f(t),\qquad\forall\,t\in[-\rho, \rho].\]
**Lemma 5.18**.: Given \(\varepsilon>0\) sufficiently small, for \(\mathscr{L}^{1}\)-almost every \(\phi\in[0,2\mathbb{S}^{\circ})\), every \(r>0\) and \(\omega\neq 0\), there exist two positive constants \(k=k(r)\) and \(\rho=\rho(\phi,\omega,r)\) such that
\[\frac{\partial z}{\partial\omega}(\phi,\omega,r;t)=(1+O(\varepsilon))kt^{3}, \qquad\qquad\forall\,t\in[-\rho,\rho]. \tag{62}\]
Proof.: First of all, we compute that
\[\begin{split}\frac{\partial z}{\partial\omega}(\phi,\omega,r;t)& =\frac{r^{2}t}{2\omega^{2}}\big{(}1-\sin_{\Omega^{\circ}}(\phi) \sin_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\phi )\cos_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)}\big{)}\\ &\quad-\frac{r^{2}}{\omega^{3}}\big{(}\omega t+\sin_{\Omega^{ \circ}}(\phi)\cos_{\Omega^{\circ}}(\phi+\omega t)-\cos_{\Omega^{\circ}}(\phi )\sin_{\Omega^{\circ}}(\phi+\omega t)\big{)}.\end{split} \tag{63}\]
In order to evaluate this quantity, fix an angle \(\psi\in[0,2\mathbb{S}^{\circ})\), for which Proposition 5.13 holds, and consider the function \(f_{\psi}\), defined as
\[s\mapsto f_{\psi}(s):=1-\sin_{\Omega^{\circ}}(\psi+s)\sin_{\Omega}(\psi_{ \circ})-\cos_{\Omega^{\circ}}(\psi+s)\cos_{\Omega}(\psi_{\circ}).\]
Notice that (41) ensures that \(f_{\psi}(0)=0\); moreover, direct computations show that
\[f_{\psi}^{\prime}(0)=0\qquad\text{and}\qquad f_{\psi}^{\prime\prime}(0)=C_{ \circ}^{\prime}(\psi)>0.\]
Consequently, it holds that
\[f_{\psi}(s)=\frac{1}{2}C_{\circ}^{\prime}(\psi)\,s^{2}+o(s^{2}),\qquad\text{as }s\to 0. \tag{64}\]
For every \(n\in\mathbb{Z}\) and \(m\in\mathbb{N}\) define the set of angles
\[E_{n,m}:=\left\{\psi\in[0,2\mathbb{S}^{\circ})\,:\,(1+\varepsilon)^{n-1}s^{2 }<f_{\psi}(s)<(1+\varepsilon)^{n+1}s^{2},\text{ for every }s\in\Big{[}-\frac{1}{m},\frac{1}{m}\Big{]} \right\}.\]
Observe that, by (64), we have that \(E:=\bigcup_{n\in\mathbb{Z},m\in\mathbb{N}}E_{n,m}\) covers all differentiability points of \(C_{\circ}\); in particular, \(E\) has full \(\mathscr{L}^{1}\)-measure, cf. Remark 5.14.
Now, fix \(\omega>0\), \(r>0\) and take \(\phi\in[0,2\mathbb{S}^{\circ})\) to be a density point for the set \(E_{n,m}\), for some \(n\in\mathbb{Z}\), \(m\in\mathbb{N}\). Here, we say that \(r\in\mathbb{R}\) is a _density point_ for a measurable set \(J\subset\mathbb{R}\) if
\[\lim_{s\to 0^{+}}\frac{\mathscr{L}^{1}(J\cap[r-s,r+s])}{2s}=1.\]
We are going to prove the statement (62) for our choice of parameters and for positive times. The cases with \(\omega<0\) and negative times are completely analogous. Let \(0<\rho(\phi,\omega,r)<\frac{1}{2\omega m}\) be sufficiently small such that for every \(t\in(0,\rho]\)
\[\mathscr{L}^{1}(E_{n,m}\cap[\phi-2\omega t,\phi+2\omega t])>4\omega t(1- \varepsilon/4). \tag{65}\]
Introduce the set
\[F_{n,m}:=\{s\in\mathbb{R}\,:\,\phi+\omega s\in E_{n,m}\}.\]
Observe that, from (65), we can deduce that for every \(t\in(0,\rho]\),
\[\mathscr{L}^{1}(F_{n,m}\cap[-2t,2t])>4t(1-\varepsilon/4). \tag{66}\]
Now, given every \(t\in(0,\rho]\), (66) ensures that there exists \(\bar{s}\in[t(1-\varepsilon),t]\) such that \(\bar{s}\in F_{n,m}\). Then, thanks to Corollary 5.7, we obtain that
\[\begin{split} 1-\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega}& \big{(}(\phi+\omega t)_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\phi) \cos_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)}\\ &\geq 1-\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega}\big{(}(\phi+\omega \bar{s})_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\phi)\cos_{\Omega}\big{(}(\phi+ \omega\bar{s})_{\circ}\big{)}\\ &=f_{\phi+\omega\bar{s}}(-\omega\bar{s})\geq(1+\varepsilon)^{n-1 }(\omega\bar{s})^{2}\geq(1-\varepsilon)^{2}(1+\varepsilon)^{n-1}(\omega t)^{2 },\end{split} \tag{67}\]
where the second to last inequality holds by our choice of the parameter \(\rho\) and because \(\phi+\omega\bar{s}\in E_{n,m}\). With an analogous argument, we can find an element in \([t,t(1+\varepsilon)]\cap F_{n,m}\) and deduce the estimate:
\[1-\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)} -\cos_{\Omega^{\circ}}(\phi)\cos_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)} \leq(1+\varepsilon)^{2}(1+\varepsilon)^{n+1}(\omega t)^{2}. \tag{68}\]
Combining (67) and (68), we conclude that, on \((0,\rho]\), the following holds
\[1-\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)} -\cos_{\Omega^{\circ}}(\phi)\cos_{\Omega}\big{(}(\phi+\omega t)_{\circ}\big{)} =(1+O(\varepsilon))(1+\varepsilon)^{n}(\omega t)^{2}, \tag{69}\]
in the Notation 5.17. Consequently, we deduce:
\[\frac{r^{2}t}{2\omega^{2}}\left(1-\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega} \big{(}(\phi+\omega t)_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\phi)\cos_{\Omega }\big{(}(\phi+\omega t)_{\circ}\big{)}\right)=(1+O(\varepsilon))(1+ \varepsilon)^{n}\frac{r^{2}t^{3}}{2}, \tag{70}\]
for \(t\in(0,\rho]\). To estimate the second term in (63), observe that
\[\frac{\partial}{\partial s}\big{(}\omega s+\sin_{\Omega^{\circ}} (\phi)\cos_{\Omega^{\circ}} (\phi+\omega s)-\cos_{\Omega^{\circ}}(\phi)\sin_{\Omega^{\circ}}(\phi+\omega s )\big{)}\\ =\omega\big{(}1-\sin_{\Omega^{\circ}}(\phi)\sin_{\Omega}\big{(}( \phi+\omega s)_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\phi)\cos_{\Omega}\big{(}( \phi+\omega s)_{\circ}\big{)}\big{)}.\]
In particular, since this quantity vanishes at \(s=0\), we have that for \(t\in(0,\rho]\)
\[\omega t +\sin_{\Omega^{\circ}}(\phi)\cos_{\Omega^{\circ}}(\phi+\omega t)- \cos_{\Omega^{\circ}}(\phi)\sin_{\Omega^{\circ}}(\phi+\omega t)\] \[=\omega\int_{0}^{t}\big{(}1-\sin_{\Omega^{\circ}}(\phi)\sin_{ \Omega}\big{(}(\phi+\omega s)_{\circ}\big{)}-\cos_{\Omega^{\circ}}(\phi)\cos _{\Omega}\big{(}(\phi+\omega s)_{\circ}\big{)}\big{)}\,\mathrm{d}s\] \[=\omega\int_{0}^{t}(1+O(\varepsilon))(1+\varepsilon)^{n}(\omega s)^{2}\,\mathrm{d}s=(1+O(\varepsilon))(1+\varepsilon)^{n}\frac{(\omega t)^{3}} {3},\]
where the second equality follows from (69). Then, we obtain that:
\[\frac{r^{2}}{\omega^{3}}\big{(}\omega t+\sin_{\Omega^{\circ}}(\phi)\cos_{ \Omega^{\circ}}(\phi+\omega t)-\cos_{\Omega^{\circ}}(\phi)\sin_{\Omega^{\circ} }(\phi+\omega t)\big{)}=(1+O(\varepsilon))(1+\varepsilon)^{n}\frac{r^{2}t^{3}} {3}. \tag{71}\]
Finally, putting together (70) and (71), we conclude that
\[\frac{\partial z}{\partial\omega}(\phi,\omega,r;t)=(1+O(\varepsilon))(1+ \varepsilon)^{n}\frac{r^{2}t^{3}}{6},\qquad\forall\,t\in(0,\rho],\]
that is (62) with \(k=k(r):=(1+\varepsilon)^{n}\frac{r^{2}}{6}\). To conclude, observe that we proved the statement for (every \(r>0\), \(\omega\neq 0\) and) every \(\phi\in[0,2\mathbb{S}^{\circ})\) which is a density point of some \(E_{n,m}\) and the set of such angles has full \(\mathscr{L}^{1}\)-measure in \([0,2\mathbb{S}^{\circ})\). Indeed, \(E=\bigcup_{n\in\mathbb{Z},m\in\mathbb{N}}E_{n,m}\) has full \(\mathscr{L}^{1}\)-measure in \([0,2\mathbb{S}^{\circ})\) and almost every point of a measurable set is a density point.
**Lemma 5.19**.: Let \(\phi\in[0,2\mathbb{S}^{\circ})\) be a differentiability point for the map \(C_{\circ}\), \(r>0\) and \(\omega\neq 0\), then
\[\frac{\partial z}{\partial r}(\phi,\omega,r;t),\,\frac{\partial z}{\partial\phi }(\phi,\omega,r;t)=o(t^{2}),\qquad\text{as }t\to 0.\]
Proof.: We start by proving the statement for \(\frac{\partial z}{\partial r}\). We have that
\[\frac{\partial z}{\partial r}(\phi,\omega,r;t)=\frac{r}{\omega^{2}}\left( \omega t+\cos_{\Omega^{\circ}}(\phi+\omega t)\sin_{\Omega^{\circ}}(\phi)-\sin _{\Omega^{\circ}}(\phi+\omega t)\cos_{\Omega^{\circ}}(\phi)\right).\]
Direct computations show that, on the one hand,
\[\frac{\partial}{\partial t}\bigg{|}_{t=0}\big{(}\omega t+\cos_{ \Omega^{\circ}}(\phi+\omega t)\sin_{\Omega^{\circ}}(\phi)-\sin_{\Omega^{\circ} }(\phi+\omega t)\cos_{\Omega^{\circ}}(\phi)\big{)}\\ =\omega-\omega\sin_{\Omega}(\phi_{\circ})\sin_{\Omega^{\circ}}( \phi)-\omega\cos_{\Omega}(\phi_{\circ})\cos_{\Omega^{\circ}}(\phi)=0,\]
where we applied Proposition 5.3, and, on the other hand,
\[\frac{\partial^{2}}{\partial t^{2}}\bigg{|}_{t=0}\big{(}\omega t +\cos_{\Omega^{\circ}}(\phi+\omega t)\sin_{\Omega^{\circ}}(\phi)-\sin_{\Omega^{ \circ}}(\phi+\omega t)\cos_{\Omega^{\circ}}(\phi)\big{)}\\ =-\omega^{2}\cos_{\Omega^{\circ}}(\phi)\sin_{\Omega^{\circ}}( \phi)C^{\prime}_{\circ}(\phi)+\omega^{2}\sin_{\Omega^{\circ}}(\phi)\cos_{ \Omega^{\circ}}(\phi)C^{\prime}_{\circ}(\phi)=0.\]
Consequently, we conclude the proof of the first part of the statement:
\[\frac{\partial z}{\partial r}(\phi,\omega,r;t)=\frac{r}{\omega^{2}}\cdot o(t^ {2})=o(t^{2}),\qquad\text{as }t\to 0.\]
In order to prove the statement for \(\frac{\partial z}{\partial\phi}\), we use a geometric argument based on Proposition 5.12. First of all, recall that \(d_{Q_{\phi}}\|\cdot\|_{*}\) identifies a half-plane tangent at \(Q_{\phi}\) and containing \(\Omega^{\circ}\). Thus, we can find a rigid transformation \(R:\mathbb{R}^{2}\to\mathbb{R}^{2}\), such that \(R(Q_{\phi})=(0,0)\) and \(R(\Omega^{\circ})\) is contained in \(\{y\geq 0\}\subset\mathbb{R}^{2}\), see Figure 5. Then, as \(\left\|\cdot\right\|_{*}\) is \(C^{1}\), the image of the unit sphere \(R\big{(}\partial\Omega^{\circ}\big{)}\) can be described (locally around \(O\)) as the graph of a non-negative function \(f\in C^{1}(\mathbb{R})\) with \(f(0)=f^{\prime}(0)=0\). In addition, by our choice of \(\phi\in[0,2\mathbb{S}^{\circ})\), \(f\) is twice differentiable at \(0\) with strictly positive second derivative \(f^{\prime\prime}(0):=c>0\). Now consider the function \(p\) defined in a neighborhood of \(\phi\) as
\[p(\psi):=\mathtt{p}_{x}\big{(}R(Q_{\psi})\big{)},\]
where \(\mathtt{p}_{x}:\mathbb{R}^{2}\to\mathbb{R}\) denotes the projection on the \(x\)-axis, i.e. \(\mathtt{p}_{x}(a,b)=a\).
Figure 5: Image of \(Q_{\phi}\) and \(\Omega^{\circ}\) through the rigid transformation \(R\).
Second of all, for \(s_{1},s_{2}\in\mathbb{R}\), call \(F(s_{1},s_{2})\) the signed area between the segment connecting \((s_{1},f(s_{1}))\) and \((s_{2},f(s_{2}))\) and the graph of \(f\) (intended positive if \(s_{1}<s_{2}\) and negative if \(s_{1}>s_{2}\)), see Figure 6. Proposition 5.12 ensures that for \(\psi\) in a neighborhood of \(\phi\) it holds that
\[z(\psi,\omega,r;t)=\frac{r^{2}}{\omega^{2}}F\big{(}p(\psi),p(\psi+\omega t) \big{)}.\]
In particular, we obtain that
\[\frac{\omega^{2}}{r^{2}}\frac{\partial z}{\partial\phi}(\phi,\omega,r;t)= \frac{\partial}{\partial s_{1}}F(0,p(\phi+\omega t))\cdot p^{\prime}(\phi)+ \frac{\partial}{\partial s_{2}}F(0,p(\phi+\omega t))\cdot p^{\prime}(\phi+ \omega t). \tag{72}\]
We now proceed to compute the terms in the last formula, starting from the ones involving \(p^{\prime}\). To this aim, consider the point \((x_{0},y_{0}):=R(O)\) and, for every \(q\) in a neighborhood of \(0\), call \(A(q)\) the signed area inside \(R\big{(}\partial\Omega^{\circ}\big{)}\) between the segments \((x_{0},y_{0})O\) and \((x_{0},y_{0})(q,f(q))\). Observe that
\[A^{\prime}(q)=\frac{1}{2}\langle(1,f^{\prime}(q)),(y_{0}-f(q),q-x_{0})\rangle =\frac{1}{2}y_{0}+O(q),\qquad\text{as $q\to 0$.}\]
Note that, in the last equality, we have used that \(f(0)=f^{\prime}(0)=0\) and that \(f\) is twice differentiable at \(0\). Consequently, since \(A(0)=0\), we have that
\[A(q)=\frac{1}{2}y_{0}q+O(q^{2}),\qquad\text{as $q\to 0$.} \tag{73}\]
On the other hand, by the definition of angle it holds that \(2A(p(\phi+\vartheta))=\vartheta\) for every \(\vartheta\) sufficiently small and therefore, invoking (73) and observing that \(p\in C^{1}\), we obtain that
\[p(\phi+\vartheta)=\frac{1}{y_{0}}\vartheta+o(\vartheta)\quad\text{and}\quad p ^{\prime}(\phi+\vartheta)=\frac{1}{y_{0}}+o(1),\qquad\text{as $\vartheta\to 0$.} \tag{74}\]
Now we compute the partial derivatives of the function \(F\). Observe that \(F\) can be calculated in the following way
\[F(s_{1},s_{2})=\frac{1}{2}(f(s_{1})+f(s_{2}))(s_{2}-s_{1})-\int_{s_{1}}^{s_{2} }f(x)\,\mathrm{d}x.\]
As a consequence, we compute that
\[\frac{\partial}{\partial s_{1}}F(s_{1},s_{2}) =\frac{1}{2}f^{\prime}(s_{1})(s_{2}-s_{1})+\frac{1}{2}\big{(}f(s_{ 1})-f(s_{2})\big{)}\] \[\frac{\partial}{\partial s_{2}}F(s_{1},s_{2}) =\frac{1}{2}f^{\prime}(s_{2})(s_{2}-s_{1})+\frac{1}{2}\big{(}f(s_{ 1})-f(s_{2})\big{)}.\]
Combining these two relations with (72), we conclude that
\[\frac{\omega^{2}}{r^{2}}\frac{\partial z}{\partial\phi}(\phi,\omega,r; t)=-\frac{1}{2}f(p(\phi+\omega t))\cdot p^{\prime}(\phi)\\ +\frac{1}{2}[f^{\prime}(p(\phi+\omega t))p(\phi+\omega t)-f(p(\phi+ \omega t))]\cdot p^{\prime}(\phi+\omega t).\]
Now, recall that \(f\) is twice differentiable in \(0\) with positive second derivative \(c\), therefore we have that
\[f(x)=\frac{1}{2}cx^{2}+o(x^{2})\qquad\text{and}\qquad f^{\prime}(x)=cx+o(x).\]
Using these relations, together with (74), we can conclude that
\[\frac{\omega^{2}}{r^{2}}\frac{\partial z}{\partial\phi}(\phi, \omega,r;t) =\frac{1}{2}f^{\prime}(p(\phi+\omega t))p(\phi+\omega t)p^{\prime }(\phi+\omega t)-\frac{1}{2}f(p(\phi+\omega t))[p^{\prime}(\phi)+p^{\prime}( \phi+\omega t)]\] \[=\frac{1}{2}[cp(\phi+\omega t)+o(p(\phi+\omega t))]\,p(\phi+ \omega t)p^{\prime}(\phi+\omega t)\] \[\quad-\frac{1}{4}[cp(\phi+\omega t)^{2}+o(p(\phi+\omega t)^{2})] [p^{\prime}(\phi)+p^{\prime}(\phi+\omega t)]\] \[=\frac{c}{2y_{0}^{3}}(\omega t)^{2}-\frac{c}{2y_{0}^{3}}(\omega t )^{2}+o(t^{2})=o(t^{2}).\]
This concludes the proof.
As a consequence of these lemmas, we obtain the following estimate of the quantity \(J(\phi,\omega,r;t)\), as \(t\to 0\).
**Corollary 5.20**.: Given \(\varepsilon>0\) sufficiently small, for \(\mathscr{L}^{1}\)-almost every \(\phi\in[0,2\mathbb{S}^{\circ})\), every \(r>0\) and \(\omega\neq 0\), there exist two positive constants \(C=C(\phi,\omega,r)\) and \(\rho=\rho(\phi,\omega,r)\) such that
\[J(\phi,\omega,r;t)=C(1+O(\varepsilon))|t|^{5},\qquad\forall\,t\in[-\rho,\rho],\]
in the Notation 5.17.
Proof.: Let \(\phi\in[0,2\mathbb{S}^{\circ})\) be a differentiability point for the map \(C_{\circ}\) and such that the conclusion of Lemma 5.18 holds, and fix \(r>0\) and \(\omega\neq 0\). Observe that, on the one hand, as a consequence of Lemma 5.16 and Lemma 5.19, we have that
\[\left|\frac{\partial z}{\partial\phi}\det(M_{2})\right|(\phi,\omega,r;t),\, \left|\frac{\partial z}{\partial r}\det(M_{3})\right|(\phi,\omega,r;t)=o(t^{ 5}),\qquad\text{as }t\to 0.\]
On the other hand, Lemma 5.16 and Lemma 5.18 ensure that there exist positive constants \(C=C(\phi,\omega,r),\rho=\rho(\phi,\omega,r)>0\) such that
\[\left|\frac{\partial z}{\partial\omega}\det(M_{1})\right|(\phi,\omega,r;t)=C( 1+O(\varepsilon))|t|^{5},\qquad\forall\,t\in[-\rho,\rho]\]
where, in particular, \(\rho\) has to be smaller than the constant identified by Lemma 5.18. Up to taking a smaller \(\rho\) and keeping in mind (54), we may conclude that
\[J(\phi,\omega,r;t)=C(1+O(\varepsilon))|t|^{5},\qquad\forall\,t\in[-\rho,\rho].\]
_Remark 5.21_.: Note that, in the sub-Riemannian Heisenberg group, the contraction rate of volumes along geodesics is exactly \(t^{5}\), cf. [1]. In our setting, we are able to highlight the same behavior for the Jacobian determinant of the exponential map \(J(\phi,\omega,r;t)\), as \(t\to 0\).
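In the Euclidean case, this \(t^{5}\) rate is easy to observe numerically by differentiating the explicit exponential map of Proposition 5.11 with central finite differences. A rough sketch of ours, assuming NumPy (the printed ratios stabilize as \(t\to 0\)):

```python
import numpy as np

def G(phi, omega, r, t):
    """Exponential map of Prop. 5.11 for the Euclidean norm (classical trig)."""
    x = r/omega*(np.sin(phi + omega*t) - np.sin(phi))
    y = -r/omega*(np.cos(phi + omega*t) - np.cos(phi))
    z = r**2/(2*omega**2)*(omega*t + np.cos(phi + omega*t)*np.sin(phi)
                           - np.sin(phi + omega*t)*np.cos(phi))
    return np.array([x, y, z])

def J(phi, omega, r, t, h=1e-6):
    """|det| of the Jacobian of (phi, omega, r) -> G(phi, omega, r; t)."""
    cols = [(G(phi + h, omega, r, t) - G(phi - h, omega, r, t))/(2*h),
            (G(phi, omega + h, r, t) - G(phi, omega - h, r, t))/(2*h),
            (G(phi, omega, r + h, t) - G(phi, omega, r - h, t))/(2*h)]
    return abs(np.linalg.det(np.column_stack(cols)))

for t in (0.4, 0.2, 0.1, 0.05):
    print(t, J(0.3, 2.0, 1.5, t)/t**5)   # ratio roughly constant as t -> 0
```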
Now that we know the behaviour of \(J(\phi,\omega,r;t)\) as \(t\to 0\), in the next proposition, we obtain a statement similar to Proposition 4.25, which will allow us to disprove the \(\mathsf{CD}(K,N)\) condition in the Heisenberg group. In particular, the proof of the following proposition uses Corollary 5.20 and some ideas developed in [15, Prop. 3.1].
In our setting, we define the _midpoint map_ as:
\[\mathcal{M}(p,q):=e_{\frac{1}{2}}\left(\gamma_{pq}\right),\qquad\text{if }p \star q^{-1}\notin\{x=y=0\}, \tag{75}\]
where \(\gamma_{pq}:[0,1]\to\mathbb{H}\) is the unique geodesic joining \(p\) and \(q\), given by Theorem 5.9. Similarly, we define the _inverse geodesic map_\(I_{m}\) (with respect to \(m\in\mathbb{H}\)) as:
\[I_{m}(q)=p,\qquad\text{if }p\in\mathbb{H}\text{ is such that }\mathcal{M}(p,q)=m. \tag{76}\]
_Remark 5.22_.: Recall the definition of the midpoint map in (34) and of the inverse geodesic map in (36). Both maps were defined using the differential structure of a smooth sub-Finsler manifold; however, they are characterized by the metric structure of the space. In particular, if the norm is sufficiently regular, they coincide with (75) and (76).
**Proposition 5.23**.: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex and \(C^{1,1}\) norm. For \(\mathscr{L}^{1}\)-almost every \(\phi\in[0,2\mathbb{S}^{\circ})\), every \(r>0\) and \(\omega\neq 0\), there exists a positive constant \(\rho=\rho(\phi,\omega,r)\) such that for every \(t\in[-\rho,\rho]\):
1. the inverse geodesic map \(I_{\mathrm{e}}\) is well-defined and \(C^{1}\) in a neighborhood of \(G(\phi,\omega,r;t)\);
2. the midpoint map \(\mathcal{M}\) is well-defined and \(C^{1}\) in a neighborhood of \((\mathrm{e},G(\phi,\omega,r;t))\), moreover \[\big{|}\det d_{G(\phi,\omega,r;t)}\mathcal{M}(\mathrm{e},\cdot)\big{|}\leq \frac{1}{2^{4}}.\] (77)
Proof.: Take \(\varepsilon\) sufficiently small, let \(\phi\) be an angle for which the conclusion of Corollary 5.20 holds. Fix \(r>0\) and \(\omega\neq 0\), and let \(\rho=\rho(\phi,\omega,r)\) be the (positive) constant identified by Corollary 5.20. Let \(t\in[-\rho,\rho]\) and consider the map \(E_{t}:T_{\mathrm{e}}^{*}\mathbb{H}\to\mathbb{H}\) defined as
\[E_{t}(\phi,\omega,r):=G(\phi,\omega,r;t)=\big{(}x(\phi,\omega,r;t),y(\phi, \omega,r;t),z(\phi,\omega,r;t)\big{)}, \tag{78}\]
where \(G\) is defined in (50). Note that \(J(\phi,\omega,r;t)\) is the Jacobian of \(E_{t}(\phi,\omega,r)\) and in particular, since \(t\in[-\rho,\rho]\), Corollary 5.20 ensures that \(J(\phi,\omega,r;t)>0\). Then, from the inverse function theorem, we deduce that \(E_{t}\) is locally invertible in a neighborhood \(B_{t}\subset\mathbb{H}\) of \(E_{t}(\phi,\omega,r)\) with \(C^{1}\) inverse \(E_{t}^{-1}:B_{t}\to T_{\mathrm{e}}^{*}\mathbb{H}\). Then, according to Theorem 5.9 and Proposition 5.11, the curve \([-t,t]\ni s\mapsto G(\phi,\omega,r;s)\) is the unique geodesic connecting \(G(\phi,\omega,r;-t)\) and \(G(\phi,\omega,r;t)\), and such that \(G(\phi,\omega,r;0)=\mathrm{e}\), provided that \(\rho\) is sufficiently small. Hence, we can write the map \(I_{\mathrm{e}}:B_{t}\to\mathbb{R}^{3}\) as
\[I_{\mathrm{e}}(q)=E_{-t}(E_{t}^{-1}(q)),\qquad\forall\,q\in B_{t}.\]
Therefore, the map \(I_{\mathrm{e}}\) is \(C^{1}\) on \(B_{t}\), being a composition of \(C^{1}\) functions, proving item (i).
With an analogous argument, the midpoint map (with first entry \(\mathrm{e}\)), \(\mathcal{M}_{\mathrm{e}}(\cdot):=\mathcal{M}(\mathrm{e},\cdot):B_{t}\to\mathbb{ R}^{3}\), can be written as
\[\mathcal{M}_{\mathrm{e}}(q)=E_{t/2}(E_{t}^{-1}(q)),\qquad\forall\,q\in B_{t}. \tag{79}\]
As before, we deduce this map is well-defined and \(C^{1}\). To infer regularity of the midpoint map in a neighborhood of \((\mathrm{e},G(\phi,\omega,r;t))\), we take advantage of the underlying group structure, in particular of the left-translations (44), which are isometries. Indeed, note that
\[\mathcal{M}(p,q)=L_{p}\left(\mathcal{M}_{\mathrm{e}}(L_{p^{-1}}(q))\right), \qquad\forall\,p,q\in\mathbb{H},\]
and, for every \((p,q)\) in a suitable neighborhood of \((\mathrm{e},G(\phi,\omega,r;t))\), we have \(L_{p^{-1}}(q)\in B_{t}\), therefore \(\mathcal{M}\) is well-defined and \(C^{1}\). Finally, keeping in mind (79) and applying Corollary 5.20, we deduce that
\[\big{|}\det d_{G(\phi,\omega,r;t)}\mathcal{M}_{\mathrm{e}}(\cdot) \big{|} =\big{|}\det d_{(\phi,\omega,r)}E_{t/2}\big{|}\cdot\big{|}\det d_{ (\phi,\omega,r)}E_{t}\big{|}^{-1}\] \[=J(\phi,\omega,r;t/2)\cdot J(\phi,\omega,r;t)^{-1}\] \[=\frac{C(1+O(\varepsilon))|t/2|^{5}}{C(1+O(\varepsilon))|t|^{5}} =\frac{1}{2^{5}}\left(1+O(\varepsilon)\right)\leq\frac{1}{2^{4}},\]
where the last inequality is true for \(\varepsilon\) sufficiently small. This concludes the proof of item (ii).
**Theorem 5.24**.: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex and \(C^{1,1}\) norm and with a smooth measure \(\mathfrak{m}\). Then, the metric measure space \((\mathbb{H},\mathrm{d}_{SF},\mathfrak{m})\) does not satisfy the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\), for every \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
Proof.: Take an angle \(\phi\) for which the conclusion of Proposition 5.23 holds, fix \(r>0\), \(\omega\neq 0\) and call \(\gamma\) the curve
\[\mathbb{R}\ni s\mapsto\gamma(s):=G(\phi,\omega,r;s).\]
Fix \(t\in(0,\rho]\), where \(\rho=\rho(\phi,\omega,r)\) is the positive constant identified by Proposition 5.23. Recall the map \(E_{t}\) (see (78)) from the proof of Proposition 5.23. \(E_{t}\) is invertible, with \(C^{1}\) inverse, in a neighborhood \(B_{t}\subset\mathbb{H}\) of \(E_{t}(\phi,\omega,r)=\gamma(t)\). Consider the function
\[s\mapsto\Phi(s):=\mathfrak{p}_{1}\big{[}E_{t}^{-1}\big{(}L_{\gamma(s)^{-1}}( \gamma(t+s))\big{)}\big{]},\]
where \(\mathfrak{p}_{1}\) denotes the projection onto the first coordinate. Observe that, for \(s\) sufficiently small, \(L_{\gamma(s)^{-1}}(\gamma(t+s))\in B_{t}\), thus, \(\Phi\) is well-defined and \(C^{1}\) (being composition of \(C^{1}\) functions) in an open interval \(I\subset\mathbb{R}\) containing \(0\). Moreover, note that \(\Phi(s)\) is the initial angle for the geodesic joining \(\mathrm{e}\) and \(\gamma(s)^{-1}\star\gamma(t+s)\). Now, we want to prove that there exists an interval \(\tilde{I}\subset I\) such that, for \(\mathscr{L}^{1}\)-almost every \(s\in\tilde{I}\), \(\Phi(s)\) is an angle for which the conclusion of Proposition 5.23 holds. We have two cases, either \(\Phi^{\prime}\equiv 0\) in \(I\) or there is \(\bar{s}\in I\) such that \(\Phi^{\prime}(\bar{s})\neq 0\). In the first case, since by definition \(\Phi(0)=\phi\), we deduce that \(\Phi(s)\equiv\phi\), thus the claim is true. In the second case, since \(\Phi\) is \(C^{1}\), we can find an interval \(\tilde{I}\subset I\) such that \(\Phi^{\prime}(s)\neq 0\) for every \(s\in\tilde{I}\). Then, consider
\[J:=\{\psi\in\Phi(\tilde{I}):\psi\text{ is an angle for which the conclusion of Proposition 5.23 holds}\}\subset\Phi(\tilde{I})\]
and observe that \(J\) has full \(\mathscr{L}^{1}\)-measure in \(\Phi(\tilde{I})\). Therefore, the set \(\tilde{J}:=\Phi^{-1}(J)\subset\tilde{I}\) has full \(\mathscr{L}^{1}\)-measure in \(\tilde{I}\), it being the image of \(J\) through a \(C^{1}\) function with non-null derivative. Thus the claim is true also in this second case.
At this point, let \(\bar{s}\in\tilde{I}\) such that \(\Phi(\bar{s})\) is an angle for which the conclusion of Proposition 5.23 holds and consider
\[\bar{\rho}:=\rho\big{(}E_{t}^{-1}\big{(}L_{\gamma(\bar{s})^{-1}}(\gamma(t+\bar {s}))\big{)}\big{)}>0.\]
For every \(s\in[-\bar{\rho},\bar{\rho}]\setminus\{0\}\), from Proposition 5.23, we deduce that the inverse geodesic map \(I_{\mathrm{e}}\) and the midpoint map \(\mathcal{M}\) are well-defined and \(C^{1}\) in a neighborhood of \(G\big{(}E_{t}^{-1}\big{(}L_{\gamma(\bar{s})^{-1}}(\gamma(t+\bar{s}))\big{)};s \big{)}\) and \(\big{(}\mathrm{e},G\big{(}E_{t}^{-1}\big{(}L_{\gamma(\bar{s})^{-1}}(\gamma(t+ \bar{s}))\big{)};s\big{)}\big{)}\), respectively. Moreover, we have that
\[\big{|}\det d_{G(E_{t}^{-1}(L_{\gamma(\bar{s})^{-1}}(\gamma(t+\bar{s})));s)} \mathcal{M}(\mathrm{e},\cdot)\big{|}\leq\frac{1}{2^{4}}.\]
Observe that, since the left-translations are smooth isometries, the inverse geodesic map \(I_{\gamma(\bar{s})}\) is well-defined and \(C^{1}\) in a neighborhood of \(\gamma(\bar{s}+s)\), in fact it can be written as
\[I_{\gamma(\bar{s})}(p)=L_{\gamma(\bar{s})}\big{[}I_{\mathrm{e}}\big{(}L_{\gamma( \bar{s})^{-1}}(p)\big{)}\big{]},\]
and \(L_{\gamma(\bar{s})^{-1}}\big{(}\gamma(\bar{s}+s)\big{)}=G\big{(}E_{t}^{-1}\big{(}L_ {\gamma(\bar{s})^{-1}}(\gamma(t+\bar{s}))\big{)};s\big{)}\). Similarly, we can prove that the midpoint map is well-defined and \(C^{1}\) in a neighborhood of \((\gamma(\bar{s}),\gamma(\bar{s}+s))\), with
\[\big{|}\det d_{\gamma(\bar{s}+s)}\mathcal{M}(\gamma(\bar{s}),\cdot)\big{|}\leq \frac{1}{2^{4}}.\]
In conclusion, up to restriction and reparametrization, we can find a geodesic \(\eta:[0,1]\to\mathbb{H}\) with the property that, for \(\mathscr{L}^{1}\)-almost every \(\bar{s}\in[0,1]\), there exists \(\lambda(\bar{s})>0\) such that, for every \(s\in[\bar{s}-\lambda(\bar{s}),\bar{s}+\lambda(\bar{s})]\cap[0,1]\setminus\{ \bar{s}\}\), the inverse geodesic map \(I_{\eta(\bar{s})}\) and the midpoint map \(\mathcal{M}\) are well-defined and \(C^{1}\) in a neighborhood of \(\eta(s)\) and \((\eta(\bar{s}),\eta(s))\) respectively, and in addition
\[\big{|}\det d_{\eta(s)}\mathcal{M}(\eta(\bar{s}),\cdot)\big{|}\leq\frac{1}{2^ {4}}.\]
Set \(\lambda(s)=0\) on the (null) set where this property is not satisfied and consider the set
\[T:=\big{\{}(s,t)\in[0,1]^{2}\,:\,t\in[s-\lambda(s),s+\lambda(s)]\big{\}}\,.\]
Observe that, introducing for every \(\epsilon>0\) the set
\[D_{\epsilon}:=\{(s,t)\in[0,1]^{2}\,:\,|t-s|<\epsilon\},\]
we have that
\[\frac{\mathscr{L}^{2}(T\cap D_{\epsilon})}{\mathscr{L}^{2}(D_{\epsilon})}= \frac{\mathscr{L}^{2}(T\cap D_{\epsilon})}{2\epsilon-\epsilon^{2}}\to 1, \qquad\text{as}\,\,\,\epsilon\to 0. \tag{80}\]
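Here the value of \(\mathscr{L}^{2}(D_{\epsilon})\) follows from elementary geometry: the complement of \(D_{\epsilon}\) in \([0,1]^{2}\) consists of two right triangles with legs of length \(1-\epsilon\), so that

\[\mathscr{L}^{2}(D_{\epsilon})=1-(1-\epsilon)^{2}=2\epsilon-\epsilon^{2}.\]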
On the other hand, we can find \(\delta>0\) such that the set \(\Lambda_{\delta}:=\{s\in[0,1]\,:\,\lambda(s)>\delta\}\) satisfies \(\mathscr{L}^{1}(\Lambda_{\delta})>\frac{3}{4}\). In particular, for every \(\epsilon<\delta\) sufficiently small we have that
\[\mathscr{L}^{2}\left(\left\{(s,t)\in[0,1]^{2}\,:\,\frac{s+t}{2}\not\in \Lambda_{\delta}\right\}\cap D_{\epsilon}\right)<\frac{1}{2}\epsilon. \tag{81}\]
Therefore, putting together (80) and (81), we can find \(\epsilon<\delta\) sufficiently small such that
\[\mathscr{L}^{2}\left(T\cap D_{\epsilon}\cap\left\{(s,t)\in[0,1]^{2}\,:\, \frac{s+t}{2}\in\Lambda_{\delta}\right\}\right)>\frac{1}{2}\mathscr{L}^{2}(D _{\epsilon}).\]
Then, since the set \(D_{\epsilon}\) is symmetric with respect to the diagonal \(\{s=t\}\), we can find \(\bar{s}\neq\bar{t}\) such that
\[(\bar{s},\bar{t}),(\bar{t},\bar{s})\in T\cap D_{\epsilon}\cap\left\{(s,t)\in[ 0,1]^{2}\,:\,\frac{s+t}{2}\in\Lambda_{\delta}\right\}.\]
In particular, this tells us that:
* \(\bar{t}\in[\bar{s}-\lambda(\bar{s}),\bar{s}+\lambda(\bar{s})]\) and \(\bar{s}\in[\bar{t}-\lambda(\bar{t}),\bar{t}+\lambda(\bar{t})]\);
* \(|\bar{t}-\bar{s}|<\epsilon<\delta\);
* \(\frac{\bar{s}+\bar{t}}{2}\in\Lambda_{\delta}\).
Now, on the one hand, (i) ensures that the midpoint map \(\mathcal{M}\) is well-defined and \(C^{1}\) in a neighborhood of \((\eta(\bar{s}),\eta(\bar{t}))\) with
\[\big{|}\det d_{\eta(\bar{t})}\mathcal{M}(\eta(\bar{s}),\cdot)\big{|}\leq\frac{ 1}{2^{4}}\qquad\text{and}\qquad\big{|}\det d_{\eta(\bar{s})}\mathcal{M}(\cdot, \eta(\bar{t}))\big{|}\leq\frac{1}{2^{4}}.\]
On the other hand, the combination of (ii) and (iii) guarantees that the inverse geodesic map \(I_{\eta(\frac{\bar{s}+\bar{t}}{2})}\) is well-defined and \(C^{1}\) in a neighborhood of \(\eta(\bar{s})\) and in a neighborhood of \(\eta(\bar{t})\). Indeed, we have:
\[\bar{s},\bar{t}\in\left[\frac{\bar{s}+\bar{t}}{2}-\delta,\frac{\bar{s}+\bar{t }}{2}+\delta\right]\subset\left[\frac{\bar{s}+\bar{t}}{2}-\lambda\left(\frac{ \bar{s}+\bar{t}}{2}\right),\frac{\bar{s}+\bar{t}}{2}+\lambda\left(\frac{\bar{s}+ \bar{t}}{2}\right)\right],\]
and, by the very definition of \(\lambda(\cdot)\), we obtain the claimed regularity of the inverse geodesic map.
Once we have these properties, we can repeat the same strategy used in the second part of the proof of Theorem 4.26 and contradict the Brunn-Minkowski inequality \(\mathsf{BM}(K,N)\) for every \(K\in\mathbb{R}\) and every \(N\in(1,\infty)\).
_Remark 5.25_.: If we want to replicate the strategy of Theorem 4.26, we ought to find a short geodesic \(\gamma:[0,1]\to\mathbb{H}\) such that
1. the midpoint map \(\mathcal{M}\) is \(C^{1}\) around \((\gamma(0),\gamma(1))\) and satisfies a Jacobian estimate at \(\gamma(1)\) of the type (77);
2. the midpoint map \(\mathcal{M}\) satisfies a Jacobian estimate at \(\gamma(0)\) of the type (77);
3. the inverse geodesic map \(\mathcal{I}_{\gamma(1/2)}\), with respect to \(\gamma(1/2)\), is \(C^{1}\) around \(\gamma(0)\) and \(\gamma(1)\).
Proposition 5.23 guarantees the existence of a large set \(\mathscr{A}\subset T^{*}_{\gamma(0)}\mathbb{H}\) of initial covectors for which the corresponding geodesic \(\gamma\) satisfies (i). The problem arises as the set \(\mathscr{A}\) of "good" covectors depends on the base point and is large only in a measure-theoretic sense. A simple "shortening" argument, mimicking the strategy of the smooth case, is sufficient to address (ii). However, once the geodesic is fixed, we have no way of ensuring that (iii) is satisfied. In particular, it may happen that the map \(\mathcal{I}_{\gamma(1/2)}\) does not fit within the framework of Proposition 5.23 item (i), as the corresponding initial covector may fall outside the hypothesis. To overcome such a difficulty, we use a density-type argument to choose _simultaneously_ an initial point and an initial covector in such a way that (i)-(iii) are satisfied.
### Failure of the \(\mathsf{MCP}(K,N)\) condition for singular norms
In this section we prove Theorem 1.7, showing that the measure contraction property (see Definition 2.5) cannot hold in a sub-Finsler Heisenberg group equipped with a strictly convex, singular norm. Our strategy is based on the observation that, in this setting, geodesics exhibit a branching behavior, despite being unique (at least for small times).
**Theorem 5.26**.: Let \(\mathbb{H}\) be the sub-Finsler Heisenberg group, equipped with a strictly convex norm \(\|\cdot\|\) which is not \(C^{1}\), and let \(\mathfrak{m}\) be a smooth measure on \(\mathbb{H}\). Then, the metric measure space \((\mathbb{H},\mathsf{d}_{SF},\mathfrak{m})\) does not satisfy the measure contraction property \(\mathsf{MCP}(K,N)\) for every \(K\in\mathbb{R}\) and \(N\in(1,\infty)\).
Proof.: For simplicity, we assume \(\mathfrak{m}=\mathscr{L}^{3}\). As it is apparent from the proof, the same argument can be carried out in the general case.
According to Proposition 5.8, since \(\|\cdot\|\) is not \(C^{1}\), its dual norm \(\|\cdot\|_{*}\) is not strictly convex. In particular, there exists a straight segment contained in the sphere \(S^{\|\cdot\|_{*}}_{1}(0)=\partial\Omega^{\circ}\). Since the differential structure of the Heisenberg group is invariant under rotations around the \(z\)-axis, we can assume without loss of generality that this segment is vertical and contained in \(\mathbb{R}^{2}\cap\{x>0\}\), i.e. there exist \(\bar{x}\in\mathbb{R}\) and an interval \(I:=[y_{0},y_{1}]\subset\mathbb{R}\) such that
\[\{\bar{x}\}\times I\subset\partial\Omega^{\circ}.\]
Moreover, we can take the interval \(I\) to be maximal, namely for every \(y\not\in I\) we have \((\bar{x},y)\not\in\Omega\) (see Figure 8). Let \(\psi_{0}\in[0,2\mathbb{S}^{\circ})\) be such that \(Q_{\psi_{0}}=(\bar{x},y_{0})\), then it holds that
\[(\bar{x},y)=Q_{\psi_{0}+(y-y_{0})\bar{x}},\qquad\text{for every $y\in I$}. \tag{82}\]
As a consequence, we have that
\[\cos_{\Omega^{\circ}}(\psi_{0}+(y-y_{0})\bar{x})=\bar{x}\quad\text{and}\quad \sin_{\Omega^{\circ}}(\psi_{0}+(y-y_{0})\bar{x})=y,\qquad\text{for $y\in I$}. \tag{83}\]
Let \(y_{2}=\frac{1}{2}(y_{0}+y_{1})\) and \(\phi_{0}=\psi_{0}+\frac{1}{2}(y_{1}-y_{0})\bar{x}\), so that \((\bar{x},y_{2})=Q_{\phi_{0}}\) by (82). Moreover, take \(\phi_{1}>\psi_{1}:=\psi_{0}+(y_{1}-y_{0})\bar{x}\) sufficiently close to \(\psi_{1}\) (so that \(Q_{\phi_{1}}\) is not in the flat part of \(\partial\Omega^{\circ}\)) and call \(\bar{r}=\phi_{1}-\phi_{0}>0\). We are now going to prove that there exists a suitably small neighborhood \(\mathscr{A}\subset T^{*}_{0}\mathbb{H}\cong[0,2\mathbb{S}^{\circ})\times\mathbb{R}\times[0,\infty)\) of the point \((\phi_{0},\bar{r},\bar{r})\) such that
\[\mathscr{L}^{3}\big{(}G(\mathscr{A};1)\big{)}>0. \tag{84}\]
To prove this claim, one could argue directly by computing the Jacobian of the map \(G(\cdot,1)\) at the point \((\phi_{0},\bar{r},\bar{r})\); however, the computations are rather involved and do not display the geometrical features of the space. Thus, we instead prefer to present a different strategy, which highlights the interesting behaviour of geodesics.
Consider the map
\[F(\phi,\omega,r):=\big{(}x(\phi,\omega,r;1),y(\phi,\omega,r;1)\big{)},\]
where \(x(\phi,\omega,r;t),y(\phi,\omega,r;t)\) are defined as in (50), and observe that
\[F(\phi_{0},\bar{r},\bar{r})=(\sin_{\Omega^{\circ}}(\phi_{1})-\sin_{\Omega^{ \circ}}(\phi_{0}),\cos_{\Omega^{\circ}}(\phi_{0})-\cos_{\Omega^{\circ}}(\phi_{ 1}))=(\sin_{\Omega^{\circ}}(\phi_{1})-y_{2},\bar{x}-\cos_{\Omega^{\circ}}(\phi_ {1})).\]
Proceeding with hindsight, let \(\varepsilon>0\) be such that \(\varepsilon<\min\{\frac{1}{2}(\phi_{1}-\psi_{1}),\frac{1}{4}(\psi_{1}-\phi_{0})\}\) and consider the intervals \(I_{\phi}=[\phi_{0}-\varepsilon,\phi_{0}+\varepsilon]\) and \(I_{r}=[\bar{r}-\varepsilon,\bar{r}+\varepsilon]\), then the set \(F(I_{\phi}\times I_{r}\times I_{r})\) is a neighborhood of \(F(\phi_{0},\bar{r},\bar{r})\). Indeed, due to our choice of \(\phi_{1}\) the set
\[\{F(\phi_{0},r,r)\,:\,r\in[\bar{r}-\varepsilon/2,\bar{r}+\varepsilon/2]\} \subset\mathbb{R}^{2} \tag{85}\]
is a curve that is not parallel to the \(x\)-axis. Moreover, for every small \(\delta\) such that \(|\delta|<\psi_{1}-\phi_{0}=\phi_{0}-\psi_{0}\) and every \(r\in[\bar{r}-\varepsilon/2,\bar{r}+\varepsilon/2]\), the equalities in (83) imply the following relation:
\[F(\phi_{0}+\delta,r-\delta,r-\delta) =(\sin_{\Omega^{\circ}}(\phi_{0}+r)-\sin_{\Omega^{\circ}}(\phi_{ 0}+\delta),\cos_{\Omega^{\circ}}(\phi_{0}+\delta)-\cos_{\Omega^{\circ}}(\phi_ {0}+r))\] \[=(\sin_{\Omega^{\circ}}(\phi_{0}+r)-\sin_{\Omega^{\circ}}(\phi_{ 0})-\delta/\bar{x},\bar{x}-\cos_{\Omega^{\circ}}(\phi_{0}+r))\] \[=(-\delta/\bar{x},0)+F(\phi_{0},r,r).\]
This shows that \(F(I_{\phi}\times I_{r}\times I_{r})\) contains all sufficiently small horizontal translations of the set in (85) (see Figure 9), so it is a neighborhood of \(F(\phi_{0},\bar{r},\bar{r})\). In particular \(\mathscr{L}^{2}(F(I_{\phi}\times I_{r}\times I_{r}))>0\).
Now we claim that, for every point \((\tilde{x},\tilde{y},\tilde{z})=G(\tilde{\psi},\tilde{\omega},\tilde{r};1)\) with \(\tilde{\psi}\in I_{\phi}\), \(\tilde{\omega}\in I_{r}\) and \(\tilde{r}\in I_{r}\), there exists an interval \(J_{z}\ni\tilde{z}\) (depending on \(\tilde{x}\) and \(\tilde{y}\)) such that
\[\{(\tilde{x},\tilde{y},z)\,:\,z\in J_{z}\}\subset G([\tilde{\psi}-\varepsilon, \tilde{\psi}+\varepsilon],[\tilde{\omega}-\varepsilon,\tilde{\omega}+\varepsilon], [\tilde{r}-\varepsilon,\tilde{r}+\varepsilon];1). \tag{86}\]
This is enough to prove (84): indeed, on the one hand, (86) implies that
\[\{(\tilde{x},\tilde{y},z)\,:\,z\in J_{z}\}\subset G(I^{\prime}_{\psi}\times I^{ \prime}_{r}\times I^{\prime}_{r};1), \tag{87}\]
where \(I^{\prime}_{\psi}=[\phi_{0}-2\varepsilon,\phi_{0}+2\varepsilon]\), and \(I^{\prime}_{r}=[\bar{r}-2\varepsilon,\bar{r}+2\varepsilon]\). On the other hand, since (87) holds for every point \((\tilde{x},\tilde{y})\in F(I_{\phi}\times I_{r}\times I_{r})\), we deduce that
\[\mathscr{L}^{3}\big{(}G(I^{\prime}_{\psi}\times I^{\prime}_{r}\times I^{ \prime}_{r};1)\big{)}\geq\int_{F(I_{\phi}\times I_{r}\times I_{r})}\mathscr{L} ^{1}(J_{z}(\tilde{x},\tilde{y}))\,\mathrm{d}\tilde{x}\,\mathrm{d}\tilde{y}>0.\]
which implies (84) with \(\mathscr{A}=I^{\prime}_{\psi}\times I^{\prime}_{r}\times I^{\prime}_{r}\).
We proceed to the proof of claim (86): let \((\tilde{x},\tilde{y},\tilde{z})=G(\tilde{\psi},\tilde{\omega},\tilde{r};1)\) with \(\tilde{\psi}\in I_{\phi}\), \(\tilde{\omega}\in I_{r}\) and \(\tilde{r}\in I_{r}\) and consider the family of parallel lines
\[\big{\{}l(s)=\{y=s+kx\}\,:\,s\in\mathbb{R}\big{\}} \tag{88}\]
in \(\mathbb{R}^{2}\), following the direction identified by the vector \((\tilde{x},\tilde{y})\), see Figure 10. Call \(S^{\prime}\subset\mathbb{R}^{2}\) the sphere \(\partial\Omega^{\circ}\) dilated by \(\frac{\tilde{r}}{\tilde{\omega}}\) and rotated by \(-\frac{\pi}{2}\). Then, there exists \(\bar{s}\in\mathbb{R}\) such that \(l(\bar{s})\) intersects \(S^{\prime}\) in the points

\[\frac{\tilde{r}}{\tilde{\omega}}(\sin_{\Omega^{\circ}}(\tilde{\psi}),-\cos_{\Omega^{\circ}}(\tilde{\psi}))\qquad\text{ and }\qquad\frac{\tilde{r}}{\tilde{\omega}}(\sin_{\Omega^{\circ}}(\tilde{\psi}+\tilde{r}),-\cos_{\Omega^{\circ}}(\tilde{\psi}+\tilde{r})).\]
Let \(a(s)\) be the function that associates to \(s\) the area inside \(S^{\prime}\) and below \(l(s)\) and let \(d(s)\) be the function that associates to \(s\) the (Euclidean) distance between the two intersections of \(l(s)\) with \(S^{\prime}\) (see Figure 11). In particular, by our choice of \(\bar{s}\), we have \(d(\bar{s})=\|(\tilde{x},\tilde{y})\|_{eu}\) and, according to Proposition 5.12, \(a(\bar{s})=\tilde{z}\). Moreover, note that, by Lemma 5.28, the function
\[s\mapsto\frac{a(s)}{d(s)^{2}}\text{ is strictly increasing}. \tag{89}\]
Now, for every \(s\) close enough to \(\bar{s}\), the line \(l(s)\) intersects \(S^{\prime}\) in the points
\[\frac{\tilde{r}}{\tilde{\omega}}(\sin_{\Omega^{\circ}}(\psi(s)),-\cos_{\Omega ^{\circ}}(\psi(s)))\qquad\text{ and }\qquad\frac{\tilde{r}}{\tilde{\omega}}(\sin_{\Omega^{\circ}}(\psi(s)+r(s)),- \cos_{\Omega^{\circ}}(\psi(s)+r(s))),\]
with \(\psi(s)\in[\tilde{\psi}-\varepsilon,\tilde{\psi}+\varepsilon]\) and \(r(s)\in[\tilde{r}-\varepsilon/2,\tilde{r}+\varepsilon/2]\). By Proposition 5.12 and our choice of parallel lines in (88), we deduce that
\[G\bigg{(}\psi(s),r(s),r(s)\frac{\|(\tilde{x},\tilde{y})\|_{eu}}{d(s)};1\bigg{)} =\bigg{(}\tilde{x},\tilde{y},\frac{\|(\tilde{x},\tilde{y})\|_{eu}^{2}}{d(s)^{2 }}\cdot a(s)\bigg{)}.\]
Observe that, since \(d\) is a continuous function and \(d(\bar{s})=\left\|(\tilde{x},\tilde{y})\right\|_{eu}\), for every \(s\) sufficiently close to \(\bar{s}\) we have
\[r(s)\in[\tilde{r}-\varepsilon,\tilde{r}+\varepsilon]\subset I^{\prime}_{r} \qquad\text{and}\qquad r(s)\frac{d(s)}{\left\|(\tilde{x},\tilde{y})\right\|_{ eu}}\in[\tilde{r}-\varepsilon,\tilde{r}+\varepsilon]\subset I^{\prime}_{r}.\]
Then, (89) is sufficient to conclude the existence of an interval \(J_{z}\subset\mathbb{R}\) as in (86). This concludes the proof of claim (84) with the choice \(\mathscr{A}=I^{\prime}_{\psi}\times I^{\prime}_{r}\times I^{\prime}_{r}\).
Finally, we are ready to disprove the measure contraction property \(\mathsf{MCP}(K,N)\), taking as marginals
\[\mu_{0}:=\delta_{\mathrm{e}}\qquad\text{and}\qquad\mu_{1}:=\frac{1}{\mathscr{ L}^{3}(G(\mathscr{A};1))}\,\mathscr{L}^{3}|_{G(\mathscr{A};1)}.\]
Note that, thanks to our construction of the set \(\mathscr{A}\), the curve \(t\mapsto G(\lambda;t)\), with \(\lambda\in\mathscr{A}\), is the unique geodesic joining the origin and \(G(\lambda;1)\) (cf. Theorem 5.9). Therefore, according to Remark 2.6, it is enough to contradict (8) with \(A^{\prime}=A=G(\mathscr{A};1)\). In particular, we prove that there exists \(t_{0}\in(0,1)\) such that
\[M_{t}(\{\mathrm{e}\},A)\subset\{y=0,z=0\},\qquad\forall\,t<t_{0}. \tag{90}\]
To this aim, fix any \((\phi,\omega,r)\in\mathscr{A}\) and note that, for every \(t<\frac{\psi_{1}-\phi}{\omega}\), (83) implies that
\[\cos_{\Omega^{\circ}}(\phi+\omega t)=\bar{x}\qquad\text{and}\qquad\sin_{ \Omega^{\circ}}(\phi+\omega t)=\sin_{\Omega^{\circ}}(\phi)+\frac{\omega t}{ \bar{x}}.\]
From these relations, it follows immediately that
\[y(\phi,\omega,r;t)=0\qquad\text{and}\qquad z(\phi,\omega,r;t)=0,\]
for every \(t<\frac{\psi_{1}-\phi}{\omega}\). Observe that, by our choice of \(\varepsilon\) small enough, \(\frac{\psi_{1}-\phi}{\omega}\) is bounded from below by a positive constant uniformly as \(\phi\in I^{\prime}_{\psi}\) and \(\omega\in I^{\prime}_{r}\), thus ensuring the existence of a constant \(t_{0}\in(0,1)\) for which (90) holds.
_Remark 5.27_.: In the last step of the proof of the preceding theorem, we established the existence of a family of branching geodesics: namely those corresponding to a flat part of \(\partial\Omega^{\circ}\). In particular, when \(\mathbb{H}\) is equipped with a strictly convex and singular norm, geodesics can branch, although they are unique. This is remarkable as examples of branching spaces usually occur when geodesics are not unique.
**Lemma 5.28**.: Let \(f:\mathbb{R}\to\mathbb{R}\) be a concave and \(C^{1}\) function. Assume that there exist \(\alpha_{0}<\beta_{0}\) such that
\[f(\alpha_{0})=f(\beta_{0})=0\qquad\text{and}\qquad f>0\,\text{ on }\,(\alpha_{0},\beta_{0}). \tag{91}\]
For every \(s\in[0,\max f)\), define \(\alpha(s)<\beta(s)\) such that
\[\{y=s\}\cap\operatorname{Graph}(f)=\{(\alpha(s),s)\,;(\beta(s),s)\}.\]
Denote by \(a(s)\) the area enclosed by the line \(\{y=s\}\) and the graph of \(f\), and by \(d(s):=\beta(s)-\alpha(s)\) (see Figure 12). Then,
\[[0,\max f)\ni s\mapsto\frac{a(s)}{d^{2}(s)}\qquad\text{is strictly decreasing}.\]
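For instance, for the concave parabola \(f(x)=1-x^{2}\), a direct computation gives

\[d(s)=2\sqrt{1-s},\qquad a(s)=\int_{-\sqrt{1-s}}^{\sqrt{1-s}}(1-x^{2}-s)\,\mathrm{d}x=\frac{4}{3}(1-s)^{3/2},\qquad\frac{a(s)}{d^{2}(s)}=\frac{\sqrt{1-s}}{3},\]

which is indeed strictly decreasing on \([0,1)\). For a piecewise-linear tent profile the ratio would instead be constant; this degenerate case is excluded by the \(C^{1}\) assumption, as the proof below makes precise.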
Proof.: Fix \(0\leq s_{1}<s_{2}<\max f\), then it is sufficient to prove that
\[A_{1}:=a(s_{1})>\frac{d^{2}(s_{1})}{d^{2}(s_{2})}a(s_{2})=:A_{2}. \tag{92}\]
Observe that, by definition, \(a(s)=\int_{\alpha(s)}^{\beta(s)}\left(f(t)-s\right)\mathrm{d}t\), therefore:
\[A_{1}=\int_{\alpha(s_{1})}^{\beta(s_{1})}\left(f(t)-s_{1}\right)\mathrm{d}t, \qquad A_{2}=\frac{d(s_{1})}{d(s_{2})}\int_{\alpha(s_{1})}^{\beta(s_{1})} \left[f\left(\alpha(s_{2})+\frac{d(s_{2})}{d(s_{1})}(t-\alpha(s_{1}))\right)-s _{2}\right]\,\mathrm{d}t,\]
where, for \(A_{2}\), we used the change of variables \(t\mapsto\alpha(s_{1})+\frac{d(s_{1})}{d(s_{2})}(t-\alpha(s_{2}))\). For ease of notation, set \(g\) to be the integrand of \(A_{2}\), namely:
\[g(t):=\frac{d(s_{1})}{d(s_{2})}\left[f\left(\alpha(s_{2})+\frac{d(s_{2})}{d(s_ {1})}(t-\alpha(s_{1}))\right)-s_{2}\right],\qquad\forall\,t\in\mathbb{R}.\]
Now, let \(\tilde{t}\in[\alpha(s_{1}),\beta(s_{1})]\) be such that
\[\tilde{t}=\alpha(s_{2})+\frac{d(s_{2})}{d(s_{1})}(\tilde{t}-\alpha(s_{1})).\]
Note that, by linearity, \(t\leq\tilde{t}\) if and only if \(t\leq\alpha(s_{2})+\frac{d(s_{2})}{d(s_{1})}(t-\alpha(s_{1}))\). Thus, for every \(t\leq\tilde{t}\), the concavity of \(f\) yields that
\[g^{\prime}(t)=f^{\prime}\bigg{(}\alpha(s_{2})+\frac{d(s_{2})}{d(s_{1})}(t- \alpha(s_{1}))\bigg{)}\leq f^{\prime}(t). \tag{93}\]
Therefore, observing that \(g(\alpha(s_{1}))=0\), we deduce that, for every \(t\leq\tilde{t}\),
\[g(t)=\int_{\alpha(s_{1})}^{t}g^{\prime}(r)\,\mathrm{d}r\leq\int_{\alpha(s_{1} )}^{t}f^{\prime}(r)\,\mathrm{d}r=f(t)-s_{1}. \tag{94}\]
The same inequality can be proved for every \(t\geq\tilde{t}\), proceeding in a symmetric way. Thus, integrating both sides of (94), we obtain \(A_{1}\geq A_{2}\). Finally, observe that if \(A_{1}=A_{2}\) then also (93) is an equality, for every \(t<\tilde{t}\). By concavity, this implies that \(f^{\prime}(t)\equiv c_{1}\) for every \(t<\tilde{t}\). Analogously, \(f^{\prime}(t)\equiv c_{2}\) for every \(t>\tilde{t}\) and (91) implies that we must have \(c_{1}\neq c_{2}\). Thus, \(f\) is linear on \((\alpha(s_{1}),\tilde{t})\) and \((\tilde{t},\beta(s_{1}))\) and not differentiable at \(\tilde{t}\). But this contradicts the assumption that \(f\in C^{1}(\mathbb{R})\), proving claim (92).
2307.07289 | Real-time Graph Building on FPGAs for Machine Learning Trigger
Applications in Particle Physics | We present a design methodology that enables the semi-automatic generation of
hardware-accelerated graph building architectures for locally constrained
graphs based on formally described detector definitions. In addition, we define
a similarity measure in order to compare our locally constrained graph building
approaches with commonly used k-nearest neighbour building approaches. To
demonstrate the feasibility of our solution for particle physics applications,
we implemented a real-time graph building approach in a case study for the
Belle II central drift chamber using Field-Programmable Gate Arrays (FPGAs).
Our presented solution adheres to all throughput and latency constraints
currently present in the hardware-based trigger of the Belle II experiment. We
achieve constant time complexity at the expense of linear space complexity and
thus prove that our automated methodology generates online graph building
designs suitable for a wide range of particle physics applications. By enabling
a hardware-accelerated pre-processing of graphs, we enable the deployment of
novel Graph Neural Networks (GNNs) in first level triggers of particle physics
experiments. | Marc Neu, Juergen Becker, Philipp Dorwarth, Torben Ferber, Lea Reuter, Slavomira Stefkova, Kai Unger | 2023-07-14T12:02:26Z | http://arxiv.org/abs/2307.07289v2 | # Real-time Graph Building on FPGAs for Machine Learning Trigger Applications in Particle Physics
###### Abstract
We present a design methodology that enables the semi-automatic generation of hardware-accelerated graph building architectures for locally constrained graphs based on formally described detector definitions. In addition, we define a similarity measure in order to compare our locally constrained graph building approaches with commonly used k-nearest neighbour building approaches. To demonstrate the feasibility of our solution for particle physics applications, we implemented a real-time graph building approach in a case study for the Belle II central drift chamber using Field-Programmable Gate Arrays (FPGAs). Our presented solution adheres to all throughput and latency constraints currently present in the hardware-based trigger of the Belle II experiment. We achieve constant time complexity at the expense of linear space complexity and thus prove that our automated methodology generates online graph building designs suitable for a wide range of particle physics applications. By enabling a hardware-accelerated pre-processing of graphs, we enable the deployment of novel Graph Neural Networks (GNNs) in first level triggers of particle physics experiments.
**Keywords:** graph building, graph neural networks, field programmable gate arrays, particle physics, machine learning, nearest neighbour, Belle II
## 1 Introduction
Machine Learning is widely used in particle physics for various reconstruction tasks and Graph Neural Networks (GNNs) are recognised as one possible solution for irregular geometries in high energy physics. GNNs have proven suitable for jet clustering [1], calorimeter clustering [2], particle track reconstruction [3, 4, 5], particle tagging [6, 7] and particle flow reconstruction [8]. However, all applications described above are implemented in an offline environment, relying on high performance computing clusters utilising Central Processing Units (CPUs) and Graphics Processing Units (GPUs) to achieve the required throughput for the analysis of collision events. Therefore, existing implementations are not suitable for
real-time particle tracking and reconstruction in trigger systems of particle detectors.
The realisation of GNNs on FPGAs for particle tracking is an active area of research [4, 9, 10, 11]. Due to latency and throughput constraints, a suitable implementation meeting all requirements imposed by particle physics experiments is yet to be developed. In particular, the generation of input graphs under latency constraints is a challenge that has so far not received full attention in the evaluation of existing prototypes. Current prototypes as described in [4, 9] are trained on preprocessed graph datasets, taking into account geometric properties of detectors. However, a holistic implementation of GNNs for triggers requires the consideration of the entire data flow chain. This raises the question of how to build graphs under latency constraints in high-throughput particle physics applications.
In our work, we consider constraints from currently operating first level trigger systems [12, 13, 14]: event processing rates in the order of \(10\,\mathrm{MHz}\) to \(100\,\mathrm{MHz}\) and latencies in the order of 1 µs to 10 µs render the utilisation of compound platforms based on CPUs and Field Programmable Gate Arrays (FPGAs) used in other research areas infeasible [15, 16].
To overcome the research gap, our work comprises the following contributions: First, we outline existing nearest neighbour graph-building methods and evaluate their feasibility for trigger applications. Second, we develop a methodology to transform formal graph-building approaches to hardware accelerated processing elements in an automated way. Third, we evaluate our proposed toolchain on the Belle II central drift chamber (CDC), demonstrating the feasibility of our solution to build graphs under the constraints imposed by current trigger systems.
The paper is organised as follows: In section 2 we give an overview of related work on FPGA-accelerated graph building. The CDC, the event simulation and details of the beam background simulation are described in section 3. The methodology for transforming discrete sensor signals into a graphical representation is discussed in section 4. The procedure for implementing real-time graph building in hardware is described in section 5. A concrete example of real-time graph building for the Belle II CDC is provided in section 6. We summarise our results in section 7.
## 2 Related Work
Previous work on FPGA-accelerated GNNs for particle physics utilises input graphs based on synchronously sampled collision events as input for training and inference of the respective networks [4, 17]. Early studies made use of fully connected graphs, which led to scalability challenges for detectors with more than 10 individual sensors [18]. Typical particle physics trigger systems have a much higher number of sensors, though (see table 1).
Aiming to significantly reduce the maximum size of input graphs, the geometric arrangement of sensors in the detector has been considered recently [3, 5]. Nevertheless, input graphs are currently generated offline, stored in the FPGA memory and accessed over AXI1 Memory-Mapped interfaces in prototype implementations [9]. However, as sensors in detectors are read out as individual channels without providing relational information, the processing of input graphs must be considered as part of the critical path in online track reconstruction and trigger algorithms.
Footnote 1: AXI (Advanced eXtensible Interface) is an on-chip communication bus protocol.
While building suitable input graphs for neural networks is a rather recent application, general nearest neighbour (NN) graph building has been studied extensively in literature [22, 23, 24]. In order to reduce the computational demand of NN graph-building algorithms, continuous efforts have
| | CMS [9, 21] | Belle II [19] | DUNE [20] |
| --- | --- | --- | --- |
| Subsystem | Muon | CDC | ProtoDUNE SP |
| Number of inputs | 6500 | 14 336 | 15 360 |
| Trigger data input rate | 40 MHz | 32 MHz | 2 MHz |

Table 1: Input parameters for the first level trigger systems of three current particle physics detectors. For CMS, the 95 % quantile of the number of sensor hits per event is reported in [9], while for the Belle II CDC [19] and DUNE [20] the number of sensor inputs is given.
been made towards building approximate graphs making use of locality-sensitive hashing [25, 26], backtracking [27], or small-world graphs [28]. Performance improvements from these algorithms have been demonstrated for applications targeting high-dimensional graphs containing more than \(10^{6}\) vertices, such as database queries [29]. There are two key challenges that limit the generalisation of these techniques in the particle physics trigger context. First, \(k\)-nearest neighbour (\(k\)-NN) algorithms inherently rely on sequential processing and present challenges in efficient parallelisation. Second, while there is a wide range of graph-processing frameworks available (see Ref. [30] for a survey on graph processing accelerators), none of them meet the stringent latency and throughput requirements of current particle physics trigger systems: FFNG [31] focuses on the domain of high-performance computing and therefore does not impose hard real-time constraints. GraphGen [32] relies on external memory controllers, which introduce additional latency into the system. GraphACT [16, 33] utilises preprocessing techniques on CPU-FPGA compound structures in order to optimise throughput and energy efficiency, which again introduces non-determinism and additional latency. Lastly, current GNN accelerators like HyGCN [34] or AWB-GCN [35] use the previously described techniques to reduce the required system bandwidth and improve the energy efficiency of the inference. They are therefore not suitable for particle physics applications.
## 3 Simulation and Dataset
In this work, we use simulated Belle II events to benchmark the graph-building algorithms. The detector geometry and interactions of final state particles with the material are simulated using GEANT4[36], which is combined with the simulation of a detector response in the Belle II Analysis Software Framework [37]. The Belle II detector consists of several subdetectors arranged around the beam pipe in a cylindrical structure that is described in detail in Ref. [38, 39]. The solenoid's central axis is the \(z\)-axis of the laboratory frame. The longitudinal direction, the transverse \(xy\) plane with azimuthal angle \(\phi\), and the polar angle \(\theta\) are defined with respect to the detector's solenoidal axis in the direction of the electron beam. The CDC consists of 14336 sense wires surrounded by field wires which are arranged in nine so-called superlayers of two types: axial and stereo superlayers. The stereo superlayers are slightly angled, allowing for 3D reconstruction of the track. In the simulated events, we only keep the detector response of the CDC.
We simulated two muons (\(\mu^{+}\),\(\mu^{-}\)) per event with momentum \(0.5<p<5\) GeV/c, and direction \(17^{\circ}<\theta<150^{\circ}\) and \(0^{\circ}<\phi<360^{\circ}\) drawn randomly from independent uniform distributions in \(p\), \(\theta\), and \(\phi\). The generated polar angle range corresponds to the full CDC acceptance. Each of the muons is displaced from the interaction point between 20 cm and 100 cm, where the displacement is drawn randomly from independent uniform distributions.
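The generation of the muon kinematics can be summarised by the following minimal sketch; the ranges are those quoted above, while the use of numpy and all function names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_muon():
    """Draw one muon's kinematics from independent uniform distributions."""
    p = rng.uniform(0.5, 5.0)                # momentum in GeV/c
    theta = rng.uniform(17.0, 150.0)         # polar angle in degrees (CDC acceptance)
    phi = rng.uniform(0.0, 360.0)            # azimuthal angle in degrees
    displacement = rng.uniform(20.0, 100.0)  # displacement from the IP in cm
    return p, theta, phi, displacement

# one simulated event: a mu+ mu- pair
event = [("mu+", *sample_muon()), ("mu-", *sample_muon())]
```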
As part of the simulation, we overlay simulated beam background events corresponding to an instantaneous luminosity of \(\mathcal{L}_{\text{beam}}=6.5\times 10^{35}\,\text{cm}^{-2}\text{s}^{-1}\) [40, 41]. The simulated conditions are similar to those we expect once the experiment reaches its ultimate design luminosity.
An example of an event display for a physical event \(e^{+}e^{-}\rightarrow\mu^{+}\mu^{-}(\gamma)\) is shown in fig. 1.
Figure 1: Typical event display showing the transverse plane of the Belle II CDC. Hits generated by signal muon particles are shown with purple markers and background hits by black markers.
## 4 Graph Building
This work proposes a methodology for transforming discrete sensor signals captured inside a particle detector into a graphical representation under real-time constraints. Particular importance is given to the use-case of particle physics trigger algorithms, adhering to tight latency constraints in the sub-microsecond timescale.
Current large-scale particle detectors are composed of various discrete sensors which, often due to technical limitations, are placed heterogeneously inside the system. For this reason, signals from the sensors cannot be considered regularly distributed, as is the case with, for example, monolithic image sensors. In the following, a detector \(D\) is defined as a set of \(N\) discrete sensors \(\{\vec{s}_{1},...,\vec{s}_{N}\}\), where each individual sensor \(\vec{s}_{i}\) is described by a feature vector of length \(f\). Examples of such features are the euclidean location inside the detector, the timing information of the received signal, or a discrete _hit identifier_. To map relational connections between individual sensors, a graph based on the detector description is generated which contains the respective sensor features.
Formally described, a graph building algorithm generates a non-directed graph \(G(D,E)\), where \(D\) is the set of vertices of the graph, and \(E\subseteq D\times D\) is the set of edges. The set of vertices is directly given by the previously described set of sensors in a detector. Each edge \(e_{ij}=e(\vec{s}_{i},\vec{s}_{j})\in E\) with \(\vec{s}_{i},\vec{s}_{j}\in D\) in the graph connects two sensors based on a building specification that depends on sensor features. In the following, we consider the case of building non-directed graphs. We do not introduce any fundamental restrictions that limit the generalisation of our concept to directed graphs.
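The formal definitions above translate directly into a small data model. The following Python sketch, which later examples in this section reuse, is illustrative only: all field names and types are our own choices, not part of the formal definition.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sensor:
    """One discrete sensor with (part of) its feature vector of length f."""
    index: int
    x: float           # static feature: euclidean position
    y: float
    hit: bool = False  # dynamic feature: discrete hit identifier
    tdc: int = 0       # dynamic feature: timing information

@dataclass
class Graph:
    """Non-directed graph G(D, E) over the detector D."""
    vertices: list                           # the set of sensors D
    edges: set = field(default_factory=set)  # index pairs (i, j), i < j

    def add_edge(self, i: int, j: int) -> None:
        self.edges.add((min(i, j), max(i, j)))
```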
In general, graph building approaches are tailored to the specific detector and physics case. We consider three approaches that can be classified into two classes of nearest-neighbour graph building: locally constrained graphs, and locally unconstrained graphs.
Figure 2 depicts an exemplary cut-out of a detector, in which sensors are placed heterogeneously in two-dimensional space. For simplicity, sensors are aligned in a grid-like structure without restricting the generality of our graph-building approach. A graph is built for a query vertex which is depicted by a solid black circle. We use the exemplary query vertex to illustrate NN-graph building on a single vertex for simplicity. In the following, we compare the three building approaches and explain their differences.
### \(\boldsymbol{k}\)-Nn
\(k\)-NN graph building is illustrated on a single query node in fig. 2(a). Repeating the building algorithm sequentially leads to a worst-case execution time complexity of \(\mathcal{O}(k|D|\log(|D|))\) [22]. To reduce the execution time, parallelization of the algorithm has been studied in Ref. [23], achieving a lower theoretical complexity. Based on the optimization, a linear \(\mathcal{O}(|D|)\) time complexity is achieved in experimental evaluation [24]. Nevertheless, substantial processing overhead and limitations through exclusive-read, exclusive-write memory interfaces limit the usability for trigger applications. To achieve a higher degree of parallelization, algorithms as described in Ref. [26, 27] make use of locally constrained approximate graphs.
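For reference, a brute-force \(k\)-NN builder over the Sensor sketch above reads as follows; the asymptotic bounds quoted in the text rely on more elaborate data structures, which this deliberately simple version omits.

```python
import math

def knn_graph(sensors, k):
    """Connect every hit sensor to its k nearest hit neighbours (brute force)."""
    hits = [s for s in sensors if s.hit]
    edges = set()
    for q in hits:
        # sort all other hit sensors by euclidean distance to the query vertex
        neighbours = sorted(
            (s for s in hits if s.index != q.index),
            key=lambda s: math.hypot(s.x - q.x, s.y - q.y),
        )
        for n in neighbours[:k]:
            edges.add((min(q.index, n.index), max(q.index, n.index)))
    return edges
```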
Figure 2: Example of the three different approaches for building nearest neighbour graphs. Sensors inside a detector are depicted as circles. A sensor which is hit by a particle is identified by a solid outline, those without a hit by a dotted outline. The query vertices are depicted in black. Edges connecting two nearest neighbours are indicated by a solid line. Nodes filled with purple are considered candidate sensors, which are part of the specified search pattern around the query vertex.
### \(\varepsilon\)-Nn
\(\varepsilon\)-NN graph building is illustrated on a single query node in fig. 2(b). The parameter \(\varepsilon\) defines an upper bound for the distance of a candidate vertex from the query vertex. All vertices for which eq. (1) holds true are connected in a graph, yielding a locally constrained graph. Figuratively, a uniform sphere is centred on the query point and every vertex inside the sphere is joined to the query vertex:
\[d(\vec{s}_{i},\vec{s}_{j})=\left\|\vec{s}_{i}-\vec{s}_{j}\right\|_{2}<\varepsilon \tag{1}\]
Since the \(\varepsilon\)-NN approach is controlled by only one parameter, it is a general approach to building locally constrained graphs. However, variations in the spacing of adjacent sensors in heterogeneous detectors are not well represented by the \(\varepsilon\)-NN algorithm.
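In code, the \(\varepsilon\)-NN rule of eq. (1) amounts to a single distance check per sensor pair (again reusing the Sensor sketch above):

```python
import math

def eps_nn_graph(sensors, eps):
    """Connect two hit sensors whenever their euclidean distance is below eps."""
    hits = [s for s in sensors if s.hit]
    edges = set()
    for a in range(len(hits)):
        for b in range(a + 1, len(hits)):
            si, sj = hits[a], hits[b]
            if math.hypot(si.x - sj.x, si.y - sj.y) < eps:
                edges.add((min(si.index, sj.index), max(si.index, sj.index)))
    return edges
```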
### \(p\)-Nn
Pattern nearest-neighbour (\(p\)-NN) graph building is illustrated on a single query node in fig. 2(c). For building the graph, every candidate sensor is checked and, if a predefined condition is fulfilled, the edge between the candidate node and the query node is included in the graph.
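The \(p\)-NN variant replaces the distance bound by an arbitrary design-time predicate; the grid-adjacency pattern below is purely illustrative, since the actual pattern is detector-specific.

```python
def pattern_metric(si, sj):
    """Illustrative pattern: sensors in adjacent grid cells are neighbours."""
    return max(abs(si.x - sj.x), abs(si.y - sj.y)) <= 1.0

def p_nn_graph(sensors, metric=pattern_metric):
    """Connect two hit sensors whenever the design-time pattern is fulfilled."""
    hits = [s for s in sensors if s.hit]
    return {
        (min(a.index, b.index), max(a.index, b.index))
        for i, a in enumerate(hits)
        for b in hits[i + 1:]
        if metric(a, b)
    }
```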
### Comparison
When comparing the \(k\)-NN, the \(\varepsilon\)-NN and the \(p\)-NN algorithms, it is obvious that in general all three approaches yield different graphs for the same input set of sensors. While the \(p\)-NN building and the \(\varepsilon\)-NN building can both be considered locally constrained algorithms, the \(k\)-NN approach differs as outliers far away from a query point might be included. Nevertheless, it is noted in Ref. [42] that on a uniformly distributed dataset a suitable upper bound \(\varepsilon^{*}\) exists for which the resulting \(\varepsilon\)-NN graph is a good approximation of the corresponding \(k\)-NN graph.
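To make the notion of a "good approximation" concrete, the agreement of two approaches can be quantified on their edge sets; the Jaccard overlap below is one simple illustrative choice, whereas the similarity measure actually used in our evaluation is introduced in section 6.

```python
def edge_set_similarity(edges_a, edges_b):
    """Jaccard overlap of two edge sets: 1.0 for identical graphs, 0.0 for disjoint ones."""
    if not edges_a and not edges_b:
        return 1.0
    return len(edges_a & edges_b) / len(edges_a | edges_b)
```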
## 5 Toolchain
In the following, we leverage the described mathematical property to demonstrate the feasibility of building approximate \(k\)-NN graphs for trigger applications. First, we provide a methodology to evaluate the approximate equivalence of \(k\)-NN, \(\varepsilon\)-NN and \(p\)-NN graph building approaches, yielding a measure of generality for \(k\)-NN parameters chosen in offline track reconstruction algorithms [3, 20]. Second, we semi-automatically generate a generic hardware implementation for the \(p\)-NN graph building as an application-specific version of the \(\varepsilon\)-NN graph building, thus demonstrating the feasibility of graph-based signal processing in low-level trigger systems. Third, we perform a case study on the Belle II trigger system demonstrating achievable throughput and latency measures in the environment of trigger applications.
### Hardware Generator Methodology
Algorithms that generate graphs by relating multiple signal channels belong to the domain of digital signal processing. As such they share characteristics of typical signal processing applications like digital filters or neural networks. Both applications are data-flow dominated and require a large number of multiply-and-accumulate operators and optimizations for data throughput. Thus, implementing these algorithms on FPGAs shows promising results in comparison to implementations on general-purpose processors [43].
Developing custom digital logic for FPGAs is time-consuming and error-prone. To increase productivity, various high-level synthesis (HLS) frameworks have been developed that transform digital signal processing applications from formal definitions into hardware implementations, reducing the required design effort. For example, digital filters are automatically implemented by commercially available development tools like MATLAB and hardware-aware training and deployment of neural networks is addressed by open-source toolchains like FINN [44, 45] and HLS4ML [46, 47]. While these frameworks have lowered the entry barriers for FPGA-algorithm development, their off-the-shelf usability is limited to pre-defined neural network architectures. In addition, adapting the frameworks to support custom architectures is often time-consuming and error-prone.
Therefore, we propose a generator-based methodology that transforms a graph building algorithm into an actual firmware implementation. Figure 3 illustrates our development flow
for both the generation of an intermediate representation of the circuit and an algorithmic evaluation of the building approach. As an input, a database containing the formal definition of a detector is expected, alongside hyperparameters describing the building approach. Based on the selected approach, an intermediate-graph representation is generated, containing information on how the building approach is mapped onto the detector. The intermediate-graph representation serves as an input for the hardware generation and the algorithmic evaluation.
On one side, an intermediate-circuit representation is generated by combining the intermediate-graph representation and parameterised hardware modules from our hardware description language (HDL) template library. We use Chisel3 [48] as our hardware-design language, which provides an entry point to register-transfer-level circuit design in Scala.
On the other side, the intermediate-graph representation is evaluated on a user-defined dataset and compared to a generic \(k\)-NN graph-building approach. To achieve a quantitative comparison we introduce similarity metrics for different operating conditions in the detector in section 6. This result can be used to iteratively adapt hyperparameters in the \(\varepsilon\)-NN or \(p\)-NN approach, improving the similarity to \(k\)-NN graphs that are often used in offline track reconstruction.
### Intermediate-Graph Representation
The parameter \(\varepsilon\) in the \(\varepsilon\)-NN approach and the pattern function in the \(p\)-NN approach limit the dimensionality of the graph under construction. In comparison to fully-connected graphs, the maximum number of edges is lowered by imposing local constraints on the connectedness of sensors in the detector. Local constraints are implemented by considering the influence of static sensor features, like euclidean distances between sensors, during design time of the FPGA firmware. Leveraging this a-priori knowledge of the sensor positions lowers the computational effort required during online inference of the algorithm.
Algorithm 1 describes the procedure to derive the intermediate-graph representation of an arbitrary graph-building procedure. As an input, the formally described set of sensors \(D\) is given. Iterating over every sensor in the detector, the locality of not yet visited sensors is checked by a user-defined _metric_ describing the graph building approach. If a sensor is considered to be in the neighbourhood of another sensor, the connection is added to the resulting set of edge candidates \(E\). All edges in \(E\) must be checked for their validity during the inference of the online graph building.
The combination of the formal detector description and the set of candidate edges is sufficient to describe an arbitrary building approach on non-directed graphs. According to algorithm 1, the worst-case time complexity during design-time amounts to \(\mathcal{O}(\left|D\right|^{2})\), which is higher than the worst-case time-complexity of state-of-the-art \(k\)-NN building approaches. However, the worst-case time-complexity during run-time is now only dependent on the number of identified edges during design-time. Therefore, generating a graph of low dimensionality by choosing a suitable _metric_
Fig. 3: Proposed generator-based methodology for our graph building approach. On the left side, the development flow for the hardware implementation is depicted, yielding an intermediate hardware representation. On the right side, flow for the algorithmic evaluation of the algorithms is shown.
considerably lowers the number of required comparisons at run-time. Such an optimization would not be possible when using a \(k\)-NN approach, as even for a low dimensionality all possible edges must be considered.
```
Input: set of sensors \(D\)
Output: set of candidate edges \(E\)
procedure buildGraph(\(D\))
    \(E\leftarrow\emptyset\)
    while \(D\neq\emptyset\) do
        \(s_{i}\leftarrow D.pop()\)
        for all \(s_{j}\in D\) do
            if \(metric(s_{i},s_{j})\) then
                \(E\leftarrow E\cup\{e_{ij}\}\)
            end if
        end for
    end while
    return \(E\)
end procedure
```
**Algorithm 1** Design-time graph building
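A direct Python rendition of Algorithm 1 (reusing the Sensor sketch from section 4) may look as follows; the \(\mathcal{O}(|D|^{2})\) pair enumeration is paid once at design time, so that at run time only the pre-identified candidates remain to be checked.

```python
def build_candidate_edges(detector, metric):
    """Enumerate all candidate edges of the intermediate-graph representation."""
    remaining = list(detector)
    candidates = set()
    while remaining:
        s_i = remaining.pop()
        for s_j in remaining:  # every not yet visited sensor
            if metric(s_i, s_j):
                candidates.add((min(s_i.index, s_j.index),
                                max(s_i.index, s_j.index)))
    return candidates

# example: an epsilon bound on the static sensor distance as the user-defined metric
# candidates = build_candidate_edges(
#     sensors, lambda a, b: math.hypot(a.x - b.x, a.y - b.y) < 2.5)
```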
### Full Toolchain Integration
Our methodology covers the conversion of an arbitrary graph building algorithm into an intermediate-circuit representation. The resulting intermediate-circuit representation, implemented as a hardware module on the FPGA, exposes multiple interfaces. On the input side, heterogeneous sensor data is supplied through a parallel interface as defined in the detector description. On the output side, graph features are accessible through a parallel register interface to provide edge features to successive processing modules.
Considering the application of our module in a latency-sensitive, high-throughput environment like particle experiments, direct access to graph data is required at the hardware level. Therefore, bus architectures employed in general-purpose processors, like AXI or AMBA, are not suitable for our use case. To reduce the communication overhead between the registers, which store graph data, and the algorithmic processing units, an analysis of data paths during the generation of the final FPGA firmware is required.
Figure 4 depicts, by way of example, how our graph building methodology is combined with state-of-the-art HLS tools enabling the generation of hardware-accelerated neural networks. The left side of the figure depicts a generic HLS flow converting, for example, a PyTorch [49] neural network model into hardware modules. There are numerous HLS toolchains available for deploying neural networks on FPGAs, for example HLS4ML [47], FINN [44, 45] or ScaleHLS [50, 51]. The register-transfer-level description of hardware modules generated by HLS toolchains is composed of discrete registers, wires, and synthesizable operations. In a similar way, the right side of the figure depicts our proposed graph building procedure. The formal detector description and the user-defined graph building _metric_ are used as an input to generate a register-transfer-level description of the hardware module. As both toolchains generate hardware descriptions at the register-transfer abstraction level, merging the two modules is feasible. Lastly, a top-level design combining both modules in SystemVerilog [52] is generated for an FPGA-specific implementation using commercially available toolchains, for example Vivado ML [53].
### Module Architecture
Utilising the generated intermediate graph description, available generator templates, and
Figure 4: Exemplary integration of our graph building methodology into a state-of-the-art HLS design flow.
user-defined hyperparameters, a hardware module is generated at the register-transfer level. The system architecture of the module is depicted in fig. 5. The total number of graph edges is factorised into \(M\) edge processing elements and \(N\) graph edges per edge processing element. Readings from the detector sensors are routed to an array of \(M\) edge processing elements via a static distribution network. Every edge processing element builds \(N\) graph edges in a time-division multiplex. For each edge, two adjacent vertices are required which are provided to the edge processing element in two arrays of length \(N\). Consequently, graph edges are built from candidates identified at design time yielding a sparse array of both active and inactive edges. In the described architecture, all generated edges are accessible through parallel registers. In case a serial interface is required for successive algorithms, an interface transformation is achieved by adding FIFO modules.
Figure 6 illustrates the block level diagram of an edge processing element in detail. During design-time, each hardware module is allocated \(N\) edges which are built sequentially. Static allocation allows a-priori known sensor and edge features, like euclidean distances, to be stored in read-only registers. During run-time, the described module loads static features from the registers, combines them with variable input features, like the deposited energy, and classifies the edge as active or inactive. The online graph building is carried out in three steps. First, a pair of sensor readings is loaded from the shift registers, and static sensor and edge features are loaded from a static lookup table. Second, a Boolean flag is generated based on a neighbourhood condition, i.e., whether a user-specified metric is fulfilled for two adjacent sensors. Third, the resulting feature vector of the edge is stored in the respective register. Feature vectors of all edge processing elements are routed via a second static distribution network mapping each edge to a fixed position in the output register.
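The three processing steps can be summarised in a behavioural software model; the sketch below is not the generated RTL, and all names as well as the exact register layout are illustrative.

```python
class EdgeProcessingElement:
    """Behavioural model of one edge processing element handling N candidate
    edges in time-division multiplex."""

    def __init__(self, static_lut):
        # static_lut[n] = (idx_a, idx_b, distance): design-time features of edge slot n
        self.static_lut = static_lut
        self.edge_registers = [None] * len(static_lut)

    def process(self, readings, condition):
        """One multiplex round over all N allocated edges."""
        for n, (ia, ib, dist) in enumerate(self.static_lut):
            # step 1: load the pair of sensor readings and the static features
            sa, sb = readings[ia], readings[ib]
            # step 2: boolean flag from the neighbourhood condition
            active = sa.hit and sb.hit and condition(sa, sb, dist)
            # step 3: store the resulting edge feature vector
            self.edge_registers[n] = (ia, ib, dist, active)
        return self.edge_registers
```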
The proposed architecture takes advantage of distributed lookup tables and registers on the FPGA in two ways. First, due to the independence of the edge processing elements, space-domain multiplexing is feasible on the FPGA even for large graphs. Second, static features of the graph edges and vertices are stored in distributed registers, allowing logic minimisation algorithms to reduce the required memory [54].
To conclude, we developed an architecture for online graph building which is well suited for the latency-constrained environment of low-level trigger systems in particle physics experiments. The variable output interface allows for easy integration of successive trigger algorithms and leaves ample room for application-specific optimisation. The number of output queues is controlled by the parameter \(N\), which yields a flexible and efficient design supporting variable degrees of time-domain multiplexing.
## 6 Case Study: Belle II Trigger
To demonstrate the working principle of our concept, we adapt our graph building methodology for the first-level (L1) trigger of the Belle II experiment. The implementation focuses on the CDC (see section 3) that is responsible for all track-based triggers.

Figure 5: System architecture of the generated hardware module. Sensor signals are received on the left side of the figure. The resulting graph edges are shown on the right side.

Figure 6: The edge processing element consists of a stream converter, an edge classifier, and a lookup table. Edge registers are made available through a parallel interface.
### Environment
The aim of the trigger system is to preselect collision events based on their reconstructed event topologies. In order to filter events, a multi-stage trigger system is employed. As a result, the effective data rate and thus the processing load of the data acquisition systems are reduced.
To give an overview of the constraints and requirements imposed by the experiment, the existing system is briefly described in the following. The L1 track triggers are shown schematically in fig. 7. They perform real-time filtering with a strict latency requirement of 5 μs [19]. The sense wires inside the CDC are sampled with 32 MHz and wire hits are accumulated for approximately 500 ns. In order to process all available input signals concurrently, a distributed FPGA-based platform is employed.
To obtain a trigger decision, track segments are generated from incoming events in parallel by performing space-division multiplexing. Based on the output of the track segment finder (TSF), multiple algorithms including conventional 2D and 3D track finding algorithms as well as a Neural Network Trigger [14] generate track objects of varying precision, efficiency, and purity for a Global Decision Logic [55].
The integration of GNNs in the L1 trigger system requires an online-graph building approach that is optimised for both latency and throughput. In this case study, we employ our proposed toolchain to generate an application-specific graph-building module as described in the previous section while adhering to constraints in the challenging environment of the Belle II experiment.
### Graph Building
The wire configuration of the CDC is mapped onto the formal detector definition from section 4, using wires as discrete sensors. These sensors are called nodes or vertices in the following. Inside the L1 trigger system, three signals are received per wire: a _hit identifier_, the _TDC readout_ and the _ADC readout_, where TDC is the output of a time-to-digital converter measuring the drift time, and ADC is the output of an analogue-to-digital converter measuring the signal height that is proportional to the energy deposition in a drift cell. Cartesian coordinates of the wires inside the detector are known during design time and used as static sensor features. Additionally, the distance between two vertices, which is also known during design-time, is considered as an edge feature.
Illustrating the working principle of our graph building approaches, fig. 8 depicts four cut-outs of the CDC in the \(x\)-\(y\) plane for \(z=0\).
In sector A, the _hit identifiers_ received by the detector for an exemplary event are indicated by black markers. The other three sectors show one graph building approach each: Sector B depicts a \(k\)-NN graph for \(k=6\), as there are up to six direct neighbours for each wire. The \(k\)-NN graph connects wires that are widely separated. Sector C shows an \(\varepsilon\)-NN graph for \(\varepsilon=22\) mm. This specific value for \(\varepsilon\) is chosen because 22 mm is in the range of one to two neighbour wires inside the CDC. This graph building approach connects hits in close proximity only, yielding multiple separated graphs. In addition, more edges are detected in the inner rings compared to the outer rings of the detector due to the higher wire density in this region. Finally, sector D shows a \(p\)-NN graph using the pattern described in fig. 9. The pattern extends the existing pattern [56, 57, 58] of the currently implemented TSF in the L1 trigger system by taking neighbours in the same superlayers into account. When comparing the \(\varepsilon\)-NN graphs and the \(p\)-NN graphs with each other, it is observed that the degrees2 of \(p\)-NN vertices are more evenly distributed (see inserts in fig. 8).

Figure 7: Flowchart of the L1 trigger system at the Belle II experiment, limited to systems that use the wire hit information from the CDC [55].
Footnote 2: The degree of a vertex of a graph is the number of edges that are connected to the vertex.
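For illustration, the design-time enumeration of \(\varepsilon\)-NN edge candidates from the static wire positions can be sketched as follows; the array `xy` and the function name are our own, and a full toolchain would additionally attach the static features to every candidate.

```python
import numpy as np

def epsilon_nn_edges(xy, eps):
    """Enumerate edge candidates of an eps-NN graph at design time.

    xy  : (V, 2) array of wire positions in the x-y plane [mm]
    eps : neighbourhood radius [mm], e.g. 22 mm for the CDC
    Returns the set of index pairs (i, j), i < j, within distance eps.
    """
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    i, j = np.where((dist > 0.0) & (dist <= eps))
    return {(a, b) for a, b in zip(i.tolist(), j.tolist()) if a < b}
```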
### Parameter Exploration
In general, \(k\)-NN, \(\varepsilon\)-NN and \(p\)-NN algorithms generate different graphs for an identical input event. However, to replace \(k\)-NN graph building with a locally constrained graph building approach, the graphs should ideally be identical. As the generated graphs depend strongly on the chosen hyperparameters, on the geometry of the detector, and on the hit distribution of the events under observation, a quantitative measure of the similarity between \(k\)-NN graphs and locally constrained graphs, such as \(\varepsilon\)-NN or \(p\)-NN graphs, is necessary. The optimal choice of the hyperparameter \(\varepsilon^{*}\) is the one that maximises the similarity for any \(k\). For this optimisation we use simulated events as described in section 3. We generate both the \(k\)-NN graphs and the locally constrained graphs on the dataset considering the neighbourhood of wires inside the detector. Edges of the \(k\)-NN graphs are labelled \(E_{k}\), whereas the edges of the observed locally constrained graphs are labelled \(E_{l}\). We measure the similarity between the two graphs using the binary classification metrics recall and precision, defined as
\[recall=\frac{|E_{k}\cap E_{l}|}{|E_{k}|}, \tag{2}\]
\[precision=\frac{|E_{k}\cap E_{l}|}{|E_{l}|}. \tag{3}\]
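Given the two edge sets, Equations 2 and 3 reduce to a few set operations; a minimal sketch:

```python
def graph_similarity(edges_k, edges_l):
    """Recall (Eq. 2) and precision (Eq. 3) between a k-NN edge set and a
    locally constrained edge set, both given as sets of vertex-index pairs."""
    common = len(edges_k & edges_l)
    return common / len(edges_k), common / len(edges_l)
```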
We vary \(k\) between 1 to 6 and \(\varepsilon\) between 14 mm to 28 mm, as the minimal distance between two wires in the CDC is approximately 10 mm. Precision and recall scores are calculated for every pair of \(k\) and \(\varepsilon\) parameters, and their mean values over 2000 events are shown in fig. 10. As expected, the precision score increases monotonically when the parameter \(k\) is increased. In addition, it increases if the parameter \(\varepsilon\) is reduced. The recall score behaves in the opposite way: it monotonically decreases when the parameter \(k\) is increased. In addition, it decreases if the parameter \(\varepsilon\) is decreased. Similarity is defined as the ratio between recall and precision, where an optimal working point also maximises recall and precision themselves. We observe that we do not find high similarity for all values of \(k\). Maximal similarity is found for \(k=3\) and \(\varepsilon=22\) mm, and \(k=4\) and \(\varepsilon=28\,\mathrm{mm}\), respectively. The corresponding precision and recall on the underlying data set are around 65-70 %.

Figure 8: Typical event display of the CDC for various graph building approaches. Quadrants show Ⓐ all hits, Ⓑ \(k\)-NN graph building (\(k=6\)), Ⓒ \(\varepsilon\)-NN graph building (\(\varepsilon=22\) mm), and Ⓓ \(p\)-NN graph building (see fig. 9). The inserts show zooms to a smaller section of the CDC.

Figure 9: Two query vertices illustrate the neighbourhood pattern in an hourglass shape used for the Belle II detector case study. The superlayer is rolled off radially and an exemplary cut-out is shown. Vertices which are considered neighbour candidates of the respective query vertex are shown as purple-filled markers.
The similarity between \(k\)-NN and \(\varepsilon\)-NN graphs can be interpreted in relation to the mathematical statement from Ref. [42] (compare section 4). Based on the background noise and the large number of hits per event, we assume that the _hit identifiers_ in the dataset are approximately uniformly distributed. Therefore, we expect that pairs of \(k\)-NN and \(\varepsilon\)-NN graphs exist that exhibit a high degree of similarity, e.g. precision and recall scores close to one. Our expectation is only partially met, as the trade-off point reaches only about 65-70 %. One possible reason for the remaining difference between the two graphs is the underlying background noise. Although the events are clearly dominated by noise, the influence on the hit distribution is not strong enough for higher similarity scores.
We perform the same comparison between the \(k\)-NN and the \(p\)-NN graph building approach, as shown in fig. 11. We obtain results similar to the \(\varepsilon\)-NN comparison: the recall score decreases monotonically for larger parameter \(k\), and the precision score increases monotonically for larger parameter \(k\). For \(k\) between three and four, precision and recall scores are approximately equal and around 70 %.
Again, our expectation of a high degree of similarity is only partially met. This similarity is to be expected, as the chosen pattern is also locally constrained and approximately ellipsoid.
### Prototype Setup
For the implementation of the proposed algorithm in a hardware prototype, the CDC is partitioned into 20 partially overlapping sectors in \(\phi\) and radial distance \(r\) for the L1 trigger. Each \(\phi\)-\(r\)-sector is processed independently by one FPGA platform; the overlapping of the sectors ensures that no data is lost. The overlapping sectors must be merged in subsequent reconstruction steps that are not part of the graph-building stage. In the following, the graph-building module is implemented on the Belle II Universal Trigger Board 4 (UT4) featuring a Xilinx Ultrascale XCVU160WE-2E. The UT4 board is currently used in the Belle II L1 trigger and therefore serves as a reference for future upgrades of the L1 trigger system.
To implement the online graph building module, we generate JSON databases for every \(\phi\)-sector of the CDC. Each database represents a formal detector containing the positions of the wires and information about sensor features as described in section 4. Sensor features are composed of 1 bit for the binary _hit identifier_, 5 bit for the _TDC readout_, 4 bit for the _ADC readout_, and the Cartesian coordinates of the wires. Additional edge features containing information about the wire distances of two adjacent vertices are included as well. The resolution of the Euclidean features can be arbitrarily chosen and is therefore considered a hyperparameter of the module implementation.
The sector database and a function describing the pattern as illustrated in fig. 9 are provided as an input to our proposed toolchain, which is implemented in Python 3.10. An intermediate graph representation is generated as a JSON database, containing type definitions of all vertices, edges, and their respective features. In addition, features known at design time, such as Cartesian coordinates, are rounded down, quantised equally spaced, and included in the intermediate graph representation. By generating the databases for all 20 sectors, we identify the smallest and largest sector of the CDC to provide a lower and an upper bound for our problem size. The maximum number of edges in each sector is determined by the pattern from fig. 9. The smallest sectors are located in superlayer two, containing 498 vertices and 2305 edges, while the largest sectors are located in superlayer six, containing 978 vertices and 4545 edges.
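To illustrate the idea, a hypothetical intermediate graph representation for one sector might look as follows; the field names are ours, and the actual schema of the generated JSON databases may differ.

```python
import json

# Illustrative sketch of one sector database; the real schema may differ.
sector = {
    "vertices": [
        {"id": 0, "x_mm": 103.2, "y_mm": 15.7,  # static coordinates, quantised
         "dynamic_feature_widths": {"hit_identifier": 1, "tdc": 5, "adc": 4}},
    ],
    "edges": [
        {"id": 0, "src": 0, "dst": 1, "distance_mm": 18.4},  # static edge feature
    ],
}
with open("sector_00.json", "w") as f:
    json.dump(sector, f, indent=2)
```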
To demonstrate our graph building approach, we synthesise the previously generated intermediate graph representation into a hardware module targeting the architecture of the UT4. We provide the JSON database as an input for the hardware generator, which is a set of custom modules implemented in Chisel 3.6.0. In addition, we provide a Scala function that performs the online classification of edge candidates based on the _hit identifier_: an edge candidate is considered valid if the _hit identifiers_ of both adjacent vertices indicate a hit. For the edge processing elements we choose \(N=8\) edges per edge processing element. Therefore, eight edges are processed
sequentially in every edge processing element as described in section 5. Based on the required input rate of \(32\,\mathrm{MHz}\) and \(N=8\), a system frequency of at least \(256\,\mathrm{MHz}\) is required to achieve the desired throughput. By starting the generator application, edges and features are extracted from the intermediate graph representation and scheduled on edge processing elements. After completion, the hardware generator produces a SystemVerilog file containing the graph-building hardware module [52].
### Implementation Results
For further evaluation, the SystemVerilog module implementing the presented \(p\)-NN graph building is synthesised out-of-context for the UT4 board using Xilinx Vivado 2022.2. During synthesis, the target frequency \(f_{sys}\) is set to \(256\,\mathrm{MHz}\), for which no timing violations are reported by the tool. In addition, functional tests are performed to validate the algorithmic correctness of the module. In the following we perform two series of measurements to validate the feasibility of the proposed implementation on the Xilinx Ultrascale XCVU160WE-2E FPGA.
Figure 12 depicts the results of the two evaluation series, reporting the utilisation on the UT4 board for the respective resource types. The first series of three synthesised versions is shown in fig. 12a, varying the input graph size in a suitable range between 2305 and 4545 edges. The highest occupancy is reported for registers, amounting to 16.4 % for the largest input graph, as opposed to 7.8 % for the smallest graph. For all other resource types, the utilisation is lower than 5 %. In general, it is observed that the resource utilisation scales linearly with the number of edges in the input graph.

Figure 11: Precision and recall for the comparison between the \(p\)-NN graphs (for the pattern, see fig. 9) and the \(k\)-NN graphs.

Figure 10: Precision and recall for the comparison of the \(k\)-NN and \(\varepsilon\)-NN graph building approaches.
For the second series, a variation in the resolution of the underlying edge features is considered. An overview of all utilised features is given in table 2. The widths of the features that are received as inputs from the CDC, namely _hit identifier_, _ADC readout_, and _TDC readout_, are chosen such that they are supported by the current readout system. As an example, the _TDC readout_ quantisation of 5 bit derives from the drift time resolution of 1 ns at a trigger data input rate of 32 MHz. The resolution of Euclidean coordinates and distances can be optimised at design time.
In the following, we choose a resolution between 4 bit and 16 bit, which results in a quantisation error for the Euclidean coordinates in the range 34.4 mm to 0.017 mm. 4 bit per coordinate result in a total edge width of 40 bit, whereas a resolution of 16 bit per coordinate results in a total edge width of 100 bit.
The implementation utilisation of all three synthesised modules is shown in fig. 12b, varying the resolution of Euclidean coordinates and distances in the generated edges.
Similar to the previous measurement, the highest utilisation is reported for registers, taking up between 11.1 % and 26.1 % depending on the width of the edges. It can be seen that the implementation size scales linearly with the width of the edges.
Based on the presented results, the implementation of the graph building module is considered feasible on the UT4 board. By experimental evaluation, we show that our hardware architecture can be implemented semi-automatically for the L1 trigger of the Belle II experiment, enabling the deployment of GNNs in the latency-constrained trigger chain. The feature vectors of the edges are provided via a parallel output register, where the address of every edge is statically determined at design time. Depending on successive filtering algorithms, any number of output queues can be provided. To conclude, our toolchain allows for a flexible and resource-efficient design of online graph building modules for trigger applications. In the presented implementation, our module is able to achieve a throughput of 32 million samples per second at a total latency of 39.06 ns, corresponding to ten clock cycles at \(f_{sys}\). As the reported latency is well below the required \(\mathcal{O}(1\,\mathrm{\mu s})\), our graph building module leaves a large part of the latency and resource budget on FPGAs to the demanding GNN solutions.
## 7 Conclusion
In our work, we analysed three graph building approaches with regard to their feasibility for the real-time environment of particle physics machine-learning applications. As the \(k\)-NN algorithm, which is favoured by state-of-the-art GNN tracking solutions, is unsuitable for the strict sub-microsecond latency constraints imposed by trigger systems, we identify two locally constrained nearest neighbour algorithms, \(\varepsilon\)-NN and \(p\)-NN, as possible alternatives. In an effort to reduce the number of design iterations and time-consuming hardware debugging, we develop a generator-based hardware design methodology tailored specifically to online graph-building algorithms. Our approach generalises graph-building algorithms into an intermediate graph representation based on a formal detector description and user-specified metrics. The semi-automated workflow enables the generation of FPGA-accelerated hardware implementations of locally constrained nearest neighbour algorithms. To demonstrate the capabilities of our toolchain, we perform a case study on the trigger system of the Belle II detector. We implement an online graph-building algorithm which adapts the pattern of the current track segment finder, demonstrating the feasibility of our approach in the environment of particle physics trigger applications. The code used for this research is available open source under Ref. [59].
\begin{table}
\begin{tabular}{c c c c} \hline \hline Feature & Type & Occurrence & Width \\ \hline _hit identifier_ & Dynamic & 2 & 1 bit \\ _ADC readout_ & Dynamic & 2 & 4 bit \\ _TDC readout_ & Dynamic & 2 & 5 bit \\ _X coordinate_ & Static & 2 & 4 bit to 16 bit \\ _Y coordinate_ & Static & 2 & 4 bit to 16 bit \\ _distance_ & Static & 1 & 4 bit to 16 bit \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the features of the sensors used to define the edges. The occurrence indicates how often the respective feature is represented in an edge.
Nearest neighbour algorithms presented in this work achieve a \(\mathcal{O}(1)\) time complexity and a \(\mathcal{O}(|E|)\) space complexity, compared to a \(\mathcal{O}(|D|)\) time complexity in approximate \(k\)-NN algorithms or a \(\mathcal{O}(k|D|\log(|D|))\) complexity in the sequential case [22, 24]. As a result, our semi-automated methodology may also be applied to other detectors with heterogeneous sensor arrays to build graphs under latency constraints, enabling the integration of GNN-tracking solutions in particle physics.
During the evaluation of our similarity metric, we found a non-negligible difference between \(k\)-NN graphs and locally constrained NN-graphs. For the complete replacement of \(k\)-NN graphs with our proposed \(\varepsilon\)-NN and \(p\)-NN graphs, the differences must be taken into account to achieve optimal performance when designing successive trigger stages. For this reason, we consider the future development of methods for algorithm co-design essential for integrating GNNs into real-world trigger applications.
Data Availability Statement. The datasets generated and analysed during the current study are property of the Belle II collaboration and not publicly available.

Code Availability Statement. The code used for this research is available open source under Ref. [59].

Acknowledgements. The authors would like to thank the Belle II collaboration for useful discussions and suggestions on how to improve this work.
It is a great pleasure to thank (in alphabetical order) Greta Heine, Jan Kieseler, Christian Kiesling and Elia Schmidt for discussions, and Tanja Harbaum, Greta Heine, Taichiro Koga, Florian Schade, and Jing-Ge Shiu for feedback and comments on earlier versions of the manuscript.
## Compliance with ethical standards
### Conflict of interest
The authors declare that they have no conflict of interest.
Figure 12: Resource utilisation reported after out-of-context synthesis on the UT4 platform using Vivado 2022.2 for registers, lookup tables (LUTs) and multiplexers (F7MUXes). Measurements are indicated by dots and connected by lines through linear interpolation to guide the eye. Unreported resource types are not utilised in the implementation.
# A Generalized Semi-Analytic Model for Magnetar-Driven Supernovae

Conor M. B. Omand, Nikhil Sarin

(2023-08-24, arXiv:2308.12997, http://arxiv.org/abs/2308.12997v2)
###### Abstract
Several types of energetic supernovae, such as superluminous supernovae (SLSNe) and broad-line Ic supernovae (Ic-BL SNe), could be powered by the spin-down of a rapidly rotating magnetar. Currently, most models used to infer the parameters for potential magnetar-driven supernovae make several unsuitable assumptions that likely bias the estimated parameters. In this work, we present a new model for magnetar-driven supernovae that relaxes several of these assumptions and an inference workflow that enables accurate estimation of parameters from lightcurves of magnetar-driven supernovae. In particular, in this model, we include the dynamical evolution of the ejecta, coupling it to the energy injected by the magnetar itself while also allowing for non-dipole spin down. We show that the model can reproduce SLSN and Ic-BL SN light curves consistent with the parameter space from computationally expensive numerical models. We also show the results of parameter inference on four well-known example supernovae, demonstrating the model's effectiveness at capturing the considerable diversity in magnetar-driven supernova lightcurves. The model fits each light curve well and recovers parameters broadly consistent with previous works. This model will allow us to explore the full diversity of magnetar-driven supernovae under one theoretical framework, more accurately characterize these supernovae from only photometric data, and make more accurate predictions of future multiwavelength emission to test the magnetar-driven scenario better.
keywords: supernovae: general - stars: magnetars - supernovae: individual: SN 2015bn - supernovae: individual: SN 2007ru - supernovae: individual: ZTF20acigmel - supernovae: individual: iPTF14gqr
## 1 Introduction
Recent wide-field high-cadence surveys, such as the Zwicky Transient Facility (ZTF, Bellm et al., 2019) and the Asteroid Terrestrial-impact Last Alert System (ATLAS, Tonry, 2011; Tonry et al., 2018), can discover more than 1000 supernovae per year (Cappellaro, 2022), and next-generation facilities such as Rubin Observatory (Ivezic et al., 2019) will be able to discover more than 1000 supernovae per night (LSST Science Collaboration et al., 2009; Stritzinger and Moriya, 2018). Core-collapse supernovae (CCSNe), caused by the deaths of massive stars, tend to have explosion energies of \(\sim 10^{51}\) erg and radiated energies \(\sim 10^{49}\) erg, which can be well explained by models (Arnett, 1980, 1982). However, several classes of CCSNe have energies higher than these standard models. Superluminous supernovae (SLSNe) radiate \(\sim\) 100 times more energy than a standard CCSN (Gal-Yam, 2012; Nicholl, 2021) and broad-line Type Ic supernovae (SNe Ic-BL) have inferred kinetic energies \(\sim 10\) times higher than a typical CCSN (e.g. Taddia et al., 2019). These energetic supernovae have both been associated with long- or ultra-long gamma-ray bursts (GRBs) (Gendre et al., 2013; Nakauchi et al., 2013; Levan et al., 2014; Cano et al., 2017), and SLSNe and SNe Ic-BL also have similar host galaxies (Lunnan et al., 2014; Leloudas et al., 2015; Angus et al., 2016; Schulze et al., 2018; Orm et al., 2020), and have similar spectral features at both early (Pastorello et al., 2010; Inserra et al., 2013; Nicholl et al., 2013; Blanchard et al., 2019) and late (Milisavljevic et al., 2013; Jerkstrand et al., 2017; Nicholl et al., 2017) times. Other types of unusually energetic supernovae, such as fast blue optical transients (FBOTs) and bright ultra-stripped supernovae (USSNe), have been suggested to have similar power sources as SLSNe and SNe Ic-BL (Liu et al., 2022; Sawada et al., 2022).
Several different models can be used to explain the high energies of one or both of SLSNe and SNe Ic-BL. Stars with masses M\({}_{*}\gtrsim 130\) M\({}_{\odot}\) can explode as pair-instability supernovae (PISNe) (Barkat et al., 1967; Heger and Woosley, 2002; Gal-Yam et al., 2009), which can generate tens of solar masses of \({}^{56}\)Ni and present as extremely long-lived, luminous supernovae. Slightly less massive stars, with \(130\) M\({}_{\odot}\gtrsim\) M\({}_{*}\gtrsim 100\) M\({}_{\odot}\), can eject shells of material through the pair instability without being completely destabilized, and collisions between the ejected shells can be as luminous as an SLSN (Heger et al., 2003; Woosley et al., 2007; Chatzopoulos and Wheeler, 2012; Yoshida et al., 2016; Woosley, 2017) - these are known as pulsational pair-instability supernovae (PPISNe). The collision of supernova ejecta and circumstellar material surrounding the progenitor, ejected through either a steady wind, eruptive mass loss, or binary interaction (Smith, 2014), can convert much of the supernova kinetic energy into radiation, leading to a highly luminous supernova (Chatzopoulos and Wheeler, 2012; Chatzopoulos et al., 2013; Villar et al., 2017; Jiang et al., 2020). Finally, the compact remnant can inject
some energy into the ejecta; this energy may come from fallback accretion onto a central black hole or neutron star (Dexter & Kasen, 2013; Moriya et al., 2018), a collapsar or jet (MacFadyen & Woosley, 1999), or the spin-down energy of a rapidly-rotating magnetar (Kasen & Bildsten, 2010; Woosley, 2010).
In the magnetar model, the spin-down energy is emitted as a highly magnetized particle wind, which expands relativistically until it collides with the inner edge of the supernova ejecta, creating forward and reverse shocks. The expanding wind becomes shocked by the reverse shock, accelerating the particles up to ultrarelativistic energies, which then emit via synchrotron radiation and inverse Compton scattering (Gaensler & Slane, 2006); this shocked wind is known as a pulsar wind nebula (PWN). The PWN applies a pressure to the supernova ejecta, causing it to accelerate, and the radiation from the nebula is thermalized in the ejecta, causing the ejecta temperature and supernova luminosity to both increase (Kasen & Bildsten, 2010). The PWN-ejecta interaction can also cause Rayleigh-Taylor instabilities, which can shred the inner ejecta and cause non-spherical structure to emerge in the ejecta (Suzuki & Maeda, 2017, 2021).
The magnetar model predicts many multiwavelength signals that could be used to identify and characterize the newborn neutron star. Once the ejecta becomes optically thin, the non-thermal emission from the PWN can be detected directly, either at high energy in hard X-rays or gamma rays (Kotera et al., 2013; Murase et al., 2015; Kashiyama et al., 2016), or at low energy in radio (Omand et al., 2018; Eftekhari et al., 2021; Murase et al., 2021); two energetic SNe have radio detections at late times that are consistent with PWN emission, PTF10hgi (Eftekhari et al., 2019; Law et al., 2019; Mondal et al., 2020; Hatsukade et al., 2021) and SN2012au (Stroh et al., 2021). Dust formed in the supernova can absorb PWN emission and re-emit that energy in infrared (Omand et al., 2019), causing bright continuum emission recently seen in four SLSNe (Chen et al., 2021; Sun et al., 2022). High-energy photons and Rayleigh-Taylor induced mixing can change the chemical and ionization structure of the ejecta, leading to unique signatures in the supernova nebular spectrum (Chevalier & Fransson, 1992; Omand & Jerkstrand, 2023). Aspherical ejecta caused by either hydrodynamic instabilities or an aspherical PWN can produce an optical polarization signal (Tanaka et al., 2017), which has been detected in some energetic SNe (Inserra et al., 2016; Saito et al., 2020; Pursiainen et al., 2022; Poidevin et al., 2023; Pursiainen et al., 2023), but not others (Leloudas et al., 2015; Lee, 2019, 2020; Poidevin et al., 2022, 2023; Pursiainen et al., 2023).
Accurate parameter estimation from the light curve around optical peak is essential for a number of reasons. Firstly, new surveys such as the LSST will detect SLSNe out to high redshift, where they cannot all be classified with spectroscopy. Therefore, these supernovae will have to be characterized from their light curve data alone. Also, predicting late-time multiwavelength signals requires accurate parameter estimation, as different parameters that produce similar optical light curves can produce vastly different multiwavelength signals (e.g. Omand et al., 2018, 2019). The models used in codes currently widely used for inference of magnetar-driven SNe (e.g. Nicholl et al., 2017) make several assumptions to reduce computational complexity, which are unjustified outside a small region of the parameter space. In particular, they assume a constant ejecta velocity, which is independent of the magnetar luminosity, although numerical simulations show that ejecta acceleration due to PWN pressure plays a vital role in the dynamics of the ejecta (Chen et al., 2016; Suzuki & Maeda, 2017, 2019; Chen et al., 2020; Suzuki & Maeda, 2021). They also assume the magnetar spins down through pure vacuum dipole emission, even though studies of Galactic pulsars (Lyne et al., 2015; Parthasarathy et al., 2020) and putative magnetars born in GRBs (Lasky et al., 2017; Sarin et al., 2020, 2020) show that most neutron stars are inconsistent with a pure vacuum dipole.
In this work, we present a model where these assumptions are relaxed, which can fully explore the diversity of magnetar-driven supernovae and unite phenomenologically different supernovae, such as SNe Ic-BL and SLSNe, under one theoretical framework. In Section 2, we introduce our model for magnetar-driven supernovae. In Section 3, we show the diversity of supernovae and supernova observables resulting from differences in initial parameters. In Section 4, we perform Bayesian inference on a few varying supernovae to show how our model can consistently reproduce and explain them. Finally, in Section 5, we discuss the model's implications and conclude. Throughout the paper, we use the notation \(Q_{x}=Q/10^{x}\) in cgs units unless otherwise noted.
## 2 Model
The physics of our model is based on previous magnetar-driven kilonova models (e.g. Yu et al., 2013; Metzger, 2019; Sarin et al., 2022), but modified to describe supernovae. We present here a non-relativistic model description, although the model implementation is fully relativistic. For the fully relativistic description of the kinematics (Equations 5, 6, 14, 16, 17, 18, and 19), see Sarin et al. (2022). We note that relativistic corrections are largely unimportant for supernovae apart from transients with exceptionally low ejecta masses and powerful magnetar engines.
### Model Physics
The central magnetar spins down by releasing its rotational energy
\[E_{\rm rot}=\frac{1}{2}I\Omega^{2}, \tag{1}\]
where \(I\) is the moment of inertia of the magnetar and \(\Omega\) is the rotational angular frequency of the magnetar. The time derivative of this relation gives the spin down luminosity,
\[L_{\rm SD}=I\Omega\dot{\Omega}, \tag{2}\]
which, given \(\dot{\Omega}\propto-\Omega^{n}\) for braking index \(n\), can be modelled generally as (Lasky et al., 2017)
\[L_{\rm SD}(t)=L_{0}\left(1+\frac{t}{t_{\rm SD}}\right)^{\frac{1+n}{1-n}}, \tag{3}\]
where \(L_{0}\) is the initial magnetar spin-down luminosity and \(t_{\rm SD}\) is the magnetar spin-down time. A braking index of \(n=3\) corresponds to pure vacuum dipole spin down (Ostriker & Gunn, 1969; Goldreich & Julian, 1969), which gives a late-time spin-down luminosity of \(L\propto t^{-2}\) (Zhang & Meszaros, 2001), while a braking index of \(n=5\) corresponds to gravitational wave spin down via magnetic deformation (Cutler & Jones, 2000), and has a late-time spin-down luminosity of \(L\propto t^{-3/2}\). We note that in general, \(n\) is expected to be variable during the early life of the magnetar (Lander & Jones, 2018, 2020), but we keep it constant for simplicity. Integrating Equation 3 gives the total rotational energy as a function of braking index:
\[E_{\rm rot}=\frac{n-1}{2}L_{0}t_{\rm SD}. \tag{4}\]
This recovers \(E_{\rm rot}=L_{0}t_{\rm SD}\) for vacuum dipole spin-down.
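For reference, Equations 3 and 4 translate directly into code; a minimal sketch:

```python
def spin_down_luminosity(t, l0, t_sd, n):
    """Magnetar spin-down luminosity (Equation 3) for braking index n; cgs units."""
    return l0 * (1.0 + t / t_sd) ** ((1.0 + n) / (1.0 - n))

def rotational_energy(l0, t_sd, n):
    """Total rotational energy (Equation 4); reduces to l0*t_sd for n = 3."""
    return 0.5 * (n - 1.0) * l0 * t_sd
```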
The rotational energy from the magnetar is converted into a pulsar wind. This highly magnetized, ultrarelativistic wind collides with and pushes a shock into the supernova ejecta, increasing its kinetic
and internal energy. The internal energy is also increased by the absorption of PWN photons by the ejecta. The total energy of the system is the combination of the kinetic and internal energy
\[E_{\rm{ej}}=\frac{1}{2}M_{\rm{ej}}v_{\rm{ej}}^{2}+E_{\rm{intt}}, \tag{5}\]
where \(M_{\rm{ej}}\) is the ejecta mass. The evolution of this system is governed by the energy sources, radioactive heating and magnetar spin-down luminosity, and energy losses from radiated luminosity and adiabatic cooling from expansion. The evolution of the internal energy of the ejecta is written as (Kasen et al., 2016)
\[\frac{dE_{\rm{int}}}{dt}=\xi L_{\rm{SD}}+L_{\rm{ra}}-L_{\rm{bol}}-\mathcal{P} \frac{dV}{dt}, \tag{6}\]
where \(L_{\rm{ra}}\) and \(L_{\rm{bol}}\) are the radioactive power and emitted bolometric luminosity, respectively, \(\xi\) is the fraction of spin-down luminosity injected into the ejecta, and \(\mathcal{P}\) and \(V\) are the pressure and volume of the ejecta.
Here, we adopt the Wang et al. (2015) prescription for gamma-ray leakage used in other models (e.g. Nicholl et al., 2017; Sarin et al., 2022) with
\[\xi=1-e^{-At^{-2}}, \tag{7}\]
where
\[A=\frac{3\kappa_{\gamma}M_{\rm{ej}}}{4\pi v_{\rm{ej}}^{2}} \tag{8}\]
is the leakage parameter and \(\kappa_{\gamma}\) is the gamma-ray opacity of the ejecta.
The radioactive power from the decay of \({}^{56}\)Ni is given by
\[L_{\rm{ra}}=f_{\rm{Ni}}M_{\rm{ej}}(L_{\rm{sto_{Ni}}}e^{-t/t_{\rm{sto_{Ni}}}}+L _{\rm{sto_{Co}}}e^{-t/t_{\rm{sto_{Co}}}}), \tag{9}\]
where \(f_{\rm{Ni}}\) is the nickel fraction of the ejecta, \(L_{\rm{sto_{Ni}}}=6.45\times 10^{43}\) erg s\({}^{-1}\)\(M_{0}^{-1}\) and \(L_{\rm{sto_{Co}}}=1.45\times 10^{43}\) erg s\({}^{-1}\)\(M_{0}^{-1}\) are the decay luminosities of \({}^{56}\)Ni and \({}^{56}\)Co, and \(t_{\rm{sto_{Ni}}}=8.8\) days and \(t_{\rm{sto_{Co}}}=111.3\) days are the decay timescales for \({}^{56}\)Ni and \({}^{56}\)Co (Nadyozhin, 1994).
The dynamical evolution of the ejecta is given by
\[\frac{dv_{\rm{ej}}}{dt}=\frac{c^{2}\mathcal{P}(d\mathcal{V}/dt)}{M_{\rm{ej}}v _{\rm{ej}}^{3}}, \tag{10}\]
where
\[\mathcal{V}=\frac{4}{3}\pi R_{\rm{ej}}^{3}, \tag{11}\]
\[\frac{d\mathcal{V}}{dt}=4\pi R_{\rm{ej}}^{2}v_{\rm{ej}}, \tag{12}\]
\[\mathcal{P}=\frac{E_{\rm{int}}}{3\mathcal{V}}. \tag{13}\]
Substituting these into Equation 10 gives
\[\frac{dv_{\rm{ej}}}{dt}=\frac{c^{2}E_{\rm{int}}}{M_{\rm{ej}}R_{\rm{ej}}v_{\rm {ej}}^{2}}. \tag{14}\]
The initial ejecta velocity is set by
\[v_{\rm{ej,0}}=\sqrt{\frac{2E_{\rm{SN}}}{M_{\rm{ej}}}}, \tag{15}\]
where \(E_{\rm{SN}}\) is the supernova explosion energy.
The bolometric radiated luminosity is (Kasen & Bildsten, 2010; Kotera et al., 2013)
\[L_{\rm{bol}}=\frac{E_{\rm{int}}c}{\tau R_{\rm{ej}}}=\frac{E_{\rm{int}}t}{t_{ \rm{dif}}^{2}} (t\leq t_{\tau}), \tag{16}\]
\[=\frac{E_{\rm{int}}c}{R_{\rm{ej}}}, \tag{17}\]
where
\[\tau=\frac{\kappa M_{\rm{ej}}R_{\rm{ej}}}{\mathcal{V}} \tag{18}\]
is the optical depth of the ejecta, \(\kappa\) is the ejecta opacity,
\[t_{\rm{dif}}=\left(\frac{\tau R_{\rm{ej}}t}{c}\right)^{1/2} \tag{19}\]
is the effective diffusion time, and \(t_{\tau}>t_{\rm{dif}}\) is the time when \(\tau=1\).
Calculating the bolometric luminosity of the magnetar-driven transient (Equations 16 and 17) involves solving the evolution of the internal energy and dynamics (Equations 6 and 14 respectively) using the input power sources (Equations 3 and 9). The photospheric temperature is determined from the bolometric luminosity and ejecta radius until the temperature reaches the photospheric plateau temperature, as in Nicholl et al. (2017). This can be expressed as
\[T_{\rm{phot}}(t)=\begin{cases}\left(\frac{L_{\rm{bol}}(t)}{4\pi\sigma R_{\rm{ej }}^{2}}\right)^{1/4}&\text{for }\left(\frac{L_{\rm{bol}}(t)}{4\pi\sigma R_{\rm{ej}}^{2}}\right)^{1/4}>T_{\rm{ min}},\\ T_{\rm{min}}&\text{for }\left(\frac{L_{\rm{bol}}(t)}{4\pi\sigma R_{\rm{ej}}^{2}} \right)^{1/4}\leq T_{\rm{min}}\end{cases} \tag{20}\]
The spectral energy distribution (SED) of the transient is then calculated using the cutoff blackbody used in Nicholl et al. (2017), with \(F_{\lambda<\lambda_{\rm cut}}=F_{\lambda}(\lambda/\lambda_{\rm cut})\) and \(\lambda_{\rm cut}=3000\) Å (Chomiuk et al., 2011; Nicholl et al., 2017), but this can also be switched to a simple blackbody.
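Putting the pieces together, the bolometric light curve follows from integrating Equations 6 and 14 with the source terms above. The sketch below is a simplified, non-relativistic version of that calculation, not the full relativistic Redback implementation; the initial internal energy, initial radius, and time grid are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

C, MSUN, DAY = 2.998e10, 1.989e33, 86400.0  # cgs constants

def bolometric_lightcurve(l0, t_sd, n, m_ej, e_sn, kappa, kappa_g, f_ni=0.0):
    """Integrate Eqs. (6) and (14); m_ej in Msun, everything else in cgs."""
    m = m_ej * MSUN
    v0 = np.sqrt(2.0 * e_sn / m)                      # Eq. (15)

    def rhs(t, y):
        e_int, v, r = y
        tau = 3.0 * kappa * m / (4.0 * np.pi * r**2)  # Eq. (18)
        l_bol = e_int * C / (tau * r)                 # Eq. (16), t <= t_tau
        l_sd = l0 * (1.0 + t / t_sd) ** ((1.0 + n) / (1.0 - n))  # Eq. (3)
        xi = 1.0 - np.exp(-3.0 * kappa_g * m / (4.0 * np.pi * v**2) / t**2)
        l_ra = f_ni * m_ej * (6.45e43 * np.exp(-t / (8.8 * DAY))
                              + 1.45e43 * np.exp(-t / (111.3 * DAY)))
        de_dt = xi * l_sd + l_ra - l_bol - e_int * v / r  # Eq. (6); P dV/dt = E_int v/r
        dv_dt = C**2 * e_int / (m * r * v**2)             # Eq. (14)
        return [de_dt, dv_dt, v]

    t = np.geomspace(1e2, 300.0 * DAY, 2000)
    y0 = [1e-2 * e_sn, v0, v0 * t[0]]                 # illustrative initial state
    sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, method="LSODA", rtol=1e-8)
    e_int, v, r = sol.y
    tau = 3.0 * kappa * m / (4.0 * np.pi * r**2)
    # Switch to Eq. (17) once the ejecta becomes optically thin (tau <= 1).
    return t, np.where(tau > 1.0, e_int * C / (tau * r), e_int * C / r)
```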
### Parameters, Priors, and Implementation
The model presented above is implemented into the open-source electromagnetic transient fitting software package, Redback(Sarin et al., 2023). The input parameters for the model, their default priors, and the values used in Section 3 are listed in Table 1.
Due to the acceleration of the ejecta by the PWN, \(v_{\rm ej}\) is not constant and cannot be used as a free parameter, since it is coupled to both \(E_{\rm SN}\) and \(L_{\rm SD}\). Velocity information from spectroscopy can be used to weight the results, although the velocity measured from absorption widths is not the same as the photospheric velocity, which in turn is not the same as the ejecta velocity (Arnett, 1982), so we caution against this unless the velocities are well calibrated (e.g. Dessart et al., 2016).
Our model differs from others (e.g. Nicholl et al., 2017) in the choice of input parameters used to determine the magnetar luminosity. Previous models use the initial magnetar spin period \(P_{0}\), the dipole component of the pulsar magnetic field \(B\), and the neutron star mass \(M_{\rm NS}\), while we use \(L_{0}\) and \(t_{\rm SD}\) (and \(n\), which is implicitly fixed to 3 in other models). We note that as the magnetar luminosity for vacuum dipole spin-down can be determined from only \(L_{0}\) and \(t_{\rm SD}\) (see Equation 3), using three parameters is unnecessary for parameter inference. Using these parameters also avoids assumptions such as the moment of inertia of the neutron star, which depends on the equation of state (EoS) (Lattimer & Schutz, 2005) and can vary depending on the mass and spin period of the neutron star (Worley et al., 2008). To recover parameters such as the magnetar spin period and magnetic field, one can use the scalings used in other models such as
\[L_{0}= 2.0\times 10^{47}P_{0,-3}^{-4}B_{14}^{2}, \tag{21}\] \[t_{\rm SD}= 1.3\times 10^{5}P_{0,-3}^{2}B_{14}^{-2}\left(\frac{M_{\rm NS}}{1.4M _{\odot}}\right) \tag{22}\]
for transients consistent with \(n=3\), which assumes a neutron star with moment of inertia of \(\sim 1.3\times 10^{45}\) g cm\({}^{2}\) for a 1.4 \(M_{\odot}\) neutron star, which is consistent with the APR EoS (Akmal et al., 1998) or MDI EoS with density independent nuclear symmetry energy (Das et al., 2003; Shetty et al., 2007), which both give neutron star radii \(\sim 11.5-12\) km (Worley et al., 2008). These scalings become more complicated for \(n\neq 3\)(e.g., Shapiro & Teukolsky, 1983), with other dependencies such as the ellipticity of the neutron star or bulk viscosity (depending on the spin-down processes involved), leaving it difficult to definitively recover a spin period or magnetic field with such simplified models.
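For vacuum dipole spin-down, Equations 21 and 22 can be inverted to recover \(P_{0}\) and \(B\) from the fitted \(L_{0}\) and \(t_{\rm SD}\); a minimal sketch, assuming the same 1.4 \(M_{\odot}\) neutron star:

```python
import numpy as np

def dipole_spin_and_field(l0, t_sd):
    """Invert Eqs. (21)-(22) for n = 3 and a 1.4 Msun neutron star.

    Returns the initial spin period [ms] and dipole field [1e14 G].
    """
    # L0 * t_SD = 2.0e47 * 1.3e5 / P_ms^2, so P_ms follows from the product.
    p_ms = np.sqrt(2.0e47 * 1.3e5 / (l0 * t_sd))
    b_14 = np.sqrt(l0 * p_ms**4 / 2.0e47)
    return p_ms, b_14
```

Applied to the SN 2015bn posterior in Section 4.2, this reproduces \(P_{0}\approx 0.7\) ms and \(B\approx 8\times 10^{12}\) G.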
## 3 Results
We now explore the diversity of magnetar-driven supernovae for different initial conditions using the model derived in Section 2. We assume the supernova explosion energy is \(10^{51}\) erg, typical for a neutrino-driven explosion, and that the supernova is entirely powered by magnetar energy after the initial explosion, with no contribution from \({}^{56}\)Ni. We fix both the ejecta opacity and gamma-ray opacity to 0.1 cm\({}^{2}\) g\({}^{-1}\); the former is typical of stripped-envelope supernovae (Inserra et al., 2013; Kleiser & Kasen, 2014) and the latter will not affect the light curve peak properties unless it is extremely low, so we use a wavelength-independent opacity here for simplicity. We also fix the photospheric plateau temperature to 5000 K, which is typical for SLSNe (Nicholl et al., 2017).
First, we show the diversity of supernovae that can be produced using this model in Section 3.1. To compare with previously derived results, we assume vacuum dipole spin-down and use Equations 21 and 22 with a 1.4 \(M_{\odot}\) neutron star to express our initial conditions in terms of \(P_{0}\) and \(B\); the mapping between (\(L_{0}\), \(t_{\rm SD}\)) and (\(P_{0}\), \(B\)) is shown in Figure 1. Then we show the effect of changing braking index in Section 3.2.
### Energetics, Timescales, and Observables
We first see if our model can reproduce typical observables for the observed populations of SLSNe and SNe Ic-BL. SLSNe at optical light curve peak typically have g-band absolute magnitudes \(-23<M_{\rm g}<-20\), with most brighter than \(-21\); spectroscopically determined photospheric velocities of \(\sim 10~{}000-15~{}000\) km s\({}^{-1}\); rise times of \(10-70\) days; and \(g-r\) colours between \(-0.3\) and \(0.3\) (Chen et al., 2023). SNe Ic-BL at optical light curve peak typically have r-band absolute magnitudes of \(-20<M_{r}<-18\); spectroscopically determined photospheric velocities of \(\sim 15~{}000-30~{}000\) km s\({}^{-1}\); rise times of \(5-20\) days; and \(g-r\) colours between \(0\) and \(0.5\) (Taddia et al., 2019). The inferred ejecta masses of typical SLSNe and SNe Ic-BL are both around \(5~{}M_{\odot}\) (Taddia et al., 2019; Chen et al., 2023), and SLSNe show a negative correlation between initial spin period and ejecta mass (Blanchard et al., 2020).
Several inefficiencies can prevent all the magnetar spin-down luminosity from eventually being emitted as supernova luminosity. Some fraction (\(1-\xi\), with \(\xi\) from Equation 6) will escape without interacting with the ejecta at all, and possibly be detected as non-thermal X-rays or gamma rays.
\begin{table}
\begin{tabular}{c c c c c} Parameter & Definition & Units & Default Prior & Section 3.1 Value \\ \hline \(L_{0}\) & Initial Magnetar Spin-Down Luminosity & erg s\({}^{-1}\) & L[\(10^{40}\), \(10^{50}\)] & Varying \\ \(t_{\rm SD}\) & Spin-Down Time & s & L[\(10^{2}\), \(10^{6}\)] & Varying \\ \(n\) & Magnetar Braking Index & & U[1.5, 10] & 3 \\ \(f_{\rm Ni}\) & Ejecta Nickel Mass Fraction & & L[\(10^{-3}\), 1] & 0 \\ \(M_{\rm ej}\) & Ejecta Mass & \(M_{\odot}\) & L[\(10^{-2}\), \(10^{2}\)] & Varying \\ \(E_{\rm SN}\) & Supernova Explosion Energy & erg & L[\(5\times 10^{50}\), \(2\times 10^{51}\)] & \(10^{51}\) \\ \(\kappa\) & Ejecta Opacity & cm\({}^{2}\) g\({}^{-1}\) & L[0.05, 0.2] & 0.1 \\ \(\kappa_{\gamma}\) & Ejecta Gamma-Ray Opacity & cm\({}^{2}\) g\({}^{-1}\) & L[\(10^{-4}\), \(10^{4}\)] & 0.1 \\ \(T_{\rm min}\) & Photospheric Plateau Temperature & K & L[\(10^{3}\), \(3\times 10^{4}\)] & 5000 \\ \end{tabular}
\end{table}
Table 1: The parameters and default priors for the generalized magnetar model, as well as the values used for the parameter exploration in Section 3. Priors are either uniform (U) or log-uniform (L).
Figure 1: Initial pulsar luminosities \(L_{0}\) (top) and spin-down times \(t_{\rm SD}\) (bottom) for different initial pulsar rotation periods and magnetic fields.
Some fraction of the energy will also accelerate the ejecta instead of thermalizing and being re-emitted, which will affect both the supernova luminosity and peak timescale; this fraction can be determined by the ratio of the spin-down time and the diffusion time (Suzuki & Maeda, 2021; Sarin et al., 2022).
Figure 2 shows the ratio of the final supernova kinetic and radiated energies for various magnetic fields and ejecta masses for spin periods of 1 ms (close to the mass-shedding limit for neutron stars (Watts et al., 2016)) and 3 ms (where the spin-down luminosity and explosion energy can become comparable), and for various magnetic fields and spin periods for an ejecta mass of 10 \(M_{\odot}\). The energy ratio \(E_{\rm kin}/E_{\rm rad}\) does correlate strongly with \(\zeta=t_{\rm SD}/t_{\rm diff}\) up to large ejecta mass for low spin periods, although as spin period increases, the correlation gets weaker as the behaviour of the energy ratio changes; this is due to the total amount of energy injected by the magnetar prior to the diffusion time decreasing below the explosion energy, meaning that the ejecta dynamics are no longer primarily determined by the magnetar spin-down energy. There is a small region of the parameter space at low spin period, magnetic field, and ejecta mass where the radiated energy can surpass the kinetic energy. However, for most of the parameter space this ratio is between 1 and 100. Typical supernovae have \(E_{\rm kin}/E_{\rm rad}=10^{51}\) erg / \(10^{49}\) erg = 100, but without a contribution from \({}^{56}\)Ni this ratio can go much higher.
Figure 3 shows the peak timescale of the bolometric luminosity and the final ejecta velocity over the same parameter grid as Figure 2. The trends match our theoretical expectations: as \(P_{0}\) increases, the total energy of the system decreases, causing the ejecta velocity to decrease and the peak timescale to increase. Conversely, as \(B\) increases, the spin-down time decreases, causing the ejecta velocity to increase and the peak timescale to decrease. Finally, as \(M_{\rm ej}\) increases, the diffusion time increases, causing the ejecta velocity to decrease and the peak timescale to increase. The final ejecta velocity is largely insensitive to the magnetic field strength above a certain field threshold despite a change in peak timescale. This is a product of the coupling of the dynamical evolution of the ejecta to the magnetar's rotational energy, as the final velocity is reached earlier in the supernova evolution for higher magnetic fields (due to their shorter spin-down timescale). For a magnetar with a rotation period of 1 ms, a magnetic field of \(>10^{15}\) G is needed to reduce the peak timescale below 10 days, even at an ejecta mass of 1 \(M_{\odot}\), meaning that the fastest SNe Ic-BL likely require both an ejecta mass below 1 \(M_{\odot}\) and a magnetar spinning at close to breakup speeds. Timescales of \(<\) 20 days require ejecta masses below 5 \(M_{\odot}\), meaning that our model will likely estimate a lower ejecta mass for SNe Ic-BL than Taddia et al. (2019). Timescales and ejecta velocities typical of SLSNe can be reproduced over a large portion of the parameter space. However, a higher ejecta mass is required for faster-spinning magnetars to keep the velocities below those of SNe Ic-BL, while a low ejecta mass is required for slower-spinning magnetars to keep the timescales below \(\sim\) 100 days and the velocities higher than \(\sim\) 10 000 km s\({}^{-1}\), providing some phenomenological justification for the mass-spin correlation found by Blanchard et al. (2019).
Figure 4 shows the peak \(g\)-band absolute magnitude and peak \(g-r\) colour over the same parameter grid. We find a portion of parameter space at low spin period, ejecta mass, and magnetic field which produces a transient more luminous than any previously observed superluminous supernova. However, it is unlikely such a combination (particularly low spin period and magnetic field) can be conceived, as magnetic-field amplification mechanisms such as the magneto-rotational instability or Kelvin-Helmholtz instability likely amplify most typical progenitor fields to larger poloidal fields than seen in this parameter space (e.g., Reboul-Salze et al., 2021); meanwhile, the stability of magnetic-field configurations in this part of the parameter space is also questionable (e.g., Braithwaite, 2009), and so magnetars in this parameter space may never materialise. We note that as we do not track gravitational-wave losses, the newly born magnetar could potentially spin down rapidly through gravitational-wave radiation (Sarin and Lasky, 2021) in this parameter space, depleting the energy reservoir to power such a luminous transient. To compound this all further, it is unknown whether stellar explosions with such small ejecta masses could harbour magnetars that are rapidly rotating but have quite weak poloidal fields in the first place. The parameter space between \(M_{g}=-21\) and \(M_{g}=-23\), where SLSNe lie, shifts to lower masses for higher spin periods, whereas the parameter space around \(M_{g}=-19\), where most SNe Ic-BL lie, requires either a large ejecta mass (\(\gtrsim 5M_{\odot}\)), a higher spin period, or an extremely high magnetic field. The parameter space where \(g-r<0\) mostly overlaps with the \(M_{g}<-21\) region, showing that most SLSNe should have \(-0.5<g-r<0\) at peak, which is broadly consistent with observations (Chen et al., 2023).
### Effect of Varying Braking Index
Light curve luminosity and morphology can vary significantly with variations in magnetar braking index. Figure 5 shows the bolometric luminosity, absolute \(g\)-band magnitude, and absolute \(r\)-band magnitude for several supernovae where only the braking index \(n\) is varied. The ejecta mass, spin-down time, and total rotational energy are fixed to 10 \(M_{\odot}\), \(10^{6}\) s, and \(10^{52}\) erg, with initial magnetar luminosity calculated from Equation 4; all other parameters are the same as in Section 3.1. The timing of the light curve peak can vary by a factor of \(\sim\) 3 and late-time luminosities by orders of magnitude, with large \(n\) peaking later and having higher luminosities at later times, although the variation in late-time luminosity asymptotes as \(n\) increases due to the exponent in Equation 3 asymptoting to \(-1\). The peak luminosity can vary by \(\sim\) 1 mag and is highest for \(n\approx 3\) in this case, although this will likely vary depending on the energetics and diffusion time of the supernova.
## 4 Case Studies for Inference
As a proof of concept, we perform inference on several different classes of supernovae to see if the model is flexible enough to recover sensible parameters for a variety of objects. First, we validate the model using a simulated SLSN. Then, we perform inference on SN 2015bn, an SLSN; SN 2007ru, a SN Ic-BL; ZTF20acigmel (better known as "the Camel"), an FBOT; and iPTF14gqr, a USSN.
Inference is performed on multiband photometry using the open-source software package Redback (Sarin et al., 2023) with the dynesty sampler (Speagle, 2020) implemented in Bilby (Ashton et al., 2019; Romero-Shaw et al., 2020). We sample with a Gaussian likelihood and an additional white noise term, and sample in flux density rather than magnitude. We use the default priors in all cases (shown in Table 1) except for the explosion energy of iPTF14gqr, where the lower limit is reduced to 5 \(\times\) 10\({}^{48}\) erg to capture the lower expected explosion energies of USSNe (Suwa et al., 2015). We also sample the unknown explosion time with a uniform prior of up to 100 days before the first observation and an extinction term \(A_{v}\), and use a constraint that the total rotational energy of the magnetar is \(E_{\rm rot}\lesssim 10^{53}\) erg.
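For concreteness, the default priors of Table 1 and the rotational-energy constraint can be written with Bilby's prior classes roughly as follows; the parameter names are illustrative, and the exact Redback interface may differ.

```python
from bilby.core.prior import Constraint, LogUniform, PriorDict, Uniform

def convert(params):
    # Total rotational energy from Equation 4, used only as a constraint.
    params["e_rot"] = 0.5 * (params["n"] - 1) * params["l0"] * params["t_sd"]
    return params

priors = PriorDict(conversion_function=convert)
priors["l0"] = LogUniform(1e40, 1e50, name="l0")      # erg/s
priors["t_sd"] = LogUniform(1e2, 1e6, name="t_sd")    # s
priors["n"] = Uniform(1.5, 10, name="n")
priors["mej"] = LogUniform(1e-2, 1e2, name="mej")     # Msun
priors["e_rot"] = Constraint(minimum=0.0, maximum=1e53)  # E_rot <~ 1e53 erg
```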
The fitted light curve and corner plot for the simulated SN are shown in Figure 6, the fitted light curves for the other SNe are shown in Figure 7, the input parameters for the simulated SN and recovered
parameters for each SN are shown in Table 2, and the corner plots for each of the real SNe are shown in Appendix A.
### Validation on a Simulated Supernova
To test if the model could recover parameters correctly, we simulated an SLSN with \(L_{0}=10^{46}\) erg s\({}^{-1}\), \(t_{\rm SD}=10^{6}\) s, \(n=3\), and \(M_{\rm ej}=10\)\(M_{\odot}\), with the other parameters the same as in Section 3. The supernova was placed at redshift \(z=0.5\) and data was generated using the Redback simulation workflow as observed by ZTF in \(g\), \(r\) and \(i\) band for the first 200 days post explosion.
The light curve (Figure 6 (left)) is fit well by the model throughout its evolution, and three of the four injected parameters are recovered to within \(1\sigma\) in the one-dimensional posteriors, with only \(M_{\rm ej}\) being slightly outside that error region, and every parameter recovered to within \(1\sigma\) in the two-dimensional posteriors due to correlations in each parameter. The correlation between \(L_{0}\) and \(t_{\rm SD}\) also suggests that the rotational energy of the magnetar is well constrained to \(\sim 10^{52}\) erg, i.e., the injected value.
### SN 2015bn
SN 2015bn is an SLSN-I at \(z=0.1136\) that was first discovered by the Catalina Sky Survey on 2014 December 23. It peaked at 79 rest-frame days post discovery, which made it one of the slowest evolving SLSNe at the time, and had peak magnitudes of \(M_{\rm g}=-22.0\pm 0.08\) mag (AB) and \(M_{U}=-23.07\pm 0.09\) mag (Vega), making it one of the most luminous as well (Nicholl et al., 2016). Since then, it has
Figure 3: Bolometric peak timescale \(t_{\rm peak}\) (top) and final ejecta velocity \(v_{\rm ej}\) (bottom) for supernovae with varying ejecta mass and \(P_{0}=1\) ms (left) and \(P_{0}=3\) ms (middle) and with varying spin period and \(M_{\rm ej}=10M_{\odot}\) (right). The black lines indicate notable values of \(t_{\rm peak}\) and \(v_{\rm ej}\).
Figure 2: Ratio of kinetic to radiated energy for supernovae with varying ejecta mass and \(P_{0}=1\) ms (left) and \(P_{0}=3\) ms (middle) and with varying spin period and \(M_{\rm ej}=10M_{\odot}\) (right). The black lines indicate contours of constant \(E_{\rm kin}/E_{\rm rad}\), while the magenta lines indicate contours of constant \(\zeta=t_{\rm SD}/t_{\rm df}\).
\begin{table}
\begin{tabular}{l l c c c c}
Object & Supernova type & log(\(L_{0}\)) [erg s\({}^{-1}\)] & log(\(t_{\rm SD}\)) [s] & \(n\) & \(M_{\rm ej}\) [\(M_{\odot}\)] \\
\hline
Simulated (Injected) & & 46.0 & 6.0 & 3.0 & 10.0 \\
Simulated (Recovered) & & \(46.47^{+0.55}_{-0.58}\) & \(5.56^{+0.59}_{-0.00}\) & \(4.70^{+2.51}_{-1.66}\) & \(16.94^{+6.53}_{-1.46}\) \\
\hline
SN 2015bn & SLSN & \(45.79^{+0.07}_{-0.07}\) & \(6.98^{+0.4}_{-0.4}\) & \(3.21^{+0.34}_{-0.34}\) & \(9.57^{+2.16}_{-1.26}\) \\
SN 2007ru & Ic-BL & \(46.16^{+1.74}_{-1.27}\) & \(3.32^{+0.00}_{-0.33}\) & \(5.47^{+3.32}_{-0.83}\) & \(1.49^{+0.25}_{-0.29}\) \\
ZTF20acigmel & FBOT & \(46.91^{+1.71}_{-0.72}\) & \(4.08^{+0.40}_{-0.30}\) & \(3.57^{+0.09}_{-0.34}\) & \(0.23^{+0.09}_{-0.09}\) \\
iPTF14gqr & USSN & \(43.09^{+0.27}_{-0.16}\) & \(6.23^{+0.27}_{-0.37}\) & \(3.77^{+3.92}_{-1.53}\) & \(0.18^{+0.09}_{-0.06}\) \\
\end{tabular}
\end{table}
Table 2: Injected parameters for the simulated supernova and median inferred parameter values and \(1\sigma\) uncertainties for the simulated supernova and the four supernovae from the case studies.
Figure 4: \(g\)-band absolute magnitude \(M_{g}\) (top) and \(g-r\) colour (bottom) at peak for supernovae with varying ejecta mass and \(P_{0}=1\) ms (left) and \(P_{0}=3\) ms (middle) and with varying spin period and \(M_{\rm ej}=10M_{\odot}\) (right). The black lines indicate notable values of \(M_{g}\) and \(g-r\).
Figure 5: Bolometric luminosity (left), absolute \(g\)-band magnitude (middle), and absolute \(r\)-band magnitude (right) for several supernovae where only the braking index \(n\) is varied.
Figure 6: Fitted light curve (left) and posteriors of key parameters (right) for the simulated SLSN. The solid lines in the light curve plot indicate the light curve from the model with the highest likelihood, while the shaded area indicates the 90% credible interval. The orange dots and lines in the posterior indicate the injected parameters.
Figure 7: Fitted light curves for the four supernovae from our case studies. The solid lines indicate the light curve from the model with the highest likelihood, while the shaded area indicates the 90% credible interval.
been followed up extensively in the optical/UV/NIR, with photometry and spectroscopy (Nicholl et al., 2016, 2018) and polarimetry (Inserra et al., 2016; Leloudas et al., 2017), as well as in radio (Nicholl et al., 2018; Eftekhari et al., 2021; Murase et al., 2021) and X-rays (Inserra et al., 2017; Bhirombhakdi et al., 2018). SN 2015bn shows strong undulations in the light curve on a timescale of 30-50 days (Nicholl et al., 2016, 2017), and was detectable in the optical/UV for more than 1000 days (Nicholl et al., 2018), although the supernova has yet to be detected in either radio or X-rays.
We import the observational data (Nicholl et al., 2016, 2016) from the Open Supernova Catalog (Guillochon et al., 2017). Our model is able to fit the supernova peak very well in all bands and can reproduce the \(r\)- and \(i\)-band data for more than 400 days, although it finds much bluer emission than observed in the post-peak photospheric phase. The inferred rotational energy of the magnetar is \(\sim 6\times 10^{52}\) erg, which is close to the maximum rotational energy that can be extracted from a newborn magnetar. The magnetar also shows \(n\approx 3\), meaning the vacuum dipole is a good approximation for this object. Using the scaling relations from Equations 21 and 22 for a 1.4 \(M_{\odot}\) neutron star with the same equation of state as Nicholl et al. (2017), we get a spin period \(P_{0}\approx 0.7\) ms, extremely close to the mass-shedding limit, and a magnetic field \(B\approx 8\times 10^{12}\) G.
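Since Equations 21 and 22 are not reproduced in this section, the following sketch instead uses the textbook vacuum-dipole relations to illustrate the conversion; the moment of inertia and radius below are assumed fiducial values for a 1.4 \(M_{\odot}\) neutron star, so the outputs should be read as order-of-magnitude estimates that happen to land close to the quoted \(P_{0}\approx 0.7\) ms and \(B\approx 8\times 10^{12}\) G.

```python
import numpy as np

# Assumed fiducial neutron-star properties; Eqs. 21-22 in the paper may
# use a different equation of state and prefactors.
I_NS = 1.3e45        # moment of inertia [g cm^2]
R_NS = 1.2e6         # radius [cm]
c = 3.0e10           # speed of light [cm/s]

# Median inferred values for SN 2015bn (Table 2).
L0 = 10 ** 45.79     # initial spin-down luminosity [erg/s]
t_sd = 10 ** 6.98    # spin-down time [s]

# For n = 3, E_rot = L0 * t_sd; then E_rot = (1/2) I (2 pi / P0)^2.
E_rot = L0 * t_sd
P0 = 2 * np.pi * np.sqrt(I_NS / (2 * E_rot))

# Vacuum dipole: t_sd = 3 c^3 I P0^2 / (4 pi^2 B^2 R^6), solved for B.
B = np.sqrt(3 * c**3 * I_NS * P0**2 / (4 * np.pi**2 * R_NS**6 * t_sd))

print(f"E_rot ~ {E_rot:.1e} erg, P0 ~ {P0 * 1e3:.2f} ms, B ~ {B:.1e} G")
```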
Comparing our results to those of Nicholl et al. (2017) shows some parameters in agreement, but strong discrepancies in others. Using Equations 4, 21, and 22 with parameters from Nicholl et al. (2017) shows that they find a spin-down timescale of \(\sim 105\) days, in good agreement with our \(\sim 110\) days, but a magnetar rotational energy of only \(\sim 10^{52}\) erg, a factor of \(\sim 5\) lower than recovered by our model. This is because the magnetar needs to supply both the radiated and kinetic energy of the supernova in our model, while in the Nicholl et al. (2017) model the ejecta velocity is input separately. We also recover a similar ejecta mass of \(\sim 10\)\(M_{\odot}\).
### SN 2007ru
SN 2007ru is an SN Ic-BL at \(z=0.01546\) that was first discovered by the Himalayan Chandra Telescope on 2007 December 2 (Sahu et al., 2009). The peak magnitude of the supernova was \(M_{V}\approx-19.06\) mag, one of the brighter SNe Ic-BL, and the estimated rise time was \(8\pm 3\) days (Sahu et al., 2009). The photospheric velocity was estimated to be around \(20\ 000\ {\rm km}\ {\rm s}^{-1}\). A nickel-powered model estimated \(M_{\rm ej}\approx 1.3\)\(M_{\odot}\) and \(M_{\rm Ni}\approx 0.4\)\(M_{\odot}\) (Sahu et al., 2009), while a previous magnetar-powered model estimated \(P_{0}\approx 2.30\) ms, \(B\approx 6.2\times 10^{15}\) G, and \(M_{\rm ej}\approx 4.43M_{\odot}\) (Wang et al., 2016).
We import the observational data (Sahu et al., 2009) from the Open Supernova Catalog (Guillochon et al., 2017). The model fits most of the data well in the optical, although it does slightly underpredict the NIR data around peak. The magnetar energy is \(\lesssim 10^{50}\) erg for this supernova, lower than the explosion energy, which dominates the dynamics here. This energy is also much lower than that estimated by Wang et al. (2016), and the spin-down time is a factor of \(\sim 10\) larger. The ejecta mass we estimate is similar to the previous nickel-powered model, but lower than the previous magnetar-powered model. Finally, the braking index \(n\) is much higher than 3, and we can reject vacuum dipole spin-down at \(\gtrsim 95\%\) confidence. Our posterior on the braking index is also consistent with the magnetar spin-down being dominated by gravitational-wave radiation (Sarin et al., 2018, 2020). However, we caution against strong conclusions based on the measurement of \(n\) due to our simplified treatment of the neutron star spin evolution.
### ZTF20acigmel
AT2020xnd or ZTF20acigmel, the 'Camel', is an FBOT at \(z=0.2433\) that was first discovered by ZTF (Bellm et al., 2019) on 2020 October 12 (Perley et al., 2021). The peak magnitude was \(M_{5000}\approx-20.6\) mag or \(M_{3900}\approx-20.9\) mag, and the estimated rise time was \(\sim 2\) days (Perley et al., 2021). The photospheric radius is already receding at 7 days, and the photospheric temperature from 7-13 days is estimated to be \(20\ 000\pm 2000\) K. ZTF20acigmel was also found to be luminous in both radio (Ho et al., 2022) and X-rays (Bright et al., 2022).
We used the publicly available photometric data from Perley et al. (2021) for our fit. The model fits the data in all filters throughout the evolution of the object. The total rotational energy of the magnetar is around \(10^{51}\) erg, comparable to the explosion energy. The braking index is consistent with vacuum dipole, and using the scaling relations and the same 1.4 \(M_{\odot}\) neutron star as for SN 2015bn, we can derive an initial spin period of \(\sim 5\) ms and magnetic field of \(\sim 2\times 10^{15}\) G; these values, as well as the ejecta mass, are consistent with values found for the FBOT distribution (Liu et al., 2022).
### iPTF14gqr
iPTF14gqr is a USSN at \(z=0.063\) that was first discovered by the intermediate Palomar Transient Factory (iPTF) (Law et al., 2009) on 2014 October 14. The supernova featured a bright first peak that faded within a day, followed by a more extended light curve that rises after \(\sim 4\) days. The first peak can be explained by shock cooling (De et al., 2018) and the second peak by either radioactivity (De et al., 2018) or a newborn magnetar (Sawada et al., 2022), although both peaks can also be explained by interaction alone (Khatami and Kasen, 2023).
We used the publicly available photometric data from De et al. (2018), although we exclude all data points within one day post-explosion, since we only claim that the second peak could be magnetar powered. The model fits the data in all filters throughout the evolution of the object. The explosion energy, which has a wider prior to account for the low explosion energy of USSNe, is constrained to \(\sim 6\times 10^{49}\) erg, while the total magnetar rotational energy is \(\sim 2\times 10^{49}\) erg. The braking index is not very well constrained; although it is consistent with vacuum dipole to within error, the most likely value is \(\sim 2\). The median spin-down time of \(\approx 20\) days matches that found by Sawada et al. (2022), although our magnetar energy is a factor of \(\sim 5\) higher; this is likely because our explosion energy is smaller. The ejecta mass we derive is also consistent with estimates by both De et al. (2018) and Sawada et al. (2022).
## 5 Discussion and Summary
As shown by both the exploration of the parameter space (Section 3) and the case studies (Section 4), the model presented here is incredibly versatile. Three of the four case studies had previously been fit by a magnetar model (Nicholl et al., 2017, 2016; Wang et al., 2016; Sawada et al., 2022), but each of these studies used a different model with different assumptions. A versatile model allows comparisons of different populations to be made self-consistently, helps determine how model variations manifest in vastly different types of supernovae, and probes whether a continuum between these sources could exist or whether there are multiple distinct classes.
Much of the flexibility of our model comes from the self-consistent dynamical evolution of the ejecta, while the addition of the magnetar braking index as a parameter allows for some possible insight into the spin-down mechanism of the newborn millisecond magnetar. Figure 8 (top) shows the posterior probability distribution of the braking index for the four case-study supernovae. As mentioned above, SN 2015bn is very well approximated by \(n=3\), while \(n=3\) is rejected in SN 2007ru at \(>95\%\) confidence; both of these supernovae have posteriors that are roughly Gaussian. ZTF20acigmel and iPTF14gqr both show non-Gaussianity in their posteriors; the Camel has a tail at high \(n\) but peaks very close to \(n=3\), while iPTF14gqr is not well constrained but peaks closer to \(n=2\) and has significant posterior support at \(n=1.5\), the lowest value of our prior. While making definitive statements about spin-down mechanisms will require a much larger sample, this small sample already shows diversity in the inferred braking indices, highlighting a potentially interesting question for future studies.
All the objects studied in Section 4, including the simulated supernova, show a strong negative correlation between \(L_{0}\) and \(t_{\rm SD}\). This shows that the magnetar rotational energy \(E_{\rm rot}\), and thus the total energy budget of the supernova, can be constrained for these objects to within an order of magnitude (see Figure 8, bottom). The ejecta masses were all found to be similar to previous studies, and the spin-down times were found to be very close to previous models for SN 2015bn and iPTF14gqr (Nicholl et al., 2017; Wang et al., 2016; Sawada et al., 2022), although the total magnetar energy budget was different in each case due to the way we treated our dynamics. The magnitude of these discrepancies seems to vary depending on the supernova, and a large-scale study of supernovae that have previously been characterized by a magnetar model (e.g. Chen et al., 2023) is necessary to characterize the systematic difference between our model and previous models.
This model has a few caveats which may prevent it from properly describing certain transients. The first is that it is a one-zone, one-dimensional model. This makes the treatment of the photospheric radius very simplified compared to real supernovae. Engine-driven supernovae also show hydrodynamic instabilities in multidimensional simulations (e.g. Chen et al., 2016, 2020; Suzuki and Maeda, 2017, 2021) which can shred the inner ejecta, causing a decrease in the effective optical depth of the ejecta, as well as affecting the timescale for non-thermal leakage. If the spin-down timescale of the magnetar is smaller than its Kelvin-Helmholtz timescale for neutrino emission, \(t_{\rm KH,\nu}\lesssim 100\) s, then baryon loading of the magnetized wind via the neutrino-driven wind can be relevant (Thompson et al., 2004), and the magnetized wind can be collimated by anisotropic and hoop stresses (Bucciantini et al., 2007, 2008) and form a jet (Kashiyama et al., 2016). This model also does not self-consistently track gravitational-wave emission and how it depletes the overall rotational energy reservoir. The model also has no way to explain the bumps and undulations that have recently been found in a large number of SLSNe (Hosseinzadeh et al., 2022; Chen et al., 2023), which have been explained by both circumstellar material (e.g. West et al., 2023; Chugai and Utrobin, 2023) and magnetars (Chugai and Utrobin, 2022; Moriya et al., 2022; Dong et al., 2023). The model we present can only explain a smooth light curve with minimal fine structure; however, this structure is likely connected to small mass ejections or binary interactions in the final few years of the life of the progenitor and may not be strongly connected to the power source of the supernova. Furthermore, the SED used in our model has a simplified treatment of line blanketing that is calibrated to SLSNe (Nicholl et al., 2017), and may not be accurate for other transients. Although it is possible to switch the SED to a blackbody if line blanketing is not expected to be strong, the SED is also not a good approximation deep into the nebular phase of the supernova, since the emission will start to be dominated by line emission instead of photospheric emission (e.g., Schulze et al., 2023).
Although the magnetar-driven supernova model is versatile enough to fit the light curves of many different supernovae, the best way to determine whether a magnetar is really the power source is to compare the nebular spectra, polarization, and non-thermal emission from the supernovae with different models. Within the magnetar model, this emission is from the PWN or its interaction with the supernova ejecta. If the ejecta is optically thin, the synchrotron and inverse Compton emission from the PWN can leak through and be detected (Kotera et al., 2013; Metzger et al., 2014; Murase et al., 2015; Omand et al., 2018), while if the ejecta is optically thick, these photons will be absorbed and change the temperature and electronic state of the ejecta, both giving detectable signals (Chevalier and Fransson, 1992; Omand et al., 2019; Omand and Jerkstrand, 2023). However, interaction with circumstellar material can also produce non-thermal emission, polarization, and spectra with high-ionization lines; therefore, detailed modeling is necessary to make any strong conclusions.
In this work, we present a more flexible, inference-capable, publicly available model for the light curves of magnetar-driven supernovae. The main changes from previous models are the coupling of the magnetar energy injection to the kinetic energy of the supernova and the addition of the magnetar braking index as a free parameter, allowing the exploration of non-vacuum-dipole spin-down. We show that the model can reproduce the basic properties of several phenomenologically different supernovae, and also fit four different
Figure 8: Posterior distributions of the magnetar braking index (top) and rotational energy (bottom) for the four supernovae from our case studies. The median values are indicated by the blue vertical lines within the distribution. The black dashed line indicates vacuum dipole spin-down (\(n=3\)).
types of supernovae, retrieving parameters consistent with works using separate models. This model will allow us to explore the full diversity of these supernovae, better characterize them from their light curves alone, and make better predictions of future multiwavelength emission to better test the magnetar-driven scenario.
## Acknowledgements
We thank Claes Fransson and Steve Schulze for helpful comments and discussions. N. Sarin is supported by a Nordita Fellowship; Nordita is funded in part by NordForsk.
## Data Availability
Light curve data for SN 2015bn and SN 2007ru were obtained from the Open Supernova Catalog (Guillochon et al., 2017) using Redback. Light curve data for ZTF20acigmel and iPTF14gqr were downloaded from repositories released with the corresponding papers (Perley et al., 2021; De et al., 2018). The model is available for public use within Redback (Sarin et al., 2023).
|
2306.04481 | Sustainable Adaptive Security | With software systems permeating our lives, we are entitled to expect that
such systems are secure by design, and that such security endures throughout
the use of these systems and their subsequent evolution. Although adaptive
security systems have been proposed to continuously protect assets from harm,
they can only mitigate threats arising from changes foreseen at design time. In
this paper, we propose the notion of Sustainable Adaptive Security (SAS) which
reflects such enduring protection by augmenting adaptive security systems with
the capability of mitigating newly discovered threats. To achieve this
objective, a SAS system should be designed by combining automation (e.g., to
discover and mitigate security threats) and human intervention (e.g., to
resolve uncertainties during threat discovery and mitigation). In this paper,
we use a smart home example to showcase how we can engineer the activities of
the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems
satisfying sustainable adaptive security. We suggest that using anomaly
detection together with abductive reasoning can help discover new threats and
guide the evolution of security requirements and controls. We also exemplify
situations when humans can be involved in the execution of the activities of
the MAPE loop and discuss the requirements to engineer human interventions. | Liliana Pasquale, Kushal Ramkumar, Wanling Cai, John McCarthy, Gavin Doherty, Bashar Nuseibeh | 2023-06-05T08:48:36Z | http://arxiv.org/abs/2306.04481v1 | # Sustainable Adaptive Security
###### Abstract
With software systems permeating our lives, we are entitled to expect that such systems are secure by design, and that such security endures throughout the use of these systems and their subsequent evolution. Although adaptive security systems have been proposed to continuously protect assets from harm, they can only mitigate threats arising from changes foreseen at design time. In this paper, we propose the notion of _Sustainable Adaptive Security_ (SAS) which reflects such enduring protection by augmenting adaptive security systems with the capability of mitigating newly discovered threats. To achieve this objective, a SAS system should be designed by combining automation (e.g., to discover and mitigate security threats) and human intervention (e.g., to resolve uncertainties during threat discovery and mitigation). In this paper, we use a smart home example to showcase how we can engineer the activities of the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems satisfying sustainable adaptive security. We suggest that using anomaly detection together with abductive reasoning can help discover new threats and guide the evolution of security requirements and controls. We also exemplify situations when humans can be involved in the execution of the activities of the MAPE loop and discuss the requirements to engineer human interventions.
## I Introduction
Security threats are on the rise. Many recent critical cyber security incidents, such as log4j and SolarWinds, arose from newly discovered threats [1]. Therefore, there is a need to build systems that are secure by design, but that can also detect and mitigate newly discovered security threats over extended periods of time. Although adaptive security approaches [2, 3] have been proposed to mitigate evolving security threats, they only address threats arising from changes foreseen at design time. However, unanticipated changes, such as newly discovered assets and vulnerabilities or wrong domain assumptions, can bring new security threats and require security requirements and controls to evolve at runtime.
In this paper, we propose the notion of _Sustainable Adaptive Security_ (SAS) which reflects the capability of adaptive security systems to preserve security requirements throughout their use and subsequent evolution. Systems satisfying sustainable adaptive security (hereafter referred to as SAS systems) should be capable of discovering changes that may bring unanticipated security threats and managing the evolution of security requirements and controls to mitigate such threats. Although autonomy is a desired property of SAS systems, human intervention can be beneficial to preserve security requirements in the long run, for example, by monitoring security-relevant data and supporting decision making [4, 5, 6].
We use an example of a smart home to motivate the need for sustainable security. We explain how we can engineer the activities of the MAPE (Monitor, Analysis, Planning, and Execution) loop [7] of SAS systems. We showcase that combining anomaly detection with abductive reasoning can help discover new security threats and guide the evolution of security requirements and controls to mitigate such threats. We also exemplify how humans can assist the SAS system in executing the activities of the MAPE loop. Using our example, we discuss three requirements (security, trust, and usability) to be considered when engineering human interventions. Finally, we conclude with a discussion of a future research agenda for the research community to develop SAS systems.
## II Smart Home Security Example
Fig. 1 illustrates the plan of the smart home (on the left) and exemplifies some of the unexpected changes that can occur (on the right). The home has a WiFi router and a smart lock which secures the physical space of the home. Devices connected to the WiFi network can potentially control the smart lock [8] and send commands to open and close the door. A security goal [9] that we aim to maintain is the integrity of the smart home. This goal can be achieved by preventing unauthorised access to the smart home. Specifically, the smart lock should only lock or unlock the door when an authorised user (tenant) intends for the action to occur. We focus the scope of our research on a system that is currently deployed. This is because modern systems include components built by different vendors. Discovering threats during development may not be feasible because new threats can emerge depending on how a system is deployed and configured, e.g., depending on the type and location of the devices in the smart home.
Fig. 1: Smart Home Example
A _new digital device can connect to the WiFi network_. Such a device may be unknown and can control the smart lock, letting an outsider inside the house unaccompanied. In such a case, a new security control should prevent the new device from controlling the smart lock. The presence of unknown devices connected to the WiFi network can indicate security misconfiguration (e.g., weak passwords, insecure encryption protocols) or new vulnerabilities in the WiFi router. Therefore, any security control initially selected by the system will not preserve security in the long run because it may not address the root cause of the problem.
A new device (e.g., smart speaker) connecting to the WiFi network may, instead, be known and trustworthy and require temporary authorisation to access the smart lock and other appliances in the smart home. However, adding a new device can also bring new vulnerabilities. For example, a smart speaker may be authorised to control the smart lock, but may be vulnerable to ultrasonic voice command attacks that are inaudible [10]. This not only violates the security requirement from the example, but also puts other devices controllable by the smart speaker at risk.
If new devices connect too frequently to the WiFi network, an adaptive security system can learn that new devices should not be allowed to connect to the WiFi network. The system can also ask the tenant to confirm whether the security control should be enacted. However, an abrupt notification without any explanation of why such security control is necessary can decrease the trust in the system and in the security posture of the smart home. Also, the tenant may not have the expertise to understand whether the suggested security control is effective.
The smart home can also be vulnerable to attacks exploiting _unmitigated or new vulnerabilities_. Some vulnerabilities, although known, are not mitigated due to insufficient security knowledge [11] or delayed updates [12]. For example, an unmitigated vulnerability in the smart lock (e.g., CVE-2022-32509) may enable man-in-the-middle attacks and allow an outsider inside. Other vulnerabilities are new, and a fix does not exist for them yet.
## III Sustainable Security
In the system security community, sustainable security has been defined as the capability of an organisation to continuously secure a multitude of devices, running potentially outdated versions of software, for long periods of time [13]. It has also been considered as the property of a network intrusion detection system to ensure continuous reliable operation [14], or the capability of organisational processes to be more resilient to insider threats [15]. The notion of sustainable security is also related to cyber resilience [16], i.e. "the ability to prepare for and recover quickly from both known and unknown threats". Although cyber resilience metrics [17] have been defined to compare system designs and prioritise upgrades and maintenance, previous work has not considered how to engineer systems that can address unknown threats and satisfy evolving security requirements over time, an extension of the concept of sustainability in software engineering [18, 19].
As systems are increasingly socio-technical, our definition of sustainable security considers both a system and a human perspective. We define sustainable security as _the capability of a system to preserve security requirements over sustained periods of time_. To achieve this aim, systems need to continuously identify and mitigate emerging threats by _adapting_ their security controls or evolving the requirements specification. When mitigating threats, they should identify short-term security controls that stop the malicious action, but also long-term controls that address the root cause of the threat and prevent the recurrence of the same threat.
From a human perspective, sustainable security should refer to _the design of technology that enables interactions between the system and the humans to achieve cyber resilience_. To the best of our knowledge, the notion of cyber resilience has not been explored in the HCI community. Previous work has studied how technology plays a role in various forms of resilience to disaster and displacement [20, 21, 22], and also how individuals respond to mandated technology adoption [23]. Humans should complement the system functionality to support the selection and execution of effective security controls that preserve security in the long run. Human participation can be sustained by engineering usable interactions that also increase trust in the system and its security posture. Usability can be achieved by avoiding obtrusiveness. Trust can be fostered by enabling humans to understand how the system operates and what is happening at the current moment.
## IV Engineering SAS Systems
We revisit the activities of the MAPE loop to engineer Sustainable Adaptive Security (SAS) systems, as shown in Fig. 2. To discover new threats the monitoring activity not only should assess satisfaction of requirements and assumptions on the operating environment (domain assumptions) but should also detect unusual behaviours (anomalies), which can provide indicators that security requirements and controls should evolve. We assume that the SAS system can collect data about the user behaviour (e.g., entering, exiting, whether the door is locked or unlocked or whether new devices are present in the network). The analysis activity should diagnose unusual behaviour and assess whether new threats can materialise. The planning activity should identify how security requirements and controls should evolve, if necessary. The execution activity should enact security controls. We assume that the SAS system is separated from the system under protection (smart home), although it can observe/control the devices in the smart home.
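As a purely schematic sketch of this loop (all interfaces and method names below are hypothetical placeholders, not part of an actual implementation), the four activities could be orchestrated as follows.

```python
# Schematic MAPE loop of a SAS system; every interface below is a
# hypothetical placeholder for the activities described in the text.
def sas_loop(home, knowledge, human):
    while True:
        events = home.observe()                   # Monitor: requirements,
                                                  # domain assumptions, anomalies
        diagnosis = knowledge.diagnose(events)    # Analyse: e.g. abductive
                                                  # reasoning over the goal model
        if diagnosis.uncertain:                   # resolve uncertainty with
            diagnosis = human.resolve(diagnosis)  # tenant/engineer input
        plan = knowledge.evolve(diagnosis)        # Plan: learn evolved
                                                  # requirements and controls
        home.apply(plan)                          # Execute: enact controls,
                                                  # possibly involving humans
```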
Humans can be involved in the execution of the activities of the MAPE loop in capacities other than the traditional user [24, 25] such as security and software engineers. For example, during monitoring a user can provide information about data that cannot be observed by the SAS system (e.g., whether a device connected to the WiFi network is trustworthy). During analysis, information about a discovered anomaly can be provided to software/security engineers to diagnose the anomaly, if it cannot be done automatically. During planning, a user can confirm whether a security control can be enacted
(e.g., forcing closure of the door of the smart home). Finally, humans can support execution of security controls (e.g., the tenant may be asked to return home).
Similarly to previous work on adaptive security [26], we use a goal modeling framework [27] to make security requirements refinements and dependencies on environment conditions explicit. We suggest that a representation of security requirements can help engineer the activities of the MAPE loop of a sustainable adaptive security system. Fig. 3 shows a simplified goal model of the smart home example. The root goal represents the security requirement related to the authorised access to the smart home. This requirement can be achieved by authenticating access to the smart lock and securing access to the WiFi network. Two-Factor authentication is used to access the smart lock. An 8 character password is used for the WiFi network and the length of the password is assumed to be sufficient to protect the network. We assume that a) trusted devices in the WiFi network do not let an outsider in the smart home unaccompanied; b) the smart lock cannot be tampered with; c) an outsider can only enter if the door is unlocked and d) the tenant always locks the door when they exit.
In the rest of this section, we showcase how abductive reasoning can be used to diagnose whether an anomaly can lead to the violation of the authorisation requirement for the unexpected changes described in Section II. We encoded the goal model and the system functionalities (e.g., opening/closing the door, entering/exiting) using Answer Set Programming and used clingo to perform abductive reasoning1. As suggested by Alrajeh et al. [28], symbolic learning techniques can use traces of the system behaviour satisfying and violating the authorisation requirement to learn how security requirements and controls should evolve. We also exemplify how humans can participate in the activities of the MAPE loop.
Footnote 1: For reasons of space, we omit information about the ASP models in this paper and refer the interested reader to [https://tinyurl.com/256tfktc](https://tinyurl.com/256tfktc)
### _New Device Connects to the WiFi Network_
The monitoring activity detects that a new device is connected to the WiFi network. However, to diagnose the anomaly the SAS system needs to know whether the device is trustworthy, i.e. known to the tenant. If the device (d1) is not trustworthy, the ASP model will be updated accordingly and will generate a trace showcasing how the authorisation requirement can be violated, as shown below.
exit(tenant,home,1), close(sl,2), open(d1,sl,3), enter(outsider,home,4), in(outsider,home,4).
After the tenant exits at time 1 and closes the door at time 2, the new device sends a command to the smart lock (sl) to open the door and lets an outsider inside.
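The following is a minimal, self-contained sketch of how such violating traces can be generated with clingo's Python API; the encoding is a toy fragment with simplified, hypothetical predicates and rules, not the authors' full model (which is available at the URL in the footnote).

```python
import clingo

# Toy fragment of the smart-home encoding; predicates and rules are
# simplified stand-ins for the authors' full ASP model.
PROGRAM = """
time(0..4).
net_device(d1). untrusted(d1).

% an untrusted networked device may send an open command at any time
{ open(D, sl, T) : time(T) } :- net_device(D), untrusted(D).

% an outsider enters whenever the door has been opened
enter(outsider, home, T+1) :- open(_, sl, T), time(T+1).
violated :- enter(outsider, home, _).

% abduce only behaviours that violate the authorisation requirement
:- not violated.

#show open/3. #show enter/3.
"""

ctl = clingo.Control(["--models=1"])   # one violating trace is enough
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("Violating trace:", model))
```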
The SAS system can learn to evolve the specification by, for example, preventing the new device from sending an open command to the smart lock as shown below.
X!=d1 :- open(X,sl,T), net_device(X), T = 0..4.
In this case, human participation can be sought to understand whether the security control selected by the SAS system should be enacted. To foster trust, observability and transparency principles should be satisfied [29]. In other words, the SAS system should indicate that a network device that is not trustworthy was detected (observability) and that the suggested security control is necessary to prevent the new device from letting an offender inside unaccompanied (transparency). Alternatively, it may be desirable to evolve security controls differently, for example by removing the new device from the WiFi network. Although alternative security controls can be discovered automatically by the system, the tenant may not have the expertise to select one appropriately. Thus, intervention of the system/security engineer may be necessary to perform this task. If an engineer is required to modify security controls, the SAS system should also satisfy the intelligibility principle [29] by communicating in what ways security controls can be modified without introducing additional vulnerabilities.
However, the security controls identified above may not be sustainable if the new device is trustworthy. For example, the tenant may want to use a new smart speaker to send commands to the smart lock to open and close the door. Thus, the SAS system should ask the tenant whether the device is trustworthy, to avoid selecting ineffective security controls. To increase the likelihood that the tenant completes the task successfully, the SAS system should satisfy the feedforward principle [29] by providing the tenant with information about the detected device (e.g., type of device). Similarly to the previous example, to satisfy observability, the SAS system should indicate that a new device was detected. To satisfy transparency the SAS system should indicate that, if the device is trustworthy, access to the network and the home appliances will be granted.
Now, let us imagine that new devices keep connecting to the WiFi network frequently. To avoid sending frequent notifications to the tenant, the SAS system may learn that new devices should not be allowed in the WiFi network by default or can ask the tenant whether, from now on, they may want to prevent any new device from connecting to the network. In this case, the SAS system should satisfy observability by indicating that new devices are connected to the network too frequently. It should also satisfy transparency by explaining
Fig. 2: Sustainable Adaptive Security System
that forbidding access to new devices will require the tenant to manually grant access to the network when a new device will connect to the WiFi network in the future.
However, these security controls may not be sustainable, because they may not address the root cause of the problem, which can be that the network authentication is not effective. Thus, the SAS system can use the structure of the goal model to identify which security control is responsible for regulating access to the network and, for example, evolve the domain assumption related to the password strength by increasing the minimum number of characters for a password to be considered strong. Changing the domain assumption will force the system to learn that a longer password needs to be enforced. In this case, human input should be sought to identify the minimum number of characters that a password should have. Since the tenant may not have the expertise to advise on the password length, a security/software engineer may need to be involved to perform this task.
### _New Vulnerabilities_
If the monitoring activity measures the latency of requests and responses to and from appliances in the smart home, it can discover unusual latencies that can indicate the presence of a man-in-the-middle attack. These types of attacks can be due to authentication vulnerabilities. For example, the Nuki Smart Lock was found to lack SSL/TLS certificate validation, allowing an attacker to perform a man-in-the-middle attack and intercept network traffic (CVE-2022-32509).
Such a vulnerability cannot be diagnosed automatically, and the SAS system needs to identify a security control that could prevent an attack from happening. In such a case, the domain assumption on the trusted devices in the network may no longer be valid. Removing that domain assumption from the ASP model makes the system identify a violating trace where a trusted device in the network could still let an outsider inside unaccompanied. This trace can be used to learn a security control that blocks any incoming traffic to the smart lock. This security control is not sustainable in the long run because it does not address the root cause of the problem. Intervention of a security/software engineer needs to be sought to confirm the presence of a vulnerability. If information about the vulnerability has already been disclosed (e.g., a CVE entry is available), a patch needs to be identified and applied to the vulnerable device. Also, the requirements specification needs to be updated with an indication of the vulnerability and the fix. Alternatively, advice needs to be sought from the software/security engineer on the disclosure of a new vulnerability and how to fix it.
## V Related Work
Previous work on security requirements engineering [30, 31, 32, 33, 34, 35, 36] has focused on modelling and reasoning about security concerns to assess risks, identify security controls and estimate the impact of design choices on the protection of the critical assets of the system. These approaches typically require a complete model of the system and have not been designed to support requirements evolution. Other approaches [37, 38] trigger requirements evolution according to pre-defined rules, when security properties are violated. More recently, Drozdov et al. [39] have used symbolic-based learning to learn security policies from historical data produced by anomaly detectors. The authors adopt a domain-specific function to guide the learning process towards the best policies for anomaly detection. Although promising, this work has not been adopted to evolve security requirements at runtime.
In the adaptive security community, previous work has suggested to evolve the representation of threats based on the automated derivation of changing architectural system models from runtime and operational system artifacts [40]. Calo et al. [41] propose an approach to generate access control policies dynamically when the environment changes, assuming that a set of constraints associated with the resources to be accessed are satisfied. To the best of our knowledge, there is no systematic technique to detect and prevent/mitigate evolving threats brought by unexpected changes, such as vulnerabilities and ineffective security controls at runtime.
Explanations have been used to involve humans in the execution of the activities of an adaptive system [42, 43]. In the security domain, Nhlabatsi et al. [44] help users understand security decisions by establishing traceability relationships between requirements and security concerns (e.g., threats, attacks and vulnerabilities). More recently, Adepu et al. [45] characterise explanations in terms of content, effect, and cost (i.e. human effort necessary to understand the explanation).
Fig. 3: Goal Model
The authors use probabilistic reasoning to determine when an explanation should be used to improve overall system utility. However, they do not configure the content of an explanation depending on the input required from the human participant. Models of human participants have been used to decide whether humans should be involved during the planning [46] or execution [47] activity of adaptation. Li et al. [48] propose a framework to reason about the usage of preparatory notifications in adaptive systems. Preparatory notifications can focus the attention of human participants on the task to be performed and help them understand its context before task execution. However, previous work has provided limited guidance on how human intervention should be requested and provided. Gil et al. [29] propose a conceptual framework to characterise the cooperation between humans and autonomous cyber-physical systems and provide techniques for applying the framework to design human integration. However, this framework has not been applied in the context of an adaptive security system.
## VI Research Agenda
### _Engineering Autonomy in SAS Systems_
During monitoring, SAS systems need to identify events that can be observed by the devices that can be controlled in the environment and those for which human intervention is necessary. For example, a combination of device fingerprinting and human intervention [49] can be used to identify new devices connected to the WiFi network.
During analysis, to detect the root cause of an anomaly, it will be necessary to identify parts of the requirements model that require revision. The devices and domain assumptions that are affected by the anomaly can be used to identify the parts of the model that require revision automatically. In our example, frequent connectivity of new devices to the WiFi network required us to revise the security controls protecting the network. If not possible, human intervention should be sought to select the parts of the model that need revision. There can also be situations when anomalies cannot be diagnosed, for example, when we have unanticipated or new vulnerabilities.
We will attempt to detect unanticipated vulnerabilities by referencing the behavioural pattern of the anomaly against an up-to-date knowledge database of attacks created from vulnerability disclosures and vulnerability databases. To detect and mitigate new vulnerabilities, it will be necessary to identify what type of information can be disclosed to software and security engineers to obtain meaningful input.
During planning there can be several ways in which security controls can evolve. This will depend on the examples that are used to perform the learning activity. One challenge to be addressed is the generation of examples and the identification of heuristics that can drive the learning towards the selection of security controls that are effective in the long run, while avoiding human intervention.
In this paper we assumed that the requirements, functionalities, and domain assumptions of the system to be protected are specified in advance. However, this may not always be possible, for example, when new components are added to the system at runtime. Thus, it will be interesting to understand how the system functionalities, domain assumptions and security controls can be learnt from scratch. For example, transfer learning can be used to learn the behaviour of new components from similar ones. Online learning can be used to learn how heterogeneous components can interact when placed together. Then, neural symbolic learning [50] can be used to learn a model of the system behaviour. Finally symbolic learning can be used to learn the security controls that satisfy the system security requirements. Human input will be necessary to identify the security requirements and the security controls available, and to revise the model learnt from scratch.
### _Engineering Human Interventions_
Engineering human interventions requires identifying the tasks to be performed by human participants. In this paper, we initially identified some of them (e.g., provisioning of monitoring information, hypothesis on the root cause of an anomaly, and selection/modification of security controls). However, it will be necessary to elicit such tasks in a more systematic way using a larger set of scenarios from different application domains (e.g., cloud computing). Once tasks are determined, it will be desirable to identify the roles and expertise of humans that can perform these tasks.
Moreover, a SAS system should enable effective and usable interaction with humans. To achieve this aim, it will be necessary to consider the following aspects: (1) "who" - who will be involved in the task; (2) "when" - whether the human or the system initiates the actions, and the level of automation or human control for performing the task [51] (e.g., some tasks can require selection but others can require modification of security controls); (3) "how" - how the human and the system can communicate with each other; and (4) "what" - what information should be exchanged between the human and the system. To foster trust, the information exchanged should satisfy some of the properties of human-system integration [29].
Evaluating sustainable adaptive security systems also brings new research challenges. From a system perspective, properties such as timeliness and longevity are very relevant. Timeliness refers to how fast such systems can prevent threats from happening compared to traditional adaptive security systems. Longevity can refer to how long security requirements are satisfied. To evaluate SAS systems from a human perspective, we can consider assessing the successful completion of the task by the participants, but also evaluate sustainable security from an "experiential perspective" by assessing the perceived security level of the system experienced by human participants [52].
|
2310.19450 | Hodge-Compositional Edge Gaussian Processes | We propose principled Gaussian processes (GPs) for modeling functions defined
over the edge set of a simplicial 2-complex, a structure similar to a graph in
which edges may form triangular faces. This approach is intended for learning
flow-type data on networks where edge flows can be characterized by the
discrete divergence and curl. Drawing upon the Hodge decomposition, we first
develop classes of divergence-free and curl-free edge GPs, suitable for various
applications. We then combine them to create \emph{Hodge-compositional edge
GPs} that are expressive enough to represent any edge function. These GPs
facilitate direct and independent learning for the different Hodge components
of edge functions, enabling us to capture their relevance during hyperparameter
optimization. To highlight their practical potential, we apply them for flow
data inference in currency exchange, ocean currents and water supply networks,
comparing them to alternative models. | Maosheng Yang, Viacheslav Borovitskiy, Elvin Isufi | 2023-10-30T11:22:25Z | http://arxiv.org/abs/2310.19450v3 | # Hodge-Compositional Edge Gaussian Processes
###### Abstract
We propose principled Gaussian processes (GPs) for modeling functions defined over the edge set of a simplicial 2-complex, a structure similar to a graph in which edges may form triangular faces. This approach is intended for learning flow-type data on networks where edge flows can be characterized by the discrete divergence and curl. Drawing upon the Hodge decomposition, we first develop classes of divergence-free and curl-free edge GPs, suitable for various applications. We then combine them to create _Hodge-compositional edge GPs_ that are expressive enough to represent any edge function. These GPs facilitate direct and independent learning for the different Hodge components of edge functions, enabling us to capture their relevance during hyperparameter optimization. To highlight their practical potential, we apply them for flow data inference in currency exchange, ocean flows and water supply networks, comparing them to alternative models.
## 1 Introduction
Gaussian processes (GPs) are a widely used class of statistical models capable of quantifying uncertainty associated to their own predictions (Rasmussen and Williams, 2006). These models are determined by covariance kernels which encode prior knowledge about the unknown function. Choosing an appropriate kernel is often challenging, particularly when the input space is non-Euclidean (Duvenaud, 2014).
Developing GPs on graphs has been a subject of recent work, which requires structured kernels to encode the dependence between nodes (Venkitaraman et al., 2020; Zhi et al., 2023), like the diffusion (Smola and Kondor, 2003) or random walk kernels (Vishwanathan et al., 2010). More recently, Borovitskiy et al. (2021) derived the more general family of Matern kernels on graphs from stochastic partial differential equations (SPDEs) thereon, mirroring the continuous approaches on manifolds (Borovitskiy et al., 2020; Azangulov et al., 2022, 2023). Nikitin et al. (2022) incorporated the temporal factor in this framework to build temporal-graph kernels. However, GPs in these works are targeted for modeling functions on the nodes of networks.
We instead focus on functions defined on the _edges_, of particular interest for modeling edge-based dynamical processes in many complex networks, such as flows of energy, signal or mass (Schaub et al., 2014). For example, in water supply networks, we typically monitor the flow rates within pipes (edges) connecting tanks (nodes) (Zhou et al., 2022). Other examples include energy flows in power grids (Jia et al., 2019), synaptic signals between neurons in brain networks (Faskowitz et al., 2022), and exchange rates on trading paths (edges) of currencies (nodes) (Jiang et al., 2011).
While it might seem intuitive to use node-based methods for edge-based tasks via line-graphs (Godsil and Royle, 2001), this often yields sub-optimal solutions (Jia et al., 2019). Alternatively, recent successes in signal processing and neural networks for edge data have emerged from modeling flows on the edge set of a simplicial 2-complex (SC\({}_{2}\)), including (Jia et al., 2019; Barbarossa and Sardellitti, 2020; Schaub et al., 2021; Yang et al., 2022; Roddenberry et al., 2021; Yang and Isufi, 2023), among others. A SC\({}_{2}\) can be viewed as a graph with an additional set of triangular faces, encoding how edges are adjacent to each other via nodes or faces. A SC\({}_{2}\) also allows one to characterize key properties of edge flows using the discrete concepts of _divergence_ (div) and _curl_ (Lovasz, 2004; Lim, 2020), measuring how they diverge at nodes and circulate along faces. For example, electric currents in circuit networks respecting Kirchhoff's law are div-free (Grady and Polimeni, 2010), and arbitrage-free exchange rates are curl-free along loops of trading paths (Jiang et al., 2011). Moreover, edge functions on a SC\({}_{2}\) admit the
_Hodge decomposition_ into three parts: gradient, curl and harmonic components, which are curl-free, div-free or both (Lim, 2020). This provides unique insights into various applications including ranking (Jiang et al., 2011), game theory (Candogan et al., 2011), brain networks (Vijay Anand et al., 2022) and finance (Fujiwara and Islam, 2020). Nevertheless, existing works on edge-based learning remain deterministic and there is a lack of principled ways to define GP priors on the edge set of SCs, which is the central goal of this work.
Our main contribution lies in the proposal of _Hodge-compositional edge GPs_. We build them as combinations of three GPs, each modeling a specific part of the Hodge decomposition of an edge function, namely the gradient, curl and harmonic parts. With a focus on the Matern family, we show that each of them can be linked to a SPDE, extending the framework used by Borovitskiy et al. (2020, 2021, 2023). Compared to a direct extension of graph GPs, they enable separate learning of the different Hodge components, which allows us to capture the practical behavior of edge flows. We also demonstrate their practical potential in edge-based learning tasks in foreign currency exchange markets, ocean flow analysis and water supply networks.
## 2 Background
A random function \(f:X\to\mathbb{R}\) defined over a set \(X\) is a Gaussian process \(f\sim\mathcal{GP}(\mu,k)\) with mean function \(\mu(\cdot)\) and kernel \(k(\cdot,\cdot)\) if, for any finite set of points \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\top}\in X^{n}\), the random vector \(f(\mathbf{x})=(f(x_{1}),\ldots,f(x_{n}))^{\top}\) is multivariate Gaussian with mean vector \(\mu(\mathbf{x})\) and covariance matrix \(k(\mathbf{x},\mathbf{x})\).
The kernel \(k\) of a _prior_ GP encodes prior knowledge about the unknown function while its mean \(\mu\) is usually assumed to be zero. GP regression combines such a _prior_ with training data \(x_{1},y_{1},\ldots,x_{n},y_{n}\) where \(x_{i}\in X\), \(y_{i}\in\mathbb{R}\) with \(y_{i}=f(x_{i})+\epsilon_{i}\), \(\epsilon_{i}\sim\mathcal{N}(0,\sigma_{\epsilon}^{2})\). This results in a posterior \(f_{\mid\mathbf{y}}\) which is another GP: \(f_{\mid\mathbf{y}}\sim\mathcal{GP}(\mu_{\mid\mathbf{y}},k_{\mid\mathbf{y}})\). For any new input \(x^{*}\in X\), the mean \(\mu_{\mid\mathbf{y}}(x^{*})\) is the prediction and the posterior variance \(k_{\mid\mathbf{y}}(x^{*},x^{*})\) quantifies the uncertainty. We refer the reader to Rasmussen and Williams (2006) for more details. Defining an appropriate kernel is one of the main challenges in GP modeling (Duvenaud, 2014).
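For a finite index set such as the nodes or edges considered below, these posterior formulas reduce to standard Gaussian conditioning; the following sketch (with generic, illustrative names) computes them directly from a kernel matrix.

```python
import numpy as np

def gp_posterior(K, train_idx, y, noise_var=1e-2):
    """Posterior mean/variance of a zero-mean GP on a finite index set,
    given its kernel matrix K and noisy observations y at train_idx."""
    test_idx = np.setdiff1d(np.arange(K.shape[0]), train_idx)
    K_tt = K[np.ix_(train_idx, train_idx)] + noise_var * np.eye(len(train_idx))
    K_st = K[np.ix_(test_idx, train_idx)]
    K_ss = K[np.ix_(test_idx, test_idx)]
    alpha = np.linalg.solve(K_tt, y)
    mean = K_st @ alpha                                  # posterior mean
    cov = K_ss - K_st @ np.linalg.solve(K_tt, K_st.T)    # posterior covariance
    return mean, np.diag(cov)
```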
### GPs on Graphs
Let \(G=(V,E)\) be an unweighted graph where \(V=\{1,\ldots,N_{0}\}\) is the set of nodes and \(E\) is the set of \(N_{1}\) edges such that if nodes \(i,j\) are connected, then \(e=(i,j)\in E\). We can define real-valued functions on its node set \(f_{0}:V\to\mathbb{R}\), collected into a vector \(\mathbf{f}_{0}=(f_{0}(1),\ldots,f_{0}(N_{0}))^{\top}\in\mathbb{R}^{N_{0}}\). Denote the node-to-edge incidence matrix by \(\mathbf{B}_{1}\) of dimension \(N_{0}\times N_{1}\). Its entries are \([\mathbf{B}_{1}]_{ie}=-1\) and \([\mathbf{B}_{1}]_{je}=1\), and zero otherwise, for edge \(e=(i,j)\). The _graph Laplacian_ is then given by \(\mathbf{L}_{0}=\mathbf{B}_{1}\mathbf{B}_{1}^{\top}\), which is a positive semi-definite linear operator on the space \(\mathbb{R}^{N_{0}}\) of node functions. It admits an eigendecomposition \(\mathbf{L}_{0}=\mathbf{U}_{0}\mathbf{\Lambda}_{0}\mathbf{U}_{0}^{\top}\) where \(\mathbf{\Lambda}_{0}\) collects its eigenvalues on the diagonal and \(\mathbf{U}_{0}\) collects the orthogonal eigenvectors of \(\mathbf{L}_{0}\)(Chung, 1997).
A GP on graphs \(\mathbf{f}_{0}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{0})\) assumes \(\mathbf{f}_{0}\) is a random function with zero mean and a graph kernel \(\mathbf{K}_{0}\) which encodes the covariance between pairs of nodes. To construct principled graph GPs, Borovitskiy et al. (2021) extended the idea of deriving continuous GPs from SPDEs (Whittle, 1963; Lindgren et al., 2011) to the domain of graphs. Specifically, given the following SPDE on graphs with a Gaussian noise \(\mathbf{w}_{0}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\)
\[\Phi(\mathbf{L}_{0})\mathbf{f}_{0}=\mathbf{w}_{0},\text{ with }\Phi(\mathbf{L}_{0})=\left( \tfrac{2\nu}{\kappa^{2}}\mathbf{I}+\mathbf{L}_{0}\right)^{\frac{\nu}{2}}, \tag{1}\]
where \(\Phi(\mathbf{L}_{0})=\mathbf{U}_{0}\Phi(\mathbf{\Lambda}_{0})\mathbf{U}_{0}^{\top}\) and \(\Phi(\cdot)\) applies to \(\mathbf{\Lambda}_{0}\) element-wise, its solution is the Matern graph GP
\[\mathbf{f}_{0}\sim\mathcal{GP}\Big{(}\mathbf{0},\Big{(}\frac{2\nu}{\kappa^{2}}\mathbf{I}+ \mathbf{L}_{0}\Big{)}^{-\nu}\Big{)} \tag{2}\]
with positive parameters \(\kappa,\nu\). When scaled properly, the Matern kernel gives the graph diffusion kernel for \(\nu\to\infty\), which in turn relates to the random walk kernel by Kondor and Lafferty (2002). This SPDE framework can be extended to spatial-temporal data yielding respective graph kernels (Nikitin et al., 2022).
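A minimal sketch of Eq. 2 in code, assuming a small hypothetical graph; combined with the `gp_posterior` helper above, it gives a complete node-level GP regressor.

```python
import numpy as np

# Hypothetical toy graph: 4 nodes, 4 edges.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
N0 = 4
B1 = np.zeros((N0, len(edges)))
for e, (i, j) in enumerate(edges):
    B1[i, e], B1[j, e] = -1.0, 1.0          # node-to-edge incidence

L0 = B1 @ B1.T                              # graph Laplacian
lam, U = np.linalg.eigh(L0)                 # L0 = U diag(lam) U^T

kappa, nu = 1.0, 2.0                        # kernel hyperparameters
K0 = U @ np.diag((2.0 * nu / kappa**2 + lam) ** (-nu)) @ U.T   # Eq. 2
```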
### Edge Functions on Simplicial Complexes
Simplicial 2-complexes represent discrete geometry more expressively than graphs. They are triples \(\text{SC}_{2}=(V,E,T)\) where \(V,E\) are the sets of nodes and edges, same as for graphs, and \(T\) is the set of triangular faces (shortened as triangles) such that if \((i,j),(j,k),(i,k)\) form a _closed_ triangle, then \(t=(i,j,k)\in T\)(Munkres, 2018). An example is shown in Fig. 0(a). We assume a fixed _orientation_ for each edge and each triangle as the increasing order of their node labels. An oriented edge, denoted as \(e=[i,j]\), is an ordering of \(\{i,j\}\). This is not a directed edge allowing flow only from \(i\) to \(j\), but rather an assignment of the sign of the flow: from \(i\) to \(j\) it is positive and the reverse is negative. Same goes for oriented triangles denoted as \(t=[i,j,k]\).
In a \(\text{SC}_{2}\), the functions, \(f_{1}:E\to\mathbb{R}\), on its edges \(E\) are required to be _alternating_ (Lim, 2020), meaning that we have \(f_{1}(\bar{e})=-f_{1}(e)\) if \(\bar{e}=[j,i]\) is oriented opposite to the reference \(e=[i,j]\). For example, in Fig. 1(b), \(f_{1}(1,2)=-1.2\) means there are \(1.2\) units of flow from \(2\) to \(1\). This property keeps the flow unchanged with respect to the edge orientation. We collect the edge functions on \(E\) into \(\mathbf{f}_{1}=(f_{1}(e_{1}),\ldots,f_{1}(e_{N_{1}}))^{\top}\in\mathbb{R}^{N_{1}}\), as in Fig. 1(b), which we also call an _edge flow_.
We can also define alternating functions on triangles in \(T\) where \(f_{2}(\bar{t})=-f_{2}(t)\) if \(\bar{t}\) is an odd permutation of reference \(t=[i,j,k]\)(Lim, 2020). We collect them in \(\mathbf{f}_{2}\in\mathbb{R}^{N_{2}}\) where \(N_{2}=|T|\). In topology, functions \(f_{0},f_{1},f_{2}\) are called _0-, 1-, 2-cochains_, which are discrete analogs of differential forms on manifolds (Grady and Polimeni, 2010). This motivates the use of subscripts \(0,1,2\). Here we can view these functions as vectors of data on nodes, edges and triangles.
### Hodge Laplacian
In the similar spirit as \(\mathbf{L}_{0}\) operating on node functions, we can define the discrete _Hodge Laplacian_ operating on the space \(\mathbb{R}^{N_{1}}\) of edge functions
\[\mathbf{L}_{1}=\mathbf{B}_{1}^{\top}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{B}_{2}^{\top}:=\mathbf{L}_{ \mathrm{d}}+\mathbf{L}_{\mathrm{u}} \tag{3}\]
where \(\mathbf{B}_{2}\) is the edge-to-triangle incidence matrix. For the column indexed by \(t=[i,j,k]\), its entries are \([\mathbf{B}_{2}]_{et}=1\), for \(e=[i,j]\) or \(e=[j,k]\), and \([\mathbf{B}_{2}]_{et}=-1\) for \(e=[i,k]\), and zero otherwise. Matrix \(\mathbf{L}_{1}\) describes the connectivity of edges, where the _down_ part \(\mathbf{L}_{\mathrm{d}}\) and the _up_ part \(\mathbf{L}_{\mathrm{u}}\) encode how edges are adjacent, respectively, through nodes and via triangles. For example, \(e_{3}\) and \(e_{6}\) are down neighbors sharing node 4 in Fig. 1(a), and \(e_{1}\) and \(e_{2}\) are up neighbors, collocated in \(t_{1}\). Matrix \(\mathbf{L}_{1}\) is positive semi-definite, admitting an eigendecomposition \(\mathbf{L}_{1}=\mathbf{U}_{1}\mathbf{\Lambda}_{1}\mathbf{U}_{1}^{\top}\) where the diagonal matrix \(\mathbf{\Lambda}_{1}=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{N_{1}})\) collects the eigenvalues and \(\mathbf{U}_{1}\) is the eigenvector matrix. Likewise, one can define \(\mathbf{L}_{2}=\mathbf{B}_{2}^{\top}\mathbf{B}_{2}\) encoding the adjacency between triangles. Our discussion henceforth considers the unweighted \(\mathbf{L}_{1}\), but it also holds for the weighted variants in Grady and Polimeni (2010); Schaub et al. (2020).
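The following sketch builds \(\mathbf{B}_{1}\), \(\mathbf{B}_{2}\) and the Hodge Laplacian of Eq. (3) for a small toy complex (one filled triangle plus a dangling edge; illustrative, not the complex of Fig. 1):

```python
import numpy as np

# Toy SC2: nodes {1,2,3,4}; edges e1=[1,2], e2=[1,3], e3=[2,3], e4=[3,4];
# one filled triangle t1=[1,2,3]. Orientations follow increasing node labels.
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]], dtype=float)   # node-to-edge incidence
B2 = np.array([[ 1],
               [-1],
               [ 1],
               [ 0]], dtype=float)               # edge-to-triangle incidence

L_down = B1.T @ B1            # edge adjacency through shared nodes
L_up = B2 @ B2.T              # edge adjacency through shared triangles
L1 = L_down + L_up            # Hodge Laplacian, Eq. (3)

lam1, U1 = np.linalg.eigh(L1)  # L1 = U1 diag(lam1) U1^T, with lam1 >= 0
```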
## 3 Edge Gaussian Processes
We now define GPs on edges of a SC\({}_{2}\), specifically, \(\mathbf{f}_{1}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{1})\) with zero mean and edge kernel \(\mathbf{K}_{1}\). Throughout this work, we refer to them as _edge GPs_, and call graph GPs in Section 2.1 as _node GPs_ because they are both multivariate Gaussian but the former is indexed by \(X=E\) and the latter by \(X=V\). We start with deriving edge GPs from SPDEs on edges as a natural extension of Eq.1. Then, by introducing basic notions from discrete calculus (Grady and Polimeni, 2010) and the Hodge decomposition theorem, we propose the divergence-free and curl-free GPs, combining them into Hodge-compositional GPs.
### Edge GPs from SPDEs on Edges
The derivation of graph GPs in Eq.2 as solutions of graph SPDE in Eq.1 motivates the following SPDEs on edges, with edge Gaussian noise \(\mathbf{w}_{1}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\),
\[\Phi(\mathbf{L}_{1})\mathbf{f}_{1}=\mathbf{w}_{1} \tag{4}\]
where \(\Phi(\mathbf{L}_{1})=\mathbf{U}_{1}\Phi(\mathbf{\Lambda}_{1})\mathbf{U}_{1}^{\top}\) is a differential operator defined through \(\mathbf{L}_{1}\). When we consider the operators
\[\Phi(\mathbf{L}_{1})=\Big{(}\frac{2\nu}{\kappa^{2}}\mathbf{I}+\mathbf{L}_{1}\Big{)}^{\frac{\nu}{2}},\quad\Phi(\mathbf{L}_{1})=e^{\frac{\kappa^{2}}{4}\mathbf{L}_{1}}, \tag{5}\]
the solutions to Eq.4 give two edge GPs
\[\mathbf{f}_{1}\sim\mathcal{GP}\Big{(}\mathbf{0},\Big{(}\frac{2\nu}{\kappa^{2}}\mathbf{I}+\mathbf{L}_{1}\Big{)}^{-\nu}\Big{)},\,\mathbf{f}_{1}\sim\mathcal{GP}\Big{(}\mathbf{0},e^{-\frac{\kappa^{2}}{2}\mathbf{L}_{1}}\Big{)} \tag{6}\]
which are the _edge Matern_ and _diffusion_ GPs, respectively. These edge GPs impose a structured prior covariance that encodes the dependence between edges. A related _Hodge Laplacian kernel_ \((\mathbf{L}_{1}^{\top}\mathbf{L}_{1})^{\dagger}\) can be obtained by setting \(\Phi(\mathbf{L}_{1})=\mathbf{L}_{1}\), i.e., \(\mathbf{L}_{1}\mathbf{f}_{1}=\mathbf{w}_{1}\). This kernel was used to penalize the smoothness of edge functions in Schaub et al. (2021). The kernels of Eq. (6) are more flexible, though, and allow encoding non-local edge-to-edge adjacency, while \(\mathbf{L}_{1}\) instead encodes the local direct (one-hop) adjacency.
### Div-free and Curl-free Edge GPs
The edge GPs in Section3.1 define distributions over all edge functions. As opposed to this, here we seek
to define GPs on the classes of divergence-free and curl-free edge functions. We start with defining the appropriate notions of discrete derivatives, expressed in terms of the incidence matrices.
**Discrete Derivatives.** The _gradient_ is a linear operator from the space of node functions to that of edge functions. At edge \(e=[i,j]\), it is defined as
\[(\operatorname{grad}f_{0})(e)=(\mathbf{B}_{1}^{\top}\mathbf{f}_{0})_{e}=f_{0}(j)-f_{0}( i), \tag{7}\]
which computes the difference between the values of a function on adjacent nodes, resulting in a flow on the connecting edge. We call \(\mathbf{f}_{G}=\mathbf{B}_{1}^{\top}\mathbf{f}_{0}\) a gradient flow and \(\mathbf{f}_{0}\) a node potential, as shown in Fig. 1(c).
The _divergence_, the adjoint of gradient, is a linear operator from the space of edge functions to that of node functions. At node \(i\), it is defined as
\[(\operatorname{div}f_{1})(i)=(\mathbf{B}_{1}\mathbf{f}_{1})_{i}=-\sum_{j\in N(i)}f_{1} (i,j) \tag{8}\]
with \(N(i)\) the neighbors of \(i\). Physically, it computes the net-flow of edge functions passing through node \(i\), i.e., the in-flow minus the out-flow, as shown in Fig. 1(b). A _divergence-free_ flow has a zero net-flow everywhere.
Lastly, the _curl_ operator is a linear operator from the space of edge functions to that of triangle functions. At triangle \(t=[i,j,k]\), it is defined as
\[(\operatorname{curl}f_{1})(t)=(\mathbf{B}_{2}^{\top}\mathbf{f}_{1})_{t}=f_{1}(i,j)+f_{ 1}(j,k)-f_{1}(i,k) \tag{9}\]
which computes the _net-circulation_ of edge functions along the edges of \(t\), as a rotational measure of \(\mathbf{f}_{1}\), as shown in Fig. 1(b). A _curl-free_ flow has zero curl over each triangle. As in calculus, we have the identity \(\operatorname{curl}\operatorname{grad}=\mathbf{B}_{2}^{\top}\mathbf{B}_{1}^{\top}=\mathbf{0}\), i.e., a gradient flow is curl-free.
Analogous to their continuous vector field counterparts, div-free and curl-free edge functions are ubiquitous, e.g., the electric currents and the exchange rates later in Section 4.1. We refer to Grady and Polimeni (2010); Lim (2020) for more examples. From this perspective, we can view the graph Laplacian as \(\mathbf{L}_{0}=\operatorname{div}\operatorname{grad}=\mathbf{B}_{1}\mathbf{B}_{1}^{\top}\), which is a graph-theoretic analog of the Laplace-Beltrami operator \(\Delta_{0}\) on manifolds. Also, the SPDE on graphs in Eq. (1) is a discrete counterpart of the continuous one for scalar functions on manifolds. Moreover, the Hodge Laplacian \(\mathbf{L}_{1}\) can be viewed as \(\mathbf{L}_{1}=\operatorname{grad}\operatorname{div}+\operatorname{curl}^{*} \operatorname{curl}=\mathbf{B}_{1}^{\top}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{B}_{2}^{\top}\), which is a discrete analog of the vector Laplacian (or Helmholtzian) \(\Delta_{1}\) for vector fields.
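In code, these discrete derivatives are plain matrix-vector products. A sketch using the toy complex from the earlier sketch (the node potential and edge flow below are arbitrary):

```python
import numpy as np

# Toy SC2 as before: edges [1,2], [1,3], [2,3], [3,4]; filled triangle [1,2,3]
B1 = np.array([[-1, -1, 0, 0], [1, 0, -1, 0], [0, 1, 1, -1], [0, 0, 0, 1]], float)
B2 = np.array([[1], [-1], [1], [0]], float)

f0 = np.array([0.0, 1.0, 3.0, 2.0])    # a node potential
f_G = B1.T @ f0                         # gradient flow, Eq. (7)
assert np.allclose(B2.T @ f_G, 0)       # curl(grad f0) = 0: gradient flow is curl-free

f1 = np.array([1.0, -0.5, 2.0, 0.3])    # an arbitrary edge flow
div_f1 = B1 @ f1                        # net-flow at each node, Eq. (8)
curl_f1 = B2.T @ f1                     # net-circulation per triangle, Eq. (9)
```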
**Hodge Decomposition.** The following _Hodge decomposition theorem_, unfolding an edge function, will allow us to improve the edge GPs in Eq. (6).
**Theorem 1** (Hodge (1989)).: _The space \(\mathbb{R}^{N_{1}}\) of edge functions is a direct sum of three subspaces_
\[\mathbb{R}^{N_{1}}=\operatorname{im}(\mathbf{B}_{1}^{\top})\oplus\ker(\mathbf{L}_{1} )\oplus\operatorname{im}(\mathbf{B}_{2}), \tag{10}\]
_where \(\operatorname{im}(\mathbf{B}_{1}^{\top})\) is the gradient space, \(\ker(\mathbf{L}_{1})\) the harmonic space and \(\operatorname{im}(\mathbf{B}_{2})\) the curl space._
It states that any edge function \(\mathbf{f}_{1}\) is composed of three orthogonal parts: gradient, curl, harmonic functions
\[\mathbf{f}_{1}=\mathbf{f}_{G}+\mathbf{f}_{H}+\mathbf{f}_{C} \tag{11}\]
where \(\mathbf{f}_{G}=\mathbf{B}_{1}^{\top}\mathbf{f}_{0}\), being curl-free, is the gradient of some node function \(\mathbf{f}_{0}\), and \(\mathbf{f}_{C}=\mathbf{B}_{2}\mathbf{f}_{2}\), being div-free, is the curl-adjoint of some triangle function \(\mathbf{f}_{2}\). Lastly, \(\mathbf{f}_{H}\) is harmonic (both div- and curl-free, \(\mathbf{L}_{1}\mathbf{f}_{H}=\mathbf{0}\)). This decomposition is illustrated in Fig. 1. It provides a crucial tool for understanding edge functions and has been used in many applications as we discussed above.
Furthermore, the eigenspace \(\mathbf{U}_{1}\) of \(\mathbf{L}_{1}\) can be reorganized in terms of the three Hodge subspaces as
\[\mathbf{U}_{1}=[\mathbf{U}_{H}\ \mathbf{U}_{G}\ \mathbf{U}_{C}] \tag{12}\]
where \(\mathbf{U}_{H}\) is the eigenvector matrix associated to zero eigenvalues \(\mathbf{\Lambda}_{H}=\mathbf{0}\) of \(\mathbf{L}_{1}\), \(\mathbf{U}_{G}\) is associated to the nonzero eigenvalues \(\mathbf{\Lambda}_{G}\) of \(\mathbf{L}_{\mathrm{d}}\), and \(\mathbf{U}_{C}\) is associated to the nonzero eigenvalues \(\mathbf{\Lambda}_{C}\) of \(\mathbf{L}_{\mathrm{u}}\). Moreover, they span the Hodge subspaces:
\[\begin{split}\operatorname{span}(\mathbf{U}_{H})&= \ker(\mathbf{L}_{1}),\ \operatorname{span}(\mathbf{U}_{G})=\operatorname{im}(\mathbf{B}_{1}^{\top}),\\ \operatorname{span}(\mathbf{U}_{C})&=\operatorname{im}( \mathbf{B}_{2}),\end{split} \tag{13}\]
where \(\operatorname{span}(\bullet)\) denotes all possible linear combinations of columns of \(\bullet\)(Yang et al., 2022).
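Numerically, the three components of Eq. (11) can be recovered by least-squares projections onto \(\operatorname{im}(\mathbf{B}_{1}^{\top})\) and \(\operatorname{im}(\mathbf{B}_{2})\). The sketch below uses a complex with one filled triangle and one empty cycle, so that the harmonic part is nontrivial (the complex and the flow are illustrative):

```python
import numpy as np

# SC2 with nodes {1..5}: filled triangle [1,2,3] and an empty cycle 3-4-5;
# edges: [1,2], [1,3], [2,3], [3,4], [3,5], [4,5]
B1 = np.array([[-1, -1,  0,  0,  0,  0],
               [ 1,  0, -1,  0,  0,  0],
               [ 0,  1,  1, -1, -1,  0],
               [ 0,  0,  0,  1,  0, -1],
               [ 0,  0,  0,  0,  1,  1]], float)
B2 = np.array([[1], [-1], [1], [0], [0], [0]], float)  # only [1,2,3] is filled

rng = np.random.default_rng(1)
f1 = rng.standard_normal(6)                          # an arbitrary edge flow

f0_hat = np.linalg.lstsq(B1.T, f1, rcond=None)[0]    # best-fitting node potential
f_G = B1.T @ f0_hat                                  # gradient part
f2_hat = np.linalg.lstsq(B2, f1, rcond=None)[0]      # best-fitting triangle function
f_C = B2 @ f2_hat                                    # curl part
f_H = f1 - f_G - f_C                                 # harmonic remainder

assert np.allclose(B1.T @ B1 @ f_H + B2 @ B2.T @ f_H, 0)   # L1 f_H = 0
assert abs(f_G @ f_C) < 1e-8 and abs(f_G @ f_H) < 1e-8     # orthogonality
```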
**Div-free, Curl-free Edge GPs.** Given the eigendecomposition in Eq. (12), we can obtain special classes of edge GPs by only using a certain type of eigenvectors when building the edge kernels of Eq. (6). Specifically, we define _gradient_ and _curl edge GPs_ as follows
\[\mathbf{f}_{G}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{G}),\quad\mathbf{f}_{C}\sim\mathcal{GP} (\mathbf{0},\mathbf{K}_{C}) \tag{14}\]
where the gradient kernel and the curl kernel are
\[\mathbf{K}_{G}=\mathbf{U}_{G}\Psi_{G}(\mathbf{\Lambda}_{G})\mathbf{U}_{G}^{\top},\ \mathbf{K}_{C}=\mathbf{U}_{C}\Psi_{C}(\mathbf{\Lambda}_{C})\mathbf{U}_{C}^{\top}. \tag{15}\]
We also define the _harmonic GPs_\(\mathbf{f}_{H}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{H})\) with the harmonic kernel \(\mathbf{K}_{H}=\mathbf{U}_{H}\Psi_{H}(\mathbf{\Lambda}_{H})\mathbf{U}_{H}^{\top}\).
**Proposition 2**.: _Let \(\mathbf{f}_{G}\) and \(\mathbf{f}_{C}\) be the gradient and curl Gaussian processes, respectively. Then, \(\operatorname{curl}\mathbf{f}_{G}=\mathbf{0}\) and \(\operatorname{div}\mathbf{f}_{C}=\mathbf{0}\) with probability one. Moreover, a harmonic Gaussian process \(\mathbf{f}_{H}\) follows \(\operatorname{curl}\mathbf{f}_{H}=\mathbf{0}\) and \(\operatorname{div}\mathbf{f}_{H}=\mathbf{0}\) with probability one._
See proof in Appendix B.2. These Hodge GPs provide more targeted priors for special edge functions which are either div- or curl-free, capable of capturing these key properties. In the case of Matern kernels, we set
\[\Psi_{\square}(\mathbf{\Lambda}_{\square})=\sigma_{\square}^{2}\Big{(}\frac{2\nu_{ \square}}{\kappa_{\square}^{2}}\mathbf{I}+\mathbf{\Lambda}_{\square}\Big{)}^{-\nu_{ \square}}, \tag{16}\]
for \(\square\in\{H,G,C\}\), where \(\sigma_{\square}^{2}\) controls the variance we assign to the function in the subspace, and \(\nu_{\square},\kappa_{\square}\) are the regular Matern parameters, as illustrated in Fig. 2 (left). Note that since \(\mathbf{\Lambda}_{H}=\mathbf{0}\), we consider a scaling function for \(\mathbf{K}_{H}\) as \(\Psi_{H}(\mathbf{0})=\sigma_{H}^{2}\). These Hodge GPs can be derived from SPDEs on edges as well.
**Proposition 3**.: _Given a scaled curl white noise \(\mathbf{w}_{C}\sim\mathcal{N}(\mathbf{0},\mathbf{W}_{C})\) where \(\mathbf{W}_{C}=\sigma_{C}^{2}\mathbf{U}_{C}\mathbf{U}_{C}^{\top}\), consider the following SPDE on edges:_
\[\Phi_{C}(\mathbf{L}_{u})\mathbf{f}_{C}=\mathbf{w}_{C}, \tag{17}\]
_with differential operators_
\[\Phi_{C}(\mathbf{L}_{u})=\Big{(}\frac{2\nu_{C}}{\kappa_{C}^{2}}\mathbf{I}+\mathbf{L}_{u}\Big{)}^{\frac{\nu_{C}}{2}},\;\Phi_{C}(\mathbf{L}_{u})=e^{\frac{\kappa_{C}^{2}}{4}\mathbf{L}_{u}}. \tag{18}\]
_The respective solutions give the curl edge GPs with Matern kernel in Eq. (16) and diffusion kernel_
\[\Psi_{C}(\mathbf{\Lambda}_{C})=\sigma_{C}^{2}e^{-\frac{\kappa_{C}^{2}}{2}\mathbf{\Lambda}_{C}}. \tag{19}\]
_Likewise, we can derive the gradient Matern and diffusion GPs from the SPDEs as Eq. (17) but with operators \(\Phi_{G}(\mathbf{L}_{\mathrm{d}})\) and a scaled gradient white noise._
See proof in Appendix B.3. We can draw the intuition of the SPDE in Eq. (17) from the continuous analogy. In the case of \(\mathbf{L}_{u}\mathbf{f}_{C}=\mathbf{w}_{C}\), the equation \(\operatorname{curl}^{*}\operatorname{curl}f_{1}(\mathbf{x})=w_{1}(\mathbf{x})\) is a stochastic vector Laplace equation for a div-free (solenoidal) vector field, where \(w_{1}(\mathbf{x})\) is the curl adjoint of some vector potential. In physics, this describes the static magnetic field generated by a magnetic vector potential, as well as an incompressible fluid.
### Hodge-compositional Edge GPs
Many edge functions of interest are indeed div- or curl-free, but not all. In this section we combine the gradient, curl and harmonic GPs to define the Hodge-compositional (HC) edge GPs.
**Definition 4**.: A Hodge-compositional edge Gaussian process \(\mathbf{f}_{1}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{1})\) is a sum of gradient, curl and harmonic GPs, i.e., \(\mathbf{f}_{1}=\mathbf{f}_{G}+\mathbf{f}_{C}+\mathbf{f}_{H}\) where
\[\mathbf{f}_{\square}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{\square})\text{ with }\mathbf{K}_{\square}=\mathbf{U}_{\square}\Psi_{\square}(\mathbf{\Lambda}_{\square})\mathbf{U}_{ \square}^{\top} \tag{20}\]
for \(\square=H,G,C\), where their kernels do not share hyperparameters. It holds that \(\mathbf{K}_{1}=\mathbf{K}_{H}+\mathbf{K}_{G}+\mathbf{K}_{C}\) and the three Hodge GPs are independent.
Naturally, we can construct a Matern HC GP as the sum of Matern GPs in the three subspaces with their kernels given by Eq. (16), and likewise for the diffusion HC GP by Eq. (19). Compared to the GPs in Eq. (6), referred to as non-HC GPs henceforth, HC GPs are more flexible and expressive, having more degrees of freedom. We discuss their practical advantages below.
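A minimal sketch of assembling a Matern HC kernel (Eqs. (16) and (20)); all hyperparameter values below are placeholders, and this is an illustration, not the authors' reference implementation:

```python
import numpy as np

def hodge_part(L, psi, tol=1e-8):
    """Sum psi(lam) u u^T over the nonzero eigenpairs of a Laplacian L."""
    lam, U = np.linalg.eigh(L)
    keep = lam > tol
    return (U[:, keep] * psi(lam[keep])) @ U[:, keep].T

def hc_matern_kernel(B1, B2, p):
    """K1 = K_H + K_G + K_C with the Matern scaling of Eq. (16).
    p['G'], p['C'] are (sigma2, nu, kappa); p['H'] is sigma_H^2."""
    L_d, L_u = B1.T @ B1, B2 @ B2.T
    matern = lambda s2, nu, k: (lambda lam: s2 * (2 * nu / k**2 + lam) ** (-nu))
    K_G = hodge_part(L_d, matern(*p['G']))       # gradient kernel on im(B1^T)
    K_C = hodge_part(L_u, matern(*p['C']))       # curl kernel on im(B2)
    # Harmonic kernel: sigma_H^2 times the projector onto ker(L1)
    ones = lambda lam: np.ones_like(lam)
    K_H = p['H'] * (np.eye(L_d.shape[0]) - hodge_part(L_d, ones) - hodge_part(L_u, ones))
    return K_H + K_G + K_C

# Example usage with the toy complex of the earlier sketches
B1 = np.array([[-1, -1, 0, 0], [1, 0, -1, 0], [0, 1, 1, -1], [0, 0, 0, 1]], float)
B2 = np.array([[1], [-1], [1], [0]], float)
K1 = hc_matern_kernel(B1, B2, {'H': 1.0, 'G': (1.0, 2.0, 1.0), 'C': (0.5, 2.0, 2.0)})
```

Building \(\mathbf{K}_{G}\) and \(\mathbf{K}_{C}\) from separate eigendecompositions of \(\mathbf{L}_{\mathrm{d}}\) and \(\mathbf{L}_{\mathrm{u}}\) sidesteps having to classify the eigenvectors of \(\mathbf{L}_{1}\) when an eigenvalue is shared between the gradient and curl spectra.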
**Inductive GP prior.** The HC GP encodes the prior covariance \(\operatorname{Cov}(f_{1}(e),f_{1}(e^{\prime}))\) between edge functions over two edges \(e,e^{\prime}\) as follows: (i) the covariance is the sum of three covariances \(\operatorname{Cov}_{\square}=\operatorname{Cov}(f_{\square}(e),f_{\square}(e^{\prime}))\) for \(\square=H,G,C\); (ii) each \(\operatorname{Cov}_{\square}\) encodes the covariance between the corresponding Hodge parts of \(f_{1}\) without affecting the others; and (iii) no covariance is imposed across different Hodge components, e.g., \(\operatorname{Cov}(f_{G}(e),f_{C}(e^{\prime}))=0\).
In the spatial/edge domain, this is related to separating the down and up adjacencies encoded in the SPDE operators \(\Phi(\cdot)\). From an eigen-spectrum perspective, the eigenvalues \(\Psi_{\square}\) of HC GP's kernels associated to the three Hodge subspaces have individual parameters. This enables capturing the different Hodge components of edge functions, as well as their relevance during hyperparameter optimization, further allowing us to recover the Hodge components in predictions, which we detail in Appendix B.4. Another implication is that, unlike for GPs from Section 3.2, we do not require specific knowledge about the div or curl of the underlying function.
**Comparison to non-HC GPs.** When we view non-HC GPs in terms of the Hodge decomposition, we notice that they put priors on the three Hodge GPs in a way that shares hyperparameters. This enforces learning the same hyperparameters for different Hodge components, resulting in a single function covering the entire edge spectrum, as shown in Fig. 2 (right), as opposed to the three individual functions of the HC one.
This raises issues when separate learning of, say, different lengthscales is required for the gradient and curl components. Non-HC GPs are strictly incapable of meeting this practical need when an eigenvalue is associated to both the gradient and curl spaces. We also delve into this in terms of _edge Fourier features_ in Appendix B.5.
Figure 2: (Left) Matern kernels of gradient, curl and harmonic GPs. (Right) Matern kernel of non-HC GP.
**Connection to diffusion on edges.** The HC diffusion kernel, given by \(\mathbf{K}_{1}=\exp\big(-\big(\frac{\kappa_{G}^{2}}{2}\mathbf{L}_{\mathrm{d}}+\frac{\kappa_{C}^{2}}{2}\mathbf{L}_{\mathrm{u}}\big)\big)\) when the \(\sigma_{\square}^{2}\)s are one, is the Green's function for the edge diffusion of a function \(\mathbf{\phi}:[0,\infty)\times E\to\mathbb{R}\)
\[\frac{\mathrm{d}\mathbf{\phi}(t)}{\mathrm{d}t}=-(\mu\mathbf{L}_{\mathrm{d}}+\gamma\mathbf{ L}_{\mathrm{u}})\mathbf{\phi}(t),\text{ where }\mu,\gamma>0 \tag{21}\]
with \(\mathbf{\phi}|_{t=\tau}=e^{-\tau(\mu\mathbf{L}_{\mathrm{d}}+\gamma\mathbf{L}_{\mathrm{u}})}\mathbf{\phi}(0)\). This equation describes the diffusion process on the edge space of \(\mathrm{SC}_{2}\) that was used for network analysis (Muhammad and Egerstedt, 2006; DeVille, 2021), often arising as the limit of random walks on edges (Schaub et al., 2020). The covariance \(\mathbf{K}_{1}\) within this context encodes the proportion of edge flow traveling from edge \(e\) to \(e^{\prime}\) via down and up edge adjacencies. Its vector field counterpart was used for shape analysis (Zobel et al., 2011; Sharp et al., 2019). Compared to the graph (node) diffusion, which converges (\(t\to\infty\)) to a state that is constant on all nodes as long as the graph is connected, the harmonic state of the edge diffusion can be non-constant, lying in the span of \(\mathbf{U}_{H}\).
**Complexity.** The kernels of HC edge GPs can be constructed in a scalable way by considering the \(l\) largest eigenvalues with off-the-shelf eigen-solvers, e.g., the Lanczos algorithm. See Appendix B.7 for more details on the complexity of HC GPs.
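A sketch of the truncated construction with a sparse Lanczos-based eigensolver; the parameter values, and the choice of which end of the spectrum to keep, are placeholders:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def truncated_matern(L, l=500, sigma2=1.0, nu=2.0, kappa=1.0):
    """Rank-l approximation U_l Psi(Lam_l) U_l^T of an edge Matern kernel.
    Requires l < number of edges; which='LM' keeps the largest-magnitude eigenpairs."""
    lam, U = eigsh(sp.csr_matrix(L), k=l, which='LM')
    psi = sigma2 * (2 * nu / kappa**2 + lam) ** (-nu)
    return (U * psi) @ U.T
```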
### Node-Edge-Triangle GP Interactions
The gradient and curl components of edge functions are (co)derivatives of some node and triangle functions, specifically, \(\mathbf{f}_{G}=\mathbf{B}_{1}^{\top}\mathbf{f}_{0}\) and \(\mathbf{f}_{C}=\mathbf{B}_{2}\mathbf{f}_{2}\) as in Eq.11. Since the derivative of a GP is also a GP, we can then construct a gradient GP from node GPs.
**Corollary 5**.: _Suppose a node function \(\mathbf{f}_{0}\) is a GP \(\mathbf{f}_{0}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{0})\) with \(\mathbf{K}_{0}=\Psi_{0}(\mathbf{L}_{0})=\mathbf{U}_{0}\Psi_{0}(\mathbf{\Lambda}_{0})\mathbf{U}_{0} ^{\top}\). Then, its gradient is an edge GP \(\mathbf{f}_{G}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{G})\) where \(\mathbf{K}_{G}=\mathbf{B}_{1}^{\top}\mathbf{K}_{0}\mathbf{B}_{1}=\mathbf{U}_{G}\Psi_{G}(\mathbf{ \Lambda}_{G})\mathbf{U}_{G}^{\top}\) with_
\[\Psi_{G}(\mathbf{\Lambda}_{G})=\mathbf{\Lambda}_{G}\Psi_{0}(\mathbf{\Lambda}_{G}). \tag{22}\]
The proof follows from (i) derivatives preserving Gaussianity, and (ii) \(\mathbf{L}_{0}\) and \(\mathbf{L}_{\mathrm{d}}\) having the same nonzero eigenvalues. We can also obtain a curl edge GP from a GP on triangles likewise. In turn, for an edge GP, its div is a node GP and its curl is a GP on triangles. We refer to Appendix B.8 for the proof and more details.
Exploiting this interaction between GPs on nodes, edges and triangles can lead to new useful GPs, especially when functions on nodes, edges and triangles are intrinsically related by physical laws. For example, in water networks, water flowrates in pipes are often related to the gradient of hydraulic heads on nodes, as we will show in Section 4.3. This implies that given an appropriate node GP, say, the node Matern GP in Eq. (2), a good edge GP prior can be imposed as its gradient, as in Corollary 5. Furthermore, by leveraging this interaction, we can construct HC edge GPs as follows.
**Proposition 6**.: _Let \(\mathbf{f}_{1}\) be an edge function defined in Eq.11 with harmonic component \(\mathbf{f}_{H}\), node function \(\mathbf{f}_{0}\) and triangle function \(\mathbf{f}_{2}\). If we model \(\mathbf{f}_{0}\) as a GP on nodes \(\mathbf{f}_{0}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{0})\), model \(\mathbf{f}_{2}\) as a GP on triangles \(\mathbf{f}_{2}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{2})\), and \(\mathbf{f}_{H}\) as a harmonic GP \(\mathbf{f}_{H}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{H})\), then we have GP \(\mathbf{f}_{1}\sim\mathcal{GP}(\mathbf{0},\mathbf{K}_{1})\) with_
\[\mathbf{K}_{1}=\mathbf{K}_{H}+\mathbf{B}_{1}^{\top}\mathbf{K}_{0}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{K}_{2 }\mathbf{B}_{2}^{\top}. \tag{23}\]
See proof in Appendix B.9. This alternative HC GP incorporates the Hodge theorem prior in a way that directly relates the node potential and the triangle function. It can be applicable when GP priors of node or triangle functions are more discernible. A continuous analogy has been applied by Berlinghieri et al. (2023) to construct Helmholtz GPs for vector fields.
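In code, these node-edge-triangle constructions are direct matrix products; a sketch using the toy complex and placeholder parameters of the earlier sketches:

```python
import numpy as np

def matern_from_laplacian(L, sigma2=1.0, nu=2.0, kappa=1.0):
    # Matern kernel sigma2 (2 nu / kappa^2 I + L)^(-nu), as in Eq. (2)
    lam, U = np.linalg.eigh(L)
    return (U * (sigma2 * (2 * nu / kappa**2 + lam) ** (-nu))) @ U.T

B1 = np.array([[-1, -1, 0, 0], [1, 0, -1, 0], [0, 1, 1, -1], [0, 0, 0, 1]], float)
B2 = np.array([[1], [-1], [1], [0]], float)

K0 = matern_from_laplacian(B1 @ B1.T)       # node kernel
K_G = B1.T @ K0 @ B1                         # gradient edge kernel, Corollary 5

K2 = matern_from_laplacian(B2.T @ B2)        # triangle kernel
L1 = B1.T @ B1 + B2 @ B2.T
lam1, U1 = np.linalg.eigh(L1)
U_H = U1[:, lam1 < 1e-8]                     # harmonic eigenvectors
K_H = 1.0 * U_H @ U_H.T                      # sigma_H^2 = 1 (placeholder)

K1 = K_H + K_G + B2 @ K2 @ B2.T              # Eq. (23)
```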
## 4 Experiments
We apply HC GPs for edge-based inference tasks in three applications: foreign currency exchange (forex), ocean flow and water supply networks (WSNs). We showcase the structured prior on edges in these tasks by comparing them to baselines: (i) Euclidean GPs with RBF and Matern kernels, and (ii) Node GPs on the line-graph--built by exchanging the nodes with edges in the original graph (Godsil and Royle, 2001). To highlight the prior of the Hodge decomposition, we also compare with non-HC GPs. For each of them, we consider Matern and diffusion kernels. We perform GP regression with Gaussian likelihood for model fitting using the GPyTorch framework (Gardner et al., 2018). We use the root mean squared error (RMSE) to evaluate the predictive mean and the negative log predictive density (NLPD) for prediction uncertainty. We refer to Appendix C for full experimental details.
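For evaluation, the following library-free sketch scores one train/test split given a precomputed edge kernel; the noise level is a placeholder, and this stands in for, rather than reproduces, the GPyTorch pipeline used in the experiments:

```python
import numpy as np

def gp_metrics(K, y, train_idx, test_idx, noise=1e-2):
    """Posterior prediction on test edges from training edges, plus RMSE and NLPD.
    K is a precomputed edge kernel matrix (e.g., an HC Matern kernel)."""
    Ktt = K[np.ix_(train_idx, train_idx)] + noise * np.eye(len(train_idx))
    Kts = K[np.ix_(train_idx, test_idx)]
    mean = Kts.T @ np.linalg.solve(Ktt, y[train_idx])
    var = np.diag(K[np.ix_(test_idx, test_idx)]) \
          - np.sum(Kts * np.linalg.solve(Ktt, Kts), axis=0) + noise
    rmse = np.sqrt(np.mean((y[test_idx] - mean) ** 2))
    nlpd = np.mean(0.5 * np.log(2 * np.pi * var)
                   + 0.5 * (y[test_idx] - mean) ** 2 / var)
    return rmse, nlpd
```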
### Foreign Currency Exchange
A forex market can be modeled as a network where nodes represent currencies and edges the exchangeable pairs (Jiang et al., 2011). Forex rates in a fair market ideally satisfy the _arbitrage-free_ condition: for any currencies \(i,j,k\), we have \(r^{i/j}r^{j/k}=r^{i/k}\) with \(r^{i/j}\) the rate between \(i\) and \(j\). That is, the exchange path \(i\to j\to k\) provides no gain or loss over a direct path \(i\to k\). If we model forex rates as edge flows \(f_{1}(i,j)=\log(r^{i/j})\), this condition translates into \(\mathbf{f}_{1}\) being a gradient flow, hence curl-free, i.e., \(f_{1}(i,j)+f_{1}(j,k)-f_{1}(i,k)=0\). Here we consider real-world forex data on 2018/10/05 with the 25 most traded
currencies forming 210 exchangeable pairs and 710 triangles, formed by any three pairwise exchangeable currencies (Oanda, 2018; Jia et al., 2019). We randomly sample 20% of edges for training and test on the rest.
From Table 1, we see that HC GPs achieve significantly lower RMSEs with high certainty (small NLPDs), as visualized in Fig. 3. This shows their ability to automatically capture the curl-free nature of the forex rates. As shown in Fig. 3(e), the HC Matern GP learns that the harmonic and curl components should vanish. In contrast, the other three give poor predictions, due to: (i) Euclidean GPs being oblivious of the structure of edge functions; (ii) line-graph GPs imposing structure through node priors, which is inappropriate in this case; and (iii) non-HC GPs being unable to induce the curl-free prior without removing the gradient. This results from sharing parameters in their kernels for different Hodge components. As shown in Fig. 3(e), the non-HC Matern learns a nonzero kernel in the whole spectrum, incapable of removing the non-arbitrage-free part.
### Ocean Flow Analysis
We then consider the edge-based ocean flow learning following the setup in Chen et al. (2021). The flow velocity fields in the ocean were converted using the linear integration approximation to edge flows within a SC\({}_{2}\) whose nodes are 1500 buoys sampled from North Pacific ocean drifter records in 2010-2019 (Lumpkin and Centurioni, 2019). We apply both non-HC and HC GP models to predict the converted edge flows. Given the large number of edges (\(\sim\)20k), we consider a truncated approximation of kernels with eigenpairs associated with the 500 largest eigenvalues (Knyazev, 2001). We randomly sample 20% of edges for training and test on the rest.
Table 1: Forex rates inference results.

| Method | RMSE (Diffusion) | RMSE (Matern) | NLPD (Diffusion) | NLPD (Matern) |
| --- | --- | --- | --- | --- |
| Euclidean | \(2.17\pm 0.13\) | \(2.19\pm 0.12\) | \(2.12\pm 0.07\) | \(2.20\pm 0.18\) |
| Line-Graph | \(2.43\pm 0.07\) | \(2.46\pm 0.07\) | \(2.28\pm 0.04\) | \(2.32\pm 0.03\) |
| Non-HC | \(2.48\pm 0.07\) | \(2.47\pm 0.08\) | \(2.36\pm 0.07\) | \(2.34\pm 0.04\) |
| HC | \(0.08\pm 0.12\) | \(0.06\pm 0.12\) | \(-3.52\pm 0.02\) | \(-3.52\pm 0.02\) |
Figure 4: (a-e) (Interpolated) ocean flow in the vector field domain. (f) Learned diffusion kernels in the spectrum.
Figure 3: (a-d) Interpolating a smaller forex market (for better visibility) with train ratio 50% where dashed (solid) edges are used for training (test). (e) Learned Matérn kernels of HC and non-HC GPs in the spectrum.
From Table 2, we notice that HC and non-HC GPs exhibit similar performance. This arises from the comparable behavior of the gradient and curl components, as depicted in Fig. 4(f), where the learned gradient and curl diffusion kernels display close patterns. In contrast, Euclidean GPs and line-graph GPs give poor predictions, emphasizing the importance of structured edge priors.
We further convert the predicted edge flows into the vector field domain, as shown in Fig. 4, based on Chen et al. (2021). We see that the predictions capture the pattern of the original velocity field. We approximate the predicted velocity field uncertainty by computing the average \(\ell_{2}\) distance per location from 50 posterior samples to the mean in the vector field domain. As shown in Fig. 4, we see that at most locations the velocity field predictions have small standard deviations, except for a few locations (some small islands around the lower left) where the original fields exhibit more discontinuities. Moreover, since HC GPs enable the direct recovery of gradient and curl components, we show their corresponding vector fields in Fig. 4, giving better insight into how ocean currents behave, which is of particular interest in oceanography. For example, we can observe the well-known North Pacific gyres, including the North Equatorial, Kuroshio and Alaska currents, in Fig. 4.
### Water Supply Networks
Network-based methods have been used in WSNs where tanks or reservoirs are represented by nodes, and pipes by edges (Zhou et al., 2022). By modeling the hydraulic heads as node functions \(\mathbf{f}_{0}\) and the water flowrates as edge functions \(\mathbf{f}_{1}\), the commonly used empirical equation connecting the two reads as \(\mathbf{B}_{1}^{\top}\mathbf{f}_{0}=\mathbf{\tilde{f}}_{1}:=\text{diag}(\mathbf{r})\mathbf{f}_{1}^{1.852}\), where \(r_{e}\) is the resistance of pipe \(e\) and the exponentiation is applied elementwise (Dini and Tabesh, 2014).
Table 2: Ocean flow inference results.

| Method | RMSE (Diffusion) | RMSE (Matern) | NLPD (Diffusion) | NLPD (Matern) |
| --- | --- | --- | --- | --- |
| Euclidean | \(1.00\pm 0.01\) | \(1.00\pm 0.00\) | \(1.42\pm 0.01\) | \(1.42\pm 0.10\) |
| Line-Graph | \(0.99\pm 0.00\) | \(0.99\pm 0.00\) | \(1.41\pm 0.00\) | \(1.41\pm 0.00\) |
| Non-HC | \(0.35\pm 0.00\) | \(0.35\pm 0.00\) | \(0.33\pm 0.00\) | \(0.36\pm 0.03\) |
| HC | \(0.34\pm 0.00\) | \(0.35\pm 0.00\) | \(0.33\pm 0.01\) | \(0.37\pm 0.04\) |
Figure 5: (a-e) Posterior mean and standard deviation (std) based on the Matérn node GPs, and the HC and non-HC Matérn edge GPs. Squared (Circled) nodes represent the node samples for training (testing). Dashed (solid) edges denote the edge samples for training (testing). (f) The learned edge GPs in the spectrum.
Table 3: WSN inference results.

| Method | Node Heads RMSE | Node Heads NLPD | Edge Flowrates RMSE | Edge Flowrates NLPD |
| --- | --- | --- | --- | --- |
| Diffusion, non-HC | \(0.16\pm 0.05\) | \(0.72\pm 2.06\) | \(0.32\pm 0.05\) | \(0.97\pm 1.80\) |
| Matérn, non-HC | \(0.16\pm 0.04\) | \(0.71\pm 2.39\) | \(0.26\pm 0.05\) | \(0.10\pm 0.13\) |
| Diffusion, HC | \(0.15\pm 0.04\) | \(-0.47\pm 0.14\) | \(0.22\pm 0.03\) | \(-0.20\pm 0.13\) |
| Matérn, HC | \(0.15\pm 0.04\) | \(-0.25\pm 0.48\) | \(0.23\pm 0.03\) | \(-0.45\pm 0.49\) |
We consider the Zhi Jiang WSN with 114 tanks (including one source) and 164 pipes (without triangles, Dandy (2016)) and simulate a scenario based on Kliese et al. (2017). We perform joint state estimation of heads \(\mathbf{f}_{0}\) and the adjusted flowrates \(\tilde{\mathbf{f}}_{1}\), by modeling them as GPs on nodes and edges, respectively. To compare HC and non-HC edge GPs, for a node GP with kernel \(\mathbf{K}_{0}\), we consider the HC GP as its gradient, as discussed in Corollary 5. For the non-HC one, we consider a kernel \(\mathbf{K}_{1}\) of the same type as \(\mathbf{K}_{0}\). We randomly sample 50% of nodes and edges for training and test on the rest.
From Table 3, we see that while the mean predictions of heads remain similar whether we use HC or non-HC edge GPs, the former perform better for edge flows, particularly in the pipes around the source, as shown in Figs. 5b and 5c. Moreover, HC GPs have better prediction uncertainty, with smaller average NLPDs for both heads and flowrates, as illustrated in Figs. 5d and 5e. This is because the HC GPs that we use share parameters with the node GPs, helping to calibrate the uncertainty of the head predictions. They also capture the physical prior of the pipe equation, which assumes the flowrates are a gradient flow. As shown in Fig. 5f, the HC Matern GP learns a kernel with a trivial harmonic prior and a nonzero gradient prior at small eigenvalues, reflecting the gradient nature of the pipe flowrates. Note that due to the randomness of training samples, the WSN, having small edge connectivity, may become disconnected, causing significant variance in the NLPDs.
## 5 Conclusion
We introduced Hodge-compositional (HC) Gaussian processes (GPs) for modeling functions on the edges of simplicial 2-complexes. These HC GPs are constructed by combining three individual GPs, each designed to capture the gradient, curl and harmonic components of the Hodge decomposition of edge functions. This allows them to independently learn each component, making them more expressive and interpretable when compared to various alternatives. They can also be constructed by leveraging the physical interactions between functions on nodes, edges and triangles. We demonstrated their practical potential in learning real-world flow data.
## Acknowledgements
MY was supported by the TU Delft AI Labs Programme. VB was supported by an ETH Zurich Postdoctoral Fellowship.
|
2301.00640 | Joint reconstructions of growth and expansion histories from stage-IV
surveys with minimal assumptions. II. Modified gravity and massive neutrinos | Based on a formalism introduced in our previous work, we reconstruct the
phenomenological function $G_{\rm eff}(z)$ describing deviations from General
Relativity (GR) in a model-independent manner. In this alternative approach, we
model $\mu\equiv G_\mathrm{eff}/G$ as a Gaussian process and use forecasted
growth-rate measurements from a stage-IV survey to reconstruct its shape for
two different toy models. We follow a two-step procedure: (i) we first
reconstruct the background expansion history from Supernovae (SNe) and Baryon
Acoustic Oscillation (BAO) measurements; (ii) we then use it to obtain the
growth history $f\sigma_8$, that we fit to redshift-space distortions (RSD)
measurements to reconstruct $G_\mathrm{eff}$. We find that upcoming surveys
such as the Dark Energy Spectroscopic Instrument (DESI) might be capable of
detecting deviations from GR, provided the dark energy behavior is accurately
determined. We might even be able to constrain the transition redshift from
$G\to G_\mathrm{eff}$ for some particular models. We further assess the impact
of massive neutrinos on the reconstructions of $G_\mathrm{eff}$ (or $\mu$)
assuming the expansion history is given, and only the neutrino mass is free to
vary. Given the tight constraints on the neutrino mass, and for the profiles we
considered in this work, we recover numerically that the effect of such massive
neutrinos does not alter our conclusions. Finally, we stress that incorrectly
assuming a $\Lambda$CDM expansion history leads to a degraded reconstruction of
$\mu$, and/or a non-negligible bias in the
($\Omega_\mathrm{m,0}$,$\sigma_{8,0}$)-plane. | Rodrigo Calderón, Benjamin L'Huillier, David Polarski, Arman Shafieloo, Alexei A. Starobinsky | 2023-01-02T13:04:17Z | http://arxiv.org/abs/2301.00640v3 | Joint reconstructions of growth and expansion histories from stage-IV surveys with minimal assumptions II: Modified gravity and massive neutrinos.
###### Abstract
Based on a formalism introduced in our previous work, we reconstruct the phenomenological function \(G_{\rm eff}(z)\) describing deviations from General Relativity (GR) in a model-independent manner. In this alternative approach, we model \(\mu\equiv G_{\rm eff}/G\) as a Gaussian process and use forecasted growth-rate measurements from a stage-IV survey to reconstruct its shape for two different toy models. We follow a two-step procedure: (i) we first reconstruct the background expansion history from Supernovae (SNe) and Baryon Acoustic Oscillation (BAO) measurements; (ii) we then use it to obtain the growth history \(f\sigma_{8}\), which we fit to redshift-space distortions (RSD) measurements to reconstruct \(G_{\rm eff}\). We find that upcoming surveys such as the Dark Energy Spectroscopic Instrument (DESI) might be capable of detecting deviations from GR, provided the dark energy behavior is accurately determined. We might even be able to constrain the transition redshift from \(G\to G_{\rm eff}\) for some particular models. We further assess the impact of massive neutrinos on the reconstructions of \(G_{\rm eff}\) (or \(\mu\)) assuming the expansion history is given, and only the neutrino mass is free to vary. Given the tight constraints on the neutrino mass, and for the profiles we considered in this work, we recover numerically that the effect of such massive neutrinos does not alter our conclusions. Finally, we stress that incorrectly assuming a \(\Lambda\)CDM expansion history leads to a degraded reconstruction of \(\mu\), and/or a non-negligible bias in the (\(\Omega_{\rm m,0},\sigma_{8,0}\))-plane.
## I Introduction
Addressing the late-time accelerated phase of expansion of the Universe remains a major challenge for fundamental physics [1; 2]. Though most observations to date are in agreement with the standard (Concordance) model of cosmology (\(\Lambda\)CDM), alternative explanations for Dark Energy (DE)--other than a cosmological constant \(\Lambda\)--are still up for debate (see e.g. [3]). In particular, modifying the laws of gravity (beyond Einstein's GR) at large scales remains a tantalizing possibility [4; 5]. Besides the exact nature of the dark energy (DE) component and its (effective) equation of state, additional modifications come with the properties of the relativistic degrees of freedom, notably the neutrino sector. Interestingly, despite the wide class of modified-gravity (MG) scenarios explored in the last decades, observations seem to suggest that GR remains our best description of gravitational interactions, where dark energy is in the form of a cosmological constant in the Einstein field equations. For example, the detection of GW 170817, together with its electromagnetic counterpart GRB 170817A [6], implies that gravitational waves travel at the speed of light--ruling out a large subclass of Horndeski models predicting a tensor speed \(c_{\rm T}\neq c\) at the present epoch [7]. Hence the detection of gravitational waves (GW) has added stringent constraints on modified gravity models in addition to local constraints. Note that a viable cosmic expansion history can give additional strong constraints, for example on \(f(R)\) models [8]1. At the phenomenological level, most modified theories of gravity predict a time- (and possibly scale-) dependent _effective_ gravitational coupling \(G_{\rm eff}(z)\)[12; 13] entering the equation for the growth of perturbations. Thus, detecting a deviation from Newton's constant would be a smoking gun for physics beyond \(\Lambda\)CDM and even beyond GR.
Footnote 1: Viable cosmological models of the present Universe in \(f(R)\) gravity satisfying these constraints were independently constructed soon after that paper in [9; 10; 11].
Let us present now the basic formalism of our approach, starting with the background. We consider here spatially flat Friedmann-Lemaitre-Robertson-Walker universes with
\[h^{2}(z)\equiv H^{2}/H_{0}^{2}=\Omega_{\rm m,0}(1+z)^{3}+(1-\Omega_{\rm m,0}) f_{\rm DE}(z)\, \tag{1}\]
where \(f_{\rm DE}=\rho_{\rm DE}(z)/\rho_{\rm DE}(z=0)\). While the second term in (1) becomes generically subdominant in the past for viable cosmologies, this has to be enforced explicitly at
high redshifts (where no data are available) once we use Gaussian Processes in order to reconstruct \(h(z)\)[14]. We stress further that the parameter \(\Omega_{\rm m,0}\) refers to _clustered_ dust-like matter only. The second term of (1) is more general than the compact notation suggests, see the discussion given in [14]. We turn now to the perturbations. We use the following conventions and notations [12] (see also e.g. [15]) in the conformal Newtonian gauge, where the perturbed FLRW metric is described by (\(c=1\))
\[{\rm d}s^{2}=-(1+2\phi){\rm d}t^{2}+(1-2\psi)a^{2}{\rm d}{\mathbf{x}}^{2}, \tag{2}\]
where \(\phi\) and \(\psi\) are the Bardeen potentials. Phenomenologically, on subhorizon scales, in many modified gravity models the departure from the standard perturbations growth in GR is encoded in the modified Poisson equation [12] (see also e.g. [15; 16; 17])
\[\nabla^{2}\phi=4\pi G_{\rm eff}(a,{\mathbf{k}})\ \rho\equiv 4\pi G\mu(a,{\mathbf{k}}) \ \rho. \tag{3}\]
GR corresponds obviously to \(\mu\equiv 1\). The relation between the Bardeen potentials is expressed as follows
\[\phi\equiv\eta(a,{\mathbf{k}})\ \psi\, \tag{4}\]
where the two potentials are generically unequal in these models. The subhorizon modes are essentially affected by \(\mu\), as is explicit from Eq. (5) given below, while superhorizon modes are affected by both \(\mu\) and \(\eta\)[16]. In this work, given the datasets considered, we restrict our attention to \(\mu\) (see e.g. [18; 19] for constraints on \(\eta\)). In what follows, we will use \(G_{\rm eff}\) and \(\mu\) interchangeably, since \(\mu\) is just \(G_{\rm eff}\) in units of \(G\).
\[\ddot{\delta}+2H\dot{\delta}=4\pi G\,\rho\,\delta\,\mu(z,{\mathbf{k}}), \tag{5}\]
where \(\delta\equiv\delta\rho/\rho\) is the density contrast of dust-like matter. For modes of cosmological interest, the \(k\)-dependence of \(\mu\) is often mild and can be neglected in a first approach [20; 21; 22] - see e.g. [23; 24; 25; 26] for current and future constraints on the scale-dependence of \(\mu\). Note that this is certainly the case for the unscreened scalar-tensor model considered in [12]. We will restrict ourselves here to phenomenological models where \(\mu\) or \(G_{\rm eff}\) is scale independent.
The above equation can be re-written in terms of the growth factor \(f\equiv\delta^{\prime}/\delta\), to give
\[f^{\prime}+\left(f+2+\frac{h^{\prime}}{h}\right)f-\frac{3}{2}\Omega_{\rm m}(z )\mu(z)=0\, \tag{6}\]
where a prime stands for derivative with respect to \(N\equiv\ln a\). From an observational standpoint, redshift space distortions (RSD) provides us with growth rate measurements of the quantity
\[f\sigma_{8}\equiv\frac{\sigma_{8,0}}{\delta_{0}}f\delta=\frac{\sigma_{8,0}}{ \delta_{0}}\delta^{\prime},\quad\text{with}\quad\delta_{0}=\delta(z=0). \tag{7}\]
We remind that the quantities \(\Omega_{i}\) appearing in (1) and (6) are defined in the standard way as in GR with the help of Newton's constant \(G\).
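To make this pipeline concrete, here is a minimal sketch that integrates the second-order form of Eq. (6) and returns \(f\sigma_{8}(z)\). The default parameter values follow the fiducial choices of Eq. (8), \(f_{\rm DE}\) defaults to \(\Lambda\)CDM as a simplification, and the initial condition \(\delta\propto a\) deep in matter domination is an illustrative assumption:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fsigma8(z_eval, Om0=0.28, sigma80=0.81, mu=lambda z: 1.0, f_DE=lambda z: 1.0):
    """Integrate delta'' + (2 + h'/h) delta' = (3/2) Omega_m(N) mu(z) delta,
    with N = ln a, and return f*sigma_8(z). Assumes a scale-independent mu(z);
    mu = 1 recovers GR."""
    def h2(N):                                    # Eq. (1)
        z = np.exp(-N) - 1.0
        return Om0 * np.exp(-3.0 * N) + (1.0 - Om0) * f_DE(z)

    def rhs(N, y):
        delta, ddelta = y
        z = np.exp(-N) - 1.0
        eps = 1e-5
        dlnh = (np.log(h2(N + eps)) - np.log(h2(N - eps))) / (4.0 * eps)  # h'/h
        Om = Om0 * np.exp(-3.0 * N) / h2(N)
        return [ddelta, -(2.0 + dlnh) * ddelta + 1.5 * Om * mu(z) * delta]

    N_ini = np.log(1.0 / (1.0 + 100.0))           # start deep in matter domination
    N_eval = np.log(1.0 / (1.0 + np.atleast_1d(z_eval)))
    sol = solve_ivp(rhs, [N_ini, 0.0], [np.exp(N_ini), np.exp(N_ini)],  # delta ~ a
                    t_eval=np.unique(np.append(N_eval, 0.0)), rtol=1e-8)
    delta0 = sol.y[0, -1]                         # delta(z = 0)
    return sigma80 / delta0 * np.interp(N_eval, sol.t, sol.y[1])  # Eq. (7)

# GR baseline over a DESI-like redshift range; a CPL background can be passed as
# f_DE = lambda z: (1+z)**(3*(1+w0+wa)) * np.exp(-3*wa*z/(1+z))
fs8_gr = fsigma8(np.linspace(0.1, 1.9, 19))
```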
In this work, we will use the synergy between geometrical background probes (Type Ia Supernovae [SN] and Baryon Acoustic Oscillations [BAO]) and growth measurements from RSD to constrain the phenomenological function \(\mu(z)\) describing the departures from GR. While current analysis pipelines rely on various assumptions (namely, \(\Lambda\)+GR) when extracting the cosmological information from large-scale structure, in particular the BAO and RSD measurements, we expect that our results will remain essentially unaffected when such effects are taken into account.
The paper is organized as follows. We start by describing in detail the methodology and the data used in Section II. In Section III, we apply the method to simulated RSD data generated with \(\mu\neq 1\) in both idealistic and realistic scenarios and further discuss the implications of the results. We also comment on the effects of incorrectly assuming a \(\Lambda\)CDM expansion history on the reconstructions in Section III.3. In Section IV, we consider separately the inclusion of massive neutrinos.
## II Method and data
### Models & Mock Data
For the data, we generate mock \(f\sigma_{8}\) measurements for a (stage-IV) DESI-like survey following Tables 2.3-2.7 in [27] (covering \(14{\rm K}\,{\rm deg}^{2}\)) and for different behaviours of \(G_{\rm eff}\) that we aim to reconstruct. Namely, we consider an \(f(R)\)-inspired _bump-like_ profile (which we refer to simply as "Bump") and a smooth _step-like_ transition ("Dip" hereafter) in the recent past towards the weak gravity regime (\(G_{\rm eff}<G\)), see e.g. [28; 29]2. These two profiles are treated purely phenomenologically here; indeed, viable \(f(R)\) theories are actually screened and allow \(G_{\rm eff,0}\) to deviate from \(G\) today. Nonetheless, due to the \(k\)-dependence of \(\mu\), which we do not discuss here, cosmic scales smaller than some critical scale would experience a boost in their growth in the recent past.
Footnote 2: Indeed, both such profiles can occur in viable cosmological models in \(f(R)\) gravity, see [30] in particular, especially in the case of oscillations around phantom divide [31].
In the case of the dip, we consider it mainly to assess whether such profiles can be accurately reconstructed using our model-independent approach. Note in this context that a decreasing \(\mu\) is impossible in massless scalar-tensor models [32]. To summarize, these hybrid profiles allow to test our reconstruction independently of any theoretical prior.
The behaviors of the phenomenological functions \(\mu^{\rm fid}(z)\) used to generate the data are depicted by the
dashed-lines in the upper panel of Fig. 1, while the corresponding growth \(f\sigma_{8}(z)\) evolutions are shown in the lower panel. We also make use of stage-IV SN+BAO data to determine the background expansion history \(h(z)\) without relying on a specific parametric model, as explained in SSIII.2. The fiducial background used to generate the data is a Chevallier-Polarski-Linder (CPL) model [33; 34], extensively discussed in [14] with
\[\theta^{\text{fid}}=\{\Omega_{\text{m},0}^{\text{fid}}=0.28,w_{0}^ {\text{fid}}=-0.95,w_{a}^{\text{fid}}=0.3,\\ h_{\text{fid}}=0.7,\sigma_{8,0}^{\text{fid}}=0.81\}, \tag{8}\]
where \(H_{0}=100\,h\,\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}\). More details on the background-only (SN+BAO) mock data can also be found in [14]. Already at this stage, let us note that modified theories of gravity can lead to a modified Chandrasekhar mass (with \(m_{\text{ch}}\sim G_{\text{eff}}^{-3/2}\)[35]), relevant for SNeIa analyses, which can affect the absolute magnitude (_e.g._\(\Delta M=\frac{15}{4}\log\mu(z)\) in scalar-tensor theories3[36; 37]) and hence the distance measurements obtained from such standard candles [38; 39; 40]. This effect has even been proposed as a possible explanation for the mismatch between early and late-time measurements of the Hubble constant \(H_{0}\), see e.g. [41; 42; 43; 44; 45; 46; 47]. However, for our purposes, we neglect these effects and assume the \(h(z)\) measurements obtained from SNe are independent of \(\mu\) in the current analysis. The inclusion of these effects for a specific model might be the subject of future works.
Footnote 3: Note however that this theoretical correction can be even smaller, if the stretch correction is taken into account (E. Linder, private communication).
### The Method
To explore possible modifications of gravity at late-times, we model \(G_{\text{eff}}(z)\) as a Gaussian Process4 (GP) centered around Newton's constant \(G\), such that
Footnote 4: We do not delve into the details of Gaussian Process modeling here, instead we refer the reader to our previous work [14] and the excellent review [48] for more.
\[\mu(z;\sigma_{f},\ell_{f},z_{c})=\begin{cases}\mathcal{GP}(\bar{f}=1,k(\sigma_ {f},\ell_{f})),&\text{for }z<z_{c}\\ 1,&\text{for }z\geq z_{c}\end{cases} \tag{9}\]
so that we recover GR at large-\(z\). We further impose the conditions \(\mu(z=0)=1\pm 0.05\) and \(\mu^{\prime}(z=0)=\mu^{\prime}(z_{c})=0\), where \({}^{\prime}\equiv d/dz\)--see Appendix A for details. The second condition allows us to smoothly recover \(G_{\text{eff}}=G\) above a certain \(z_{c}\) and at \(z=0\), while exploring possible departures from GR at intermediate redshifts \(0.1<z<10\) (see e.g. [49; 45; 50; 46; 47; 48; 49] for other approaches). The first condition is not necessary (see our discussion at the beginning of this Section), but from a technical point of view it can help guide our reconstructions at very low \(z\) where we are volume-limited and uncertainties become quite large. Furthermore, when dealing with real data, we do not know the true behaviour of \(\mu\), and whether the underlying model is screened or not, hence the two representative behaviours at \(z=0\) chosen for our profiles. It is comforting to find that the first condition does not alter the reconstruction of the second profile around \(z=0\) as illustrated by the blue curves in Fig. 1.
We use a squared exponential kernel given by
\[k(x,x^{\prime};\sigma_{f},\ell_{f})=\sigma_{f}^{2}\,e^{-(x-x^{\prime})^{2}/2 \ell_{f}^{2}}, \tag{10}\]
where \(\sigma_{f}\) and \(\ell_{f}\) determines the amplitude and typical length-scale of the correlations, respectively [48].
In a Bayesian spirit, we give flat priors to the cosmological and (hyper)parameters, listed in Table 1. We sample the parameter space using Markov Chain Monte Carlo (MCMC) methods, as implemented in emcee[55; 56]. At each step in the MCMC, we draw a sample of \(\mu(N=\ln a)\equiv G_{\text{eff}}/G\sim\mathcal{GP}(1,K)\), characterized by \((\sigma_{f},\ell_{f},z_{c})\), and solve the growth equation, with a given \(\sigma_{8,0}\), to obtain a solution \(f\sigma_{8}(z)\) that we confront to RSD data. Those samples of \(\mu(z)\) retracing a similar shape to \(\mu^{\text{fid}}\) will yield a better fit to growth data, and thus will be statistically favored in the long run. Averaging over a large number of realizations gives the median shape of \(\mu(z)\) and \(95\%\,(2\sigma)\) confidence intervals around it. This is along the lines of what was done in [14] to reconstruct \(f_{\text{DE}}\), but this time we also include conditions on the derivatives of the GP, to smoothly recover the form in Eq. (9), following the formalism described in Appendix A.
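A simplified sketch of the per-step operations inside this MCMC is given below; it draws an unconditioned GP sample and pins \(\mu=1\) above \(z_{c}\), omitting the \(\mu(0)\) and derivative constraints of Appendix A:

```python
import numpy as np

def sample_mu(z_grid, sigma_f, ell_f, z_c, rng):
    """One GP realization of mu = G_eff/G with the kernel of Eq. (10),
    over N = ln a, pinned to mu = 1 for z >= z_c (simplified)."""
    x = np.log(1.0 / (1.0 + z_grid))
    K = sigma_f**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell_f) ** 2)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x)))
    mu = 1.0 + L @ rng.standard_normal(len(x))   # GP centered on G_eff = G
    mu[z_grid >= z_c] = 1.0                      # recover GR above z_c
    return mu

def log_like(fs8_model, fs8_data, fs8_err):
    # Gaussian likelihood for (assumed uncorrelated) f*sigma_8 measurements
    return -0.5 * np.sum(((fs8_model - fs8_data) / fs8_err) ** 2)
```

Each such \(\mu\) sample would then be propagated through a growth solver like the earlier sketch to produce \(f\sigma_{8}(z)\), and the chain advanced with emcee over \(\theta\).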
## III Results and Discussions
### Ideal case: Background is perfectly known
We first consider the idealistic case where the background expansion history is perfectly known. In other words, we fix \(\Omega_{\text{m},0}\) and \(\sigma_{8,0}\) to their fiducial values, and further assume that the dark energy evolution is known \(f_{\text{DE}}(z)=f_{\text{DE}}^{\text{fid}}\). Although this is far from being a realistic scenario, it allows us to test our method and quantify the uncertainties purely coming from the modifications of gravity, encoded in \(G_{\text{eff}}\).
The posterior distributions for \(\mu(z)\) assuming perfect knowledge of \(h(z)\) and \(\sigma_{8,0}\) are shown in Fig. 1. If the background (and the amplitude of fluctuations \(\sigma_{8,0}\)) are perfectly known, the RSD data alone is enough to perform an accurate (within \(2\sigma\)) reconstruction of the underlying theory of gravity, _i.e._ \(G_{\text{eff}}(z)\).
Table 1: Uniform priors for the parameters used in the MCMC analyses.

| Parameter | \(\sigma_{8,0}\) | \(\log_{10}\sigma_{f}\) | \(\log_{10}\ell_{f}\) | \(\log_{10}z_{c}\) |
| --- | --- | --- | --- | --- |
| Prior | \([0.5,1.2]\) | \([-3,0.5]\) | \([-1,0.2]\) | \([-1,1]\) |
In the next subsection, we take a more realistic approach, where only minimal assumptions on the background are made5 and \(h(z)\) is purely determined from the data.
Footnote 5: We only assume a flat FLRW universe, and that the Hubble rate is a sum of a matter term and an “effective” DE component [14]
### Realistic case: \(\sigma_{8,0}\) free - \(\Omega_{\rm m,0}\) and \(f_{\rm DE}(z)\) determined by SN+BAO
In this section, instead of assuming a parametric form for \(h(z)\), we use the reconstructed expansion history as determined by SN+BAO data. In practice, this amounts to obtaining an expansion history \(h(z)\) from the samples of \(f_{\rm DE}\) and calculating angular and luminosity distances which are then fitted to the data, as explained in [14]. The degeneracies between \(\sigma_{8,0},\Omega_{\rm m,0}\) and \(G_{\rm eff}\) make it very hard to say something about the underlying theory of gravity, given the quality of the data and, in particular, when all parameters are free to vary. To circumvent this issue, we assume a single expansion history, as determined solely by the data. More specifically, the expansion history \(h(z)\), along with the value of \(\Omega_{\rm m,0}\)--needed for solving the growth equation (6)--are the median of all the realizations drawn from the Markov SN+BAO chains6, obtained in [14]. Indeed, it was shown in [14] that our method is able to capture a large class of DE models, even those where the contribution from DE is not negligible at high-\(z\). Our reconstruction of \(h(z)\) is accurate to \(\lesssim 1\%\) across the entire redshift range of interest--see Fig. 2. The amplitude of the fluctuations, \(\sigma_{8,0}\), now becomes a free parameter, and we sample the full parameter space \(\theta=\{\sigma_{8,0},\log_{10}\sigma_{f},\log_{10}\ell_{f},\log_{10}z_{c}\}\) in the range given by Table 1. In Fig. 3, we show the reconstructions when using the median of \(h(z)\) and median \(\Omega_{\rm m,0}\) from the SN+BAO chains. As expected, the uncertainties in the reconstructions increase with respect to those in Fig. 1, as \(\sigma_{8,0}\) is now a free parameter which is somewhat degenerate with \(G_{\rm eff}\), allowing for more flexibility in the samples of \(G_{\rm eff}\) drawn at each step in the MCMC.
Footnote 6: The posterior distributions correspond to the blue contours shown in Fig. 6 of Calderón _et al._[14].
The advantage of taking this approach is that we do not make any assumption on the evolution of DE, and we are able to effectively reconstruct any expansion history directly from the data, by reconstructing \(f_{\rm DE}(z)\). Moreover, this disentangles the uncertainties coming from the growth evolution \(f\sigma_{8}(z)\) and those coming from the background expansion \(h(z)\). This also allows us to pin down a value for \(\Omega_{\rm m,0}\), which is of course anti-correlated with \(\sigma_{8,0}\), which is in turn anti-correlated with \(G_{\rm eff}\), thus allowing for more constraining power on the quantity of interest \(\mu(z)\) from RSD alone. The two-dimensional posteriors of the quantity \(\mu\) at two different redshifts \(z=0\)
Figure 2: _Top:_ Reconstruction of the DE evolution \(f_{\rm DE}(z)\). _Bottom:_ Relative (percentage) errors in the background reconstructions from forecasted SN+BAO measurements. The orange line corresponds to the true fiducial background in (8), while gray lines depict the reconstructed median and the 68 and 95% confidence levels around it. The dashed black line corresponds to \(\Lambda\)CDM's best fit (\(f_{\rm DE}=1\), \(\Omega_{\rm m,0}=0.3103\)) to SN+BAO data.
Figure 1: Reconstructions of \(G_{\rm eff}\) in the idealistic case where the background \(h(z)\) and amplitude of fluctuations \(\sigma_{8,0}\) are perfectly known. Solid lines and shaded regions correspond to the median, 68 and 95% confidence intervals around it, respectively. Dashed lines correspond to the fiducial cosmologies generating the DESI-like (RSD) data. The redshift \(z_{c}\) of the transition to GR, as well as the hyperparameters \(\sigma_{f}\) and \(\ell_{f}\) appearing in (9) are nonetheless free parameters to be determined by the data. Both of these reconstructions detect deviations from GR (\(\mu=1\)) at more than \(2\sigma\) for \(z\sim 1\).
and \(z=1.4\) are shown in Fig. 4. At \(z=1.4\), where most of the constraining power of RSD measurements lies, the bump-like posteriors in red exclude GR (\(\mu=1\), in dashed) at \(>2\sigma\), while the posteriors for the dip-like profile in blue are marginally consistent with GR at the \(2\sigma\) level. At low redshift, because of the large uncertainties in \(f\sigma_{8}\), the posteriors are much broader and provide a \(\sim 20\%\) constraint on \(\mu(z=0)\). We note that the study of peculiar velocities using SNIa from ZTF and LSST can potentially improve the measurements of the growth at very low-\(z\) by a factor of 2 with respect to DESI [57]--see also [58] for other interesting constraints using gravitational waves and galaxies' peculiar velocities. Interestingly, because the redshift \(z_{c}\) in (9) of the transition from \(G\to G_{\rm eff}\) is a free parameter, our method allows us to constrain when the departures from GR start taking place (see Fig. 7 and the discussions in Appendix B).
### Incorrectly assuming a \(\Lambda\)CDM background
Cosmological observations suggest that dark energy is in the form of a cosmological constant \(\Lambda\). Because of its simplicity and agreement with observations, it remains the standard model of cosmology today. Thus, most cosmological analyses are done within the \(\Lambda\)CDM framework, which might lead to biased reconstructions
Figure 4: Marginalized posterior distributions of the relevant cosmological parameters, when using our model-independent reconstructions of \(h(z)\), shown in gray in Fig.2 (where the unknown function \(f_{\rm DE}(z)\) is reconstructed in a fully model independent way and \(\Omega_{\rm m,0}\) is fixed to the median of all possible values obtained from the SN+BAO chains—_c.f._ Section III.2).
Figure 5: Reconstructions of \(\mu(z)\) when assuming the best-fit \(\Lambda\)CDM’s expansion history, with \(\Omega_{m,0}=0.31\). Incorrectly assuming a \(\Lambda\)CDM background leads to biased determinations of \(\sigma_{8,0}\) and a degraded reconstruction of \(\mu(z)\), despite being perfectly consistent with \(f\sigma_{8}(z)\), as can be seen from the lower panel.
Figure 3: Realistic case where \(\sigma_{8,0}\) is allowed to vary, and the background \(h(z)\) is determined by SN+BAO (gray lines in Fig. 2). The fiducial cosmologies used to generate the \(f\sigma_{8}(z)\) measurements are shown by the dashed-lines. Despite having larger error confidence intervals with respect to the idealistic case in Fig. 1, both of these reconstructions are still able to rule out GR at more than \(2\sigma\) at \(z\sim 1\).
if DE is _not constant_, as for the fiducial cosmology considered here. In this section, we explore the effects of incorrectly assuming a \(\Lambda\)CDM background expansion history in the reconstructions of \(\mu(z)\). In other words, we fit a \(\Lambda\)CDM model to the SN+BAO mock data described before and find the corresponding best-fit value for \(\Omega_{\rm m,0}\) (and thus \(\Omega_{\Lambda,0}=1-\Omega_{\rm m,0}\)). We remind the reader that the mock data are generated from a time-evolving CPL dark energy model, given by Eq. (8). We then use this expansion history to solve for the perturbations and reconstruct \(\mu(z)\), as explained in the previous sections. The black dashed lines in Fig. 2 show the best-fit \(\Lambda\)CDM expansion history (with \(\Omega_{\rm m,0}^{\Lambda\rm CDM,bf}=0.3103^{+0.0025}_{-0.0024}\)), compared to the fiducial one with \(\Omega_{\rm m,0}^{\rm fid}=0.28\) in orange (hence representing a \(\sim 12\sigma\) bias in the fractional matter density). Despite having almost identical \(H(z)\), the differences in the DE evolution \(f_{\rm DE}(z)\) and the biased \(\Omega_{\rm m,0}\) translate into a degraded reconstruction of \(G_{\rm eff}\), shown in Fig. 5--to be compared with Fig. 3. We also find that the inferred value of \(\sigma_{8,0}\) can be biased (\(\sigma_{8,0}\sim 0.78\) vs. \(\sigma_{8,0}^{\rm fid}=0.81\), corresponding to a \(\sim 1.2\sigma\) bias in the inferred amplitude of fluctuations) for the case of the dip (in blue)--see Table 2. As understood from our previous work [14], from the background-only (SN+BAO) standpoint, the lack of DE at high-\(z\) is compensated by higher values of \(\Omega_{\rm m,0}\), which translates into lower values of \(\sigma_{8,0}\) (or lower \(G_{\rm eff}<G_{\rm eff}^{\rm fid}\)) to maintain the agreement with growth-rate measurements of \(f\sigma_{8}(z)\). This is a perfect example of what might happen if one incorrectly assumes DE is constant: the background expansion history might be consistent with the geometrical probes (SN+BAO), but a tension might appear in the amplitude of fluctuations \(\sigma_{8,0}\) inferred from LSS observables. Despite the bias in the cosmological parameters \(\Omega_{\rm m,0}\) and \(\sigma_{8,0}\)--and for the specific cases of \(\mu(z)\) considered here--the reconstructions are still able to capture the main trends in \(\mu(z)\).
Finally, let us note that for the step-like transition in blue, the reason why the reconstructions deviate somewhat from the fiducial \(\mu^{\rm fid}(z)\) (in dashed) at very low-\(z\) is our theoretical prior \(G_{\rm eff}(z=0)\simeq G\), which tends to draw our GP samples back to 1. We stress that this prior does not need to be imposed, as we do not necessarily have \(G_{\rm eff}(z=0)\simeq G\) in most MG theories. We have in mind here theories without screening mechanisms that do require \(G_{\rm eff}\simeq G\) today to satisfy local constraints, _e.g._[59]. Despite this prior, because of the large uncertainties in RSD measurements at \(z\sim 0\), our reconstructions are still able to capture (within \(2\sigma\)) the true fiducial \(\mu_{\rm Dip}^{\rm fid}\).
## IV Effect of massive neutrinos
In this section, we consider universes containing massive neutrinos. We want to investigate how well our reconstruction of \(\mu\) fares in their presence. It is well known that free-streaming species with non-zero mass (here massive neutrinos) lead to a suppression of gravitational clustering on scales below a characteristic scale, corresponding to their free-streaming length. Hence, while massive neutrinos contribute to the expansion of the universe in the same way as usual dust-like matter (corresponding to \(\Omega_{m}\)), they are absent from the driving term in the growth of matter perturbations. We therefore face a situation where the parameter \(\Omega_{\rm m,0}\) does not represent all dust-like components at low redshifts. Indeed, one cannot distinguish massive neutrinos from dust-like matter purely from geometric probes at low \(z\). In this case, the splitting in (1), while sensible theoretically, is somewhat ambiguous regarding expansion data if we have no additional information on \(\Omega_{\rm m,0}\) or \(\Omega_{\nu,0}\). This ambiguity is broken, however, once we consider the growth of perturbations. As a first step, we assume the presence of massive neutrinos and we work with equation (12) below (instead of (1)). So, while we reconstruct \(\mu\) as a Gaussian process, we assume the background expansion is known up to the two parameters \(\Omega_{\rm m,0}\) and \(m_{\nu}\). In practice, however, only one free parameter is left, since in this section we fix the present relative energy density \(\Omega_{\rm m,0}^{\rm tot}\) of all components which behave like dust at low \(z\), namely
\[\Omega_{\rm m,0}^{\rm tot}\equiv\Omega_{\rm m,0}+\Omega_{\nu,0}=\Omega_{\rm cdm,0}+\Omega_{\rm b,0}+\Omega_{\nu,0}\, \tag{11}\]
where \(\Omega_{\rm cdm,0}\), \(\Omega_{\rm b,0}\), and \(\Omega_{\nu,0}\) are the present relative densities of cold dark matter, baryons, and massive neutrinos respectively. Note that the pairs of parameters \((\Omega_{\rm m,0},m_{\nu})\) and \(\left(\Omega_{\rm m,0}^{\rm tot},m_{\nu}\right)\) carry the same information.
We assume now that \(h^{2}(z)\) is given by:
\[h^{2}(z) =\ \Omega_{\rm m,0}\ (1+z)^{3}+\Omega_{\Lambda,0} \tag{12}\] \[+\Omega_{\gamma,0}\ (1+z)^{4}\left(1+0.2271\,\frac{N_{\rm eff}}{3} \,\sum_{i}f_{\nu}\left(\frac{m_{\nu_{i}}}{T_{\nu}}\right)\right),\]
where \(f_{\nu}(y)\simeq(1+(Ay)^{p})^{1/p}\) is a fit provided in Ref. [60], with \(A=\frac{180\zeta(3)}{7\pi^{4}}\) and \(p=1.83\). This fitting function \(f_{\nu}\) describes the evolution from the relativistic behavior when \(m_{\nu}\ll T_{\nu}\) (\(T_{\nu}\sim a^{-1}\)) to the non-relativistic regime reached once \(m_{\nu}\gg T_{\nu}\). As in (1), the first term appearing in (12) corresponds to the fractional amount of matter that clusters. In order to test our reconstruction in the presence of massive neutrinos, it is more relevant to consider universes sharing an identical \(\Omega_{\rm m,0}^{\rm tot}\) rather than an identical \(\Omega_{\rm m,0}\), but with different \(\Omega_{\rm m,0}\), or equivalently different neutrino masses \(m_{\nu}\). Clearly, the parameters \(\Omega_{\rm m,0}^{\rm tot}\) and \(m_{\nu}\) completely define the background expansion (12).
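To make the fitting function concrete, below is a minimal numpy sketch of the expansion history in Eq. (12) for one massive neutrino. The values assumed for \(\Omega_{\gamma,0}\), \(N_{\rm eff}\), and the neutrino temperature today, as well as the flatness condition used to fix \(\Omega_{\Lambda,0}\), are illustrative choices of ours and not taken from the text.

```python
import numpy as np

ZETA3 = 1.2020569                      # Riemann zeta(3)
A = 180.0 * ZETA3 / (7.0 * np.pi**4)   # fit constant from Ref. [60]
P = 1.83                               # fit exponent from Ref. [60]

def f_nu(y):
    """Fit interpolating between relativistic (m_nu << T_nu) and
    non-relativistic (m_nu >> T_nu) neutrino behavior."""
    return (1.0 + (A * y)**P)**(1.0 / P)

def h2(z, omega_m0=0.28, m_nu_ev=0.5,
       omega_g0=5.0e-5, n_eff=3.046, t_nu0_ev=1.68e-4):
    """Dimensionless expansion history h^2(z) of Eq. (12); omega_g0, n_eff
    and t_nu0_ev (T_nu today, in eV) are assumed values for illustration."""
    z = np.asarray(z, dtype=float)
    # T_nu scales as a^{-1}, so the argument of f_nu is m_nu / (T_nu0 (1+z))
    nu = lambda zz: 1.0 + 0.2271 * (n_eff / 3.0) * f_nu(m_nu_ev / (t_nu0_ev * (1.0 + zz)))
    omega_l0 = 1.0 - omega_m0 - omega_g0 * nu(0.0)  # flatness, so that h(0) = 1
    return omega_m0 * (1.0 + z)**3 + omega_l0 + omega_g0 * (1.0 + z)**4 * nu(z)
```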
The driving term in the equation for the growth of perturbations depends on the combination \(\mu\,\Omega_{\rm m}\). Hence, for modified gravity and in the presence of massive neutrinos, this combination is modified at low redshifts as follows
\[G\Omega_{\rm m}^{\rm tot}\to G_{\rm eff}\Omega_{\rm m} =G\Omega_{\rm m}^{\rm tot}\ \mu\left(1-\frac{\Omega_{\nu}}{\Omega_{\rm m}^{\rm tot}}\right)\] \[\approx\left(1-0.035\,\frac{m_{\nu}}{0.5\,{\rm eV}}\,h_{70}^{-2}\right)\mu\ G\Omega_{\rm m}^{\rm tot}, \tag{13}\]
where we evidently have \(\Omega_{\rm m}^{\rm tot}=\Omega_{\rm m}\) in the absence of massive neutrinos, and \(h_{70}=H_{0}/70\,{\rm km\,s^{-1}\,Mpc^{-1}}\). For the values we take here, the change comes essentially from modified gravity.
Here, we forecast the future surveys' potential to reconstruct the coupling strength \(\mu(z)\) in the presence of massive neutrinos and purely from RSD measurements of \(f\sigma_{8}(z)\). As before, we generate mock data from a fiducial model; this time we choose a (\(\Lambda\)CDM\(\nu\)) cosmology containing two massless and one massive neutrino, with \(m_{\nu}^{\rm fid}=0.5\ {\rm eV}\). Although this mass is larger than what is currently allowed by cosmological observations7[61, 62], it is still within the allowed mass range probed by terrestrial experiments, which constrain \(m_{\nu}^{2}\equiv\Sigma_{i}\left|U_{ei}\right|^{2}m_{i}^{2}=0.26^{+0.34}_{-0.34}\ {\rm eV^{2}}\), yielding an upper bound on the electron (anti)-neutrino mass \(m_{\nu}<0.8\ {\rm eV}\) at 90% CL [63]8. The rest of the cosmological parameters are fixed to Planck's best-fit values. Due to the growth suppression from such a massive neutrino, the normalization of the matter power spectrum \(P_{\rm m}(k,z=0)\), characterized by \(\sigma_{8,0}\), is now \(\sigma_{8,0}^{\rm fid}\simeq 0.73\), lower than in the previous sections (where \(\sigma_{8,0}\) was fixed to \(\sigma_{8,0}^{\rm fid}=0.81\)).
Footnote 7: Cosmological constraints are indirect and somewhat model-dependent, unlike ground-based experiments.
Footnote 8: Note that masses of usual and sterile neutrinos \(m_{\nu}\sim 1\,{\rm eV}\) are well possible in viable \(f(R)\) cosmological models [64, 65].
In what follows, we assume that this normalization (\(\sigma_{8,0}^{\rm fid}=0.73\), as obtained for \(\mu=1\)) is the same for all profiles of \(G_{\rm eff}\). Although the actual normalization of the \(P_{\rm m}(k,z=0)\) would indeed depend on the theory of gravity, we generate mock data for different profiles of \(\mu\) from the same value of \(\sigma_{8,0}\). We stress that this choice is arbitrary, as we are dealing with simulated data and we are interested in assessing whether the theory of gravity \(\mu(z)\) and \(\sigma_{8,0}\) are accurately recovered by our model-independent reconstructions, which do not know anything about the underlying theory that generates the data.
We then sample the parameters \(\theta=\{\sigma_{8,0},m_{\nu},\log_{10}\sigma_{f},\log_{10}\ell_{f},\log_{10}z_{c}\}\), with \(m_{\nu}\in[0,1]\ {\rm eV}\), to see the impact of a varying neutrino mass on the reconstructions of \(\mu(z)\). The posterior distributions for the relevant cosmological parameters are shown in Fig. 6. The value of \(\sigma_{8,0}\) is anti-correlated with the reconstructions of \(\mu\), as seen mainly in the (\(\sigma_{8,0},\mu(z=1.4)\))-plane. Large deviations from GR, up to \(\mu(z=1.4)\sim 1.8\), can be achieved, provided that the amplitude of fluctuations \(\sigma_{8,0}\) is low (\(\sigma_{8,0}\sim 0.65\)). A slight (negative) correlation between \(\Omega_{\rm m,0}\) and \(\sigma_{8,0}\) is also obtained, as expected. The enhanced suppression of growth (due to a larger mass \(m_{\nu}\), hence smaller \(\Omega_{\rm m,0}=\Omega_{\rm m,0}^{\rm tot}-\Omega_{\nu,0}\)) needs to be compensated by larger values of \(\sigma_{8,0}\) to maintain the agreement with \(f\sigma_{8}\) measurements. Despite these correlations, the reconstructions of \(\mu(z)\) remain accurate and do not seem to be affected by a varying neutrino mass (other than through increased uncertainties in the reconstructions, due to an additional free parameter). The fiducial value of \(\sigma_{8,0}\), shown as a dashed vertical line in Fig. 6, is also accurately recovered.
Finally, let us note that we separately tested our reconstructions in the presence of massive neutrinos without assuming the functional form of \(h(z)\) given by (12), but instead using the (reconstructed) _effective_ \(f_{\rm DE}(z)\) in Eq. (1), which captures the effect of relativistic species [14]. Our conclusions remain unaltered, but no information on the neutrino mass can be obtained.
## V Conclusions
In a companion paper, Calderon _et al._[14], we jointly reconstructed the growth and expansion histories inside GR directly from the data and using minimal assumptions. We showed that our framework is able to capture a wide variety of behaviors in the DE component. In this work, we extend our methodology to include
Figure 6: Marginalized posterior distributions for the parameters in the presence of massive neutrinos. This figure is the same as Fig. 4, but this time assuming the background \(h(z)\) is known (up to one free parameter, \(m_{\nu}\)) and given by Eq. (12), including relativistic species, with the neutrino mass free to vary (_c.f._ Section IV).
possible modifications of gravity at late times, as encoded by the function \(G_{\rm eff}(z)\) appearing in the (modified) Poisson equation. We illustrate the ability of our method to reconstruct different theories of gravity using two phenomenological _shapes_ of \(\mu(z)\equiv G_{\rm eff}/G\). As an example, we consider a "bump" and a smooth transition ("dip") towards the weak-gravity regime in the recent past. We used the reconstructed \(h(z)\) from background-only data, as obtained in [14], to fit \(f\sigma_{8}(z)\) to RSD mock data, thereby constraining \(\mu(z)\) using minimal assumptions. We also explore the effects of incorrectly assuming a \(\Lambda\)CDM background. In both cases, the fiducial \(\mu(z)\) is within the \(1\sigma\) confidence intervals of our reconstructions if the background is accurately determined, and within \(2\sigma\) if we incorrectly assume \(\Lambda\)CDM's best-fit \(h(z)\). Finally, we explored the impact of massive neutrinos on the reconstructions of \(\mu(z)\). To summarize, let us list a few important results.
* If the background is _given_ (Fig. 1), or _accurately reconstructed_ from SN+BAO (Fig. 2), our reconstructions of \(G_{\rm eff}(z)\) are able to distinguish both fiducial \(\mu\)-profiles from GR at \(\gtrsim 2\sigma\) (see Figs. 1 and 3).
* Incorrectly assuming a \(\Lambda\)CDM expansion (with the best-fit \(\Omega_{\rm m,0}\) to background probes) can lead to _biased/degraded reconstructions_ (red-shaded regions in Fig. 5) and/or _biased estimations_ of the amplitude of fluctuations \(\sigma_{8,0}\) (see Table 2). This is despite the perfect agreement with \(f\sigma_{8}(z)\) measurements, as shown in the lower panel of Fig. 5.
* The posterior distributions for the hyperparameters _clearly_ show the _need_ for a deviation from the mean \(\bar{f}=1\), _i.e._ GR is _not_ a good description of the data. This is understood because the marginalized contours in Fig. 7 suggest \(\sigma_{f}\neq 0\). Interestingly, the redshift of the transition \(z_{c}\) is also not compatible with \(0\), and we have a "detection" of when this transition from \(G\to G_{\rm eff}\) happens.
In this work, we used forecasted (stage-IV) SN+BAO data to reconstruct the DE evolution \(f_{\rm DE}(z)\)--which determines the expansion history \(h(z)\)--and separately reconstructed \(\mu(z)\) using DESI-like \(f\sigma_{8}(z)\) measurements for two different toy models of \(G_{\rm eff}\). We expect our methodology to hold for essentially any (viable) form of \(G_{\rm eff}\). We showed that for both profiles considered in this work, the reconstructions are able to detect the deviations from GR at \(\gtrsim 2\sigma\) in the redshift range \(0.5\lesssim z\lesssim 1.5\) where DESI's (RSD) constraining power lies. The inclusion of external data sets, such as the (modified) luminosity distance of gravitational waves \(d_{L}^{\rm GW}(z)\)[66] or the Integrated Sachs-Wolfe effect (ISW) seen in the temperature anisotropies of the Cosmic Microwave Background (CMB) in cross-correlation with LSS surveys, would provide interesting (model-independent) constraints on the allowed deviations from GR [67]. Moreover, we note that the effect of massive neutrinos would be tracked more accurately if we allowed for a scale-dependent growth. We leave such extensions for future work.
## Acknowledgements
We thank Eric Linder for comments on the draft. BL acknowledges the support of the National Research Foundation of Korea (NRF-2019R1I1A1A01063740 and NRF-2022R1F1A1076338) and the support of the Korea Institute for Advanced Study (KIAS) grant funded by the government of Korea. AS would like to acknowledge the support by National Research Foundation of Korea NRF2021M3F7A1082053, and the support of the Korea Institute for Advanced Study (KIAS) grant funded by the government of Korea. AAS was partly supported by the project number 0033-2019-0005 of the Russian Ministry of Science and Higher Education.
## Appendix A Gaussian process with observations on the derivatives
In this section, we describe a less common use of Gaussian processes in which one also observes the derivative of the function \(f\) to be reconstructed [48; 68]. We note that in this section \(f\) denotes a general function, not the growth rate; in our case, \(f=\mu(z)\). In addition to observations
Figure 7: Marginalized posterior distributions for the relevant parameters from the RSD chains. The background expansion history used in the analysis is fixed to the median \(h(z)\) obtained from the SN+BAO chains, shown as a gray line in Fig. 2.
of \(y\), we also "observe" \(y^{\prime}=f^{\prime}(x)+\varepsilon\), where
\[\varepsilon\sim\mathcal{N}(0,C_{y^{\prime}}) \tag{10}\]
is Gaussian noise and \(C_{y^{\prime}}\) is the covariance of the derivative observations. We further assume that \(y\) and \(y^{\prime}\) are uncorrelated. Therefore, the vector
\[\begin{bmatrix}y\\ y^{\prime}\\ f\\ f^{\prime}\end{bmatrix} \tag{11}\]
is jointly Gaussian, and the posterior predictive distribution can be calculated using
\[\begin{bmatrix}f\\ f^{\prime}\end{bmatrix}|y,y^{\prime},X,X_{*}\sim\mathcal{N}\left(\begin{bmatrix} \bar{f}\\ \bar{f}^{\prime}\end{bmatrix};\begin{bmatrix}A-CB^{-1}C^{T}\end{bmatrix} \right), \tag{12}\]
where the mean is
\[\begin{bmatrix}\bar{f}\\ \bar{f}^{\prime}\end{bmatrix}=CB^{-1}\begin{bmatrix}y-\mu_{y}\\ y^{\prime}-\mu_{y^{\prime}}\end{bmatrix}, \tag{13}\]
and the covariance matrix is given by
\[A =\begin{bmatrix}K_{**}&K_{**}^{01}\\ K_{**}^{10}&K_{**}^{11}\end{bmatrix}\in\mathbb{M}_{2n_{*},2n_{*}}, \tag{14a}\] \[B =\begin{bmatrix}K+C_{y}&K^{01}\\ K^{10}&K^{11}+C_{y^{\prime}}\end{bmatrix}\in\mathbb{M}_{n+n^{\prime}},\] (14b) \[C^{T} =\begin{bmatrix}K_{*}&K_{*}^{01}\\ K_{*}^{10}&K_{*}^{11}\end{bmatrix}\in\mathbb{M}_{n+n^{\prime},2n_{*}}, \tag{14c}\]
where
\[K =k(X,X), \tag{15a}\] \[K_{*} =k(X,X_{*}),\] (15b) \[K_{**} =k(X_{*},X_{*}), \tag{15c}\]
and the superscripts denote derivatives of the kernel with respect to its arguments,

\[K^{i,j}=\frac{\partial^{i+j}k(x,x^{\prime})}{\partial x^{i}\,\partial x^{\prime\,j}}. \tag{16}\]
This formalism allows us to impose theoretical priors on the samples of \(\mu(z)\) and its derivative \(\mu^{\prime}(z)\) to smoothly recover the expected GR behaviour at early times (see Eq. (9)).
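For concreteness, the block-matrix construction above takes only a few lines of code. The following numpy sketch assumes a squared-exponential kernel and a zero prior mean; the kernel choice and all names are our own, not from the text.

```python
import numpy as np

def rbf_blocks(x1, x2, sf, ell):
    """Squared-exponential kernel k(x1, x2) together with its derivative
    blocks k01 = dk/dx2, k10 = dk/dx1 and k11 = d^2 k / (dx1 dx2)."""
    d = x1[:, None] - x2[None, :]
    k = sf**2 * np.exp(-0.5 * d**2 / ell**2)
    return k, k * d / ell**2, -k * d / ell**2, k * (1.0 / ell**2 - d**2 / ell**4)

def gp_predict_with_derivs(x, y, yp, cy, cyp, xs, sf=1.0, ell=1.0):
    """Posterior mean and covariance of (f, f') at test points xs, given
    noisy observations y = f(x) and y' = f'(x), following Eqs. (12)-(15)."""
    K, K01, K10, K11 = rbf_blocks(x, x, sf, ell)
    B = np.block([[K + cy, K01], [K10, K11 + cyp]])    # Eq. (14b)
    Ks, Ks01, Ks10, Ks11 = rbf_blocks(x, xs, sf, ell)
    CT = np.block([[Ks, Ks01], [Ks10, Ks11]])          # Eq. (14c)
    Kss, Kss01, Kss10, Kss11 = rbf_blocks(xs, xs, sf, ell)
    A = np.block([[Kss, Kss01], [Kss10, Kss11]])       # Eq. (14a)
    sol = np.linalg.solve(B, np.concatenate([y, yp]))  # zero prior mean
    mean = CT.T @ sol                                  # Eq. (13)
    cov = A - CT.T @ np.linalg.solve(B, CT)            # Eq. (12)
    ns = len(xs)
    return mean[:ns], mean[ns:], cov
```

In this setting, the theoretical priors \(\mu(z\gg z_{c})\simeq 1\) and \(\mu^{\prime}(z\gg z_{c})\simeq 0\) simply enter as additional (pseudo-)observations of \(y\) and \(y^{\prime}\) at high redshift.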
## Appendix B Supplementary Material
Inspecting the posterior distributions of the hyperparameters, shown in Fig. 7, can yield additional information on the \(G_{\text{eff}}\) reconstructions and put interesting constraints on the departures from GR. First, let us note that the inferred value of \(\sigma_{8,0}\) is unbiased in both cases, when the evolution of DE \(f_{\text{DE}}(z)\) is reconstructed using our model-independent approach [14]. This is not the case when one (incorrectly) assumes a \(\Lambda\)CDM expansion history (see Table 2). Second, both the "bump" and "dip" reconstructions seem to require a deviation from the mean function \(\bar{f}=\mu=1\) (_i.e._ GR), as the posteriors of \(\log_{10}\sigma_{f}\) are not compatible with \(\sigma_{f}\to 0\). This suggests that GR is not a good description of the growth history \(f\sigma_{8}(z)\) and that the data require extra flexibility, as encoded by the GP kernel in Eq. (10). Lastly, the posteriors of \(\log_{10}z_{c}\) seem to peak at the redshift \(z_{c}\sim 3\) where the departures from GR actually take place (depicted by the vertical dashed line in Fig. 7).
|
2305.03866 | Spiking neural networks with Hebbian plasticity for unsupervised
representation learning | We introduce a novel spiking neural network model for learning distributed
internal representations from data in an unsupervised procedure. We achieved
this by transforming the non-spiking feedforward Bayesian Confidence
Propagation Neural Network (BCPNN) model, employing an online correlation-based
Hebbian-Bayesian learning and rewiring mechanism, shown previously to perform
representation learning, into a spiking neural network with Poisson statistics
and low firing rate comparable to in vivo cortical pyramidal neurons. We
evaluated the representations learned by our spiking model using a linear
classifier and show performance close to the non-spiking BCPNN, and competitive
with other Hebbian-based spiking networks when trained on MNIST and F-MNIST
machine learning benchmarks. | Naresh Ravichandran, Anders Lansner, Pawel Herman | 2023-05-05T22:34:54Z | http://arxiv.org/abs/2305.03866v2 | # Spiking neural networks with Hebbian plasticity
###### Abstract
We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure. We achieved this by transforming the non-spiking feedforward Bayesian Confidence Propagation Neural Network (BCPNN) model, employing an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, shown previously to perform representation learning, into a spiking neural network with Poisson statistics and low firing rate comparable to _in vivo_ cortical pyramidal neurons. We evaluated the representations learned by our spiking model using a linear classifier and show performance close to the non-spiking BCPNN, and competitive with other Hebbian-based spiking networks when trained on MNIST and F-MNIST machine learning benchmarks.
## 1 Introduction
The success of deep learning (DL) in solving various real-world pattern recognition benchmarks has shown the importance of building large-scale artificial neural networks (ANNs) with the ability to learn distributed internal representations from real-world data. One of the emerging concerns, however, is the energy footprint of the heavy computations involved in training large ANN architectures. In response to this challenge, there has been growing interest in neuromorphic approaches that build on more biologically plausible spiking neural networks (SNNs). This new generation of neural network models holds promise for energy-efficient neuromorphic hardware that can handle real-time data streams efficiently with sparse and asynchronous event-based communication [1]. It is therefore imperative that, in parallel to DL development, we develop SNNs that can learn representations from real-world data. Building such SNN models has typically been addressed either by converting a traditional non-spiking ANN trained with gradient descent learning into an SNN, or by modifying backprop-based gradient descent algorithms to accommodate spiking neurons [1, 2]. Since the
current approaches do not fully leverage the biological nature of the learning principles in SNNs, there is a motivated concern that the full potential of SNNs and their neuromorphic implementations may not be harnessed.
Our philosophy for SNN design is rooted in inspiration from the biological brain, and hence we aim to develop a biologically constrained SNN model that performs unsupervised representation learning based on Hebbian learning principles. For this, we derive our model from an abstract (non-spiking) brain-like BCPNN architecture, previously shown to perform representation learning by solely using Hebbian learning (synaptic plasticity) and Hebbian rewiring (structural plasticity) mechanisms [5]. Crucially, we employ online Hebbian learning directly in the spiking domain. To this end, we interpret spikes as stochastic independent samples from a Poisson distribution, where the underlying firing rates are computed as probabilities from the BCPNN model. This is motivated by the observations that _in vivo_ cortical pyramidal neurons show reliable firing rates, whereas the timing of spikes is highly irregular and the corresponding inter-spike intervals closely resemble a Poisson distribution [3, 4]. Our main contribution is to show that the BCPNN model can be converted to an SNN preserving the biological details with minimal compromise on performance. The spiking statistics in our model reach a maximum firing rate of around 50 spikes/s, matching the sparse firing of _in vivo_ cortical pyramidal neurons. We evaluated the internal representation of the model by means of a linear classifier and compared it with the corresponding non-spiking model as well as other SNNs with Hebbian learning.
## 2 Model description
We summarize key details of the model relevant to the spiking version and refer to previous works on the feedforward non-spiking BCPNN model for full details [5].
**Modular network design**: Our spiking BCPNN model consists of one spiking input layer and one spiking hidden layer. The layer architecture is derived from the columnar organization of the neocortex. Each layer in our network model is composed of many identical hypercolumn modules, each of which in turn comprises many neuron units (referred to as minicolumns) sharing the same receptive field.
**Localized learning**: The learning mechanism is local, online, and correlation-based Hebbian-Bayesian synaptic plasticity where each synapse accumulates short and long-term traces of pre-, post-, and joint activities. From the pre- and post-synaptic spikes at time \(t\), \(S_{i},S_{j}\in\{0,1\}\), we compute \(Z\)-traces, \(Z_{i}\) and \(Z_{j}\), as a form of short-term filtering (\(\tau_{z}\) \(\sim\) few milliseconds) providing a coincidence detection window between pre- and post-synaptic spikes for subsequent LTP/LTD induction (Eq. 1). The \(Z\)-traces are further transformed into \(P\)-traces, \(P_{i},P_{j}\), and \(P_{ij}\), with long time-constants (\(\tau_{p}\) \(\sim\) seconds to hours) reflecting LTP/LTD synaptic processes (Eq. 2). The \(P\)-traces are finally transformed to bias and weight parameter of the synapse corresponding to terms in
ANNs (Eq. 3). All the spike and trace variables are time dependent (the time index is dropped for notational brevity).
\[\tau_{z}\,\frac{dZ_{i}}{dt}=\frac{\tau_{z}}{\Delta t}S_{i}-Z_{i},\qquad\tau_{z}\,\frac{dZ_{j}}{dt}=\frac{\tau_{z}}{\Delta t}S_{j}-Z_{j}, \tag{1}\]
\[\tau_{p}\,\,\frac{dP_{i}}{dt}=Z_{i}-P_{i},\qquad\tau_{p}\,\,\frac{dP_{ij}}{dt}= Z_{i}\,Z_{j}-P_{ij},\qquad\tau_{p}\,\,\frac{dP_{j}}{dt}=Z_{j}-P_{j}, \tag{2}\]
\[b_{j}=\log\,\,P_{j}\,,\qquad w_{ij}=\log\,\,\frac{P_{ij}}{P_{i}\,P_{j}}\,, \tag{3}\]
**Localized rewiring:** The synaptic rewiring mechanism adaptively finds efficient sparse connectivity between the layers, mimicking structural plasticity in the brain [5]. This mechanism uses the \(P\)-traces locally available at each synapse to maximize a "usage" score and updates the sparse binary connectivity matrix \(c_{ij}\) accordingly.
**Neuronal activation:** The total input \(I_{j}\) for neuron \(j\) is updated as a weighted sum of incoming spikes with the time constant \(\tau_{z}\) (acting here as the depolarization time constant) (Eq. 4). The activation of the neuron, \(\pi_{j}\), is computed as a softmax function of the input \(I_{j}\) (Eq. 5), which induces a soft winner-takes-all competition between the minicolumn units within each hypercolumn module. The output of the softmax function reflects the posterior belief probability of the minicolumn unit according to the BCPNN formalism [5]. In the non-spiking (rate-based) BCPNN model, this activation \(\pi_{j}\) acts as the firing rate and can be directly communicated as the neuronal signal. For SNNs, we independently sample binary values from this \(\pi_{j}\) activation probability scaled by the maximum firing rate \(f_{max}\) for each time step (Eq. 6). Note that when \(f_{max}=1000\) spikes/s (and \(\Delta t=1\)ms), the spike generation process from Eq. 6 is simply a stochastic sampling of the underlying firing rate probability, and setting \(f_{max}<1/\Delta t\) linearly scales the activation probability to the maximum firing rate. Also, in both the learning (Eq. 1) and synaptic integration (Eq. 4) steps, we scaled the binary spiking signal by \(\tau_{z}/\Delta t\), as this renders the filtered spike statistics of the model equivalent to those of the rate model.
\[\tau_{z}\,\frac{dI_{j}}{dt}=b_{j}+\frac{\tau_{z}}{\Delta t}\sum_{i=0}^{N_{i}}\,S_{i}\,w_{ij}\,c_{ij}-I_{j}, \tag{4}\]
\[\pi_{j}=\,\frac{\exp\,\left(I_{j}\right)}{\sum_{k=1}^{M_{h}}\exp\,\left(I_{k}\right)}, \tag{5}\]
\[S_{j}\sim P(\text{spike between t and t}+\Delta t)=\pi_{j}\,f_{max}\,\,\Delta t \tag{6}\]
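As a concrete illustration of Eqs. (1)-(6), the following numpy sketch performs one Euler time step of the spiking BCPNN update for the hidden layer. The explicit discretization, the small regularizer `eps`, and all variable names are our own choices; parameter values follow Table 1.

```python
import numpy as np

def bcpnn_step(S_in, Z_i, Z_j, P_i, P_j, P_ij, I, c,
               dt=1e-3, tau_z=0.02, tau_p=5.0, f_max=50.0,
               H_h=100, M_h=100, eps=1e-8, learn=True):
    """One Euler step of spiking BCPNN. Shapes: S_in, Z_i, P_i are (N_i,);
    Z_j, P_j, I are (H_h * M_h,); P_ij and the binary mask c are (N_i, H_h * M_h)."""
    # Synaptic bias and weights from the long-term P-traces (Eq. 3)
    b = np.log(P_j + eps)
    w = np.log((P_ij + eps) / (np.outer(P_i, P_j) + eps))
    # Integrate the input with time constant tau_z (Eq. 4)
    I += (dt / tau_z) * (b - I) + S_in @ (w * c)
    # Soft winner-takes-all within each hypercolumn (Eq. 5)
    I_h = I.reshape(H_h, M_h)
    e = np.exp(I_h - I_h.max(axis=1, keepdims=True))
    pi = (e / e.sum(axis=1, keepdims=True)).ravel()
    # Poisson-like spike sampling at rate pi * f_max (Eq. 6)
    S_out = (np.random.rand(pi.size) < pi * f_max * dt).astype(float)
    if learn:
        # Short-term Z-traces (Eq. 1) and long-term P-traces (Eq. 2)
        Z_i += S_in - (dt / tau_z) * Z_i
        Z_j += S_out - (dt / tau_z) * Z_j
        P_i += (dt / tau_p) * (Z_i - P_i)
        P_j += (dt / tau_p) * (Z_j - P_j)
        P_ij += (dt / tau_p) * (np.outer(Z_i, Z_j) - P_ij)
    return S_out, pi
```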
## 3 Experiments
### Comparison of classification performance
To benchmark our spiking BCPNN model on the MNIST (hand-written digit images) and F-MNIST (fashion apparel images) datasets, we first trained it in a purely
unsupervised manner (representation learning) and then used a linear classifier (cross entropy loss, SGD with Adam optimizer, 100 training epochs) to predict class labels (\(n\) = 3 randomized runs, all parameters are listed in Table 1). Table 2 shows that the classification accuracy of our model is competitive with the non-spiking BCPNN as well as other SNNs with Hebbian-like plasticity (STDP and its variants).
### Spiking BCPNN with sparse firing learns distributed representations
In Fig. 1A we plotted the neuronal support, i.e., input, \(I_{j}\), superimposed with the spiking output, \(S_{j}\), for 30 randomly selected neurons within a single hypercolumn module after training the network on MNIST data (for visualization, we offset each neuron's input by 50mV vertically, scaled them to be in the range -80mV to -55mV and added a spike event of 40 mV) and observed sparse spiking with occasional bursts. In Fig. 1B we plotted the firing rate of each neuron in a single randomly chosen hypercolumn module by convolving the spike train with a Gaussian kernel (\(\sigma\) = 50ms). We see that most neurons have low firing rates (\(\sim\)2 spikes/s), with very few (typically one) neurons showing a high level of activity (\(\sim\)50 spikes/s) within the duration of a stimulus pattern presentation (gray vertical bars) due to the local competition within the hypercolumn. We plotted the receptive fields for three hypercolumns and the filters learned by six minicolumns each (randomly chosen) in Fig. 1C. They provide a good qualitative match to the previously published results of the non-spiking BCPNN model [5].
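The firing-rate estimate used in Fig. 1B can be reproduced with a standard Gaussian smoothing of the binary spike trains; a small sketch (the 1 ms bin width and array layout are our assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def firing_rate(spikes, dt=1e-3, sigma_s=0.05):
    """Convert a binary spike train (one 0/1 entry per dt-second bin) into a
    smoothed rate in spikes/s using a Gaussian kernel (sigma = 50 ms)."""
    return gaussian_filter1d(spikes.astype(float), sigma=sigma_s / dt) / dt
```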
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Type & Parameter & Value & Description \\ \hline Synaptic & \(\tau_{x}\) & 20 ms & Short-term filtering time constant \\ \cline{2-4} & \(\tau_{p}\) & 5 s & Long-term learning time constant \\ \cline{2-4} & \(p_{conn}\) & 10 \% & Sparse connectivity between layers \\ \hline Neuronal & \(H_{l},M_{l}\) & 784, 2 & N:o input layer hypercolumns \& minicolumns \\ \cline{2-4} & \(H_{h},M_{h}\) & 100, 100 & N:o hidden layer hypercolumns \& minicolumns \\ \cline{2-4} & \(f_{max}\) & 50 spikes/s & Maximum firing rate \\ \hline Training & \(\Delta t\) & 1 ms & Simulation time step \\ \cline{2-4} protocol & \(T_{pat}\) & 200 ms & Time period for each pattern \\ \cline{2-4} & \(T_{gap}\) & 100 ms & Time period of gap between patterns \\ \cline{2-4} & \(N_{epoch}\) & 10 & N:o of training epochs \\ \cline{2-4} & \(N_{pat}\) & 60000 & N:o of training patterns \\ \hline \end{tabular}
\end{table}
Table 1: Network parameters.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Model & Activity & Plasticity & MNIST & F-MNIST \\ \hline BCPNN (this work) & spiking & BCPNN & \(97.7\pm 0.09\) & \(83.8\pm 0.12\) \\ \hline BCPNN & rate & BCPNN & \(98.6\pm 0.08\) & \(89.9\pm 0.09\) \\ \hline Diehl \& Cook, 2015 [6] & spiking & STDP & 95.0 & – \\ \hline Kheradpisheh et al., 2018 [7] & spiking & STDP & 98.4 & – \\ \hline Mozafari et al., 2019 [8] & spiking & STDP-R & 97.2 & – \\ \hline Hao et al., 2019 [9] & spiking & sym-STDP & 96.7 & 85.3 \\ \hline Dong et al., 2023 [10] & spiking & STB-STDP & 97.9 & 87.0 \\ \hline \end{tabular}
\end{table}
Table 2: Linear classification accuracy (%).
### Filtering enables spiking models to approximate the non-spiking model
We studied the effects of short-term filtering (\(Z\)-traces) in terms of classification performance (Fig. 2). We ran our experiments by training on a reduced version of MNIST dataset with 1000 training and 1000 test patterns while varying \(\tau_{z}\) and \(f_{max}\) (all other parameters same as in Table 1). For biologically realistic values of \(f_{max}\), like 50 spikes/s, performance with \(\tau_{z}\leq 10\)ms is very low (\(\tau_{z}=1\)ms is effectively no filtering). This is because pre- and post- synaptic spikes are expected to coincide within this time-window for learning to occur, whereas spikes are generated sparsely and irregularly from a Poisson distribution. However, for \(\tau_{z}\sim\)50ms, the performance closely approximates the non-spiking model since this time window is sufficient to expect pre- and post-synaptic spikes to coincide and be associated. For \(f_{max}>500\)Hz (non-biological case), accuracy is high for \(\tau_{z}\) over a wider range since the spikes are dense samples of the underlying neuronal activation and short-term filtering is not necessarily helpful. All models irrespective of \(f_{max}\) drop sharply in performance after \(\tau_{z}>100\)ms, very likely because the temporal window provided is too long compared to the presentation time of each pattern (\(T_{pat}+T_{gap}=300\)ms) and the learning wrongly associates spikes of a pattern with spikes from previous patterns.
Figure 2: Effect of short-term filtering on classification performance.
Figure 1: **A.** Neuronal support recorded after training for a time period of 6s across 30 randomly selected neurons shows sparse spiking activity. **B.** Firing rate computed from all (\(M_{h}\)=100) neurons within one hypercolumn. For the duration of pattern presentations (gray vertical bars, \(T_{pat}=200\)ms), mostly a single neuron shows a high firing rate while the rest are at a baseline firing rate. **C.** Local receptive fields are formed from randomly initialized connections through the rewiring mechanism, and individual minicolumns learn filters within their receptive field resembling orientation edge detectors.
## 4 Conclusion
We have demonstrated that our spiking BCPNN model can learn internal representations, preserving the learning and rewiring mechanisms introduced in the non-spiking BCPNN model, offering competitive classification performance. Our Poisson spike generation mechanism is simpler than integrate-and-fire models, but it still recapitulates _in vivo_ irregular cortical pyramidal spiking patterns with realistic firing rates. We suggest that it is the Hebbian plasticity mechanism that provides a robust learning algorithm tolerating the highly irregular sparse spiking. This is in stark contrast to backprop-based algorithms where it is not straightforward to accommodate spiking neurons. We found that short-term filtering (\(Z\)-traces) was crucial for this process. The time constants we found to work best (\(\tau_{z}\sim\)50ms) roughly match the dendritic depolarization time constant (paralleling the integration step in Eq. 4), and the NMDA-dependent Ca\({}^{2+}\) influx required for synaptic plasticity (learning step in Eq. 1).
Our scaling experiments (not shown) suggested that the network scales well in terms of performance, although the running time is 100x slower than that of the non-spiking model, since the time step needs to be around 1ms (simulations took \(\sim\)10 hours using custom CUDA code running on A100 GPUs). More efficient software and custom hardware implementations could make large-scale SNN simulations substantially faster. Another direction of interest is developing a more complex network architecture that combines recurrent attractors implementing associative memory with hierarchical (unsupervised) representation learning networks.
|
2310.16763 | SuperHF: Supervised Iterative Learning from Human Feedback | While large language models demonstrate remarkable capabilities, they often
present challenges in terms of safety, alignment with human values, and
stability during training. Here, we focus on two prevalent methods used to
align these models, Supervised Fine-Tuning (SFT) and Reinforcement Learning
from Human Feedback (RLHF). SFT is simple and robust, powering a host of
open-source models, while RLHF is a more sophisticated method used in top-tier
models like ChatGPT but also suffers from instability and susceptibility to
reward hacking. We propose a novel approach, Supervised Iterative Learning from
Human Feedback (SuperHF), which seeks to leverage the strengths of both
methods. Our hypothesis is two-fold: that the reward model used in RLHF is
critical for efficient data use and model generalization and that the use of
Proximal Policy Optimization (PPO) in RLHF may not be necessary and could
contribute to instability issues. SuperHF replaces PPO with a simple supervised
loss and a Kullback-Leibler (KL) divergence prior. It creates its own training
data by repeatedly sampling a batch of model outputs and filtering them through
the reward model in an online learning regime. We then break down the reward
optimization problem into three components: robustly optimizing the training
rewards themselves, preventing reward hacking-exploitation of the reward model
that degrades model performance-as measured by a novel METEOR similarity
metric, and maintaining good performance on downstream evaluations. Our
experimental results show SuperHF exceeds PPO-based RLHF on the training
objective, easily and favorably trades off high reward with low reward hacking,
improves downstream calibration, and performs the same on our GPT-4 based
qualitative evaluation scheme all the while being significantly simpler to
implement, highlighting SuperHF's potential as a competitive language model
alignment technique. | Gabriel Mukobi, Peter Chatain, Su Fong, Robert Windesheim, Gitta Kutyniok, Kush Bhatia, Silas Alberti | 2023-10-25T16:52:00Z | http://arxiv.org/abs/2310.16763v1 | # SuperHF: Supervised Iterative Learning
###### Abstract
The field of artificial intelligence is increasingly focused on large-scale language models, which, while demonstrating remarkable capabilities, often present challenges in terms of safety, alignment with human values, and stability during training. Here, we focus on two prevalent methods used to align these models, Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). SFT is simple and robust, powering a host of open-source models, while RLHF is a more sophisticated method used in top-tier models like ChatGPT but that also suffers from instability and susceptibility to reward hacking. We propose a novel approach, Supervised Iterative Learning from Human Feedback (SuperHF), which seeks to leverage the strengths of both methods. Our hypothesis is two-fold: we posit that the reward model used in RLHF is critical for efficient data use and model generalization and that the use of Proximal Policy Optimization (PPO) in RLHF may not be necessary and could contribute to instability issues. SuperHF replaces PPO with a simple supervised loss and a Kullback-Leibler (KL) divergence prior. It creates its own training data by repeatedly sampling a batch of model outputs and filtering them through the reward model in an online learning regime. We then break down the reward optimization problem into three components: robustly optimizing the training rewards themselves, preventing reward hacking--or exploitation of the reward model that can degrade model performance--as measured by a novel METEOR similarity metric, and maintaining good performance on downstream evaluations. Our experimental results show SuperHF exceeds PPO-based RLHF on the training objective, easily and favorably trades off high reward with low reward hacking, improves downstream calibration, and performs the same on our GPT-4 based qualitative evaluation scheme all the while being significantly simpler to implement, highlighting SuperHF's potential as a competitive language model alignment technique.1
Footnote 1: Code to implement SuperHF and reproduce our results is available at [https://github.com/openfeedback/superhf/](https://github.com/openfeedback/superhf/).
## 1 Introduction
Foundation models (FM) have achieved remarkable results across Natural Language Processing (NLP) tasks and beyond. However, ensuring the safety and alignment2 of these increasingly capable FMs
with human values remains a challenging open technical problem (Ouyang et al., 2022). Two dominant approaches have emerged in the literature: Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) (Bai et al., 2022; Stiennon et al., 2022; Ouyang et al., 2022). SFT is simple and easy to reproduce and has thus enabled many recent breakthroughs in open-source models like Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), and Koala (Geng et al., 2023). However, it is often limited by the cost of obtaining large datasets of high-quality instruction examples (Stiennon et al., 2022). RLHF is the method behind popular state-of-the-art models like ChatGPT and has been shown to outperform SFT. However, it is known to be more unstable with respect to hyperparameters (Beeching et al., 2023), to degrade performance on NLP tasks (Bai et al., 2022) and calibration (OpenAI, 2023), and to suffer from reward hacking (Gao et al., 2022; Krakovna et al., 2017), or gaming the reward model during training at the expense of other qualitative metrics. Due to these limitations and the sheer difficulty of implementing RLHF, it has seen comparatively few open-source replications.
To make language model alignment more broadly safe and accessible, we systematically break down which components are necessary and which are incidental. RLHF consists of two components: (1) a reward model (RM) that is trained from human preferences to rate the quality of model outputs, and (2) an RL algorithm like Proximal Policy Optimization (PPO) to optimize the FM using the feedback of the RM. Our hypothesis is that the reward model is the crucial component because it can generalize the human feedback signal across a wider distribution of data, thereby allowing for wider exploration by the policy and greater data efficiency.
To test this hypothesis constructively, we propose Supervised Iterative Learning from Human Feedback (SuperHF), an alignment algorithm that uses a reward model to augment its data efficiency but replaces PPO with a simple supervised fine-tuning loss. The key idea, shown in Figure 1, is to let the language model generate its own training data by sampling a "superbatch" of outputs, filtering these with a reward model, and iteratively fine-tuning on each filtered completion. We expand and unify previous work by combining two important components: (1) the Kullback-Leibler (KL) divergence penalty and (2) the iterative procedure of sampling and fine-tuning steps. Moreover, we embed this method into a Bayesian inference framework, showing that RLHF and SuperHF can be viewed from a simple unified theoretical perspective that does not involve reinforcement learning and naturally justifies the KL penalty and iterative approach.
Our main contributions are as follows:
1. **A simpler drop-in replacement for RLHF.** We propose Supervised Human Feedback (SuperHF), a simpler and more robust human preference learning method. SuperHF replaces reinforcement learning in prior work with a supervised loss on human reward model predictions. This reduces implementation complexity while achieving competitive performance on the training objective. The simplified approach comes at the cost of longer fine-tuning time,
Figure 1: **A diagram of our main SuperHF training loop.** Given a prompt, we sample multiple completions from the language model, score them with a pre-trained reward model, and fine-tune with the best completion with an added KL-divergence constraint before repeating.
though computational resources for human preference learning are often not the bottleneck (Ouyang et al., 2022).
2. **Reward is not all your need.** We demonstrate the importance of balancing reward optimization and specification gaming prevention. Using a KL divergence penalty, we can trade off some reward to dramatically reduce reward hacking behaviors as measured by METEOR similarity of model outputs. We also show improved results when fine-tuning preference models starting from an instruction-tuned base, motivating the existing common practice by allowing for easier optimization across a wide range of KL coefficients.
3. **SuperHF holds up downstream.** We evaluate our SuperHF and RLHF models on downstream capabilities and safety benchmarks. SuperHF matches or exceeds the performance of RLHF, with improved calibration and competitive scores from GPT-4-based model evaluations. This confirms that our simpler approach does not compromise performance on key downstream metrics.
We find SuperHF to be a simple yet effective language model alignment algorithm. We validate its capabilities on alignment, safety, and quality metrics, while also providing insights into properly achieving high rewards without specification gaming. Its improved accessibility and strong performance make SuperHF a promising new technique for aligning large language models.
## 2 Related Work
In the recent review by Casper et al. (2023) of the open problems and fundamental limitations of RLHF, one of the key categories of problems is associated with the RL policy. Circumvention of RL via SFT is discussed in Huang et al. (2022); Zhang et al. (2023).
We now discuss several recent approaches that employ SFT with human feedback by incorporating rewards or rankings for fine-tuning and highlight their differences from SuperHF:
The method _RRHF_ scores responses generated by different sampling policies and uses these to align a model with human preferences via a ranking loss (Yuan et al., 2023). _Ranked FineTuning (RaFT)_ is a related approach using expert demonstrations alongside a reward model to fine-tune on a streaming dataset of examples (Dong et al., 2023). A third method is _Imitation Learning from Language Feedback (ILF)_, which uses language-model-based rankings on which an FM is fine-tuned (Scheurer et al., 2023). A final method presented in the literature is _Quark: Controllable Text Generation_, which uses a reward model to place completions into quantiles (Lu et al., 2022). Each quantile is then identified with a reward token, and a standard language modeling loss is used on samples from each quantile conditioned on its respective reward token. Quark further employs a KL divergence penalty to prevent divergence from the original model. Furthermore, the _Expert Iteration_ method proposed in Uesato et al. (2022) uses the same loss function we derive in (1).
Although all of this concurrent work has some similarities to our work, SuperHF is the first method to our knowledge to combine all the elements of (1) utilizing supervised fine-tuning loss in an iterative procedure, (2) incorporating a scalar reward model without expert demonstrations, and (3) prior preservation using KL divergence.
Other new methods such as _Direct Preference Optimization_ (Rafailov et al., 2023) have emerged that aim to optimize a language model to match preferences in a preference dataset without using a reward model. These methods are limited by not using online exploration as in RLHF or SuperHF, so future work should compare them. We also contribute to the recent literature on systematically categorizing and evaluating reward hacking using a GPT-4-based evaluation scheme, as in Dubois et al. (2023).
## 3 Background
### Reward Modeling
Often obtaining a high-quality instruction fine-tuning dataset is more expensive at scale than obtaining human comparison data. Suppose we have a pre-trained language model \(p_{0}\) that we want to align using a dataset \(\mathcal{D}=\{(a_{1},b_{1}),\dots,(a_{n},b_{n})\}\) of text pairs. For each pair \((a_{i},b_{i})\) we know that a human labeler preferred \(a_{i}\) over \(b_{i}\). A straightforward baseline is to directly continue supervised learning on the preferred completions with the same cross entropy loss objective as in pre-training - an established and stable method for training LMs. However, it has been shown that a reward model is a more data efficient way to utilize \(\mathcal{D}\) because it generalizes the human preference signal across a broader distribution of data (Stiennon et al., 2022).
To extract more signal out of the dataset and generalize to new ones, prior work demonstrates the effectiveness of first training a reward model \(R_{\phi}:\mathbb{R}^{N}\rightarrow\mathbb{R}\), which takes a text sequence as input and outputs a scalar reward, and using that as a signal for further training. We train our RM as a binary classifier to predict whether the human prefers \(a\) or \(b\)(Stiennon et al., 2022; Ouyang et al., 2022), leading to the following standard loss function:
\[L_{\text{RM}}(\phi)=-\mathbb{E}_{(a,b)\sim\mathcal{D}}\left[\log\sigma(R_{ \phi}(a)-R_{\phi}(b))\right]\]
where \(\sigma\) is the sigmoid function \(\sigma(x)=\frac{1}{1+e^{-x}}\) and \(\phi\) are the parameters of the reward model. More details about the RM training setup can be found in the Appendix A. What remains is the question of how to use the RM signal to train a language model in a stable and robust way, leading to RLHF and SuperHF.
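In code, this objective is essentially a one-liner. A minimal PyTorch sketch (tensor names are ours), where `r_chosen` and `r_rejected` hold the scalar rewards \(R_{\phi}(a)\) and \(R_{\phi}(b)\) for a batch of preference pairs:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """L_RM = -E[log sigmoid(R(a) - R(b))] over a batch of preference pairs."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```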
### RLHF and Distributional Perspective
We want to optimize the parameters \(\theta\) of a language model \(p_{\theta}\) starting from a base language model \(p_{0}\). Since our goal is to maximize a reward, the evident approach is to frame this as a reinforcement learning problem, i.e. maximizing \(\mathbb{E}_{x\sim p_{\theta}}[R(x)]\). Usually, a KL penalty is added to the loss function in order to prevent excessive divergence from \(p_{0}\), giving the following loss function
\[L_{\text{RLHF}}(\theta)=-\mathbb{E}_{x\sim p_{\theta}}[R(x)]+\beta D_{\text{ KL}}(p_{\theta}||p_{0})\]
where \(D_{\text{KL}}(p_{\theta}||p_{0})=\mathbb{E}_{x\sim p_{\theta}}\log(p_{\theta}/ p_{0})\) and \(\beta\) is a parameter determining the trade-off between the reward signal and the prior \(p_{0}\). This KL penalty might seem out of place in a reinforcement learning context, but it comes very naturally when looking at it from a distributional perspective.
We can frame the problem of incorporating the RM as Bayesian inference instead of RL. Assume we have our pre-trained language model as a prior \(p_{0}\). Intuitively, we can just perform a Bayesian update of our prior \(p_{0}\) to a posterior \(p_{\theta}\) based on the evidence that our model is optimal with respect to \(R(x)\). In this setting we can assign a distribution to a reward function via exponentiation and renormalization (Korbak et al., 2022), leading to the posterior
\[p_{\text{RL}}^{*}=\frac{1}{Z}p_{0}(x)\exp(R(x)/\beta),\]
where \(\beta\) is a temperature parameter and \(Z\) is a normalizing constant. The surprising result is that when performing variational inference on this posterior, i.e. minimizing the KL divergence between our model \(p_{\theta}\) and \(p_{\text{RL}}^{*}\), we obtain the same loss function as in RLHF
\[L_{\text{RLHF}}(\theta)\propto D_{\text{KL}}(p_{\theta}||p_{\text{RL}}^{*})\]
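For completeness, this equivalence can be checked in one line: since the normalizer \(Z\) does not depend on \(\theta\),

\[D_{\text{KL}}(p_{\theta}||p_{\text{RL}}^{*})=\mathbb{E}_{x\sim p_{\theta}}\left[\log p_{\theta}(x)-\log p_{0}(x)-R(x)/\beta+\log Z\right]=\frac{1}{\beta}L_{\text{RLHF}}(\theta)+\log Z,\]

so minimizing the KL divergence over \(\theta\) is the same as minimizing \(L_{\text{RLHF}}\) up to an additive constant and an overall scaling by \(1/\beta\).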
We note the following advantages that the distributional perspective has over the reinforcement learning perspective, following Korbak et al. (2022):
1. RL without KL is flawed for generative models, since it discourages diversity. Maximizing the reward leads to distributional collapse, i.e. the model putting its entire probability mass on one optimal sequence. This is a common problem in practice, both in our experiments and in the literature (Choshen et al., 2019; Paulus et al., 2017; Tambwekar et al., 2019; Jaques et al., 2019; Korbak et al., 2021).
2. Other LM fine-tuning methods can be expressed from the distributional perspective, but are no longer equivalent to RL, e.g. SFT as \(D_{\text{KL}}(p_{\mathcal{D}}^{*}||p_{\theta})\) or Generative Distributional Control (GDC) (Khalifa et al., 2021; Korbak et al., 2022b).
3. It treats pre-training, fine-tuning and decoding all from the same probabilistic framework and allows the separation of modeling from inference (Goodman and Stuhlmuller, 2014).
However, there is a problem with this approach. While it allows the _derivation_ of the loss function \(L_{\text{RLHF}}\) from a purely probabilistic approach, it does not yet address the _optimization_ of the loss function. The loss function \(L_{\text{RLHF}}\) is non-differentiable, since the reward model operates on text and decoding a sequence of tokens \(x_{1:n}\) from \(p_{\theta}\) is non-differentiable. Thus, we need to use policy gradient methods from reinforcement learning like PPO (Schulman et al., 2017) to turn it into an optimizable loss function. These methods, however, are notoriously complicated and unstable (Choshen et al., 2019; Beeching et al., 2023) (as shown in Figure 2). Moreover, they seem out of place as remnants of reinforcement learning in the distributional framework. We address these shortcomings by introducing SuperHF.
## 4 Methods
The core issue is that the reward model \(R(x_{1:n})\) operates on a decoded sequence of tokens, but the auto-regressive LM \(p_{\theta}\) is trained on the logits of a single token at a time. SuperHF addresses this by transferring the reward signal to an individual token level, so that we can use the regular cross-entropy pre-training loss. The key step is to introduce a tractable surrogate posterior
\[\tilde{p}_{\text{SHF}}(x)\approx p_{\text{RL}}^{*}(x).\]
SuperHF is an iterative two-step process:
Step 1: Filtering.Sample a _superbatch_ of sequences \(\mathcal{B}=\{x_{1:n}^{(0)},\dots,x_{1:n}^{(B)}\}\) of size \(B\) (e.g. \(B=16\)) from the LM \(p_{\theta^{(t)}}\). Rank these sequences with a reward model \(R\) and filter out the top-\(K\) sequences \(\mathcal{K}\subset\mathcal{D}\). The surrogate posterior \(\tilde{p}_{\text{SHF}}\) is now defined as the empirical distribution of the filtered samples \(\mathcal{K}\).
Since the filtering biases \(\tilde{p}_{\text{SHF}}\) towards higher reward regions of \(p_{\theta^{(t)}}\), it is heuristically closer to the true posterior. However, this can easily lead to many of the same distributional collapse problems, if we are directly utilizing or optimizing \(\tilde{p}_{\text{SHF}}\), for example
\[L_{\text{Exp}}(\theta^{(t)})=D_{\text{KL}}(\tilde{p}_{\text{SHF}}||p_{\theta^ {(t)}}). \tag{1}\]
Step 2: Prior-preserving Fine-tuning.Hence, as a next step we want to incorporate our prior \(p_{0}\) to preserve entropy and tame the surrogate posterior wherever it deviates too far from the prior. This leads to the following SuperHF loss function:
\[L_{\text{SHF}}(\theta^{(t)})=D_{\text{KL}}(\tilde{p}_{\text{SHF}}||p_{\theta^ {(t)}})+\beta D_{\text{KL}}(p_{0}||p_{\theta^{(t)}}),\]
Figure 2: **Training loss curves over several hyperparameter sweeps (100 runs) for RLHF and SuperHF. While 37% of these RLHF training runs diverge with exploding loss, SuperHF remains stable and predictable without such divergence. Only 15.0% of the RLHF runs increase in reward over training compared with 85.4% for SuperHF.**
where \(\beta\) is a configurable hyperparameter. The combination of two KL divergences pulling towards the surrogate posterior and the prior respectively could be interpreted as a heuristic Bayesian update. This loss function operates on a token level and can be expressed as a simple supervised fine-tuning loss with KL regularization. We update the model parameters \(\theta^{(t)}\) through one training step on this loss function and then start the next iteration by going back to the filtering step, creating a new surrogate posterior from \(p_{\theta}^{(t+1)}\).
Intuitively, the surrogate prior can be interpreted as letting the model generate its own supervised training data by generating completions and then filtering them using a reward model. The main differences from previous methods are the distributional perspective, the prior-preservation through the KL divergence, and the iterative nature. The Expert Iteration method proposed in (Uesato et al., 2022) uses the exact same loss function we derived (1). Our experiments in Section 5.2 confirm that, particularly, the prior-preservation penalty has a substantial positive effect on overall SuperHF performance.
### Datasets
We draw our question answering datasets from three main sources, all hosted on Hugging-Face Datasets. From Anthopic/hh-rhlf, we load red-team-attempts, harmless-base, and helpful-base[Bai et al., 2022]. Each of these datasets consists of a conversation between a human and an assistant, where the human initiates a conversation. We extract the first question the human asks, ignoring the rest of the conversation. The red teaming dataset consists of attempts from individuals to elicit inappropriate responses from the model, such as seeking advice on engaging in illegal activities or using offensive language. Of note, the helpful-base dataset also includes similar problematic inquiries. The next dataset we load is openai/webppt_comparisons[Nakano et al., 2021] which provides a distribution of non-adversarial general web queries collected from WebGPT users. Last, we use yizhongw/self_instruct [Wang et al., 2023], a large dataset of model-generated instructions.
For all datasets, we filter out questions that have more than 1024 characters in the prompt. Then, we format each prompt with "\n\nHuman: {prompt}" at the start, and "\n\nAssistant:" at the end as done in [Bai et al., 2022].e We manually balance our data such that 20% of our training prompts come from each of the 5 datasets.
### Models
To investigate how SuperHF compares to other methods for fine-tuning language models based on human preferences, we used or trained 8 different types of models for the majority of our evaluations. They are:3
Footnote 3: Colors of model names are used only to correspond to figures. This paper can be viewed in greyscale.
* LLaMA-7B: A pre-trained large language model released by Touvron et al. (2023) without additional fine-tuning for instruction following or alignment.
* FeedME: Similar to Ouyang et al. (2022) "feedback made easy" models, we do language model fine-tuning on the chosen demonstration of 49,516 preference pairs from our reward model's training dataset.
* Instruct: An instruction-tuned language model fine-tuned on 12,379 instruction demonstrations from databricks-dolly-15k[Conover et al., 2023].
* Best-of-16: (B-o-16 in figures) Models that sample 16 completions for each prompt and use \(R_{train}\) to filter for the highest scoring completion (similar to a single SuperHF step).
* RLHF (LLaMA/FeedME/Instruct): Models fine-tuned with Reinforcement Learning from Human Feedback [Stiennon et al., 2022] using a modified fork of TRL [von Werra et al., 2020].
* SuperHF (LLaMA/FeedME/Instruct): Models fine-tuned with our implementation of Supervised Iterative Learning from Human Feedback.
* Alpaca-7B: An instruction-tuned model fine-tuned by Taori et al. (2023) on expert demonstrations from GPT-3.5 [Ouyang et al., 2022].
All models are approximately 7 billion parameters in size (they all use LLaMA-7B as their root model). For RLHF and SuperHF, we fine-tuned multiple models starting from LLaMA, from FeedME, or from Instruct which we label in parentheses and plot with different hatching. We provide more details about the FeedME, RLHF, and SuperHF model training in Appendix A.
## 5 Experiments
We evaluate the performance of our SuperHF models against the series of other models described above. We conducted experiments to gauge the overall effectiveness of SuperHF on the training objective (Section 5.1), investigate reward hacking which motivates the need to use both a KL-divergence constraint and an instruction-tuned model from which to fine-tune (Section 5.2), and evaluate our models on downstream benchmarks and out-of-distribution GPT-4-based preferences (Section 5.2).
For all figures, we show the means along with error bars or bands representing a bootstrapped 95% confidence interval of the estimator error unless otherwise noted.
### Reward Model Score
Across these and other experiments, we report the direct optimization objective as "Test Score." For this metric, we hold out a test set of around 200 prompts from each of our five training distributions for a total of 1,000 prompts, generate completions on these test prompts with the given model, then score the completions with a held-out test reward model \(R_{test}\). \(R_{test}\) was trained on half of the prompts from our human preferences training data while the train reward model \(R_{train}\) was trained on the other half, such that \(R_{train}\) and \(R_{test}\) were never trained on the same prompts, and neither reward model nor any of the language models were trained on these held-out test prompts.
The motivation to use a test score is to induce a small distributional shift such that memorizing good completions to training prompts does not imply good performance at test time without ample generalization. In practice, however, we find the behavior of \(R_{train}\) and \(R_{test}\) to be very consistent, so while there is no data contamination between training and testing, the two reward models tend to score the same completion similarly.
SuperHF outperforms RLHF on improving reward model score (Figure 3 Left).Our results indicate that SuperHF performs as well or better than RLHF in optimizing the Test Score objective. We find that the FeedME and Instruct methods are competitive baselines, with FeedME intuitively doing better (since it is fine-tuning on the chosen demonstrations of a similar distribution as \(R_{test}\) was trained). When fine-tuning from the LLaMA base model, RLHF does not significantly improve rewards while SuperHF does. From the FeedME base model, RLHF and SuperHF both marginally
Figure 3: **(Left) Comparison of average reward on held-out test set. From the LLaMA base model, RLHF does not improve the rewards while SuperHF does. From the FeedME base model, RLHF and SuperHF marginally increase rewards. From our instruction-tuned LLaMA, SuperHF outperforms RLHF. Best-of-16 (B-o-16) is a competitive baseline, but RLHF and especially SuperHF beat it from Instruct. (Right) Comparison of SuperHF and RLHF stability across different random seeds. The graph depicts the average run scores with a confidence interval for each model, demonstrating their consistent performance regardless of the seed.**
increase rewards, outperforming Alpaca on average. From Instruct, both RLHF and SuperHF see much larger gains, but SuperHF outperforms RLHF by a significant margin. The Best-of-16 baseline beats some models from LLaMA and from FeedME, but RLHF and SuperHF significantly outperform it when fine-tuned from Instruct. Since the RLHF and SuperHF models fine-tuned from FeedME do considerably worse than from Instruct, we focus just on RLHF/SuperHF (LLaMA/Instruct) for later experiments.
Robustness to random seeds (Figure 3 Right).In Figure 2, we showed how unstable RLHF was compared to SuperHF when doing hyperparameter tuning. But it remains to be shown how stable each method is to random initialization after a set of hyperparameters has been chosen. In this experiment, we evaluated the stability of these two methods across 20 random seeds while keeping our hyperparameters fixed to the optimal values. Both RLHF and SuperHF improved the average run scores, confirming the reliable performance of these alignment methods across different random seeds. Importantly, SuperHF shows about the same stability as RLHF as measured by the 95% confidence interval around the mean, suggesting that our SuperHF implementation does not introduce any additional instability.
### Reward is Not All You Need
Although SuperHF and RLHF can both improve the training objective, this may come at the expense of other qualitative aspects of the language model. In particular, we are interested in cases of reward hacking (Krakovna et al., 2017), where a model adversarially outputs qualitatively poor results that score high on training rewards.
One clear symptom of reward hacking is Mode Collapse (Casper et al., 2023b), where strongly optimizing for a reward can lead to a sharp decrease in the diversity of model outputs as it falls into a local optimum or repeated a preferred phrase. We observed many qualitative examples of mode collapse in some of our models (the most common of which included apologies accompanying a refusal to answer, hallucinated messages about being on the tech support team of a big tech company, or irrelevant platiududes appended to each completion) with some qualitative example outputs listed in Appendix F.
In this section, we further investigate SuperHF by quantitatively approximating mode collapse through a metric we refer to as METEOR Similarity. To compute this for a model, we sample pairs of completions from each test dataset (in practice, usually 16 or 32 per dataset depending on the desired resolution, and we constrain each pair to include completions from the same dataset since reward hacking often differs across distributions of prompts). Then, we compute the METEOR score (Banerjee and Lavie, 2005) between the two completions. While METEOR is usually used as a fuzzy measure of how similar a machine-translated passage is to a reference passage, we can also use it as a fuzzy measure of the similarity between two completions. Then, we bootstrap an average and confidence interval of these similarities which is shown in each figure in green.
KL-divergence penalties effectively constrain SuperHF optimization (Figure 4).We show two SuperHF (LLaMA) training runs where the only difference is the use of a KL-divergence penalty in the loss function. Without a KL penalty (KL-Coefficient = 0.0, dashed lines), the model collapses to outputting very similar completions despite achieving the highest rewards. With a significant KL penalty (KL-Coefficient = 0.35, solid lines), the model plateaus at slightly lower rewards, but the completion similarity is almost unchanged compared to the base LLaMA model. These findings suggest that the introduction of a KL-divergence penalty permits a necessary trade-off of some reward to significantly improve diversity in model-generated outputs. Finding a single good strategy for replying and simply repeating that optimal reply is an example of reward hacking that the KL-divergence penalty effectively mitigates in SuperHF.
Starting from an instruction-tuned baseline eases KL-tuning and brings both high rewards and high completion diversity (Figure 5).Here, we sweep the KL-Coefficient hyperparameter from 0.0 to 0.5 on SuperHF training runs starting from both a base LLaMA model and our instruction-tuned model. We aggregate the results across 5 random seeds to reveal clearer patterns since there is some variability in each training trajectory. We find that incorporating an instruction-tuning stage prior to applying SuperHF to the language model made the optimization process smoother and more effective. Although Figure 3 already demonstrated improved reward from fine-tuning from an instruction-tuned
model and that SuperHF does much better than RLHF from a base LLaMA model, these plots indicate that starting SuperHF from Instruct broadens the basin in the KL coefficient range where high rewards and low completion similarities can be concurrently achieved. This simplifies hyperparameter tuning and allows for more favorable tradeoffs, thus providing clear empirical evidence for the common practice of starting RLHF-like methods from instruction-tuned base models.
### Downstream performance
To further evaluate the unexpected consequences of fine-tuning language models to align with human preferences, we evaluate our models on downstream tasks to measure calibration, general capabilities and safety, and an out-of-distribution preference comparison using GPT-4.
SuperHF maintains and even improves calibration (Figure 6).Past work has shown that RLHF fine-tuning can significantly hurt calibration (OpenAI, 2023). In this experiment, we measure the calibration of 6 of our models on MMLU (Hendrycks et al., 2021). Given each model's logits on the tokens of the 4 answer choices (A, B, C, and D), we compute the softmax over just these 4 logits, bin the probability of every answer for every question into 10 equal bins from 0.0 to 1.0, and plot the fraction of correct answers in each bin. A perfectly calibrated model assigns the same probability to an answer as the empirical likelihood that it's correct in choosing that answer as shown by the \(y=x\) line in each graph. We also display the mean squared error (_MSE_, smaller is better) between each calibration plot and this perfect \(y=x\) line as a quantitative summary of calibration error.
Figure 4: **Illustration of the impact of KL-divergence penalties on the Test Reward and METEOR Similarity of SuperHF over training. Without a KL-divergence penalty, the model collapses to outputting similar completions despite achieving the highest rewards. With a significant KL penalty, the model maintains an almost unchanged diversity of responses while trading off just a bit of reward.**
Figure 5: **Sweeps of SuperHF KL-Coefficients when starting from a base LLaMA model (Left) or an instruction-tuned model (Right) across 5 random seeds. These plots show improved optimization and a wider basin in the range of KL-Coefficient values that yield both high rewards and low completion similarities when fine-tuning from Instruct.**
When fine-tuning from LLaMA (_MSE 0.0212_), both RLHF (LLaMA) (_MSE 0.0162_) and SuperHF (LLaMA) (_MSE 0.0158_) actually improve calibration by a bit, though SuperHF narrowly outperforms RLHF. When fine-tuning from Instruct (_MSE 0.0081_), we start off already considerably more calibrated than LLaMA. However, we then observe RLHF (Instruct) regresses on calibration (_MSE 0.0102_) while SuperHF (Instruct) further improves calibration, achieving less than half the calibration error (_MSE 0.0050_) as RLHF.
This suggests that SuperHF not only avoids the loss of calibration sometimes found with RLHF but actively improves calibration. We speculate that this may be due to the simple supervised cross-entropy loss used in SuperHF leading to minimizing the Brier score and thus improving calibration across tokens in general, while RLHF's more complicated PPO objective carries no such promise.
No degradation of downstream capabilities and safety benchmarks (Figure 7).We assess our models' performance on downstream general capabilities and safety benchmarks. We evaluate on MMLU (Hendrycks et al., 2021), a range of common sense reasoning tasks (Common Sense), and the ETHICS (Hendrycks et al., 2023), TruthfulQA (Lin et al., 2022), and HHH Alignment (Askell et al., 2021) benchmarks (Safety). For most evaluations, we use the Language Model Evaluation Harness (Gao et al., 2021), taking the acc_norm and acc_norm_stderr when available, or else the acc and acc_stderr. Error bars for these results are the average of the reported standard errors instead of confidence intervals like other experiments.
Ideally, fine-tuning from human preferences should not change downstream general capabilities and should maintain or improve downstream safety. This is a desired property both for model competitiveness and to not worsen the Safety-Capabilities balance as described in Hendrycks and Mazeika (2022). Our evaluations find no significant difference across almost all of our models for the average performance across each of these three categories of downstream tasks, as desired. The exception is Alpaca which sees some significant improvement, especially in Safety. This demonstrates some benefits from Alpaca's distillation of the outputs of the more capable and aligned GPT-3.5. More granular benchmark tables are in Appendix B.12.
Figure 6: **Calibration curves for SuperHF, RLHF, and base models evaluated on MMLU. SuperHF not only maintains calibration but improves upon the calibration of the base models. LLaMA and SuperHF (LLaMA) have no bar for the final bin because they did not output any probabilities that strong.**
SuperHF (Instruct) **achieves the highest GPT-4-based Elo score in our 8-model league (Figure 8 Left).** Building upon previous work such as Pan et al. (2023) and Perez et al. (2022), we leveraged the capabilities of current AI systems to qualitatively evaluate models instead of relying solely on our reward models or more expensive human crowdworkers.
Using GPT-4-0613 (OpenAI, 2023), we first computed pairwise preference comparisons on 640 pairs of test completions from our best models by asking GPT-4 to pick its preferred of 2 anonymous responses to the same prompt. We then calculated Elo scores initialized from a starting score of 1500 and repeated this calculation 1000 times with random orderings for confidence intervals. See Appendix E for methodological details, full prompts, and example preferences. Because we ran these Elo scores on a league of just these 8 models, they should not be compared with other chatbot Elo scores posted online.
On these overall Elo scores, we find that FeedME, Instruct, and Alpaca each stay quite competitive with relatively simple fine-tuning methods, demonstrating their competitiveness as baselines. Interestingly, both RLHF models and the SuperHF (LLaMA) model see significant losses in Elo, indicating
Figure 8: **(Left) GPT-4-based Elo scores for eight evaluated models. The SuperHF model starting from the instruction-tuned LLM achieved the highest Elo rating. (Right) Head-to-head win rates for SuperHF and RLHF based on GPT-4 evaluations. While SuperHF exhibits favorable results, GPT-4’s overall preferences are not strictly ordered and exhibit some cyclical patterns.**
Figure 7: **Comparison of downstream capabilities and safety benchmarks for RLHF, SuperHF, and base models. Error bars are average The results show no significant degradation in performance for SuperHF.**
they may have overoptimized the training objective in a way that GPT-4 strongly does not prefer. However, SuperHF (Instruct) breaks this pattern, achieving the highest Elo in the entire league. We can view these GPT-4 evaluations as much more out-of-distribution methods of human preferences than our test reward model \(R_{test}\), so it is a promising result that SuperHF (Instruct) generalizes well to this different regime while the other fine-tuning methods do not do as well.
Head-to-head GPT-4-based win rates favor SuperHF **but are complicated (Figure 8 Right).** Using the GPT-4 binary preference evaluations, in addition to the Elo score above, we also computed some direct head-to-head win rates between the various models. A full win-rate table between all 8 models is listed in Appendix B.5, but in Figure 8 Right, we focus on the win rates of RLHF (Instruct) and SuperHF (Instruct).
In these 1-on-1 comparisons using GPT-4 as an evaluator, SuperHF shows favorable win rates overall. Interestingly, though, while SuperHF (Instruct) gets the highest Elo, it does not uniformly beat all other models by these win rates. We observe that GPT-4's ordering of model performances is not strictly linear, but rather circular--for example, we observe that FeedME loses to Alpaca which loses to SuperHF (Instruct) which loses to FeedME. This implies that GPT-4 is subject to some of the same irrational preferences as humans exhibit and underscores the necessity for nuanced and expansive evaluation of language model alignment.
## 6 Discussion and Future Work
RLHF tuning difficulties.Getting the best possible performance out of PPO based RLHF required a significant amount of work in our experience--the open-source TRL (von Werra et al., 2020) implementation we started from did not transfer well out of the box to LLaMA and our data distribution, so we had to spend many months and hundreds of training runs tuning it to acceptable performance. Starting from the successful hyper-parameters in (Beeching et al., 2023), we primarily tuned the batch size, KL-Coefficient, and learning rate, and found that whitening the rewards as in (Dubois et al., 2023)(Touvron et al., 2023b) increased performance. We also experimented with many other changes that showed no noticeable improvements such as offsetting the reward to have a mean of 0.0 across all of training, setting the reward to have a mean of 0.0 across each batch, and KL penalty clipping. This all highlights the many challenges inherent to using RLHF which have been highlighted in prior works (Casper et al., 2023; Bai et al., 2022; Ouyang et al., 2022). SuperHF, in contrast, performed quite well from our initial implementation and was very robust to variation in both hyperparameters (Figure 2) and random seeds (Figure 3 Right).
SuperHF limitations.Although SuperHF is simpler to implement and tune, it does result in an increase in fine-tuning time due to the requirement for sampling more completions per step. In practice, we measured this at about 6x the wall clock training time with our initial implementation of SuperHF compared to RLHF, though we expect this time efficiency could easily be improved since it was not the focus of our work. This training time gap might be much further reduced, however, when considering the much greater need for hyperparameter tuning for RLHF. Additionally, prior work such as Ouyang et al. (2022) has pointed out that computational requirements for fine-tuning language models are many orders of magnitude smaller than costs for pre-training, so when data quality and language model alignment algorithmic performance are more important bottlenecks (as is often the case), SuperHF may be a preferable method despite its increased fine-tuning time.
Future workOne promising direction for future work is scaling SuperHF to larger models in the >30 billion parameter model regime. Preliminary scaling experiments show promise that SuperHF will continue to improve the reward at larger model scales, but further empirical validation is needed. Beyond scaling to larger models, SuperHF is a promising strategy for aligning medium (1B - 12B parameter) language models. Because of the ease of implementation and hyper-parameter tuning along with better performance from a range of base models (such as the base LLaMA as shown in Figure 5.1), our method is desirable for teams operating under time and computational constraints, so follow-up work could investigate how to get the best alignment out of these mid-sized models using SuperHF. Finally, there continues to be much room to develop better evaluations of language model alignment. Our experiments in Section 5.3 and prior work like Dubois et al. (2023) show that binary preference-based evaluations with models like GPT-4 can be inconsistent, and while we are excited by the ability of simple quantitative metrics like METEOR similarity as described in Section 5.2 to
measure specification gaming, we believe the language model alignment field as a whole needs better coverage of the full spectrum of reward hacking behaviors as well as better evaluations for robustness to adversarial attacks and distribution shifts.
## 7 Conclusion
We present Supervised Iterative Learning from Human Feedback (SuperHF), a novel method for aligning foundation models to human preferences from scalar human feedback reward signals which serves as a drop-in replacement for Proximal Policy Optimization (PPO)-based Reinforcement Learning from Human Feedback (RLHF). By reframing the human feedback fine-tuning problem as Bayesian inference, we derive the SuperHF loss, a simple supervised loss incorporating a crucial KL divergence prior. Our experiments demonstrate that SuperHF effectively optimizes reward model scores for question answering, favorably balances high rewards with low reward gaming when using the KL-divergence penalty and starting from instruction-tuned base models, and generalizes as well or better than RLHF to downstream tasks and subjective preference evaluations by GPT-4.
Taking into account the broader impact of our work, SuperHF simplifies language model fine-tuning from human feedback, democratizing the process and enhancing the field's accessibility. It is important to recognize the potential for increased misuse from such work--current language model alignment focuses on the technical challenge of aligning to _any_ preferences at all, so there are risks from actors both fine-tuning open language models to undesirable preferences as well simply using instruction-following models to more easily output harmful or dangerous responses. But as RLHF becomes more widespread with more open-source implementations popping up online, it becomes necessary to critically evaluate the method, and the release of simpler but hopefully safer methods becomes an increasingly better trade-off (additional considerations are described in our X-Risk Sheet in Appendix D. Holistically, we envision SuperHF and similar research directions ultimately contributing to a wide range of language model alignment tools which, through careful governance and robust evaluation, allow for training and deploying future foundation models that more safely align with and protect societal values.
## References
* Ouyang et al. (2022) Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022.
* Bai et al. (2022) Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Sautav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022. URL [http://arxiv.org/abs/2204.05862](http://arxiv.org/abs/2204.05862).
* Stiennon et al. (2022) Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. Learning to summarize from human feedback, 2022.
* Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca), 2023.
* Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impression gpt-4 with 90%* chatgpt quality, March 2023. URL [https://lmsys.org/blog/2023-03-30-vicuna/](https://lmsys.org/blog/2023-03-30-vicuna/).
* Geng et al. (2023) Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April 2023. URL [https://bair.berkeley.edu/blog/2023/04/03/koala/](https://bair.berkeley.edu/blog/2023/04/03/koala/).
Edward Beeching, Younes Belkada, Kashif Rasul, Lewis Tunstall, Leandro von Werra, Nazneen Rajani, and Nathan Lambert. Stackllama: An rl fine-tuned llama model for stack exchange question and answering, 2023. URL [https://huggingface.co/blog/stackllama](https://huggingface.co/blog/stackllama).
OpenAI. Gpt-4 technical report, 2023.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization, 2022.
Victoria Krakovna, Shane Legg, Jan Leike, Zac Kenton, Ramana Kumar, Tom Everitt, Matthew Rahtz, Vladimir Mikulik, and Jonathan Uesato. Specification gaming: The flip side of ai ingenuity, Apr 2017. URL [https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity).
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jeremy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. _arXiv preprint arXiv:2307.15217_, 2023a.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve, 2022. URL [http://arxiv.org/abs/2210.11610](http://arxiv.org/abs/2210.11610).
Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E. Gonzalez. The wisdom of hindsight makes language models better instruction followers, 2023. URL [http://arxiv.org/abs/2302.05206](http://arxiv.org/abs/2302.05206).
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears, 2023.
Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment, 2023.
Jeremy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback at scale, 2023.
Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning, 2022. URL [http://arxiv.org/abs/2205.13636](http://arxiv.org/abs/2205.13636).
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback, 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model, 2023.
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback, 2023.
Tomasz Korbak, Ethan Perez, and Christopher L Buckley. Rl with kl penalties is better viewed as bayesian inference, 2022a.
Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. On the weaknesses of reinforcement learning for neural machine translation. _arXiv preprint arXiv:1907.01752_, 2019.
Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization, 2017.
Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, and Mark O. Riedl. Controllable neural story plot generation via reward shaping. In _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence_. International Joint Conferences on Artificial Intelligence Organization, aug 2019. doi: 10.24963/ijcai.2019/829. URL [https://doi.org/10.24963%2Fijcai.2019%2F829](https://doi.org/10.24963%2Fijcai.2019%2F829).
Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog, 2019.
* Korbak et al. (2021) Tomasz Korbak, Hady Elsahar, Marc Dymettman, and German Kruszewski. Energy-based models for code generation under compilability constraints, 2021.
* Khalifa et al. (2021) Muhammad Khalifa, Hady Elsahar, and Marc Dymettman. A distributional approach to controlled text generation, 2021.
* Korbak et al. (2022b) Tomasz Korbak, Hady Elsahar, German Kruszewski, and Marc Dymettman. Controlling conditional language models without catastrophic forgetting, 2022b.
* Goodman and Stuhlmuller (2014) Noah D Goodman and Andreas Stuhlmuller. The design and implementation of probabilistic programming languages, 2014.
* Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017.
* Nakano et al. (2021) Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. Webgpt: Browser-assisted question-answering with human feedback. In _arXiv_, 2021.
* Wang et al. (2023) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions, 2023.
* Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023a.
* Conover et al. (2023) Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free dolly: Introducing the world's first truly open instruction-tuned llm, 2023. URL [https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm).
* von Werra et al. (2020) Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, and Nathan Lambert. Trl: Transformer reinforcement learning. [https://github.com/lvwerra/trl](https://github.com/lvwerra/trl), 2020.
* Casper et al. (2005) Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jeremy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphael Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Sithitharanjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. Open problems and fundamental limitations of reinforcement learning from human feedback, 2023b.
* Banerjee and Lavie (2005) Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In _Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization_, pages 65-72, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics. URL [https://aclanthology.org/W05-0909](https://aclanthology.org/W05-0909).
* Hendrycks et al. (2021) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021.
* Hendrycks et al. (2023) Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. Aligning ai with shared human values, 2023.
* Lin et al. (2022) Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods, 2022.
* Lin et al. (2021)
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. A general language assistant as a laboratory for alignment, 2021.
* Gao et al. (2021) Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL [https://doi.org/10.5281/zenodo.5371628](https://doi.org/10.5281/zenodo.5371628).
* Hendrycks and Mazeika (2022) Dan Hendrycks and Mantas Mazeika. X-risk analysis for ai research, 2022.
* Pan et al. (2023) Alexander Pan, Chan Jun Shern, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark, 2023.
* Perez et al. (2022) Ethan Perez, Sam Ringer, Kamille Lukositte, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, Andy Jones, Anna Chen, Ben Mann, Brian Israel, Bryan Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson Kernion, James Landis, Jamie Kerr, Jared Mueller, Jeeyoon Hyun, Joshua Landau, Kamal Ndousse, Landon Goldberg, Liane Lovitt, Martin Lucas, Michael Sellitto, Miranda Zhang, Neerav Kingsland, Nelson Elhage, Nicholas Joseph, Noemi Mercado, Nova DasSarma, Oliver Rausch, Robin Larson, Sam McCandlish, Scott Johnston, Shauna Kravec, Sheer El Showk, Tamera Lanham, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Jack Clark, Samuel R. Bowman, Amanda Askell, Roger Grosse, Danny Hernandez, Deep Ganguli, Evan Hubinger, Nicholas Schiefer, and Jared Kaplan. Discovering language model behaviors with model-written evaluations, 2022.
* Touvron et al. (2021) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023b.
* Hu et al. (2021) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021.
* Mangrulkar et al. (2022) Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. Peft: State-of-the-art parameter-efficient fine-tuning methods. [https://github.com/huggingface/peft](https://github.com/huggingface/peft), 2022.
## Appendices
* A Details on Model Training
* A.1 Reward Model
* A.2 All Language Models
* A.3 Supervised Fine-Tuning from Preferences (FTP)
* A.4 RLHF
* A.5 SuperHF
* B Additional Experimental Results
* B.1 Reward Model Calibration
* B.2 Language Model Calibration
* B.3 SuperHF Training Reward
* B.4 Dataset Analysis
* B.5 GPT-4 Preference Win Rates
* B.6 Superbatch Size Ablation
* B.7 Prompt Accumulation Ablation
* B.8 Expanded Elo Scores
* B.9 Expanded Qualitative Ratings
* B.10 Model-Written Evaluations: Advanced AI Risk
* B.11 RLHF KL Coefficient
* B.12 Downstream Benchmark Tables
* C Reproducibility
* C.1 Compute Budget
* C.2 Code
* D X-Risk Sheet
* D.1 Long-Term Impact on Advanced AI Systems
* D.2 Safety-Capabilities Balance
* D.3 Elaborations and Other Considerations
* E Prompts for GPT-4 Qualitative Evaluations
* E.1 Pairwise Preference Comparisons
* E.2 Relevance
* E.3 Avoidance
* E.4 Reward Hacking
* E.5 Bias
* E.6 Diversity
* F Randomly Sampled Model Completions
Details on Model Training
### Reward Model
We fine-tuned a 1.3B GPT-Neo model using a combined dataset of the 'harmless-base' and 'helpful-base' subsets of the Anthropic/hh-rlhf dataset, and the entirety of the 'openai/webgpt_comparisons' dataset. We split the training dataset in half, trained two reward models on each half for one epoch, and evaluated each of them on the other half. The average evaluation accuracy of our reward models is 0.67. Both reward models are trained for a single epoch with a batch size of 64, a learning rate of 1e-5, and a weight decay of 1e-3.
### All Language Models
**Prompt Processing:** We process the prompts from all 4 training datasets in the same way for consistency. First, we filter out the prompts with more than 1024 characters (180 prompts, or \(<1\%\)) to not overflow the context window. Then, we shuffle the prompts with the same seed and truncate this dataset to the desired training example length to ensure all models see the training prompts in the same order. For each prompt, we then prepend a general "system prompt" to condition the model to act like an AI assistant while also wrapping the prompt in an indicator that it was sent by a human and ending it with an indicator that an AI assistant is about to respond. This is so that our language models, when completing the prompts, take on the role of the AI assistant and follows the format in the Anthropic Helpful-Harmless dataset [Bai et al., 2022].
Thus, the final prompts we use for training as well as for test reward evaluation look like "A human user sends a message, and a helpful and harmless AI assistant responds.\n\nHuman:{original dataset prompt}\n\nAssistant:".
**Completion Truncation:** We observed our models completing additional turns of conversation on occasion, an issue that was worse with smaller models. I.e. if our prompt was...\nHuman: AAA \nAssistant:, we wouldn't just get a completion BBB, but would instead get BBB\n\nHuman: CCC\n\nAssistant: DDD.... We didn't want the language models to be simulating additional conversation turns from a hypothetical human, and we also observed that these extra completions were often rife with reward hacking as the model would output the human and assistant thanking each other back and forth.
To remedy this, we process all our model outputs with the same regular expression after completion and before reward model scoring. We use the expression "\n\n[^:]+:|Human|Assistant" to trim additional instances of "\n\n[anything]:" as well as just "Human" or "Assistant" (without the new lines) from our model completions, then strip off any additional whitespace.
**LoRA:** For fine-tuning from LLaMA-7B and Alpaca-7B, we use Low-Rank Adapters (LoRA)[Hu et al., 2021] via the Huggingface PEFT Library[Mangrunkar et al., 2022]. This also makes it easier to compute the KL-divergence term, as simply turning off the adapters restores the mode to the prior state. In particular, we used the LoRA implementation from v0.2.0 of PEFT with \(r=4\), \(\alpha=32\), \(\texttt{dropout}=0.05\), and target models of q_proj and v_proj.
|
2306.05947 | Central Limit Theorems and Approximation Theory: Part I | Central limit theorems (CLTs) have a long history in probability and
statistics. They play a fundamental role in constructing valid statistical
inference procedures. Over the last century, various techniques have been
developed in probability and statistics to prove CLTs under a variety of
assumptions on random variables. Quantitative versions of CLTs (e.g.,
Berry--Esseen bounds) have also been parallelly developed. In this article, we
propose to use approximation theory from functional analysis to derive explicit
bounds on the difference between expectations of functions. | Arisina Banerjee, Arun K Kuchibhotla | 2023-06-09T15:04:29Z | http://arxiv.org/abs/2306.05947v2 | # Central Limit Theorems and Approximation Theory: Part I+
###### Abstract
Central limit theorems (CLTs) have a long history in probability and statistics. They play a fundamental role in constructing valid statistical inference procedures. Over the last century, various techniques have been developed in probability and statistics to prove CLTs under a variety of assumptions on random variables. Quantitative versions of CLTs (e.g., Berry-Esseen bounds) have also been parallelly developed. In this article, we propose to use approximation theory from functional analysis to derive explicit bounds on the difference between expectations of functions.
## 1 Description of the problem
Suppose we have a sequence of \(d\)-dimensional random vectors \(\{X_{i}\}_{i\in\mathbb{N}}\) and a \(d\)-dimensional random vector \(Z\) such that \(Z\) is a gaussian random variable distributed with mean equal to \(\mathbb{E}\left(n^{-1/2}\sum_{i=1}^{n}X_{i}\right)\) and variance equal to \(\operatorname{Var}\left(n^{-1/2}\sum_{i=1}^{n}X_{i}\right)\). For notational simplicity, set \(S_{n,X}=n^{-1/2}\sum_{i=1}^{n}X_{i}\).
Note that, if \(X_{1},\cdots,X_{n}\) are i.i.d distributed with mean \(0\) and variance \(\Sigma\), then \(Z\sim\mathcal{N}(0,\Sigma)\). If \(X_{1},\cdots,X_{n}\) are independent with mean \(0\), then \(Z\sim\mathcal{N}\left(0,n^{-1}\sum_{i=1}^{n}\operatorname{Var}(X_{i})\right)\).
Suppose \(f\) is a Borel measurable function. We wish to bound
\[\Delta_{f}=\left|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right) \right|,\]
with a constant which depends on \(n\), \(d\) and the fucntion \(f\). Classical Berry-Esseen bounds (Bentkus, 2004; Raic, 2019) provide inequalities for \(\Delta_{f}\) when \(f(x)=\mathbf{1}x\in A,x\in\mathbb{R}^{d}\) for some sets \(A\) (e.g., convex sets, Euclidean balls). Results that bound \(\Delta_{f}\) for Borel measurable functions with specific polynomial growth are also well-known; see Sazonov (1981, Theorem 1, Chapter 1, Section 3), Bhattacharya and Rao (2010, Thms 13.2, 13.3), and Angst and Poly (2017, Theorem 4.1). These bounds are derived using smoothing inequalities and are valid for all Borel measurable functions. Unfortunately, these bounds are not sharp enough to imply correct dimension on dimension or smoothness for high-dimensional functions that are "highly" smooth (e.g., high-dimensional functions that depend only on a subset of coordinates); see, for example, Bentkus (2003a, Thms 3.2-3.4) for sharp bounds on \(\Delta_{f}\) for functions \(f\) in Holder classes. Further, the dependence on dimension is much better than that implied by the general result in Sazonov (1981) and Bhattacharya and Rao (2010).
In this paper, we propose an approximation theory and level sets based approach to obtain bounds for \(\Delta_{f}\). The bounds we obtain have sharp dependence on the dimension as well as the sample size even when the dimension grows faster than the sample size. On the flip side, our bounds do not apply to all Borel measurable functions but only to a special class of functions.
The remaining article is organized as follows. In Section 2, we provide a discussion of existing Berry-Esseen bounds for independent random vectors from Bentkus (2004) and Raic (2019). These results will be the backbone of our approach. In Section 3, we provide our first result that bounds \(\Delta_{f}\), for a bounded \(f\), in terms of the upper/lower level sets of \(f\). In Section 4, we provide an application of this result to bound \(\Delta_{f}\) for bounded quasi-concave functions. In Section 5, we provide a discussion of non-uniform Berry-Esseen bound and apply our main result to get bounds for functions in Barron space. We conclude the article with a brief discussion in Section 6.
## 2 Literature Survey
To begin with, we take a look at Theorem 1.2 of Bentkus (2004). Firstly, we define some notations as per Bentkus (2004). Let \(X_{1},\cdots,X_{n}\) be independent random vectors with a common mean \(\mathbb{E}X_{j}=0\). We write, \(S=X_{1}+\cdots+X_{n}\). Throughout we assume that S has a non-degenerated distribution in the sense that the covariance operator, say \(C^{2}=\operatorname{Cov}(S)\), is invertible (where, \(C\) stands for the positive root of \(C^{2}\)). Let \(Z\) be a Gaussian random vector such that \(\mathbb{E}Z=0\) and \(\operatorname{Cov}(S)\) and \(\operatorname{Cov}(Z)\) are equal. We further write
\[\beta=\beta_{1}+\cdots+\beta_{n},\quad\beta_{k}=\mathbb{E}|C^{-1}X_{k}|^{3}.\]
and for any collection of sets \(\mathcal{A}\), define
\[\Delta(\mathcal{A})=\sup_{A\in\mathcal{A}}|\mathbb{P}\{S\in A\}-\mathbb{P}\{Z \in A\}|.\]
**Theorem 2.1**.: _Let a class \(\mathcal{A}\) of convex sets of subsets \(A\subset\mathbb{R}^{d}\) satisfy the following conditions._
1. _Class_ \(\mathcal{A}\) _is invariant under affine symmetric transformations, i.e.,_ \(DA+a\in A\) _if_ \(a\in\mathbb{R}^{d}\) _and_ \(D:\mathbb{R}^{d}\longrightarrow\mathbb{R}^{d}\) _is a linear symmetric invertible operator._
2. _Class_ \(\mathcal{A}\) _is invariant under taking_ \(\varepsilon\)_-neighbourhoods for all_ \(\varepsilon>0\)_. More precisely,_ \(A^{\varepsilon},A^{-\varepsilon}\in\mathcal{A}\)_, if_ \(A\in\mathcal{A}\)_. Here,_ \[A^{\varepsilon}=\{x\in\mathbb{R}^{d}:\rho_{A}(x)\leq\varepsilon\}\quad\text{ and}\quad A^{-\varepsilon}=\{x\in\mathbb{R}^{d}:B_{\varepsilon}(x)\subset A\}\] _where,_ \(\rho_{A}(x)=\inf_{y\in A}|x-y|\) _is the distance between_ \(A\subset\mathbb{R}^{d}\) _and_ \(x\in\mathbb{R}^{d}\) _and_ \(B_{\varepsilon}(x)=\{y\in\mathbb{R}^{d}:|x-y|\leq\varepsilon\}\)_._
_Let \(\phi\) denote the standard normal distribution. Furthermore, assume that, \(\mathcal{A}\) and the standard normal distribution satisfy the condition that, there exists constants, say \(a_{d}=a_{d}(\mathcal{A})\), called the isoperimetric constant of \(\mathcal{A}\), depending only on \(d\) and \(\mathcal{A}\), such that,_
\[\phi\{A^{\varepsilon}\backslash A\}\leq a_{d}\varepsilon\quad\text{and}\quad \phi\{A\backslash A^{-\varepsilon}\}\leq a_{d}\varepsilon\quad\text{for all}\quad A\in\mathcal{A}\quad\text{and}\quad \varepsilon>0.\]
_If these two conditions and the assumption hold, then there exists an absolute constant \(M>0\) such that_
\[\Delta(\mathcal{A})\leq Mb_{d}\beta,\quad b_{d}=\max\{1,a_{d}(\mathcal{A})\}.\]
**Note**: Conditions (i)-(ii) on the class \(\mathcal{A}\) can be relaxed using slightly more refined techniques. In the i.i.d. case one can relax requirement (i) on \(\mathcal{A}\), assuming that \(\mathcal{A}\) is invariant under rescaling by scalars and shifting.
**Example bounds on isoperimetric constant.**
1. \(a_{d}(\mathcal{A})=(2\pi)^{-1/2}\) in the case of the class \(\mathcal{H}\) of all affine half-spaces of \(\mathbb{R}^{d}\)(Bentkus, 2003b). **Definition 2.2**.: _A class of half-spaces \(\mathcal{A}^{\text{half space}}\) is defined as \(\mathcal{A}^{\text{half space}}=\{A_{a,b}:a\in\mathbb{R}^{d},b\in\mathbb{R}\}\), where, \(A_{a,b}=\{x:a^{\top}x\leqslant b\}\)._
2. For the class \(\mathcal{C}\) of all convex subsets of \(\mathbb{R}^{d}\), we have \(a_{d}(\mathcal{C})\leqslant 4d^{1/4}\)(Bentkus, 2003b, Lemma 2.6).
3. For the class \(\mathcal{B}\) of all Euclidean balls of \(\mathbb{R}^{d}\), we have \(a_{d}(\mathcal{B})\leqslant C\) for some absolute constant \(C\)(Zhilova, 2020, Lemma 8.1).
Now, we take a look at a more general form of theorem 2.1 as given by Raic (2019). We define some notations as per Raic (2019). Let \(\mathcal{I}\) be a countable set (either finite or infinite) and let \(\{X_{i}\}_{i\in\mathcal{I}}\), be independent \(\mathbb{R}^{d}\)-valued random vectors. Assume that \(\mathbb{E}X_{i}=0\) for all \(i\) and that \(\sum_{i\in\mathcal{I}}\text{Var}(X_{i})=I_{d}\). It is well known that in this case, the sum \(W:=\sum_{i\in\mathcal{I}}X_{i}\) exists almost surely and that \(\mathbb{E}W=0\) and \(\text{Var}(W)=I_{d}\).
For a measurable set \(A\subset\mathbb{R}^{d}\), let \(\mathcal{N}(\mu,\Sigma)\{A\}:=\mathbb{P}(Z\in A)\) and for a measurable function \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\), let \(\mathcal{N}(\mu,\Sigma)\{f\}:=\mathbb{E}f(Z)\), where \(Z\sim\mathcal{N}(\mu,\Sigma)\).
We shall consider a class \(\mathcal{A}\) of measurable sets in \(\mathbb{R}^{d}\). For each \(A\in\mathcal{A}\), we take a measurable function \(\rho_{A}:\mathbb{R}^{d}\longrightarrow\mathbb{R}\). The latter can be considered as a generalized signed distance function. Typically, one can take \(\rho_{A}=\delta_{A}\), where,
\[\delta_{A}(x):=\left\{\begin{array}{ll}-\text{dist}(x,\mathbb{R}^{d}\backslash A ),&x\in A\\ \text{dist}(x,A),&x\notin A.\end{array}\right.\]
But we allow for more general functions. For each \(t\in\mathbb{R}\), we define
\[A^{t|\rho}=\{x:\rho_{A}(x)\leqslant t\}.\]
We define the generalised Gaussian perimeter as:
\[\gamma^{*}(A\backslash\rho):=\sup\biggl{\{}\frac{1}{\varepsilon}\mathcal{N}( 0,I_{d})\{A^{\varepsilon|\rho}\backslash A\},\{\frac{1}{\varepsilon}\mathcal{ N}(0,I_{d})\{A\backslash A^{-\varepsilon|\rho}\};\varepsilon>0\biggr{\}}.\]
\[\gamma^{*}(\mathcal{A}\backslash\rho):=\sup\gamma^{*}(A\backslash\rho).\]
Now, we consider the following assumptions:
* (A1) \(\mathcal{A}\) is closed under translations and uniform scalings by factors greater than one.
* (A2) For each \(A\in\mathcal{A}\) and \(t\in\mathbb{R}\), \(A^{t|\rho}\in\mathcal{A}\cup\{\phi,\mathbb{R}^{d}\}\).
* (A3) For each \(A\in\mathcal{A}\) and \(\varepsilon>0\), either \(A^{-\varepsilon|\rho}=\phi\) or \(\{x:\rho_{A^{-\varepsilon|\rho}}(x)<\varepsilon\}\subseteq A\).
* (A4) For each \(A\in\mathcal{A}\), \(\rho_{A}(x)\leqslant 0\) for all \(x\in A\) and \(\rho_{A}(x)\geqslant 0\) for all \(x\notin A\).
* (A5) For each \(A\in\mathcal{A}\) and each \(y\in\mathbb{R}^{d}\), \(\rho_{A+y}(x+y)=\rho_{A}(x)\) for all \(x\in\mathbb{R}^{d}\).
* (A6) For each \(A\in\mathcal{A}\) and each \(q\geqslant 1\), \(|\rho_{qA}(qx)|\leqslant q|\rho_{A}(x)|\) for all \(x\in\mathbb{R}^{d}\).
* (A7) For each \(A\in\mathcal{A}\), \(\rho_{A}\) is non-expansive on \(\{x:\rho_{A}(x)\geqslant 0\}\), i.e., \(|\rho_{A}(x)-\rho_{A}(y)|\leqslant|x-y|\) for all \(x,y\) with \(\rho_{A}(x)\geqslant 0\) and \(\rho_{A}(y)\geqslant 0\).
* (A8) For each \(A\in\mathcal{A}\), \(\rho_{A}\) is differentiable on \(\{x:\rho_{A}(x)>0\}\). Moreover, there exists \(\kappa\geq 0\) such that \[|\nabla\rho_{A}(x)-\nabla\rho_{A}(y)|\leq\frac{\kappa|x-y|}{\min\{\rho_{A}(x),\rho_{A}(y)\}}\] for all \(x,y\) with \(\rho_{A}(x)>0\) and \(\rho_{A}(y)>0\), where \(\nabla\) denotes the gradient.
We also consider another optional assumption as follows:
* (A1') \(\mathcal{A}\) is closed under symmetric linear transformations with the smallest eigenvalue at least one.
**Theorem 2.3**.: _Let \(W=\sum_{i\in\mathcal{I}}X_{i}\) and let \(\mathcal{A}\) be a class of sets meeting assumptions (A1)-(A8) (along with the underlying functions \(\rho_{A}\)). Then for each \(A\in\mathcal{A}\), the following estimate holds true_
\[|\mathbb{P}(W\in A)-\mathcal{N}(0,I_{d})\{A\}|\leq\max\{27,1+53\gamma^{*}( \mathcal{A}|\rho)\sqrt{1+\kappa}\}\sum_{i\in\mathcal{I}}\mathbb{E}|X_{i}|^{3}.\]
_In addition, if \(\mathcal{A}\) also satisfies assumption (A1'), then the preceding bound can be improved to_
\[|\mathbb{P}(W\in A)-\mathcal{N}(0,I_{d})\{A\}|\leq\max\{27,1+50\gamma^{*}( \mathcal{A}|\rho)\sqrt{1+\kappa}\}\sum_{i\in\mathcal{I}}\mathbb{E}|X_{i}|^{3}.\]
#### 2.0.1 Examples of classes of sets satisfying (A1)-(A8) of Theorem 2.3
* the class \(\mathcal{C}_{d}\) of all measurable convex sets in \(\mathbb{R}^{d}\), along with \(\rho_{A}=\delta_{A}\), which is defined on \(\mathcal{C}_{d}\backslash\{\phi,\mathbb{R}^{d}\}\).
* the class of all balls in \(\mathbb{R}^{d}\) (excluding the empty set) along with \(\rho_{A}=\delta_{A}\) (since the balls are convex, it meets all the assumptions (A1)-(A8)).
* For a class of ellipsoids, \(\rho_{A}=\delta_{A}\) is not suitable because an \(\varepsilon\)-neighborhood of an ellipsoid is not an ellipsoid. However, one can set \(\rho_{A}(x)=\delta_{QA}(Qx)\), where \(Q\) is a linear transformation (possibly depending on \(A\)) that maps \(A\) into a ball. [Note that \(Q\) must be non-expansive in order to satisfy (A7); a small numerical sketch follows.]
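The following is a small sketch of the ellipsoid construction above, under the illustrative choice of a diagonal, non-expansive \(Q\); the helper name `rho_ellipsoid` is ours.

```python
import numpy as np

def rho_ellipsoid(x, Q, center, r):
    """rho_A(x) = delta_{QA}(Qx) for the ellipsoid A = {y : |Q(y - center)| <= r},
    where Q maps A onto a ball; Q is taken non-expansive so (A7) holds."""
    return np.linalg.norm(Q @ (x - center)) - r

# Ellipsoid with semi-axes (2, 1): Q = diag(1/2, 1) maps it onto the unit
# ball and has largest singular value 1, hence is non-expansive.
Q = np.diag([0.5, 1.0])
print(rho_ellipsoid(np.array([2.0, 0.0]), Q, np.zeros(2), 1.0))  # 0.0  (boundary)
print(rho_ellipsoid(np.array([0.0, 0.5]), Q, np.zeros(2), 1.0))  # -0.5 (interior)
```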
## 3 Level sets and bounds on \(\Delta_{f}\)
If the conditions of Theorem 2.1 are met, then we have
\[\sup_{A\in\mathcal{A}}|\mathbb{P}\left(S_{n,X}\in A\right)-\mathbb{P}\left(Z \in A\right)|\leq\frac{b_{d}(\mathcal{A})}{\sqrt{n}}\frac{1}{n}\sum_{i=1}^{n} \mathbb{E}|\Sigma^{-\frac{1}{2}}X_{i}|^{3}\]
or in other words, we have,
\[\sup_{A\in\mathcal{A}}|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z \right)|\leq\frac{b_{d}(\mathcal{A})}{\sqrt{n}}\frac{1}{n}\sum_{i=1}^{n} \mathbb{E}|\Sigma^{-\frac{1}{2}}X_{i}|^{3}\]
where \(f\) is defined as \(f(x)=\mathbb{I}(x\in A)\), \(\Sigma=\operatorname{Var}\left(S_{n,X}\right)\), and \(b_{d}(\mathcal{A})\) is as defined in Theorem 2.1.
**Theorem 3.1**.: _Let \(f\) be a bounded Borel measurable function. For each \(t\in\mathbb{R}\), define the upper level sets of \(f(\cdot)\) at level \(t\) as \(A_{t}=\{x\in\mathbb{R}^{d}:\,f(x)\geq t\}\). Then, we have,_
\[|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right)|\leq 2\left\|f \right\|_{\infty}\sup_{t\in\mathbb{R}}|\mathbb{P}\left(S_{n,X}\in A_{t} \right)-\mathbb{P}(Z\in A_{t})|\,.\]
Theorem 3.1 implies that for bounded functions \(f\), \(\Delta_{f}\) can be controlled using classical Berry-Esseen bounds if the level sets belong to a "favorable" class. The scope of Theorem 3.1 can be expanded significantly using the following two combinations.
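As a plausibility check of Theorem 3.1 (a Monte Carlo sketch under illustrative choices of \(f\) and of the \(X_{i}\), not a proof), one can estimate both sides for a bounded \(f\) whose upper level sets are Euclidean balls:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, reps = 200, 2, 20000

# f(x) = exp(-|x|^2) is bounded by 1 and its upper level sets are Euclidean balls.
f = lambda x: np.exp(-np.sum(x**2, axis=-1))

# S_{n,X} for centered Rademacher coordinates, scaled to identity covariance.
X = rng.choice([-1.0, 1.0], size=(reps, n, d))
S = X.sum(axis=1) / np.sqrt(n)
Z = rng.standard_normal((reps, d))

lhs = abs(f(S).mean() - f(Z).mean())

# sup_t |P(S in A_t) - P(Z in A_t)| estimated on a grid of levels t in (0, 1].
ts = np.linspace(0.05, 1.0, 40)
sup_dev = max(abs((f(S) >= t).mean() - (f(Z) >= t).mean()) for t in ts)

# Theorem 3.1: lhs <= 2 * ||f||_inf * sup_dev, up to Monte Carlo and grid error.
print(lhs, 2 * 1.0 * sup_dev)
```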
### Combination 1
Suppose \(f_{1},\cdots,f_{k}\) are bounded functions (satisfying the conditions of theorem 2.1) for which the upper level sets all belong to \(\mathcal{A}\), where \(\mathcal{A}\) is any one of the favorable classes of convex sets or half-spaces or Euclidean balls. Then, for a function \(f\) such that \(f(x)=\sum_{j=1}^{k}\lambda_{j}f_{j}(x)\), where \(\sum_{j=1}^{k}|\lambda_{j}|\leq c\) for some constant \(c\), by Theorem 3.1, we have,
\[\left|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right)\right| =\left|\sum_{j=1}^{k}\lambda_{j}\bigg{\{}\mathbb{E}f_{j}\left(S_{n,X}\right)-\mathbb{E}f_{j}\left(Z\right)\bigg{\}}\right|\] \[\leq\sum_{j=1}^{k}\left|\lambda_{j}\right|\left|\mathbb{E}f_{j}\left(S_{n,X}\right)-\mathbb{E}f_{j}\left(Z\right)\right|\] \[\leq\sup_{j}\left|\mathbb{E}f_{j}\left(S_{n,X}\right)-\mathbb{E}f_{j}\left(Z\right)\right|\sum_{j=1}^{k}\left|\lambda_{j}\right|\] \[\leq c\cdot\sup_{j}\left[2\left\|f_{j}\right\|_{\infty}\sup_{t}\left|\mathbb{P}\left(S_{n,X}\in A_{j,t}\right)-\mathbb{P}\left(Z\in A_{j,t}\right)\right|\right]\qquad\left[\text{we got the above step using Theorem 3.1}\right]\] \[=2c\cdot\sup_{j}\left[\left\|f_{j}\right\|_{\infty}\sup_{t}\left|\mathbb{P}\left(S_{n,X}\in A_{j,t}\right)-\mathbb{P}\left(Z\in A_{j,t}\right)\right|\right]\] \[\leq 2c\cdot\sup_{j}\left[\left\|f_{j}\right\|_{\infty}\frac{b_{d}(\mathcal{A})}{\sqrt{n}}\frac{1}{n}\left(\sum_{i=1}^{n}\mathbb{E}|\Sigma^{-\frac{1}{2}}X_{i}|^{3}\right)\right]\qquad\left[\because\text{ each }f_{j}\text{ has its upper level sets in the same favorable class }\mathcal{A}\right]\] \[=\frac{2b_{d}(\mathcal{A})c}{\sqrt{n}}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}|\Sigma^{-\frac{1}{2}}X_{i}|^{3}\right)\sup_{j}\left\|f_{j}\right\|_{\infty}.\]
A direct application of this result can yield bounds for functions that are uniformly approximable by a class of functions whose level sets belong in the favorable class. This will be further explored in Part II. The following result shows that one can construct an infinite class of functions with the same upper-level sets from a given function.
**Theorem 3.2**.: _If the upper level sets of a function \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\) belong in \(\mathcal{A}\), then the upper level sets of \(g\circ f\) also belong in \(\mathcal{A}\) for any non-decreasing function \(g:\mathbb{R}\longrightarrow\mathbb{R}\)._

Proof.: Let \(A_{t}^{\prime}\) be the upper level set of \(g\circ f\) at level \(t\in\mathbb{R}\); that is, \(A_{t}^{\prime}=\{x:g(f(x))\geq t\}\). Since \(g\) is non-decreasing, \(g(f(x))\geq t\iff f(x)\geq g^{-1}(t)\), where \(g^{-1}(t):=\inf\{u:g(u)\geq t\}\) denotes the generalized inverse of \(g\). Thus, we have
\(A_{t}^{\prime}=\{x:g(f(x))\geq t\}=\{x:f(x)\geq g^{-1}(t)\}=A_{g^{-1}(t)}\), where \(A_{s}=\{x:f(x)\geq s\}\) denotes the upper level set of \(f\) at level \(s\). We know that \(A_{g^{-1}(t)}\) belongs in \(\mathcal{A}\) and hence \(A_{t}^{\prime}\) also belongs in \(\mathcal{A}\).
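The following sketch illustrates Theorem 3.2 numerically for an illustrative pair \((f,g)\); the bisection step computes the generalized inverse \(g^{-1}(t)\):

```python
import numpy as np

f = lambda x: -np.abs(x)         # upper level sets {f >= t} are intervals (convex)
g = lambda u: u**3 + 2.0 * u     # strictly increasing on R

x = np.linspace(-3.0, 3.0, 601)
t = -5.0
lhs = g(f(x)) >= t               # upper level set of g o f at level t

# g^{-1}(t) via bisection on the increasing function g.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < t else (lo, mid)

rhs = f(x) >= hi                 # the upper level set A_{g^{-1}(t)} of f
print(np.array_equal(lhs, rhs))  # True: {g(f) >= t} = {f >= g^{-1}(t)}
```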
**Corollary 3.3**.: _If the upper level sets of a bounded function \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\) belong in \(\mathcal{A}\), then the upper level sets of \(h(x)=\frac{f(x)}{\left\|f\right\|_{\infty}}\) also belong in \(\mathcal{A}\)._
### Combination 2
Suppose a function \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\) has upper level sets \(U_{t}\) of the form \(U_{t}=A_{1,t}\cup\cdots\cup A_{k,t}\), where the \(A_{j,t}\)'s are disjoint sets, each belonging to a favorable class, for all \(t\in\mathbb{R}\); then we can also bound \(\Delta_{f}\). If \(f\) is a bounded function, then by Theorem 3.1, we have,
\[\left|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right)\right| \leq 2\left\|f\right\|_{\infty}\sup_{t}\left|\mathbb{P}\left(S_{n,X}\in U_{t}\right)-\mathbb{P}\left(Z\in U_{t}\right)\right|\] \[=2\left\|f\right\|_{\infty}\sup_{t}\left|\mathbb{P}\left(S_{n,X}\in\cup_{j=1}^{k}A_{j,t}\right)-\mathbb{P}\left(Z\in\cup_{j=1}^{k}A_{j,t}\right)\right|\] \[=2\left\|f\right\|_{\infty}\sup_{t}\left|\sum_{j=1}^{k}\left[\mathbb{P}\left(S_{n,X}\in A_{j,t}\right)-\mathbb{P}\left(Z\in A_{j,t}\right)\right]\right|\qquad\left[\because A_{j,t}\text{'s are disjoint}\right]\] \[\leq 2\left\|f\right\|_{\infty}\sup_{t}\sum_{j=1}^{k}\left|\mathbb{P}\left(S_{n,X}\in A_{j,t}\right)-\mathbb{P}\left(Z\in A_{j,t}\right)\right|\] \[\leq 2\left\|f\right\|_{\infty}\sum_{j=1}^{k}\sup_{t}\left|\mathbb{P}\left(S_{n,X}\in A_{j,t}\right)-\mathbb{P}\left(Z\in A_{j,t}\right)\right|\] \[\leq 2\left\|f\right\|_{\infty}\sum_{j=1}^{k}\frac{b_{d}(\mathcal{A}_{j})}{\sqrt{n}}\frac{1}{n}\left(\sum_{i=1}^{n}\mathbb{E}|\Sigma^{-\frac{1}{2}}X_{i}|^{3}\right)\] \[=\frac{2\left\|f\right\|_{\infty}}{\sqrt{n}}\sum_{j=1}^{k}b_{d}(\mathcal{A}_{j})\left(\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}|\Sigma^{-\frac{1}{2}}X_{i}|^{3}\right)\]
### Proof of Theorem 3.1
Proof.: To begin with, we consider a non-negative function \(f\). Then, we can write,
\[f(x)=\int_{0}^{\infty}\mathbf{I}(f(x)\geq t)dt=\int_{0}^{f(x)}dt+\int_{f(x)}^{ \infty}0dt=f(x)\]
Now, \(\left|f(x)\right|\leq\left\|f\right\|_{\infty}\). Thus, we can now write,
\[f(x)=\int_{0}^{\left\|f\right\|_{\infty}}\mathbf{I}(f(x)\geq t)dt\] \[\Longrightarrow\mathbb{E}f(W)=\mathbb{E}\int_{0}^{\left\|f\right\| _{\infty}}\mathbf{I}(f(W)\geq t)dt\]
By Tonelli's theorem, we know that the swapping of the expectation operator and integral is valid for non-negative summands. For the convenience of the reader, we recall Tonelli's theorem here: Suppose that \((T,\mathcal{T},\mu)\) is a \(\sigma\)-finite measure space, and that \(X_{t}\) is a real-valued random variable for
each \(t\in T\). We assume that \(\left(\omega,t\right)\mapsto X_{t}(\omega)\) is measurable, as a function from the product space \(\left(\Omega\times T,\mathcal{F}\otimes\mathcal{T}\right)\) into \(\mathbb{R}\). If \(X_{t}\) is non-negative for each \(t\in T\), then,
\[\mathbb{E}\int_{T}X_{t}d\mu(t)=\int_{T}\mathbb{E}(X_{t})d\mu(t)\]
Thus, we have,
\[\mathbb{E}f(W)=\int_{0}^{\left\|f\right\|_{\infty}}\mathbb{E}\mathbf{I}(f(W)\geqslant t)dt=\int_{0}^{\left\|f\right\|_{\infty}}\mathbb{P}(f(W)\geqslant t)dt\]
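The layer-cake representation used above is easy to check by simulation; the following is a minimal sketch with an illustrative non-negative bounded \(f\):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal(200000)
f = lambda w: 1.0 / (1.0 + w**2)    # non-negative, ||f||_inf = 1

lhs = f(W).mean()                   # E f(W)

# integral_0^{||f||_inf} P(f(W) >= t) dt via the trapezoid rule on a grid.
ts = np.linspace(0.0, 1.0, 2001)
probs = np.array([(f(W) >= t).mean() for t in ts])
rhs = np.sum(0.5 * (probs[1:] + probs[:-1]) * np.diff(ts))

print(lhs, rhs)                     # agree up to Monte Carlo and grid error
```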
Now, we consider the expression \(\left|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right)\right|\).
\[\left|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right)\right| =\left|\int_{0}^{\left\|f\right\|_{\infty}}\left[\mathbb{P}\left(f(S_{n,X})\geqslant t\right)-\mathbb{P}\left(f(Z)\geqslant t\right)\right]dt\right|\] \[\leqslant\int_{0}^{\left\|f\right\|_{\infty}}\left|\mathbb{P}\left(f(S_{n,X})\geqslant t\right)-\mathbb{P}\left(f(Z)\geqslant t\right)\right|dt\] \[=\int_{0}^{\left\|f\right\|_{\infty}}\left|\mathbb{P}\left(S_{n,X}\in A_{t}\right)-\mathbb{P}\left(Z\in A_{t}\right)\right|dt\] \[\leqslant\left\|f\right\|_{\infty}\sup_{t\in\mathbb{R}}\left|\mathbb{P}\left(S_{n,X}\in A_{t}\right)-\mathbb{P}\left(Z\in A_{t}\right)\right|\]
Now, suppose that \(f\) is not non-negative, but bounded. Then, we have,
\[\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right) =\mathbb{E}\Bigg{[}\int_{0}^{\left\|f\right\|_{\infty}}\mathbf{I }(f\left(S_{n,X}\right)\geqslant t)dt-\int_{-\left\|f\right\|_{\infty}}^{0} \mathbf{I}(f\left(S_{n,X}\right)\leqslant t)dt\Bigg{]}\] \[\quad-\mathbb{E}\Bigg{[}\int_{0}^{\left\|f\right\|_{\infty}} \mathbf{I}(f(Z)\geqslant t)dt-\int_{-\left\|f\right\|_{\infty}}^{0}\mathbf{I }(f(Z)\leqslant t)dt\Bigg{]}\]
By Tonelli's theorem, we know that the swapping of the expectation operator and integral is valid for non-negative summands. Thus, we have,
\[\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right) =\int_{0}^{\left\|f\right\|_{\infty}}\left[\mathbb{P}\left(S_{n,X}\in A_{t}\right)-\mathbb{P}\left(Z\in A_{t}\right)\right]dt-\int_{-\left\|f\right\|_{\infty}}^{0}\left[\mathbb{P}\left(S_{n,X}\in B_{t}\right)-\mathbb{P}\left(Z\in B_{t}\right)\right]dt,\]
where \(B_{t}\) denotes the lower level set of \(f\), defined as \(B_{t}=\left\{x\in\mathbb{R}^{d}:f(x)\leqslant t\right\}\).
In the case of general functions \(f\), which may not be non-negative, since the integral doesn't run only from \(0\) to \(\left\|f\right\|_{\infty}\), there is an occurrence of the lower level sets \(B_{t}\)'s. But since the probability of lying in \(B_{t}\)'s can be expressed in terms of lying in \(A_{t}\)'s, we can easily write the whole bound in terms of \(A_{t}\)'s.
Since \(\mathbb{P}(W\in B_{t})=1-\mathbb{P}(W\in A_{t})\) (ignoring the boundary event \(\{f(W)=t\}\), which has probability zero for all but countably many \(t\)), we have \(\mathbb{P}\left(S_{n,X}\in B_{t}\right)-\mathbb{P}\left(Z\in B_{t}\right)=-\left[\mathbb{P}\left(S_{n,X}\in A_{t}\right)-\mathbb{P}\left(Z\in A_{t}\right)\right]\). Therefore,

\[\left|\mathbb{E}f\left(S_{n,X}\right)-\mathbb{E}f\left(Z\right)\right|=\left|\int_{-\left\|f\right\|_{\infty}}^{\left\|f\right\|_{\infty}}\left[\mathbb{P}\left(S_{n,X}\in A_{t}\right)-\mathbb{P}\left(Z\in A_{t}\right)\right]dt\right|\leqslant 2\left\|f\right\|_{\infty}\sup_{t\in\mathbb{R}}\left|\mathbb{P}\left(S_{n,X}\in A_{t}\right)-\mathbb{P}\left(Z\in A_{t}\right)\right|,\]

which completes the proof.

## 4 Quasi-concave functions

A function \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\) is quasi-concave if its upper level sets \(A_{t}=\{x:f(x)\geq t\}\) are convex for every \(t\in\mathbb{R}\). Applying Theorem 3.1 with the favorable class \(\mathcal{C}\) of convex sets yields the following bound.

**Theorem 4.2**.: _Let \(f\) be a bounded quasi-concave function with \(\left\|f\right\|_{\infty}\leq M\). Then \(\Delta_{f}\leq 2M\Delta(\mathcal{C})\), where \(\Delta(\mathcal{C}):=\sup_{A\in\mathcal{C}}\left|\mathbb{P}\left(S_{n,X}\in A\right)-\mathbb{P}\left(Z\in A\right)\right|\)._
Theorem 4.2 implies that \(\Delta_{f}\) for quasi-concave functions bounded by \(M\) can be bounded by \(2M\Delta(\mathcal{C})\). The class of functions whose upper level sets are half-spaces consists of the ridge functions of the form \(f(x)=\sigma(a^{\top}x-b)\) for some monotone function \(\sigma(\cdot)\); these are discussed in later sections. The class of functions whose upper level sets are Euclidean balls consists of radial functions, whose upper level sets belong to the favorable class \(\mathcal{B}\); these will be discussed in Part II.
Until now, we have focused only on bounded functions. In the following section, we first discuss the non-uniform Berry-Esseen bound and then provide applications to unbounded functions whose level sets are half-spaces.
## 5 Non-uniform Berry-Esseen bound and half-space level sets
To begin with, we take a look at the following bound from Shevtsova (2020).
**Theorem 5.1**.: _Let \(X_{1},X_{2},\cdots,X_{n}\) be independent random variables with distribution functions \(F_{1},F_{2},\cdots,F_{n}\) and \(\mathbb{E}X_{k}=0\), \(\sigma_{k}^{2}:=\mathbb{E}X_{k}^{2}<\infty\),_
\[S_{n}:=\sum_{k=1}^{n}X_{k},B_{n}^{2}=\sum_{k=1}^{n}\sigma_{k}^{2}>0.\]
_Let us denote_
\[\bar{F}_{n}(x):=P(S_{n}<xB_{n}),\quad\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{t^{2}}{2}}dt,\quad\Delta_{n}(x)=\left|\bar{F}_{n}(x)-\Phi(x)\right|,\quad x\in\mathbb{R}.\]
\[L_{n}(\varepsilon):=\frac{1}{B_{n}^{2}}\sum_{k=1}^{n}\mathbb{E}X_{k}^{2} \mathbf{I}(|X_{k}|>\varepsilon B_{n}),\quad\varepsilon>0,\quad n\in\mathbb{N}.\]
_Then,_
\[\Delta_{n}(x) \leqslant\frac{A}{(1+|x|)^{3}}\int_{0}^{1+|x|}L_{n}(z)dz\] \[=A\sum_{k=1}^{n}\left[\frac{\mathbb{E}X_{k}^{2}\mathbf{I}(|X_{k}|>(1+|x|)B_{n})}{(1+|x|)^{2}B_{n}^{2}}+\frac{\mathbb{E}|X_{k}|^{3}\mathbf{I}(|X_{k}|\leqslant(1+|x|)B_{n})}{(1+|x|)^{3}B_{n}^{3}}\right]\]
_where \(A\) is an absolute constant._
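The non-uniform character of Theorem 5.1 can be seen in a small simulation (illustrative only; the absolute constant \(A\) is not computed here): \(\Delta_{n}(x)\) estimated for Rademacher summands decays rapidly in \(|x|\):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n, reps = 50, 200000
X = rng.choice([-1.0, 1.0], size=(reps, n))  # centered, sigma_k = 1, B_n = sqrt(n)
Sn = X.sum(axis=1)
Bn = sqrt(n)

Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
for x in [0.0, 1.0, 2.0, 3.0]:
    Fn = (Sn < x * Bn).mean()
    # Delta_n(x) shrinks quickly as x grows, as the (1+|x|)^{-3} factor predicts.
    print(x, abs(Fn - Phi(x)))
```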
### Bounds for the ReLU and squared ReLU functions
**Theorem 5.2**.: _Let \(\sigma\) denote the ReLU function, i.e., \(\sigma(x)=\max\{0,x\}\). Let \(W_{1},W_{2},\ldots,W_{n}\) be independent univariate random variables with \(\mathbb{E}W_{k}=0\) and \(\sigma_{k}^{2}=\mathbb{E}W_{k}^{2}<\infty\), \(\forall k=1,2,\ldots,n\). Let us denote_
\[S_{n,W}:=\sum_{k=1}^{n}W_{k},\quad B_{n,W}^{2}:=\sum_{k=1}^{n}\sigma_{k}^{2}.\]
_Let \(Z\) be a standard normal random variable. Then, for \(t\geqslant 0\),_
\[|\mathbb{E}\sigma(S_{n,W}-t)-\mathbb{E}\sigma(B_{n,W}Z-t)|\leqslant\frac{A}{B _{n,W}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant t+B_{n,W })+\frac{A}{2}\sum_{k=1}^{n}\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}( |W_{k}|<t+B_{n,W})}{(t+B_{n,W})^{2}}.\]
_for an absolute constant \(A\)._
Proof.: For \(t\geqslant 0\),
\[\left|\mathbb{E}\sigma\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W} }\right)-\mathbb{E}\sigma\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[=\left|\int_{0}^{\infty}\left[\mathbb{P}\left(\sigma\left(Z- \frac{t}{B_{n,W}}\right)<w\right)-\mathbb{P}\left(\sigma\left(\frac{S_{n,W}}{B _{n,W}}-\frac{t}{B_{n,W}}\right)<w\right)\right]dw\right|\] \[\leqslant\int_{0}^{\infty}\left|\mathbb{P}\left(\sigma\left(Z- \frac{t}{B_{n,W}}\right)<w\right)-\mathbb{P}\left(\sigma\left(\frac{S_{n,W}}{B _{n,W}}-\frac{t}{B_{n,W}}\right)<w\right)\right|dw\] \[=\int_{0}^{\infty}\left|\mathbb{P}\left(\max\left\{0,Z-\frac{t}{ B_{n,W}}\right\}<w\right)-\mathbb{P}\left(\max\left\{0,\frac{S_{n,W}}{B_{n,W}}- \frac{t}{B_{n,W}}\right\}<w\right)\right|dw.\]
Since the integral runs from \(0\) to \(\infty\), we have \(w>0\). So, for a random variable \(X\), \(w>\max\{0,X\}\) means that \(w>0\) and \(w>X\); as the lower limit of the integral already forces \(w>0\), the event \(\{w>0\text{ and }w>X\}\) reduces to \(\{w>X\}\).
\[\left|\mathbb{E}\sigma\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)-\mathbb{E}\sigma\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[\leqslant\int_{0}^{\infty}\left|\mathbb{P}\left(Z-\frac{t}{B_{n,W}}<w\right)-\mathbb{P}\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}<w\right)\right|dw\] \[=\int_{0}^{\infty}\left|\mathbb{P}\left(Z<\frac{t}{B_{n,W}}+w\right)-\mathbb{P}\left(\frac{S_{n,W}}{B_{n,W}}<\frac{t}{B_{n,W}}+w\right)\right|dw\] \[=\int_{0}^{\infty}\Delta_{n}\left(w+\frac{t}{B_{n,W}}\right)dw\] \[=\int_{t/B_{n,W}}^{\infty}\Delta_{n}(s)ds\quad\left[\text{by the change of variable }s=w+\frac{t}{B_{n,W}}\right]\]
By Theorem 5.1, we know that,
\[\Delta_{n}(s)\leqslant A\sum_{k=1}^{n}\left[\frac{\mathbb{E}W_{k}^{2}\mathbf{ I}(|W_{k}|>(1+|s|)B_{n,W})}{(1+|s|)^{2}B_{n,W}^{2}}+\frac{\mathbb{E}\left|W_{k} \right|^{3}\mathbf{I}(|W_{k}|\leqslant(1+|s|)B_{n,W})}{(1+|s|)^{3}B_{n,W}^{3}}\right]\]
for an absolute constant \(A\). Then, we have
\[\left|\mathbb{E}\sigma\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)-\mathbb{E}\sigma\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[\leqslant\int_{t/B_{n,W}}^{\infty}\Delta_{n}(s)ds\] \[\leqslant A\sum_{k=1}^{n}\int_{t/B_{n,W}}^{\infty}\Bigg{[}\frac{\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|>(1+|s|)B_{n,W})}{(1+|s|)^{2}B_{n,W}^{2}}+\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|\leqslant(1+|s|)B_{n,W})}{(1+|s|)^{3}B_{n,W}^{3}}\Bigg{]}ds\] \[=A\sum_{k=1}^{n}\int_{t/B_{n,W}}^{\infty}\Bigg{[}\frac{\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|>(1+s)B_{n,W})}{(1+s)^{2}B_{n,W}^{2}}+\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|\leqslant(1+s)B_{n,W})}{(1+s)^{3}B_{n,W}^{3}}\Bigg{]}ds,\]
since \(s\geqslant t/B_{n,W}\geqslant 0\) on the domain of integration.
Now, the first term on the right-hand side can be bounded as follows:
\[\int_{t/B_{n,W}}^{\infty}\frac{\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|>(1+s)B_{n,W})}{(1+s)^{2}B_{n,W}^{2}}ds\] \[=\frac{1}{B_{n,W}^{2}}\mathbb{E}\Bigg{[}W_{k}^{2}\mathbf{I}\left(|W_{k}|B_{n,W}^{-1}-1\geq\frac{t}{B_{n,W}}\right)\int_{t/B_{n,W}}^{|W_{k}|B_{n,W}^{-1}-1}\frac{1}{(1+s)^{2}}ds\Bigg{]}\] \[=\frac{1}{B_{n,W}^{2}}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\Bigg{[}-\frac{1}{1+s}\Bigg{]}_{t/B_{n,W}}^{|W_{k}|B_{n,W}^{-1}-1}\] \[=\frac{1}{B_{n,W}^{2}}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\Bigg{[}\frac{1}{1+tB_{n,W}^{-1}}-\frac{B_{n,W}}{|W_{k}|}\Bigg{]}\] \[\leq\frac{1}{B_{n,W}^{2}}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\Bigg{[}1-\frac{B_{n,W}}{|W_{k}|}\Bigg{]}\] \[=\frac{1}{B_{n,W}^{2}}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})-\frac{1}{B_{n,W}}\mathbb{E}\left|W_{k}\right|\mathbf{I}(|W_{k}|\geq t+B_{n,W}).\]
The second term on the right-hand side of the bound on \(|\mathbb{E}\sigma(S_{n,W}/B_{n,W}-t/B_{n,W})-\mathbb{E}\sigma(Z-t/B_{n,W})|\) can be simplified as follows:
\[\int_{t/B_{n,W}}^{\infty}\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|\leq(1+s)B_{n,W})}{(1+s)^{3}B_{n,W}^{3}}ds\] \[=\mathbb{E}\int_{t/B_{n,W}}^{\infty}\frac{\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|\leq(1+s)B_{n,W})}{(1+s)^{3}B_{n,W}^{3}}ds\] \[=\frac{1}{B_{n,W}^{3}}\mathbb{E}\left|W_{k}\right|^{3}\int_{\max\{tB_{n,W}^{-1},\,|W_{k}|B_{n,W}^{-1}-1\}}^{\infty}\frac{1}{(1+s)^{3}}ds\] \[=\frac{1}{B_{n,W}^{3}}\mathbb{E}\left|W_{k}\right|^{3}\Bigg{[}-\frac{1}{2(1+s)^{2}}\Bigg{]}_{\max\{tB_{n,W}^{-1},\,|W_{k}|B_{n,W}^{-1}-1\}}^{\infty}\] \[=\frac{1}{2B_{n,W}^{3}}\mathbb{E}\left|W_{k}\right|^{3}\frac{1}{(1+|W_{k}|\,B_{n,W}^{-1}-1)^{2}}\mathbf{I}(|W_{k}|\,B_{n,W}^{-1}-1\geq tB_{n,W}^{-1})\] \[\qquad+\frac{1}{2B_{n,W}^{3}}\mathbb{E}\left|W_{k}\right|^{3}\frac{1}{(1+tB_{n,W}^{-1})^{2}}\mathbf{I}(|W_{k}|\,B_{n,W}^{-1}-1<tB_{n,W}^{-1})\] \[=\frac{1}{2B_{n,W}}\mathbb{E}\left|W_{k}\right|\mathbf{I}(|W_{k}|\geq t+B_{n,W})+\frac{1}{2B_{n,W}}\mathbb{E}\left|W_{k}\right|^{3}\frac{1}{(t+B_{n,W})^{2}}\mathbf{I}(|W_{k}|<t+B_{n,W})\]
Combining these inequalities, we have
\[\left|\mathbb{E}\sigma\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)-\mathbb{E}\sigma\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[\leqslant A\sum_{k=1}^{n}\int_{tB_{n,W}^{-1}}^{\infty}\Bigg{[}\frac{\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|>(1+s)B_{n,W})}{(1+s)^{2}B_{n,W}^{2}}+\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|\leqslant(1+s)B_{n,W})}{(1+s)^{3}B_{n,W}^{3}}\Bigg{]}ds\] \[\leqslant\frac{A}{B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant t+B_{n,W})-\frac{A}{2B_{n,W}}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|\mathbf{I}(|W_{k}|\geqslant t+B_{n,W})+\] \[\qquad\frac{A}{2B_{n,W}}\sum_{k=1}^{n}\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})}{(t+B_{n,W})^{2}}\] \[\leqslant\frac{A}{B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant t+B_{n,W})+\frac{A}{2B_{n,W}}\sum_{k=1}^{n}\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})}{(t+B_{n,W})^{2}}\]
Finally, using the fact that \(\sigma(\cdot)\) is a positively homogeneous function (i.e., \(\sigma(cx)=c\sigma(x)\) for all \(c>0\)), we obtain
\[\left|\mathbb{E}\sigma(S_{n,W}-t)-\mathbb{E}\sigma(B_{n,W}Z-t)\right|\] \[=\left|\mathbb{E}\max\{0,S_{n,W}-t\}-\mathbb{E}\max\{0,B_{n,W}Z-t\}\right|\] \[=\left|\mathbb{E}B_{n,W}\max\left\{0,\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right\}-\mathbb{E}B_{n,W}\max\left\{0,Z-\frac{t}{B_{n,W}}\right\}\right|\] \[=B_{n,W}\left|\mathbb{E}\sigma\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)-\mathbb{E}\sigma\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[\leqslant\frac{A}{B_{n,W}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant t+B_{n,W})+\frac{A}{2}\sum_{k=1}^{n}\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})}{(t+B_{n,W})^{2}}\quad\text{[using the bound derived above]}\]
This concludes the proof.
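As a quick numerical companion to Theorem 5.2 (a Monte Carlo sketch with illustrative choices; the constant \(A\) is not estimated), one can compare \(\mathbb{E}\sigma(S_{n,W}-t)\) and \(\mathbb{E}\sigma(B_{n,W}Z-t)\) directly:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 100, 100000
W = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(reps, n))  # mean 0, variance 1
SnW = W.sum(axis=1)
BnW = np.sqrt(n)
Z = rng.standard_normal(reps)

relu = lambda x: np.maximum(0.0, x)
for t in [0.0, 5.0, 15.0]:
    gap = abs(relu(SnW - t).mean() - relu(BnW * Z - t).mean())
    # The gaps are small at this sample size; Theorem 5.2 bounds them
    # deterministically in terms of truncated moments of the W_k.
    print(t, gap)
```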
**Theorem 5.3**.: _Let \(\sigma(\cdot)\) denote the ReLU function, i.e., \(\sigma(x)=\max\{0,x\}\). Let \(W_{1},W_{2},\ldots,W_{n}\) be independent univariate random variables such that \(\mathbb{E}W_{k}=0\) and \(\sigma_{k}^{2}=\mathbb{E}W_{k}^{2}<\infty\), \(\forall k=1,2,\ldots,n\). Let_
\[S_{n,W}\coloneqq\sum_{k=1}^{n}W_{k},\quad B_{n,W}^{2}\coloneqq\sum_{k=1}^{n} \sigma_{k}^{2}.\]
_Let \(Z\) be a standard normal random variable. Then, for \(t\geqslant 0\),_
\[\left|\mathbb{E}\sigma^{2}(S_{n,W}-t)-\mathbb{E}\sigma^{2}(B_{n,W }Z-t)\right|\leqslant 2A\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}| \geqslant t+B_{n,W})\ln(|W_{k}|)\] \[\quad+2A\left[1+\ln(1+tB_{n,W}^{-1})\right]\sum_{k=1}^{n} \mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant t+B_{n,W})\] \[\quad+2\frac{A}{B_{n,W}(1+tB_{n,W}^{-1})}\sum_{k=1}^{n}\mathbb{E} \left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})\]
_for an absolute constant \(A\)._
Proof.: For \(t\geq 0\),
\[\left|\mathbb{E}\sigma^{2}\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)-\mathbb{E}\sigma^{2}\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[=\left|\int_{0}^{\infty}\left[\mathbb{P}\left(\sigma^{2}\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)\geq w\right)-\mathbb{P}\left(\sigma^{2}\left(Z-\frac{t}{B_{n,W}}\right)\geq w\right)\right]dw\right|\] \[=\left|\int_{0}^{\infty}\left[\mathbb{P}\left(\sigma^{2}\left(Z-\frac{t}{B_{n,W}}\right)<w\right)-\mathbb{P}\left(\sigma^{2}\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)<w\right)\right]dw\right|\] \[\leq\int_{0}^{\infty}\left|\mathbb{P}\left(\sigma^{2}\left(Z-\frac{t}{B_{n,W}}\right)<w\right)-\mathbb{P}\left(\sigma^{2}\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right)<w\right)\right|dw\] \[=\int_{0}^{\infty}\left|\mathbb{P}\left(\max\left\{0,Z-\frac{t}{B_{n,W}}\right\}^{2}<w\right)-\mathbb{P}\left(\max\left\{0,\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B_{n,W}}\right\}^{2}<w\right)\right|dw\]
Since the integral runs from \(0\) to \(\infty\), we have \(w>0\). So, for a random variable \(X\), \(\sqrt{w}>\max\{0,X\}\) means that \(w>0\) and \(\sqrt{w}>X\); as the lower limit of the integral already forces \(w>0\), the event \(\{w>0\text{ and }\sqrt{w}>X\}\) reduces to \(\{\sqrt{w}>X\}\).
\[\left|\mathbb{E}\sigma^{2}\left(\frac{S_{n,W}}{B_{n,W}}-\frac{t}{B _{n,W}}\right)-\mathbb{E}\sigma^{2}\left(Z-\frac{t}{B_{n,W}}\right)\right|\] \[=\int_{0}^{\infty}\Delta_{n}\left(\frac{t}{B_{n,W}}+\sqrt{w} \right)dw\] \[=2\int_{t/B_{n,W}}^{\infty}\Delta_{n}(s)\Bigg{(}s-\frac{t}{B_{n,W }}\Bigg{)}ds\] \[\text{[by using the change of variables }s=\frac{t}{B_{n,W}}+\sqrt{w}\big{]}\] \[=2\int_{t/B_{n,w}}^{\infty}s\Delta_{n}(s)ds-2\frac{t}{B_{n,W}} \int_{t/B_{n,w}}^{\infty}\Delta_{n}(s)ds\] \[\leq 2\int_{t/B_{n,w}}^{\infty}s\Delta_{n}(s)ds\quad\left[\because \Delta_{n}(s)>0,\forall s,t\geq 0,B_{n,W}>0\right]\]
By Theorem 5.1, we know that,
\[\Delta_{n}(s)\leq A\sum_{k=1}^{n}\left[\frac{\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{ k}|>(1+|s|)B_{n,W})}{(1+|s|)^{2}B_{n,W}^{2}}+\frac{\mathbb{E}\left|W_{k} \right|^{3}\mathbf{I}(|W_{k}|\leq(1+|s|)B_{n,W})}{(1+|s|)^{3}B_{n,W}^{3}}\right]\]
for an absolute constant \(A\). We can see that this bound on \(\Delta_{n}(s)\) is an even function of \(s\). So, continuing from the preceding display, we can write
\[2\int_{tB_{n,W}^{-1}}^{\infty}s\Delta_{n}(s)ds\leq 2A\sum_{k=1}^{n}\int_{tB_{n,W}^{-1}}^{\infty}s\left[\frac{\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|>(1+s)B_{n,W})}{(1+s)^{2}B_{n,W}^{2}}+\frac{\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|\leq(1+s)B_{n,W})}{(1+s)^{3}B_{n,W}^{3}}\right]ds\] \[\leq 2\frac{A}{B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\int_{tB_{n,W}^{-1}}^{|W_{k}|B_{n,W}^{-1}-1}\frac{1}{1+s}ds+2\frac{A}{B_{n,W}^{3}}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|^{3}\int_{\max\{tB_{n,W}^{-1},|W_{k}|B_{n,W}^{-1}-1\}}^{\infty}\frac{1}{(1+s)^{2}}ds\] \[\qquad\left[\text{using }\frac{s}{(1+s)^{2}}\leq\frac{1}{1+s}\text{ and }\frac{s}{(1+s)^{3}}\leq\frac{1}{(1+s)^{2}}\right]\] \[\leq 2\frac{A}{B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\ln(|W_{k}|)+2\frac{A\ln(1+tB_{n,W}^{-1})}{B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})+\] \[\qquad 2\frac{A}{B_{n,W}^{3}(1+tB_{n,W}^{-1})}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})+2\frac{A}{B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\]
Finally, using the fact that \(\sigma^{2}(cx)=c^{2}\sigma^{2}(x)\) for all \(c>0\), we obtain
\[\left|\mathbb{E}\sigma^{2}(S_{n,W}-t)-\mathbb{E}\sigma^{2}(B_{n,W }Z-t)\right|=\left|\mathbb{E}\max\{0,S_{n,W}-t\}^{2}-\mathbb{E}\max\{0,B_{n,W}Z -t\}^{2}\right|\] \[=B_{n,W}^{2}\left|\mathbb{E}\sigma^{2}\left(\frac{S_{n,W}}{B_{n,W }}-\frac{t}{B_{n,W}}\right)-\mathbb{E}\sigma^{2}\left(Z-\frac{t}{B_{n,W}} \right)\right|\] \[\leq 2A\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_ {n,W})\ln(|W_{k}|)+2A\ln(1+tB_{n,W}^{-1})\sum_{k=1}^{n}\mathbb{E}W_{k}^{2} \mathbf{I}(|W_{k}|\geq t+B_{n,W})+\] \[2\frac{A}{B_{n,W}(1+tB_{n,W}^{-1})}\sum_{k=1}^{n}\mathbb{E} \left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})+2A\sum_{k=1}^{n}\mathbb{E}W_{k }^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\] \[=2A\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})\ln(|W_{k}|)+2A\left[1+\ln(1+tB_{n,W}^{-1})\right]\sum_{k=1}^{n}\mathbb{E}W _{k}^{2}\mathbf{I}(|W_{k}|\geq t+B_{n,W})+\] \[2\frac{A}{B_{n,W}(1+tB_{n,W}^{-1})}\sum_{k=1}^{n}\mathbb{E}\left|W _{k}\right|^{3}\mathbf{I}(|W_{k}|<t+B_{n,W})\]
This concludes the proof.
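A similar sketch for the squared ReLU of Theorem 5.3 (again illustrative, with the constant \(A\) left implicit):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 100, 100000
W = rng.choice([-1.0, 1.0], size=(reps, n))  # Rademacher: mean 0, variance 1
SnW, BnW = W.sum(axis=1), np.sqrt(n)
Z = rng.standard_normal(reps)

sq_relu = lambda x: np.maximum(0.0, x) ** 2
for t in [0.0, 10.0, 25.0]:
    gap = abs(sq_relu(SnW - t).mean() - sq_relu(BnW * Z - t).mean())
    print(t, gap)  # small gaps, consistent with the bound of Theorem 5.3
```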
### Bounds for single-layer neural networks
Motivation. In the sections that follow, we derive bounds for functions that admit an integral representation of the forms given in Klusowski and Barron (2018). We provide the statements and proofs of the existence results, together with bounds for functions whose inner parameters have bounded \(l_{1}\) norm. As mentioned in section 1, our main goal is to bound \(|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)|\) for arbitrary functions \(f\), and neural networks are one of the most prominent classes of functions with the requisite density properties. The single-layer neural networks that we consider are dense in \(L_{2}\) and can be used to approximate any continuous function.
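The integral representation in Theorem 5.4 below is exactly the infinite-width limit of a single-layer ReLU network; sampling \((t_{i},a_{i})\sim P\) and averaging gives a finite network. The following sketch (with illustrative choices of \(P\), \(\eta\), and \(v\), which are ours) makes this concrete for \(s=2\):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 3, 5000   # input dimension, number of hidden units

# Illustrative surrogate for P: t uniform on [0,1], a uniform on the l1 sphere;
# eta is a random sign pattern and v a scale factor (all choices are ours).
ts = rng.uniform(0.0, 1.0, size=m)
a = rng.standard_normal((m, d))
a /= np.abs(a).sum(axis=1, keepdims=True)   # enforce ||a_i||_1 = 1
eta = rng.choice([-1.0, 1.0], size=m)
v = 2.0

def f_hat(x):
    """Finite single-layer ReLU network: Monte Carlo estimate of
    f(x) = v * E_P[eta(t, a) (a^T x - t)_+]  (the s = 2 case)."""
    return v * np.mean(eta * np.maximum(0.0, a @ x - ts))

print(f_hat(np.zeros(d)))       # 0.0: every unit is inactive at the origin
print(f_hat(0.5 * np.ones(d)))  # a generic evaluation on D = [-1, 1]^d
```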
**Theorem 5.4**.: _Suppose \(X_{1},\ldots,X_{n}\) is a sequence of mean zero independent \(d\)-dimensional random vectors. Set \(S_{n,X}=n^{-1/2}\sum_{i=1}^{n}X_{i}\). Let \(Z\) be a \(d\)-dimensional Gaussian random vector with mean zero and variance-covariance matrix given by \(\Sigma=\text{Var}(S_{n,X})\). Let \(f\) be a function that admits an integral representation of the form_
\[f(x)=v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a^{\top}x-t)_{+}^ {s-1}dP(t,a),\]
_for \(x\in D=[-1,1]^{d}\) and \(s\in\{2,3\}\), where \(P\) is a probability measure on \([0,1]\times\{a:\left\|a\right\|_{1}=1\}\) and \(\eta(t,a)\) is either \(-1\) or \(+1\) for all \(t,a\). Then there exists an universal constant \(A>0\) such that_
* _if_ \(s=2\)_, then_ \[\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right| \leqslant\frac{A\left|v\right|}{n}\Bigg{\{}\sup_{a:\left\|a\right\|_{1}=1}\Bigg{[}\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}(a^{\top}X_{k})^{2}\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg{]}+\] \[\sup_{a:\left\|a\right\|_{1}=1}\Bigg{[}\frac{1}{2\left\|a\right\|_{\Sigma}^{2}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg{]}\Bigg{\}},\] _and_
* _if_ \(s=3\)_, then_ \[\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right| \leqslant 2\frac{A\left|v\right|}{n}\Bigg{\{}\sup_{a:\left\|a\right\|_{1}=1}\sum_{k=1}^{n}\mathbb{E}\Bigg{[}(a^{\top}X_{k})^{2}\ln\left(\frac{e\left|a^{\top}X_{k}\right|}{\left\|a\right\|_{\Sigma}}\right)\Bigg{]}\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma})+\] \[\sup_{a:\left\|a\right\|_{1}=1}\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg{\}}.\]
Proof.: Let \(W_{i}=a^{\top}X_{i}/\sqrt{n}\) for \(i=1,\ldots,n\). We define the following notations:
\[S_{n,X}\coloneqq\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i},\quad\Sigma=\text{Var} \left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\right),\quad\tilde{S}_{n,W}:=\sum_ {i=1}^{n}W_{i},\quad B_{n,W}^{2}=\sum_{i=1}^{n}\text{Var}(W_{i}).\]
We have,
\[a^{\top}S_{n,X}=a^{\top}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\right)= \sum_{i=1}^{n}\frac{a^{\top}X_{i}}{\sqrt{n}}=\sum_{i=1}^{n}W_{i}=\tilde{S}_{n,W}\]
Also,
\[Z\sim\mathcal{N}(0,\mathrm{Var}(S_{n,X}))\] \[\implies a^{\top}Z\sim\mathcal{N}\left(0,a^{\top}\mathrm{Var}\left(\frac{1}{\sqrt{n}}\sum\limits_{i=1}^{n}X_{i}\right)a\right)\] \[\implies a^{\top}Z\sim\mathcal{N}\left(0,\frac{1}{n}\sum\limits_{i=1}^{n}a^{\top}\mathrm{Var}(X_{i})a\right)\quad\text{[}\because X_{i}\text{'s are independent]}\] \[\implies a^{\top}Z\sim\mathcal{N}\left(0,\sum\limits_{i=1}^{n}\mathrm{Var}\left(\frac{a^{\top}X_{i}}{\sqrt{n}}\right)\right)\] \[\implies a^{\top}Z\sim\mathcal{N}\left(0,\sum\limits_{i=1}^{n}\mathrm{Var}(W_{i})\right)\] \[\implies a^{\top}Z\sim\mathcal{N}(0,B_{n,W}^{2})\]
Now,
\[\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right| =\left|v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)\big{[}\mathbb{E}(a^{\top}S_{n,X}-t)_{+}^{s-1}-\mathbb{E}(a^{\top}Z-t)_{+}^{s-1}\big{]}dP(t,a)\right|\] \[=\left|v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)\big{[}\mathbb{E}\sigma(a^{\top}S_{n,X}-t)^{s-1}-\mathbb{E}\sigma(a^{\top}Z-t)^{s-1}\big{]}dP(t,a)\right|\] \[\leqslant\left|v\right|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left|\eta(t,a)\right|\left|\mathbb{E}\sigma(a^{\top}S_{n,X}-t)^{s-1}-\mathbb{E}\sigma(a^{\top}Z-t)^{s-1}\right|dP(t,a)\] \[\qquad\left[\eta(t,a)\in\{-1,1\}\implies\left|\eta(t,a)\right|=1\ \forall t\in[0,1],\,a\in\{a:\left\|a\right\|_{1}=1\}\right]\] \[=\left|v\right|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left|\mathbb{E}\sigma(a^{\top}S_{n,X}-t)^{s-1}-\mathbb{E}\sigma(a^{\top}Z-t)^{s-1}\right|dP(t,a)\]
For \(s=2\), continuing from the preceding display, we have,
\[\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right| \leqslant|v|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left|\mathbb{E}\sigma(a^{\top}S_{n,X}-t)-\mathbb{E}\sigma(a^{\top}Z-t)\right|dP(t,a)\] \[=|v|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left|\mathbb{E}\sigma(\tilde{S}_{n,W}-t)-\mathbb{E}\sigma(a^{\top}Z-t)\right|dP(t,a)\] \[\leqslant|v|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left[\frac{A}{B_{n,W}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant B_{n,W})+\frac{A}{2B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<B_{n,W})\right]dP(t,a)\] \[\qquad\left[\text{using Theorem 5.2 and }t\geqslant 0\right]\] \[\leqslant A|v|\sup_{a:\left\|a\right\|_{1}=1}\left[\frac{1}{B_{n,W}}\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\mathbf{I}(|W_{k}|\geqslant B_{n,W})+\frac{1}{2B_{n,W}^{2}}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(|W_{k}|<B_{n,W})\right]\] \[=A|v|\sup_{a:\left\|a\right\|_{1}=1}\left[\frac{1}{\sqrt{a^{\top}\Sigma a}}\sum_{k=1}^{n}\mathbb{E}\frac{(a^{\top}X_{k})^{2}}{n}\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{na^{\top}\Sigma a})+\frac{1}{2a^{\top}\Sigma a}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{n\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{na^{\top}\Sigma a})\right]\] \[=\frac{A|v|}{n}\sup_{a:\left\|a\right\|_{1}=1}\left[\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}(a^{\top}X_{k})^{2}\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma})+\frac{1}{2\left\|a\right\|_{\Sigma}^{2}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\right]\] \[\leqslant\frac{A|v|}{n}\Bigg{\{}\sup_{a:\left\|a\right\|_{1}=1}\left[\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}(a^{\top}X_{k})^{2}\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma})\right]+\sup_{a:\left\|a\right\|_{1}=1}\left[\frac{1}{2\left\|a\right\|_{\Sigma}^{2}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\right]\Bigg{\}}\]
where \(A\) is an absolute constant, \(\Sigma=\operatorname{Var}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\right)\) and \(\left\|a\right\|_{\Sigma}=\sqrt{a^{\top}\Sigma a}\).
For \(s=3\), continuing from the same display, we have,
\[\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right| \leqslant\left|v\right|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left|\mathbb{E}\sigma^{2}(a^{\top}S_{n,X}-t)-\mathbb{E}\sigma^{2}(a^{\top}Z-t)\right|dP(t,a)\] \[=\left|v\right|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left|\mathbb{E}\sigma^{2}(\tilde{S}_{n,W}-t)-\mathbb{E}\sigma^{2}(a^{\top}Z-t)\right|dP(t,a)\] \[\leqslant\left|v\right|\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\left\{2A\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\left[\ln\frac{\left|W_{k}\right|}{B_{n,W}}+1\right]\mathbf{I}(\left|W_{k}\right|\geqslant B_{n,W})+2\frac{A}{B_{n,W}}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(\left|W_{k}\right|<B_{n,W})\right\}dP(t,a)\quad\left[\text{using Theorem 5.3 and }t\geqslant 0\right]\] \[=2A\left|v\right|\sup_{a:\left\|a\right\|_{1}=1}\left\{\sum_{k=1}^{n}\mathbb{E}W_{k}^{2}\left[\ln\frac{\left|W_{k}\right|}{B_{n,W}}+1\right]\mathbf{I}(\left|W_{k}\right|\geqslant B_{n,W})+\frac{1}{B_{n,W}}\sum_{k=1}^{n}\mathbb{E}\left|W_{k}\right|^{3}\mathbf{I}(\left|W_{k}\right|<B_{n,W})\right\}\] \[\leqslant 2A\left|v\right|\sup_{a:\left\|a\right\|_{1}=1}\left\{\sum_{k=1}^{n}\mathbb{E}\frac{(a^{\top}X_{k})^{2}}{n}\left[\ln\frac{\left|a^{\top}X_{k}\right|}{\left\|a\right\|_{\Sigma}}+1\right]\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma})+\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{n\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\right\}\] \[\leqslant 2\frac{A\left|v\right|}{n}\Bigg{\{}\sup_{a:\left\|a\right\|_{1}=1}\sum_{k=1}^{n}\mathbb{E}\left[(a^{\top}X_{k})^{2}\ln\left(\frac{e\left|a^{\top}X_{k}\right|}{\left\|a\right\|_{\Sigma}}\right)\right]\mathbf{I}(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma})+\sup_{a:\left\|a\right\|_{1}=1}\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg{\}}\]
where \(A\) is an absolute constant, \(\Sigma=\operatorname{Var}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\right)\) and \(\left\|a\right\|_{\Sigma}=\sqrt{a^{\top}\Sigma a}\).
**Theorem 5.5**.: _Suppose \(X_{1},\ldots,X_{n}\) is a sequence of mean zero independent \(d\)-dimensional random vectors. Set \(S_{n,X}=n^{-1/2}\sum_{i=1}^{n}X_{i}\). Let \(Z\) be a \(d\)-dimensional Gaussian random vector with mean zero and variance-covariance matrix given by \(\Sigma=\mathrm{Var}(S_{n,X})\). Let \(D=[-1,1]^{d}\). Suppose \(f:D\longrightarrow\mathbb{R}\) admits a Fourier representation \(f(x)=\int_{\mathbb{R}^{d}}e^{ix^{\top}\omega}\mathcal{F}(f)(\omega)d\omega\) and_
\[v_{f,2}=\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{2}\left|\mathcal{F}(f) (\omega)\right|d\omega<\infty.\]
_Then, \(\exists\) a probability measure \(P\) on \([0,1]\times\{a:\left\|a\right\|_{1}=1\}\), \(\eta\in\{\pm 1\}\), \(s=2\) and \(v\) with \(|v|\leqslant 2v_{f,2}\), such that,_
\[f(x)-f(0)-x\cdot\nabla f(0)=v\int_{[0,1]\times\{a:|a|=1\}}\eta(t,a)(a\cdot x-t)_{ +}^{s-1}dP(t,a)\quad\forall x\in D.\]
_and hence, we have,_
\[|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)| \leqslant 2\frac{Av_{f,2}}{n}\Bigg{\{}\sup_{a:|a|=1}\left[\frac{1}{ \left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}(a^{\top}X_{k})^{2}\mathbf{ I}\left(\left|a^{\top}X_{k}\right|\geqslant\sqrt{n}\left\|a\right\|_{\Sigma} \right)\right]\] \[+\sup_{a:|a|=1}\left[\frac{1}{2\left\|a\right\|_{\Sigma}^{2}}\sum _{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I} \left(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma}\right) \right]\Bigg{\}}\]
_for an absolute constant \(A\)._
Proof.: For \(|z|\leqslant c\), we have the identity,
\[-\int_{0}^{c}\left[(z-u)_{+}e^{iu}+(-z-u)_{+}e^{-iu}\right]du=e^{iz}-iz-1\]
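This identity is easy to verify numerically before specializing the parameters; a minimal sketch (quadrature step and test values are ours):

```python
import numpy as np

def check_identity(z, c, num=200001):
    """Compare -int_0^c [(z-u)_+ e^{iu} + (-z-u)_+ e^{-iu}] du (trapezoid rule)
    with e^{iz} - iz - 1, for |z| <= c."""
    u = np.linspace(0.0, c, num)
    integrand = (np.maximum(0.0, z - u) * np.exp(1j * u)
                 + np.maximum(0.0, -z - u) * np.exp(-1j * u))
    lhs = -np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
    rhs = np.exp(1j * z) - 1j * z - 1.0
    return abs(lhs - rhs)

print(check_identity(0.7, 2.0))    # ~0 up to quadrature error
print(check_identity(-1.3, 2.0))   # ~0 as well
```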
Taking \(c=\left\|\omega\right\|_{1}\), \(z=\omega^{\top}x\), \(a=a(\omega)=\frac{\omega}{\left\|\omega\right\|_{1}}\) and \(u=\left\|\omega\right\|_{1}t\), \(t\geqslant 0\), we have
\[-\left\|\omega\right\|_{1}^{2}\int_{0}^{1}\left[(a^{\top}x-t)_{+}e^{i\left\|\omega\right\|_{1}t}+(-a^{\top}x-t)_{+}e^{-i\left\|\omega\right\|_{1}t}\right]dt=e^{i\omega^{\top}x}-i\omega^{\top}x-1\]
Multiplying the above equation by \(\mathcal{F}(f)(\omega)=e^{ib(\omega)}\left|\mathcal{F}(f)(\omega)\right|\), where \(b(\omega)\) is the angle made by \(\mathcal{F}(f)(\omega)\) with the real axis, we have,
\[-\left\|\omega\right\|_{1}^{2}e^{ib(\omega)}\left|\mathcal{F}(f)(\omega) \right|\int_{0}^{1}\left[(a^{\top}x-t)_{+}e^{i\left\|\omega\right\|_{1}t}+(-a^ {\top}x-t)_{+}e^{-i\left\|\omega\right\|_{1}t}\right]dt=\mathcal{F}(f)(\omega )(e^{i\omega^{\top}x}-i\omega^{\top}x-1)\]
Integrating the above over \(\omega\in\mathbb{R}^{d}\), we have,
\[-\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{2}e^{ib(\omega)}\left| \mathcal{F}(f)(\omega)\right|\int_{0}^{1}\left[(a^{\top}x-t)_{+}e^{i\left\| \omega\right\|_{1}t}+(-a^{\top}x-t)_{+}e^{-i\left\|\omega\right\|_{1}t}\right] dtd\omega=\int_{\mathbb{R}^{d}}\mathcal{F}(f)(\omega)(e^{i\omega^{\top}x}-i\omega^{ \top}x-1)d\omega.\]
The R.H.S. of the above equation can be written as
\[\int_{\mathbb{R}^{d}}\left[\mathcal{F}(f)(\omega)e^{i\omega^{\top}x}-\mathcal{ F}(f)(\omega)i\omega^{\top}x-\mathcal{F}(f)(\omega)\right]d\omega=f(x)-x^{\top} \nabla f(0)-f(0)\]
Now, we consider the L.H.S. of the same equation and show that it satisfies the condition of Fubini's Theorem.
\[-\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{2}e^{ib(\omega)}\left| \mathcal{F}(f)(\omega)\right|\int_{0}^{1}\left[(a^{\top}x-t)_{+}e^{i\left\| \omega\right\|_{1}t}+(-a^{\top}x-t)_{+}e^{-i\left\|\omega\right\|_{1}t}\right] dtd\omega\]
We wish to show that
\[\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{2}\left|\mathcal{F}(f)(\omega)\right|\int_{0}^{1}\left|(a^{\top}x-t)_{+}e^{i\left\|\omega\right\|_{1}t}+(-a^{\top}x-t)_{+}e^{-i\left\|\omega\right\|_{1}t}\right|dtd\omega<\infty\]
Now,
\[\int_{0}^{1}\left|(a^{\top}x-t)_{+}e^{i|\omega|_{1}t}+(-a^{\top}x-t) _{+}e^{-i|\omega|_{1}t}\right|dt \leqslant\int_{0}^{1}\left|(a^{\top}x-t)_{+}e^{i|\omega|_{1}t} \right|dt+\int_{0}^{1}\left|(-a^{\top}x-t)_{+}e^{-i|\omega|_{1}t}\right|dt\] \[=\int_{0}^{1}(a^{\top}x-t)_{+}\left|e^{i|\omega|_{1}t}\right|dt+ \int_{0}^{1}(-a^{\top}x-t)_{+}\left|e^{-i|\omega|_{1}t}\right|dt\] \[=\int_{0}^{1}(a^{\top}x-t)_{+}dt+\int_{0}^{1}(-a^{\top}x-t)_{+}dt\]
Now, we see that,
\[\int_{0}^{1}(a^{\top}x-t)_{+}dt \leqslant\int_{0}^{\infty}(a^{\top}x-t)_{+}dt\] \[=\int_{0}^{\infty}(a^{\top}x-t)\mathbf{I}\{a^{\top}x\geqslant t\}dt\] \[=\int_{0}^{a^{\top}x}(a^{\top}x-t)dt\,\mathbf{I}\{a^{\top}x\geqslant 0\}\] \[=\left[-\frac{(a^{\top}x-t)^{2}}{2}\right]_{0}^{a^{\top}x}\mathbf{I}\{a^{\top}x\geqslant 0\}\] \[=\frac{(a^{\top}x)^{2}}{2}\mathbf{I}\{a^{\top}x\geqslant 0\}=\frac{(a^{\top}x)_{+}^{2}}{2}\]
and
\[\int_{0}^{1}(-a^{\top}x-t)_{+}dt \leqslant\int_{0}^{\infty}(-a^{\top}x-t)_{+}dt\] \[=\int_{0}^{\infty}(-a^{\top}x-t)\mathbf{I}\{-a^{\top}x\geqslant t\}dt\] \[=\int_{0}^{-a^{\top}x}(-a^{\top}x-t)dt\,\mathbf{I}\{-a^{\top}x\geqslant 0\}\] \[=\left[-\frac{(-a^{\top}x-t)^{2}}{2}\right]_{0}^{-a^{\top}x}\mathbf{I}\{-a^{\top}x\geqslant 0\}\] \[=\frac{(-a^{\top}x)^{2}}{2}\mathbf{I}\{-a^{\top}x\geqslant 0\}=\frac{(-a^{\top}x)_{+}^{2}}{2}\]
Thus, we have,
\[\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{2}\left|\mathcal{F}(f)(\omega)\right|\int_{0}^{1}\left|(a^{\top}x-t)_{+}e^{i\left\|\omega\right\|_{1}t}+(-a^{\top}x-t)_{+}e^{-i\left\|\omega\right\|_{1}t}\right|dtd\omega\] \[\leqslant\left[\frac{(a^{\top}x)_{+}^{2}}{2}+\frac{(-a^{\top}x)_{+}^{2}}{2}\right]\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{2}\left|\mathcal{F}(f)(\omega)\right|d\omega\] \[=\frac{(a^{\top}x)^{2}}{2}v_{f,2}\leqslant\frac{\left\|x\right\|_{\infty}^{2}}{2}v_{f,2}<\infty,\]
since \(|e^{\pm i\left\|\omega\right\|_{1}t}|=1\), \((a^{\top}x)_{+}^{2}+(-a^{\top}x)_{+}^{2}=(a^{\top}x)^{2}\), and \(|a^{\top}x|\leqslant\left\|a\right\|_{1}\left\|x\right\|_{\infty}=\left\|x\right\|_{\infty}\).
Thus, we can say that the L.H.S. satisfies the condition of Fubini's Theorem. So, interchanging the order of integration, we have,
\[-\int_{\mathbb{R}^{d}}\int_{0}^{1}\left\|\omega\right\|_{1}^{2}e^{ib(\omega)} \left|\mathcal{F}(f)(\omega)\right|\left[(a^{\top}x-t)_{+}e^{i|\omega|_{1}t}+( -a^{\top}x-t)_{+}e^{-i|\omega|_{1}t}\right]dtd\omega=\int_{\mathbb{R}^{d}} \mathcal{F}(f)(\omega)(e^{i\omega^{\top}x}-i\omega^{\top}x-1)d\omega.\]
Taking \(g(t,\omega)=-[(a^{\top}x-t)_{+}\cos(\left\|\omega\right\|_{1}t+b(\omega))+(-a^ {\top}x-t)_{+}\cos(\left\|\omega\right\|_{1}t-b(\omega))]\left\|\omega\right\| _{1}^{2}|\mathcal{F}(f)(\omega)|\), we can write,
\[\int_{\mathbb{R}^{d}}\int_{0}^{1}g(t,\omega)dtd\omega =\int_{\mathbb{R}^{d}}\mathcal{F}(f)(\omega)(e^{i\omega^{\top}x}- i\omega^{\top}x-1)d\omega\] \[\implies\int_{\mathbb{R}^{d}}\int_{0}^{1}g(t,\omega)dtd\omega =f(x)-x^{\top}\nabla f(0)-f(0)\]
Consider the probability measure on \([0,1]\times\mathbb{R}^{d}\) defined by
\[dP^{\prime}(t,\omega)=\frac{1}{v}\left|\cos(\left\|\omega\right\|_{1}t+b(\omega))\right|\left\|\omega\right\|_{1}^{2}\left|\mathcal{F}(f)(\omega)\right|dtd\omega\]
where, \(v=\int_{\mathbb{R}^{d}}\int_{0}^{1}[\left|\cos(\left\|\omega\right\|_{1}t+b( \omega))\right|+\left|\cos(\left\|\omega\right\|_{1}t-b(\omega))\right|]\left\| \omega\right\|_{1}^{2}|\mathcal{F}(f)(\omega)|\,dtd\omega\leqslant 2v_{f,2}\).
Define a function \(h(t,a)(x)=(a^{\top}x-t)_{+}\eta(t,\omega)\), where \(\eta(t,\omega)=-\operatorname{sgn}\cos(\left\|\omega\right\|_{1}t+b(\omega))\). We note that \(h(t,a)(x)\) is of the form \(\pm(a^{\top}x-t)_{+}=\eta^{\prime}(t,\omega)(a^{\top}x-t)_{+}\), where \(\eta^{\prime}(t,\omega)\in\{\pm 1\}\). Thus, we see that,
\[f(x)-f(0)-\nabla f(0)^{\top}x =v\int_{[0,1]\times\mathbb{R}^{d}}\eta^{\prime}(t,\omega)(a^{ \top}x-t)_{+}dP^{\prime}(t,\omega)\] \[=v\int_{[0,1]\times\{a:\left|a\right|=1\}}\eta(t,a)(a^{\top}x-t)_{ +}dP(t,a)\quad\left[\begin{array}{c}\because a(\omega)=\frac{\omega}{\left\| \omega\right\|_{1}}\end{array}\right]\]
where, \(\eta^{\prime}(t,a(\omega))=\eta^{\prime}(t,\omega)\in\{\pm 1\}\). Thus, we have proved that, \(\exists\) a probability measure \(P\) on \([0,1]\times\{a:\left\|a\right\|=1\}\), \(\eta\in\{\pm 1\}\), \(s=2\) and \(v\) such that \(|v|\leqslant 2v_{f,2}\), such that,
\[f(x)-f(0)-x\cdot\nabla f(0)=v\int_{[0,1]\times\{a:\left|a\right|=1\}}\eta(t,a)( a\cdot x-t)_{+}^{s-1}dP(t,a)\quad\forall x\in D.\]
Then, we have,
\[\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z) =\mathbb{E}\Bigg{[}f(0)+\nabla f(0)^{\top}S_{n,X}+v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a\cdot S_{n,X}-t)_{+}^{s-1}dP(t,a)\Bigg{]}-\] \[\qquad\mathbb{E}\Bigg{[}f(0)+\nabla f(0)^{\top}Z+v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a\cdot Z-t)_{+}^{s-1}dP(t,a)\Bigg{]}\] \[=\nabla f(0)^{\top}\mathbb{E}(S_{n,X}-Z)+\mathbb{E}g(S_{n,X})-\mathbb{E}g(Z)\] \[=\mathbb{E}g(S_{n,X})-\mathbb{E}g(Z)\] \[\left[\because\mathbb{E}S_{n,X}=\mathbb{E}Z=0\implies\nabla f(0)^{\top}\mathbb{E}(S_{n,X}-Z)=0\right]\]
where, \(g\) is a function that admits a representation of the form \(g(x)=v\int_{[0,1]\times\{a:\left\lvert a\right\rvert=1\}}\eta(t,a)(a\cdot x-t) _{+}^{s-1}dP(t,a)\).
Then, we have,
\[\left\lvert\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right\rvert =\left\lvert\mathbb{E}g(S_{n,X})-\mathbb{E}g(Z)\right\rvert\] \[\leq\frac{A\left\lvert v\right\rvert}{n}\Bigg{\{}\sup_{a:\left\lvert a \right\rvert_{1}=1}\left[\frac{1}{\left\lVert a\right\rvert_{\Sigma}}\sum_{k =1}^{n}\mathbb{E}(a^{\top}X_{k})^{2}\mathbf{I}(\left\lvert a^{\top}X_{k} \right\rvert\geq\sqrt{n}\left\lVert a\right\rvert_{\Sigma})\right]+\] \[\sup_{a:\left\lvert a\right\rvert_{1}=1}\left[\frac{1}{2\left\lVert a \right\rvert_{\Sigma}^{2}}\sum_{k=1}^{n}\mathbb{E}\frac{\left\lvert a^{\top}X_ {k}\right\rvert^{3}}{\sqrt{n}}\mathbf{I}(\left\lvert a^{\top}X_{k}\right\rvert <\sqrt{n}\left\lVert a\right\rvert_{\Sigma})\right]\Bigg{\}}\] \[\left[\text{using the bound for }s=2\text{ in }theorem\text{ \ref{theorem:bound
**Theorem 5.6**.: _Suppose \(X_{1},\ldots,X_{n}\) is a sequence of mean zero independent \(d\)-dimensional random vectors. Set \(S_{n,X}=n^{-1/2}\sum_{i=1}^{n}X_{i}\). Let \(Z\) be a \(d\)-dimensional Gaussian random vector with mean zero and variance-covariance matrix \(\Sigma=\mathrm{Var}(S_{n,X})\), and let \(D=[-1,1]^{d}\). Suppose \(f:D\longrightarrow\mathbb{R}\) admits a Fourier representation \(f(x)=\int_{\mathbb{R}^{d}}e^{ix^{\top}\omega}\mathcal{F}(f)(\omega)d\omega\) and_

\[v_{f,3}=\int_{\mathbb{R}^{d}}\left\|\omega\right\|_{1}^{3}\left|\mathcal{F}(f)(\omega)\right|d\omega<\infty.\]

_Then, \(\exists\) a probability measure \(P\) on \([0,1]\times\{a:\left\|a\right\|_{1}=1\}\), \(\eta\in\{\pm 1\}\), \(s=3\) and \(v\) with \(|v|\leqslant 2v_{f,3}\), such that_

\[f(x)-f(0)-\nabla f(0)^{\top}x-\frac{x^{\top}\nabla_{2}f(0)x}{2}=v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a^{\top}x-t)_{+}^{s-1}dP(t,a)\quad\forall x\in D,\]

_and hence, we have,_

\[\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right|\leq 4\frac{Av_{f,3}}{n}\Bigg{\{}\sup_{a:\left\|a\right\|_{1}=1}\sum_{k=1}^{n}\mathbb{E}\left[(a^{\top}X_{k})^{2}\ln\left(\frac{e\left|a^{\top}X_{k}\right|}{\left\|a\right\|_{\Sigma}}\right)\right]\mathbf{I}(\left|a^{\top}X_{k}\right|\geq\sqrt{n}\left\|a\right\|_{\Sigma})+\sup_{a:\left\|a\right\|_{1}=1}\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg{\}}\]

_for an absolute constant \(A\)._
Proof.: To show that \(\exists\) a probability measure \(P\) on \([0,1]\times\{a:\left\|a\right\|_{1}=1\}\), \(\eta\in\{\pm 1\}\), \(s=3\) and \(v\) with \(|v|\leqslant 2v_{f,3}\), such that \(f(x)-f(0)-\nabla f(0)^{\top}x-\frac{x^{\top}\nabla_{2}f(0)x}{2}=v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a^{\top}x-t)_{+}^{s-1}dP(t,a)\), \(\forall x\in D\), we use a technique analogous to that of theorem 5.5. The function \(f(x)-\frac{x^{\top}\nabla_{2}f(0)x}{2}-x^{\top}\nabla f(0)-f(0)\) can be written as the real part of
\[\int_{\mathbb{R}^{d}}\Bigg{(}e^{i\omega^{\top}x}+\frac{(\omega^{\top}x)^{2}}{2 }-i\omega^{\top}x-1\Bigg{)}\mathcal{F}(f)(\omega)d\omega.\]
As before, the above integrand admits an integral representation given by
\[\frac{i}{2}\left\|\omega\right\|_{1}^{3}\int_{0}^{1}[(-a^{\top}x-t)_{+}^{2}e ^{-i\left|\omega\right|_{1}t}-(a^{\top}x-t)_{+}^{2}e^{i\left|\omega\right|_{1} t}]dt,\]
which can be used to show that
\[f(x)-\frac{x^{\top}\nabla_{2}f(0)x}{2}-x^{\top}\nabla f(0)-f(0)=\frac{v}{2}\int_{\{-1,1\}\times[0,1]\times\mathbb{R}^{d}}h(z,t,a)(x)dP(z,t,\omega),\]
where,
\[h(z,t,a)(x)=\operatorname{sgn}\sin(z\left\|\omega\right\|_{1}t+b(\omega))(za^{\top}x-t)_{+}^{2}\]
and
\[dP(z,t,\omega)=\frac{1}{v}\left|\sin(z\left\|\omega\right\|_{1}t+b(\omega)) \right|\left\|\omega\right\|_{1}^{3}\left|\mathcal{F}(f)(\omega)\right|dtd\omega,\]
\[v=\int_{\mathbb{R}^{d}}\int_{0}^{1}[\left|\sin(\left\|\omega\right\|_{1}t+b( \omega))\right|+\left|\sin(\left\|\omega\right\|_{1}t-b(\omega))\right|] \left\|\omega\right\|_{1}^{3}\left|\mathcal{F}(f)(\omega)\right|dtd\omega \leqslant 2v_{f,3}\]
Thus, we have proved that \(\exists\) a probability measure \(P\) on \([0,1]\times\{a:\left\|a\right\|_{1}=1\}\), \(\eta\in\{\pm 1\}\), \(s=3\) and \(v\) with \(|v|\leqslant 2v_{f,3}\), such that,
\[f(x)-f(0)-\nabla f(0)^{\top}x-\frac{x^{\top}\nabla_{2}f(0)x}{2}=v\int_{[0,1] \times\{a:\left|a\right|=1\}}\eta(t,a)(a^{\top}x-t)_{+}^{s-1}dP(t,a)\quad \forall x\in D.\]
Then,
\[\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z) =\mathbb{E}\Bigg{[}f(0)+\nabla f(0)^{\top}S_{n,X}+\frac{S_{n,X}^{\top}\nabla_{2}f(0)S_{n,X}}{2}+v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a^{\top}S_{n,X}-t)_{+}^{s-1}dP(t,a)\Bigg{]}-\] \[\qquad\mathbb{E}\Bigg{[}f(0)+\nabla f(0)^{\top}Z+\frac{Z^{\top}\nabla_{2}f(0)Z}{2}+v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a^{\top}Z-t)_{+}^{s-1}dP(t,a)\Bigg{]}\] \[=\nabla f(0)^{\top}\mathbb{E}(S_{n,X}-Z)+\frac{1}{2}\mathbb{E}\left(S_{n,X}^{\top}\nabla_{2}f(0)S_{n,X}-Z^{\top}\nabla_{2}f(0)Z\right)+\mathbb{E}g(S_{n,X})-\mathbb{E}g(Z)\] \[=\mathbb{E}g(S_{n,X})-\mathbb{E}g(Z),\]
where \(g\) admits a representation of the form \(g(x)=v\int_{[0,1]\times\{a:\left\|a\right\|_{1}=1\}}\eta(t,a)(a^{\top}x-t)_{+}^{s-1}dP(t,a)\); the first two terms vanish since \(\mathbb{E}S_{n,X}=\mathbb{E}Z=0\) and \(\mathrm{Var}(S_{n,X})=\mathrm{Var}(Z)=\Sigma\), so that \(\mathbb{E}(S_{n,X}^{\top}\nabla_{2}f(0)S_{n,X})=\operatorname{tr}(\nabla_{2}f(0)\Sigma)=\mathbb{E}(Z^{\top}\nabla_{2}f(0)Z)\).
Then, we have,
\[\begin{split}\left|\mathbb{E}f(S_{n,X})-\mathbb{E}f(Z)\right|&=\left|\mathbb{E}g(S_{n,X})-\mathbb{E}g(Z)\right|\\ &\leq 2\frac{A\left|v\right|}{n}\Bigg\{\sup_{a:\left\|a\right\|_{1}=1}\sum_{k=1}^{n}\mathbb{E}\left[(a^{\top}X_{k})^{2}\ln\left(\frac{e\left|a^{\top}X_{k}\right|}{\left\|a\right\|_{\Sigma}}\right)\right]\mathbf{I}(\left|a^{\top}X_{k}\right|\geq\sqrt{n}\left\|a\right\|_{\Sigma})+\\ &\qquad\sup_{a:\left\|a\right\|_{1}=1}\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg\}\quad\text{[using the bound for $s=3$ in Theorem \ref{eq:bound}]}\\ &\leq 4\frac{Av_{f,3}}{n}\Bigg\{\sup_{a:\left\|a\right\|_{1}=1}\sum_{k=1}^{n}\mathbb{E}\left[(a^{\top}X_{k})^{2}\ln\left(\frac{e\left|a^{\top}X_{k}\right|}{\left\|a\right\|_{\Sigma}}\right)\right]\mathbf{I}(\left|a^{\top}X_{k}\right|\geq\sqrt{n}\left\|a\right\|_{\Sigma})+\\ &\qquad\sup_{a:\left\|a\right\|_{1}=1}\frac{1}{\left\|a\right\|_{\Sigma}}\sum_{k=1}^{n}\mathbb{E}\frac{\left|a^{\top}X_{k}\right|^{3}}{\sqrt{n}}\mathbf{I}(\left|a^{\top}X_{k}\right|<\sqrt{n}\left\|a\right\|_{\Sigma})\Bigg\}\end{split}\]
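As a numerical sanity check of the integral representation used above (illustrative only, not part of the proof), the following short script verifies that, with \(a=\omega/\left\|\omega\right\|_{1}\), the kernel \(\frac{i}{2}\left\|\omega\right\|_{1}^{3}\int_{0}^{1}[(-a^{\top}x-t)_{+}^{2}e^{-i\left\|\omega\right\|_{1}t}-(a^{\top}x-t)_{+}^{2}e^{i\left\|\omega\right\|_{1}t}]dt\) reproduces \(e^{i\omega^{\top}x}+\frac{(\omega^{\top}x)^{2}}{2}-i\omega^{\top}x-1\) for \(x\in[-1,1]^{d}\); the dimension and the random draws below are arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
d = 5
omega = rng.normal(size=d)
x = rng.uniform(-1.0, 1.0, size=d)

L = np.abs(omega).sum()   # ||omega||_1
a = omega / L             # then |a^T x| <= 1 for x in [-1, 1]^d
u = a @ x

def integrand(t):
    relu2 = lambda z: max(z, 0.0) ** 2
    return 0.5j * L**3 * (relu2(-u - t) * np.exp(-1j * L * t)
                          - relu2(u - t) * np.exp(1j * L * t))

re, _ = quad(lambda t: integrand(t).real, 0.0, 1.0)
im, _ = quad(lambda t: integrand(t).imag, 0.0, 1.0)

s = omega @ x
target = np.exp(1j * s) + s**2 / 2 - 1j * s - 1
print(abs(complex(re, im) - target))  # agrees to quadrature precision (~1e-10)
```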
## 6 Discussion
In this paper, we have provided bounds on the difference between expectations of functions of random variables using level sets of the functions, together with classical uniform and non-uniform Berry-Esseen bounds for univariate random variables. The resulting bounds can be applied to single-layer neural networks and to functions on \([-1,1]^{d}\) whose Fourier transform is integrable with a finite weighted norm; such functions belong to the Barron space. Unlike classical bounds that depend on the oscillation function of \(f\), our bounds have no explicit dimension dependence. In part II, we will explore extensions of these results to functions with integrable Fourier transforms on \(\mathbb{R}^{d}\), as well as bounds obtained from function approximation theory using radial basis functions or neural networks.
Acknowledgments. This work is partially supported by NSF DMS-2113611.
|
2304.02689 | ACTION++: Improving Semi-supervised Medical Image Segmentation with
Adaptive Anatomical Contrast | Medical data often exhibits long-tail distributions with heavy class
imbalance, which naturally leads to difficulty in classifying the minority
classes (i.e., boundary regions or rare objects). Recent work has significantly
improved semi-supervised medical image segmentation in long-tailed scenarios by
equipping them with unsupervised contrastive criteria. However, it remains
unclear how well they will perform in the labeled portion of data where class
distribution is also highly imbalanced. In this work, we present ACTION++, an
improved contrastive learning framework with adaptive anatomical contrast for
semi-supervised medical segmentation. Specifically, we propose an adaptive
supervised contrastive loss, where we first compute the optimal locations of
class centers uniformly distributed on the embedding space (i.e., off-line),
and then perform online contrastive matching training by encouraging different
class features to adaptively match these distinct and uniformly distributed
class centers. Moreover, we argue that blindly adopting a constant temperature
$\tau$ in the contrastive loss on long-tailed medical data is not optimal, and
propose to use a dynamic $\tau$ via a simple cosine schedule to yield better
separation between majority and minority classes. Empirically, we evaluate
ACTION++ on ACDC and LA benchmarks and show that it achieves state-of-the-art
across two semi-supervised settings. Theoretically, we analyze the performance
of adaptive anatomical contrast and confirm its superiority in label
efficiency. | Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, Jasjeet S. Sekhon, James S. Duncan | 2023-04-05T18:33:18Z | http://arxiv.org/abs/2304.02689v3 | # ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast
###### Abstract
Medical data often exhibits long-tail distributions with heavy class imbalance, which naturally leads to difficulty in classifying the minority classes (_i.e._, boundary regions or rare objects). Recent work has significantly improved semi-supervised medical image segmentation in long-tailed scenarios by equipping them with unsupervised contrastive criteria. However, it remains unclear how well they will perform in the labeled portion of data where class distribution is also highly imbalanced. In this work, we present **ACTION++**, an improved contrastive learning framework with adaptive anatomical contrast for semi-supervised medical segmentation. Specifically, we propose an adaptive supervised contrastive loss, where we first compute the optimal locations of class centers uniformly distributed on the embedding space (_i.e._, off-line), and then perform online contrastive matching training by encouraging different class features to adaptively match these distinct and uniformly distributed class centers. Moreover, we argue that blindly adopting a _constant_ temperature \(\tau\) in the contrastive loss on long-tailed medical data is not optimal, and propose to use a _dynamic_\(\tau\) via a simple cosine schedule to yield better separation between majority and minority classes. Empirically, we evaluate ACTION++ on ACDC and LA benchmarks and show that it achieves state-of-the-art across two semi-supervised settings. Theoretically, we analyze the performance of adaptive anatomical contrast and confirm its superiority in label efficiency.
Keywords:Semi-Supervised Learning Contrastive Learning Imbalanced Learning Long-tailed Medical Image Segmentation.
## 1 Introduction
With the recent development of semi-supervised learning (SSL) [3], rapid progress has been made in medical image segmentation, which typically learns rich anatomical representations from few labeled data and the vast amount of unlabeled data.
Existing SSL approaches can be generally categorized into adversarial training [32, 16], deep co-training [23, 38], mean teacher schemes [27, 37, 14, 13, 15, 7, 36, 33], multi-task learning [19, 11, 22], and contrastive learning [2, 29, 35, 24, 34].
Contrastive learning (CL) has become a remarkable approach to enhance semi-supervised medical image segmentation performance without significantly increasing the amount of parameters and annotation costs [2, 29, 34]. In real-world clinical scenarios, since the classes in medical images follow the Zipfian distribution [39], the medical datasets usually show a long-tailed, even heavy-tailed class distribution, _i.e._, some minority (tail) classes involving significantly fewer pixel-level training instances than other majority (head) classes, as illustrated in Figure 1. Such imbalanced scenarios are usually very challenging for CL methods to address, leading to noticeable performance drop [18].
To address long-tail medical segmentation, our motivations come from the following two perspectives in CL training schemes [2, 34]: **Training objective** - the main focus of existing approaches is on designing proper unsupervised contrastive loss in learning high-quality representations for long-tail medical segmentation. While extensively explored in the unlabeled portion of long-tail medical data, supervised CL has rarely been studied from empirical and theoretical perspectives, which will be one of the focuses in this work; **Temperature scheduler** - the temperature parameter \(\tau\), which controls the strength of attraction and repulsion forces in the contrastive loss [5, 4], has been shown to play a crucial role in learning useful representations. It is affirmed that a large \(\tau\) emphasizes anatomically meaningful group-wise patterns by group-level discrimination, whereas a small \(\tau\) ensures a higher degree of pixel-level (instance) discrimination [28, 25]. On the other hand, as shown in [25], group-wise discrimination often results in reduced model's instance discrimination capabilities, where the model will be biased to "easy" features instead of "hard" features. It is thus unfavorable for long-tailed medical segmentation to blindly treat \(\tau\) as a _constant_ hyperparameter, and a dynamic temperature parameter for CL is worth investigating.
In this paper, we introduce ACTION++, which further optimizes anatomically group-level and pixel-level representations for better head and tail class separations, on both labeled and unlabeled medical data. Specifically, we devise two strategies to improve overall segmentation quality by focusing on the two aforementioned perspectives: (1) we propose supervised adaptive anatomical contrastive learning (SAACL) for long-tail medical segmentation. To prevent the feature space from being biased toward the dominant head class, we first pre-compute the optimal locations of class centers uniformly distributed on the
Figure 1: Examples of two benchmarks (_i.e._, ACDC and LA) with imbalanced class distribution. From left to right: input image, ground-truth segmentation map, class distribution chart, training data feature distribution for multiple classes.
embedding space (_i.e._, off-line), and then perform online contrastive matching training by encouraging different class features to adaptively match these distinct and uniformly distributed class centers; (2) we find that blindly adopting the _constant_ temperature \(\tau\) in the contrastive loss can negatively impact the segmentation performance. Inspired by an average distance maximization perspective, we leverage a _dynamic_\(\tau\) via a simple cosine schedule, resulting in significant improvements in the learned representations. Both of these enable the model to learn a balanced feature space that has similar separability for both the majority (head) and minority (tail) classes, leading to better generalization in long-tail medical data. We evaluated our ACTION++ on the public ACDC and LA datasets [1, 31]. Extensive experimental results show that our ACTION++ outperforms prior methods by a significant margin and sets the new state-of-the-art across two semi-supervised settings. We also theoretically show the superiority of our method in label efficiency (Appendix A). Code will be released with publication.
## 2 Method
### Overview
**Problem Statement** Given a medical image dataset \((\mathbf{X},\mathbf{Y})\), our goal is to train a segmentation model \(\mathbf{F}\) that provides accurate predictions assigning each pixel to its corresponding \(K\)-class segmentation label.
**Setup** Figure 2 illustrates an overview of ACTION++. By default, we build this work upon the ACTION pipeline [34], the state-of-the-art CL framework for semi-supervised medical image segmentation. The backbone model adopts the
Figure 2: Overview of ACTION++: (1) global and local pre-training with proposed anatomical-aware temperature scheduler, (2) our proposed adaptive anatomical contrast fine-tuning, which first pre-computes the optimal locations of class centers uniformly distributed on the embedding space (_i.e._, off-line), and then performs online contrastive matching training by encouraging different class features to adaptively match these distinct and uniformly distributed class centers with respect to anatomical features.
student-teacher framework that shares the same architecture, and the parameters of the teacher are the exponential moving average of the student's parameters. Hereinafter, we adopt their model as our backbone and briefly summarize its major components: (1) global contrastive distillation pre-training; (2) local contrastive distillation pre-training; and (3) anatomical contrast fine-tuning.
**Global and Local Pre-training**[34] first creates two types of anatomical views as follows: (1) _augmented views_ - \(\mathbf{x}^{1}\) and \(\mathbf{x}^{2}\) are augmented from the unlabeled input scan with two separate data augmentation operators; (2) _mined views_ - \(n\) samples (_i.e._, \(\mathbf{x}^{3}\)) are randomly sampled from the unlabeled portion with additional augmentation. The pairs \(\big{[}\mathbf{x}^{1},\mathbf{x}^{2}\big{]}\) are then processed by student-teacher networks \([F_{s},F_{t}]\) that share the same architecture and weight, and similarly, \(\mathbf{x}^{3}\) is encoded by \(F_{t}\). Their global latent features after the encoder \(E\) (_i.e._, \(\big{[}\mathbf{h}^{1},\mathbf{h}^{2},\mathbf{h}^{3}\big{]}\)) and local output features after decoder \(D\) (_i.e._, \(\big{[}\mathbf{f}^{1},\mathbf{f}^{2},\mathbf{f}^{3}\big{]}\)) are encoded by the two-layer nonlinear projectors, generating global and local embeddings \(\mathbf{v}_{g}\) and \(\mathbf{v}_{l}\). \(\mathbf{v}\) from \(F_{s}\) are separately encoded by the non-linear predictor, producing \(\mathbf{w}\) in both global and local manners1. Third, the relational similarities between augmented and mined views are processed by SoftMax function as follows: \(\mathbf{u}_{s}=\log\frac{\exp\big{(}\text{sim}\big{(}\mathbf{w}^{1},\mathbf{v} ^{3}\big{)}/\tau_{s}\big{)}}{\sum_{n=1}^{N}\exp\big{(}\text{sim}\big{(}\mathbf{ w}^{1},\mathbf{v}_{n}^{3}\big{)}/\tau_{s}\big{)}}\), \(\mathbf{u}_{t}=\log\frac{\exp\big{(}\text{sim}\big{(}\mathbf{w}^{2},\mathbf{v} ^{3}\big{)}/\tau_{t}\big{)}}{\sum_{n=1}^{N}\exp\big{(}\text{sim}\big{(}\mathbf{ w}^{2},\mathbf{v}_{n}^{3}\big{)}/\tau_{t}\big{)}}\), where \(\tau_{s}\) and \(\tau_{t}\) are two temperature parameters. Finally, we minimize the unsupervised instance discrimination loss (_i.e._, Kullback-Leibler divergence \(\mathcal{KL}\)) as:
Footnote 1: For simplicity, we omit details of local instance discrimination in the following.
\[\mathcal{L}_{\text{inst}}=\mathcal{KL}(\mathbf{u}_{s}||\mathbf{u}_{t}). \tag{1}\]
We formally summarize the pretraining objective as the equal combination of the global and local \(\mathcal{L}_{\text{inst}}\), and supervised segmentation loss \(\mathcal{L}_{\text{sup}}\) (_i.e._, equal combination of Dice loss and cross-entropy loss).
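A minimal PyTorch sketch of the instance-discrimination loss in Eq. (1) is given below. It assumes \(\text{sim}(\cdot,\cdot)\) is cosine similarity and that no gradient flows through the teacher branch; both are standard in student-teacher contrastive learning, though not spelled out above:

```python
import torch
import torch.nn.functional as F

def inst_loss(w1, w2, v3, tau_s=0.1, tau_t=0.1):
    """KL divergence between student and teacher similarity distributions (Eq. 1).

    w1: student predictor outputs for view 1, shape (B, D)
    w2: teacher embeddings for view 2,        shape (B, D)
    v3: teacher embeddings of N mined views,  shape (N, D)
    """
    sim_s = F.cosine_similarity(w1.unsqueeze(1), v3.unsqueeze(0), dim=-1)  # (B, N)
    sim_t = F.cosine_similarity(w2.unsqueeze(1), v3.unsqueeze(0), dim=-1)  # (B, N)
    log_u_s = F.log_softmax(sim_s / tau_s, dim=-1)
    log_u_t = F.log_softmax(sim_t / tau_t, dim=-1).detach()  # teacher carries no gradient
    return (log_u_s.exp() * (log_u_s - log_u_t)).sum(dim=-1).mean()

loss = inst_loss(torch.randn(8, 64), torch.randn(8, 64), torch.randn(16, 64))
```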
**Anatomical Contrast Fine-tuning** The underlying motivation for the fine-tuning stage is that it reduces the vulnerability of the pre-trained model to long-tailed unlabeled data. To mitigate the problem, [34] proposed to fine-tune the model by anatomical contrast. First, the additional representation head \(\boldsymbol{\varphi}\) is used to provide dense representations with the same size as the input scans. Then, [34] explores pulling queries \(\mathbf{r}_{q}\!\in\!\mathcal{R}\) to be similar to the positive keys \(\mathbf{r}_{k}^{+}\!\in\!\mathcal{R}\), while pushing apart the negative keys \(\mathbf{r}_{k}^{-}\!\in\!\mathcal{R}\). The AnCo loss is defined as follows:
\[\mathcal{L}_{\text{anco}}=\sum_{c\in\mathcal{C}}\sum_{\mathbf{r}_{q}\sim \mathcal{R}_{q}^{c}}-\log\frac{\exp(\mathbf{r}_{q}\cdot\mathbf{r}_{k}^{c,+}/ \tau_{an})}{\exp(\mathbf{r}_{q}\cdot\mathbf{r}_{k}^{c,+}/\tau_{an})+\sum_{ \mathbf{r}_{k}^{-}\sim\mathcal{R}_{k}^{c}}\exp(\mathbf{r}_{q}\cdot\mathbf{r}_ {k}^{-}/\tau_{an})}, \tag{2}\]
where \(\mathcal{C}\) denotes a set of all available classes in the current mini-batch, and \(\tau_{an}\) is a temperature hyperparameter. For class \(c\), we select a query representation set \(\mathcal{R}_{q}^{c}\), a negative key representation set \(\mathcal{R}_{k}^{c}\) whose labels are not in class \(c\), and the positive key \(\mathbf{r}_{k}^{c,+}\), which is the \(c\)-class mean representation. Given that \(\mathcal{P}\) is the set of all pixel coordinates with the same size as \(R\), these queries and keys can be defined as \(\mathcal{R}_{q}^{c}=\bigcup_{[i,j]\in\mathcal{P}}\{\mathbf{r}_{[i,j]}:y_{[i,j]}=c\}\), \(\mathcal{R}_{k}^{c}=\bigcup_{[i,j]\in\mathcal{P}}\{\mathbf{r}_{[i,j]}:y_{[i,j]}\neq c\}\), and \(\mathbf{r}_{k}^{c,+}=\frac{1}{|\mathcal{R}_{q}^{c}|}\sum_{\mathbf{r}_{q}\in\mathcal{R}_{q}^{c}}\mathbf{r}_{q}\). We formally summarize the fine-tuning objective as the equal combination of unsupervised \(\mathcal{L}_{\text{anco}}\), unsupervised cross-entropy loss \(\mathcal{L}_{\text{unsup}}\), and supervised segmentation loss \(\mathcal{L}_{\text{sup}}\). For more details, we refer the reader to [34].
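A sketch of the AnCo loss of Eq. (2) is shown below; in practice, queries and keys are subsampled per class (omitted here), and averaging over queries is our normalization choice rather than something fixed by Eq. (2):

```python
import torch
import torch.nn.functional as F

def anco_loss(reps, labels, tau_an=0.5):
    """Anatomical contrast (Eq. 2) over a mini-batch of pixel representations.

    reps:   (P, D) L2-normalized pixel representations r
    labels: (P,)   pixel class labels
    """
    loss, n_queries = reps.new_zeros(()), 0
    for c in labels.unique():
        queries = reps[labels == c]        # R_q^c
        negatives = reps[labels != c]      # R_k^c: representations of other classes
        if len(queries) == 0 or len(negatives) == 0:
            continue
        pos_key = queries.mean(dim=0)      # r_k^{c,+}: the c-class mean representation
        pos = torch.exp(queries @ pos_key / tau_an)              # (Q,)
        neg = torch.exp(queries @ negatives.T / tau_an).sum(1)   # (Q,)
        loss = loss - torch.log(pos / (pos + neg)).sum()
        n_queries += len(queries)
    return loss / max(n_queries, 1)

reps = F.normalize(torch.randn(256, 32), dim=1)
labels = torch.randint(0, 4, (256,))
print(anco_loss(reps, labels))
```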
### Supervised Adaptive Anatomical Contrastive Learning
The general efficacy of anatomical contrast on long-tail unlabeled data has previously been demonstrated by the authors of [34]. However, taking a closer look, we observe that the well-trained \(\mathbf{F}\) shows a downward trend in performance and often fails to classify tail classes on labeled data, especially when the data shows long-tailed class distributions. This indicates that the well-trained \(\mathbf{F}\) still needs improved segmentation capabilities on long-tailed labeled data. To this end, inspired by [17], which was tailored for image classification tasks, we introduce supervised adaptive anatomical contrastive learning (SAACL), a training framework for generating well-separated and uniformly distributed latent feature representations for both the head and tail classes. It consists of three main steps, which we describe in the following.
**Anatomical Center Pre-computation** We first pre-compute the anatomical class centers in latent representation space. The optimal class centers are chosen as \(K\) positions from the unit sphere \(\mathbb{S}^{d-1}=\{v\in\mathbb{R}^{d}:\ \|v\|_{2}=1\}\) in the \(d\)-dimensional space. To encourage good separability and uniformity, we compute the class centers \(\{\mathbf{\psi}_{c}\}_{c=1}^{K}\) by minimizing the following uniformity loss \(\mathcal{L}_{\text{unif}}\):
\[\mathcal{L}_{\text{unif}}(\{\mathbf{\psi}_{c}\}_{c=1}^{K})=\sum_{c=1}^{K}\log \left(\sum_{c^{\prime}=1}^{K}\exp(\mathbf{\psi}_{c}\cdot\mathbf{\psi}_{c^{\prime}}/ \tau)\right). \tag{3}\]
In our implementation, we use gradient descent to search for the optimal class centers constrained to the unit sphere \(\mathbb{S}^{d-1}\), which are denoted by \(\{\mathbf{\psi}_{c}^{\star}\}_{c=1}^{K}\). Furthermore, the latent dimension \(d\) is a hyper-parameter, which we set such that \(d\gg K\) to ensure the solution found by gradient descent indeed maximizes the minimum distance between any two class centers [6]. It is also known that any analytical minimizers of Eqn. 3 form a perfectly regular \(K\)-vertex inscribed simplex of the sphere \(\mathbb{S}^{d-1}\)[6]. We emphasize that this first step of pre-computation of class centers is completely off-line as it does not require any training data.
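The off-line center search can be sketched as projected gradient descent on the sphere. Here \(d=128\) matches the setting used in our experiments, while \(K=4\), the temperature, the learning rate, and the step count are illustrative demo choices:

```python
import torch

def precompute_class_centers(K=4, d=128, tau=0.5, steps=2000, lr=0.1, seed=0):
    """Minimize the uniformity loss of Eq. (3) over K centers on S^{d-1}."""
    g = torch.Generator().manual_seed(seed)
    psi = torch.randn(K, d, generator=g)
    psi = torch.nn.Parameter(psi / psi.norm(dim=1, keepdim=True))
    opt = torch.optim.SGD([psi], lr=lr)
    for _ in range(steps):
        logits = psi @ psi.T / tau                   # all pairwise inner products
        loss = torch.logsumexp(logits, dim=1).sum()  # L_unif, Eq. (3)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                        # project back onto the unit sphere
            psi /= psi.norm(dim=1, keepdim=True)
    return psi.detach()

centers = precompute_class_centers()
# For a regular K-vertex simplex, off-diagonal inner products approach -1/(K-1)
print(torch.round(centers @ centers.T, decimals=2))
```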
**Adaptive Allocation** As the second step, we explore adaptively allocating these centers among classes. This is a combinatorial optimization problem, and an exhaustive search of all choices would be computationally prohibitive. Therefore, we draw intuition from the empirical mean in the K-means algorithm and adopt an adaptive allocation scheme that iteratively searches for the optimal allocation during training. Specifically, consider a batch \(\mathcal{B}=\{\mathcal{B}_{1},\cdots,\mathcal{B}_{K}\}\), where \(\mathcal{B}_{c}\) denotes the set of samples in the batch with class label \(c\), for \(c=1,\cdots,K\). Let \(\overline{\mathbf{\phi}}_{c}(\mathcal{B})=\sum_{i\in\mathcal{B}_{c}}\mathbf{\phi}_{i}/\|\sum_{i\in\mathcal{B}_{c}}\mathbf{\phi}_{i}\|_{2}\) be the empirical mean of class \(c\) in the current batch, where \(\mathbf{\phi}_{i}\) is the feature embedding of sample \(i\). We compute the assignment \(\pi\) by minimizing the distance between the pre-computed class centers and the empirical means:
\[\pi^{\star}=\arg\min_{\pi}\sum_{c=1}^{K}\|\mathbf{\psi}_{\pi(c)}^{\star}-\overline {\mathbf{\phi}}_{c}\|_{2}. \tag{4}\]
In implementation, the empirical mean is updated using moving average. That is, for iteration \(t\), we first compute the empirical mean \(\overline{\mathbf{\phi}}_{c}(\mathcal{B})\) for batch \(\mathcal{B}\) as described above, and then update by \(\overline{\mathbf{\phi}}_{c}\leftarrow(1-\eta)\overline{\mathbf{\phi}}_{c}+\eta \overline{\mathbf{\phi}}_{c}(\mathcal{B})\).
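One way to solve Eq. (4) exactly is the Hungarian algorithm; the text does not prescribe a particular solver, so the use of `scipy.optimize.linear_sum_assignment` below is our assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_centers(centers, emp_means):
    """Eq. (4): match each class's running mean to one pre-computed center.

    centers:   (K, d) optimal centers psi*
    emp_means: (K, d) moving-average class means phi_bar (unit norm)
    Returns pi with class c assigned to center pi[c].
    """
    # cost[c, j] = || psi*_j - phi_bar_c ||_2
    cost = np.linalg.norm(emp_means[:, None, :] - centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # exact minimizer of the total cost
    pi = np.empty(len(centers), dtype=int)
    pi[rows] = cols
    return pi

def update_means(emp_means, batch_means, eta=0.1):
    """Moving-average update of the empirical means, renormalized to the sphere."""
    m = (1 - eta) * emp_means + eta * batch_means
    return m / np.linalg.norm(m, axis=1, keepdims=True)
```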
**Adaptive Anatomical Contrast** Finally, the allocated class centers are well-separated and should maintain the semantic relation between classes. To utilize these optimal class centers, we want the feature representation of samples from each class to cluster around the corresponding pre-computed class center. To this end, we adopt a supervised contrastive loss for the labeled portion of the data. Specifically, given a batch of pixel-feature-label tuples \(\{(\omega_{i},\mathbf{\phi}_{i},y_{i})\}_{i=1}^{n}\), where \(\omega_{i}\) is the \(i\)-th pixel in the batch, \(\mathbf{\phi}_{i}\) is the feature of the pixel, and \(y_{i}\) is its label, we define the supervised adaptive anatomical contrastive loss, which pulls each pixel feature \(\mathbf{\phi}_{i}\) toward the pre-computed center allocated to its class \(y_{i}\).
**Implementation Details**
We use an SGD optimizer for all experiments with a learning rate of 1e-2, a momentum of 0.9, and a weight decay of 0.0001. Following [37; 19; 30; 29] on both datasets, all inputs were normalized to zero mean and unit variance. The data augmentations are rotation and flip operations. Our work is built on ACTION [34]; thus we follow the identical model settings except for the temperature parameters, which are of direct interest to us. For the sake of completeness, we refer the reader to [34] for more details. We set \(\lambda_{a}\), \(d\) as 0.2, 128, and regarding all \(\tau\), we use \(\tau^{+}\)=1.0 and \(\tau^{-}\)=0.1 if not stated otherwise. On ACDC, we use the U-Net model [26] as the backbone with a 2D patch size of \(256\times 256\) and batch size of 8. For pre-training, the networks are trained for 10K iterations; for fine-tuning, 20K iterations. On LA, we use the V-Net [21] as the backbone. For training, we randomly crop \(112\times 112\times 80\) patches and the batch size is 2. For pre-training, the networks are trained for 5K iterations. For fine-tuning, the networks are trained for 15K iterations. For testing, we adopt a sliding window strategy with a fixed stride (\(18\times 18\times 4\)). All experiments are conducted in the same environments with fixed random seeds (Hardware: Single NVIDIA GeForce RTX 3090 GPU; Software: PyTorch 1.10.2+cu113, and Python 3.8.11).
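The dynamic temperature can be sketched as a cosine schedule oscillating between \(\tau^{-}=0.1\) and \(\tau^{+}=1.0\) with period \(T\) (equal to the total number of iterations in our best setting); the exact phase convention is not fixed above, so the form below, starting at \(\tau^{+}\), is one standard choice:

```python
import math

def tau_cosine(t, T, tau_minus=0.1, tau_plus=1.0):
    """Dynamic temperature for the contrastive loss via a simple cosine schedule."""
    return tau_minus + 0.5 * (tau_plus - tau_minus) * (1.0 + math.cos(2.0 * math.pi * t / T))
```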
**Main Results** We compare our ACTION++ with current state-of-the-art SSL methods, including UAMT [37], SASSNet [16], DTC [19], URPC [20], MC-Net [30], SS-Net [29], and ACTION [34], as well as the supervised counterparts (UNet [26]/VNet [21]) trained with full/limited supervision, using their released code. To evaluate 3D segmentation ability, we use the Dice coefficient (DSC) and Average Surface Distance (ASD). Table 2 and Table 1 display the results on the public ACDC and LA datasets under the two labeled settings, respectively. We next discuss our main findings. (1) **LA**: As shown in Table 1, our method generally presents better performance than the prior SSL methods under all settings. Fig. 4 (Appendix) also shows that our model consistently outperforms all other competitors, especially in the boundary region. (2) **ACDC**: As Table 2 shows, ACTION++ achieves the best segmentation performance in terms of
\begin{table}
\begin{tabular}{c c c c c}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c}{4 Labeled (5\%)} & \multicolumn{2}{c}{8 Labeled (10\%)} \\
\cline{2-5}
 & DSC[\%]\(\uparrow\) & ASD[voxel]\(\downarrow\) & DSC[\%]\(\uparrow\) & ASD[voxel]\(\downarrow\) \\
\hline
VNet-F [21] & 91.5 & 1.51 & 91.5 & 1.51 \\
VNet-L & 52.6 & 9.87 & 82.7 & 3.26 \\
\hline
UAMT [37] & 82.3 & 3.82 & 87.8 & 2.12 \\
SASSNet [16] & 81.6 & 3.58 & 87.5 & 2.59 \\
DTC [19] & 81.3 & 2.70 & 87.5 & 2.36 \\
URPC [20] & 82.5 & 3.65 & 86.9 & 2.28 \\
MC-Net [30] & 83.6 & 2.70 & 87.6 & 1.82 \\
SS-Net [29] & 86.3 & 2.31 & 88.6 & 1.90 \\
ACTION [34] & 86.6 & 2.24 & 88.7 & 2.10 \\
\(\bullet\)ACTION++ (ours) & **87.8** & **2.09** & **89.9** & **1.74** \\
\hline
\end{tabular}
\end{table}
Table 1: Quantitative comparison (DSC[%]/ASD[voxel]) for LA under two unlabeled settings (5% or 10%). All experiments are conducted in the identical setting as [37; 16; 19; 20; 30; 29; 34] for fair comparisons. The best results are indicated in **bold**. VNet-F (fully-supervised) and VNet-L (semi-supervised) are considered the upper bound and the lower bound for the performance comparison.
Dice and ASD, consistently outperforming the previous SSL methods across two labeled settings. In Fig. 3 (Appendix), we can observe that ACTION++ can yield the segmentation boundaries accurately, even for very challenging regions (_i.e._, RV and Myo). This suggests that ACTION++ is inherently better at long-tailed learning, in addition to being a better segmentation model in general.
**Ablation Study** We first perform ablation studies on LA with 10% label ratio to evaluate the importance of different components. Table 3 shows the effectiveness of supervised adaptive anatomical contrastive learning (SAACL). Table 4 (Appendix) indicates that using the anatomical-aware temperature scheduler (ATS) and SAACL yields better performance in both pre-training and fine-tuning stages. We then theoretically show the superiority of our method in Appendix A.
Finally, we conduct experiments to study the effects of cosine boundaries, cosine period, different methods of varying \(\tau\), and \(\lambda_{a}\) in Table 5, Table 6 (Appendix), respectively. Empirically, we find that using our settings (_i.e._, \(\tau^{-}\!=\!0.1\), \(\tau^{+}\!=\!1.0\), \(T/\#\)iterations=1.0, cosine scheduler, \(\lambda_{a}=0.2\)) attains optimal performance.
## 4 Conclusion
In this paper, we proposed ACTION++, an improved contrastive learning framework with adaptive anatomical contrast for semi-supervised medical segmentation. Our work is inspired by two intriguing observations that, besides the unlabeled data, the class imbalance issue exists in the labeled portion of medical data and the effectiveness of temperature schedules for contrastive learning on long-tailed medical data. Extensive experiments and ablations demonstrated that our model
consistently achieved superior performance compared to the prior semi-supervised medical image segmentation methods under different label ratios. Our theoretical analysis also revealed the robustness of our method in label efficiency.
|
2303.11134 | Microlensing and event rate of static spherically symmetric wormhole | The study focuses on the impact of microlensing in modern cosmology and
introduces a new framework for the static spherically symmetrical wormhole in
terms of the radial equation of state. Following a standard procedure, the
study calculates the lensing equation, magnification, and event rate based on
the radial equation of state. The analysis highlights that the image
problem of the light source is complex. Furthermore, the study suggests that
larger values for the throat radius of the wormhole and the radial equation of
state lead to higher event rates. Additionally, it is proposed that the event
rate of a wormhole will be larger compared to that of a black hole, provided
their masses and distances from the light source and observer are comparable.
This study offers the potential to distinguish between a wormhole and a black
hole under similar conditions. | Ke Gao, Lei-Hua Liu | 2023-03-20T14:11:55Z | http://arxiv.org/abs/2303.11134v5 | # Microlensing and multi-images problem of static spherical symmetric wormhole
###### Abstract
In this paper, we develop a framework to re-examine the weak lensing (including microlensing) effects of the static spherically symmetric wormhole in terms of the radial equation of state \(\eta=\frac{p_{r}}{\rho}\) (REoS). As an application, we calculate the magnification and event rate under this REoS, and we show that the maximal magnification of the Ellis-Bronnikov wormhole is related only to the relative position and the intrinsic angle, with a maximal value of around five. For the event rate, our results indicate that one cannot distinguish the Ellis-Bronnikov wormhole from the charged wormhole, but its order is much higher than in the vacuum case; all these metrics belong to the static spherically symmetric wormhole metric. By calculating the lensing equation of the static spherically symmetric wormhole, we find an explicit formula relating the maximal number of images of the wormhole to \(\eta\). This relation is consistent with the classical wormhole, but the case of a wormhole with quantum corrections remains mysterious. Our new method may shed new light on distinguishing wormholes from black holes via the event rate.
## I Introduction
Wormholes were proposed more than one hundred years ago. In 1916, Flamm studied the internal structure of Schwarzschild's solution [1]. Einstein and Rosen proposed the concept of a bridge structure, which paved the way for further understanding of wormholes [2]. The term wormhole was first introduced by Misner and Wheeler [3]. Ellis proposed the idea of a drainhole structure [4]. Then, Morris proposed the concept of a traversable wormhole [5], i.e., a type of wormhole that could be traversed by humans or spaceships without being destroyed. These advancements in the understanding of wormholes have contributed greatly to the field of theoretical physics. Thereafter, wormholes became the subject of extensive research [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] by physicists. The study of wormholes typically begins with an assumption about their geometric structure and then involves calculating the corresponding material source required to create such a structure. However, such calculations often violate known energy conditions [20; 21; 22; 23], indicating that some exotic matter is required to explain the existence of wormholes. The study of wormholes can deepen our understanding of general relativity and aid the exploration of space.
As for lensing effects, Einstein published the first article in the field of gravitational lensing [24]. After a period of silence, gravitational lensing has become a research hotspot. Gravitational lensing can be classified into two types: weak gravitational lensing, including microlensing [25; 26], and strong gravitational lensing [27; 28]. Weak gravitational lensing is caused by relatively small perturbations in the gravitational field and results in slight distortions of images, while strong gravitational lensing involves more significant deformations due to the presence of massive objects like black holes, galaxies, or galaxy clusters. Gravitational microlensing is an effective method to explore wormholes [29; 30]. In the literature, the microlensing effect of a wormhole has been extensively studied [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. An interesting question is how many images of a light source are generated in the equatorial plane when light passes through the wormhole spacetime, and what this number depends on. There have been some discussions of microlensing imaging in wormhole spacetimes [37; 43; 47; 32].
In this paper, we re-examine this problem from a new perspective by implementing the REoS to depict the lensing effects of the spherically symmetric wormhole metric. We then calculate the deflection angle of this metric via the Gauss-Bonnet theorem (GBT) [53; 54; 55], which is widely used in gravitational lensing to compute deflection angles [56; 57; 58; 59; 60; 61; 62; 63; 64], and we find that the state coefficient \(\eta\) determines the maximum number of images of a light source in the equatorial plane. In the last section, we discuss the applicability of the image-number formula \(n=2+\frac{1}{\eta}\). We also calculate the magnification and obtain a general formula for it. We analyze the magnification of the Ellis-Bronnikov wormhole as an instance and find that it has a maximum value which depends only on the relative position \(\frac{D_{LS}}{D_{S}}\) and the intrinsic angle \(\beta\) in the first-order lens equation case. In addition, we study the event rate of a large-scale wormhole. We regard it as an object with a continuous mass distribution,
and obtain the relationship between the event rate and the radial state coefficient. Our work may bring new ideas to the study of wormholes.
The structure of our article is organized as follows: in Section II, we use the REoS to construct a metric. In Section III, we use gravitational lensing techniques to calculate the deflection angle and the lens equation and obtain a formula for the number of images; magnification and event rate are also discussed. In Section IV, we discuss the applicability of the formula in specific examples. In Section V, we draw our conclusion and outlook. In Appendix VI, we unify all of the parameters in SI units.
## II Wormhole
In this section, we follow the notation of [65; 66; 67]. By starting with a static spherical symmetry metric:
\[ds^{2}=-e^{2\Phi}dt^{2}+\frac{dr^{2}}{1-b(r)/r}+r^{2}d\Omega^{2}, \tag{1}\]
this metric describes a generic static and spherically symmetric wormhole. The Einstein field equations provide the following relationships:
\[p_{r}^{\prime}=\frac{2}{r}\big{(}p_{t}-p_{r}\big{)}-\big{(}\rho+p_{r}\big{)} \Phi^{\prime}, \tag{2}\]
\[b^{\prime}=8\pi G\rho(r)r^{2}, \tag{3}\]
\[\Phi^{\prime}=\frac{b+8\pi Gp_{r}r^{3}}{2r^{2}\big{(}1-b(r)/r\big{)}}, \tag{4}\]
where the prime denotes a derivative with respect to the radial coordinate \(r\), \(p_{r}\) represents the radial pressure, \(p_{t}\) indicates the tangential pressure, and \(\rho\) is the energy density. REoS is defined as follows,
\[p_{r}=\eta\rho. \tag{5}\]
where \(\eta\) represents the radial state coefficient. The flaring-out condition and asymptotic flatness require the necessary condition
\[\eta>0\text{ or }\eta<-1. \tag{6}\]
Combining these equations (2)-(5), we can get
\[\begin{split} b(r)=r_{0}\bigg{(}\frac{r_{0}}{r}\bigg{)}^{\frac{1}{ \eta}}e^{-(2/\eta)[\Phi(r)-\Phi(r_{0})]}\times\\ \bigg{[}\frac{2}{\eta}\int_{r_{0}}^{r}\big{(}\frac{r}{r_{0}}\big{)} ^{(1+\eta)/\eta}\Phi^{\prime}(r)e^{(2/\eta)[\Phi(r)-\Phi(r_{0})]}dr+1\bigg{]}. \end{split} \tag{7}\]
Choosing a domain in which \(\Phi(r)\approx\mathrm{constant}\), we obtain
\[b(r)=r_{0}\bigg{(}\frac{r_{0}}{r}\bigg{)}^{\frac{1}{\eta}}. \tag{8}\]
Substituting the above formula into Eq. (1), one gets
\[ds^{2}=-Adt^{2}+\frac{dr^{2}}{1-\big{(}r_{0}/r\big{)}^{1+\frac{1}{\eta}}}+r^{2 }d\Omega^{2}, \tag{9}\]
where \(A=e^{2\Phi}\). In the next section, we will use this metric to discuss the microlensing effect.
## III Microlensing
In this section, we calculate the magnification and the event rate of the metric (9). Under the REoS, we also find an explicit relation between the maximal number of images of the wormhole and \(\eta\).
Figure 1: A sketch of a wormhole. In our case, we consider lensing effects occurring on one side of the wormhole (spacetime 1 or spacetime 2).
### Deflection angle
In this subsection, we implement the GBT to calculate the deflection angle. For a photon (\(ds^{2}=0\)) in the equatorial plane, the following relation holds:
\[dt^{2}=\frac{dr^{2}}{A\left(1-\left(r_{0}/r\right)^{1+\frac{1}{\eta}}\right)}+ \frac{r^{2}}{A}d\phi^{2}. \tag{10}\]
Then we define two auxiliary quantities, \(du=\frac{dr}{\sqrt{A\left(1-\left(r_{0}/r\right)^{1+\frac{1}{\eta}}\right)}}\) and \(\xi=\frac{r}{\sqrt{A}}\). The Gaussian optical curvature can be expressed as
\[K=\frac{-1}{\xi(u)}[\frac{dr}{du}\frac{d}{dr}\big{(}\frac{dr}{du}\big{)}\frac{ d\xi}{dr}+\big{(}\frac{dr}{du}\big{)}^{2}\frac{d^{2}\xi}{dr^{2}}], \tag{11}\]
Combining this with the metric (10), one gets
\[K=\frac{-\sqrt{A}r_{0}\big{(}\frac{\eta_{0}}{r}\big{)}^{\frac{1}{\eta}}\big{(} 1+\frac{1}{\eta}\big{)}}{2r^{3}\sqrt{1-\left(\frac{r_{0}}{r}\right)^{1+\frac{ 1}{\eta}}}}. \tag{12}\]
Now let us derive the expression for the deflection angle. We first state the Gauss-Bonnet theorem:
\[\int\int_{D}KdS+\int_{\partial D}\kappa dt+\sum_{i}\alpha_{i}=2\pi\chi(D). \tag{13}\]
We choose the integration domain \(D\) shown in Fig. 2; \(OS\) is a geodesic, so the line integral along \(OS\) vanishes. Besides, the Euler characteristic \(\chi\) of the domain \(D\) is 1.
\[\int\int_{D_{2}}KdS+\int_{\gamma_{P}}\kappa dt+\sum_{i}\alpha_{i}=2\pi \tag{14}\]
We can choose \(\gamma\) to intersect the geodesic \(OS\) perpendicularly at points O and S, which means
\[\sum_{i}\alpha_{i}=\frac{\pi}{2}(S)+\frac{\pi}{2}(O)=\pi. \tag{15}\]
The sum of the exterior angles here is that of two right angles. We then perform an integral transformation:
\[\kappa dt=\kappa\frac{dt}{d\phi}d\phi. \tag{16}\]
Here \(\phi\) is the angular coordinate centered at the wormhole. One can set \(\kappa\frac{dt}{d\phi}=1\) on \(\gamma\), so
\[\int\int_{D_{2}}KdS+\int_{\phi_{O}}^{\phi_{S}}d\phi+\pi=2\pi. \tag{17}\]
The geodesic \(OS\) is approximately a straight line; that is, the angle spanned by \(OS\) is \(\pi+\alpha\). Letting the angular coordinate of point O be 0, we obtain
\[\int\int_{D_{2}}KdS+\int_{0}^{\pi+\alpha}d\phi+\pi=\int\int_{D_{2}}KdS+\pi+\alpha +\pi=2\pi. \tag{18}\]
The final result is
\[\alpha=-\int\int_{D_{2}}KdS. \tag{19}\]
That is to say, our deflection angle can be written as
\[\alpha=-\int_{0}^{\pi}\int_{\frac{b}{\sin\phi}}^{\infty}K\sqrt{\det h_{ab}}drd\phi, \tag{20}\]
where \(b\) is the impact parameter and \(h_{ab}\) is the metric (10). Note that our results apply only to small angles. Substituting Eq. (12) into Eq. (20), one obtains
\[\alpha=\int_{0}^{\pi}\int_{\frac{b}{\sin\phi}}^{\infty}\frac{r_{0}\big{(}\frac {r_{0}}{r}\big{)}^{\frac{1}{\eta}}\big{(}1+\frac{1}{\eta}\big{)}}{2\sqrt{A}r^ {2}\big{(}1-\big{(}\frac{r_{0}}{r}\big{)}^{1+\frac{1}{\eta}}\big{)}}drd\phi. \tag{21}\]
Under the weak field approximation, we work out
\[\alpha=\frac{\sqrt{\pi}\big{(}\frac{r_{0}}{b}\big{)}^{1+\frac{1}{\eta}}\eta \Gamma[1+\frac{1}{2\eta}]}{2\sqrt{A}\Gamma[\frac{1}{2}\big{(}3+\frac{1}{\eta} \big{)}]},\quad\text{if }\frac{1}{\eta}>-2. \tag{22}\]
This deflection angle is valid to first order in \(r_{0}/b\). Armed with the deflection angle (22), one can investigate the corresponding lens equation.
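The closed form (22) can be checked against a direct numerical evaluation of the double integral (21); the parameter values below (\(A=1\), \(r_{0}=1\), \(b=50\), \(\eta=1\)) are illustrative:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

A, r0, b, eta = 1.0, 1.0, 50.0, 1.0   # weak field: b >> r0

# Closed form, Eq. (22)
alpha_cf = (np.sqrt(np.pi) * (r0/b)**(1 + 1/eta) * eta * gamma(1 + 1/(2*eta))
            / (2 * np.sqrt(A) * gamma(0.5 * (3 + 1/eta))))

# Direct double integral, Eq. (21): inner r from b/sin(phi) to infinity, outer phi in (0, pi)
K = lambda r, phi: (r0 * (r0/r)**(1/eta) * (1 + 1/eta)
                    / (2 * np.sqrt(A) * r**2 * (1 - (r0/r)**(1 + 1/eta))))
alpha_num, _ = dblquad(K, 0, np.pi, lambda p: b/np.sin(p), np.inf)

print(alpha_cf, alpha_num, np.pi/4 * (r0/b)**2)  # all ~3.14e-4; cf. Eq. (44) at eta = 1
```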
### Lensing equation
The plane geometry in the equatorial plane of the lens (Fig. 3) tells us
\[\beta=\theta-\frac{D_{LS}}{D_{S}}\alpha. \tag{23}\]
Substituting Eq. (22) into Eq. (23), one obtains
\[\theta^{2+\frac{1}{\eta}}-\beta\theta^{1+\frac{1}{\eta}}-\frac{D_{LS}}{D_{S}}\frac{\sqrt{\pi}\big{(}\frac{r_{0}}{D_{L}}\big{)}^{1+\frac{1}{\eta}}\eta\Gamma[1+\frac{1}{2\eta}]}{2\sqrt{A}\Gamma[\frac{1}{2}\big{(}3+\frac{1}{\eta}\big{)}]}=0, \tag{24}\]
here we have used the approximation \(b\approx\theta D_{L}\). From Eq. (24), one can explicitly read off the relation between the order \(n\) of the lensing equation and \(\eta\):
\[n=2+\frac{1}{\eta}. \tag{25}\]
This is an equation for the number of images, since an equation of order \(n\) has at most \(n\) solutions. When \(\eta\to 0_{+}\), \(n\rightarrow\infty\), which means we can get at most an infinite number of images. On the other hand, when \(\eta\rightarrow\pm\infty\), \(n\to 2\), and we expect at most two images.
From the perspective of observation, only real solutions are relevant. However, the second-order lensing equation can have complex solutions, and the situation for higher-order lensing equations is more complicated; to be more precise, one cannot generally find exact real solutions of a higher-order lensing equation. Thus, the image-number equation will guide us in exploring the image problem for various wormholes.
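For the Ellis-Bronnikov case \(\eta=1\), Eq. (24) reduces to a cubic (cf. Eq. (45) in Sec. IV), and the real images can be counted directly. With the SI parameter values of Appendix VI, only one of the three roots turns out to be real, illustrating that \(n=2+1/\eta\) is only an upper bound:

```python
import numpy as np

D_L, D_LS, D_S = 2e24, 2e24, 4e24   # Appendix VI values, in meters
r0, beta = 5e20, 0.03

# theta^3 - beta*theta^2 - pi*D_LS*r0^2/(4*D_S*D_L^2) = 0
C = np.pi * D_LS * r0**2 / (4 * D_S * D_L**2)
roots = np.roots([1.0, -beta, 0.0, -C])
real = roots[np.abs(roots.imag) < 1e-9].real
print(len(real), "real image(s); angle(s):", real)  # 1 real image here, though n = 3 at most
```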
### Magnification
Magnification is a result of the distortion caused by lensing. By applying the lens equation, the solid angle element \(d\beta^{2}\) is transformed into the solid angle \(d\theta^{2}\), ultimately affecting the observed solid angle under which the source is viewed. This shift in solid angle results in a magnification or demagnification of the received flux. The total magnification can be
Figure 2: The illustration of the Gauss-Bonnet theorem integral domain.
calculated as follows:
\[\mu_{\rm total}=\sum_{i}|\frac{\beta}{\theta_{i}}\frac{d\beta}{d\theta_{i}}|^{-1}. \tag{26}\]
Substituting Eq. (23) into Eq. (26) leads to
\[\mu_{\rm total}=\sum_{i}\big{|}\big{(}2+\frac{1}{\eta}\big{)}\frac{\beta}{ \theta_{i}}-\big{(}1+\frac{1}{\eta}\big{)}\frac{\beta^{2}}{\theta_{i}^{2}} \big{|}^{-1}, \tag{27}\]
where \(\theta_{i}\) is the angle of the \(i\)-th image of the wormhole. Eq. (27) is a general formula for calculating the magnification. We take the Ellis-Bronnikov wormhole as an example. For simplicity, one can use \(b\approx\theta D_{L}\) in the total magnification (27) with \(\eta=1\). Consequently, one gets
\[\mu=\big{|}\frac{3\beta}{\beta+\frac{D_{LS}}{D_{S}}\frac{\pi}{4}\big{(}\frac{r_ {0}}{b}\big{)}^{2}}-\frac{2\beta^{2}}{\big{(}\beta+\frac{D_{LS}}{D_{S}}\frac{ \pi}{4}\big{(}\frac{r_{0}}{b}\big{)}^{2}\big{)}^{2}}\big{|}^{-1}. \tag{28}\]
We plot the magnification of the Ellis-Bronnikov wormhole in Fig. 4. It clearly shows that the magnification increases as \(b\) decreases, with \(r_{0}\) fixed. This means that the closer the photon trajectory is to the wormhole's throat, the more the light is distorted and the greater the magnification. There is also a maximum value for this magnification, which
Figure 3: The lens-plane geometry. \(I\) is the location of the images, \(S\) is the location of the source, \(\alpha\) is the deflection angle, \(W\) is the wormhole, and \(b\) is the impact parameter. All of these angles are much less than unity.
depends on the relative position \(\frac{D_{LS}}{D_{S}}\) and the intrinsic angle \(\beta\), and the maximum value in our case is \(4.92992\approx 5\).
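Eq. (27) can be evaluated directly by summing over the real roots of the lens equation (24); the helper below assumes \(2+1/\eta\) is a positive integer and uses the appendix parameter values:

```python
import numpy as np

def total_magnification(beta, eta, C):
    """Eq. (27) summed over real images of theta^n - beta*theta^(n-1) - C = 0."""
    n = int(round(2 + 1/eta))                 # polynomial order, assumed integer here
    coeffs = np.zeros(n + 1)
    coeffs[0], coeffs[1], coeffs[-1] = 1.0, -beta, -C
    thetas = np.roots(coeffs)
    thetas = thetas[np.abs(thetas.imag) < 1e-9].real
    return sum(1/abs((2 + 1/eta)*beta/t - (1 + 1/eta)*beta**2/t**2)
               for t in thetas if t != 0)

D_L, D_LS, D_S, r0 = 2e24, 2e24, 4e24, 5e20
C = np.pi * D_LS * r0**2 / (4 * D_S * D_L**2)        # Ellis-Bronnikov, eta = 1
print(total_magnification(beta=0.03, eta=1.0, C=C))  # ~1 here: the photon is far from the throat
```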
### Event rate
The event rate of the Ellis-Bronnikov wormhole has been studied by F. Abe [47], who assumed that wormholes with throats of \(10\sim 10^{11}\) km are uniformly distributed in the universe and calculated the corresponding optical depth and event rate. We adopt different assumptions: the throats of wormholes are quite large, \(r_{0}=10^{20}\) m, and wormholes are evenly distributed in the universe. We study a single wormhole (\(n=1\)), regarded as an object with a continuous mass distribution, and its event rate. The effective mass of the wormhole can be calculated as follows
\[M=\frac{r_{0}}{2}+\int_{r_{0}}^{r}4\pi\rho(r^{\prime})r^{\prime 2}dr^{\prime}, \tag{29}\]
where the energy density is
\[\rho=-\frac{Ar_{0}(\frac{r_{0}}{r})^{\frac{1}{\eta}}}{r^{3}\eta}\frac{c^{4}}{ 8\pi G}. \tag{30}\]
Therefore,
\[M=\frac{Ac^{4}D_{L}\big{(}\frac{r_{0}}{D_{L}}\big{)}^{1+\frac{1}{\eta}}}{2G}+ \frac{r_{0}G-Ac^{4}}{2G}. \tag{31}\]
There is a famous parameter in the field of gravitational lensing, the Einstein angle:
\[\theta_{E}\equiv\sqrt{\frac{4GM}{c^{2}}\frac{D_{LS}}{D_{L}D_{S}}}, \tag{32}\]
and the Einstein ring is generally taken to define the cross-section for microlensing:
\[\sigma_{micro}=\pi\theta_{E}^{2}. \tag{33}\]
This is the solid angle within which a source has to be placed in order to produce a detectable microlensing signal. The Einstein radius crossing time is
\[\begin{split} t_{E}&=\frac{D_{L}\theta_{E}}{v}\\ &=\sqrt{\frac{2\big{(}Ac^{4}D_{L}\big{(}\frac{r_{0}}{D_{L}}\big{)} ^{1+\frac{1}{\eta}}+r_{0}G-Ac^{4}\big{)}}{c^{2}v^{2}}\frac{D_{LS}D_{L}}{D_{S}} },\end{split} \tag{34}\]
where \(v\) is the speed of the observed object. The optical depth \(\tau\) to some distance \(D_{S}\) is the probability that a source at that distance gives rise to a detectable microlensing event:
\[\tau(D_{S})=\frac{4\pi G}{c^{2}}D_{S}^{2}\int_{0}^{1}\rho(x)x(1-x)dx, \tag{35}\]
where \(x=\frac{D_{L}}{D_{S}}\) and \(dx=\frac{dD_{L}}{D_{S}}\); the integral yields
\[\begin{split}&\tau(D_{S})=|\frac{Ac^{2}r_{0}\big{(}\frac{r_{0}}{D_{L} }\big{)}^{\frac{1}{\eta}}(D_{L}(1+\eta)-D_{S})}{2D_{S}D_{L}(1+\eta)}\\ &-\frac{Ac^{2}(r_{0}(1+\eta)-D_{S})}{2D_{S}(1+\eta)}|.\end{split} \tag{36}\]
To estimate the rate of microlensing events that we may observe while monitoring a certain number of sources for a specific time, we can represent the event rate as
\[\Gamma=\frac{d(N\tau)}{dt}=\frac{2N}{\pi}\int_{0}^{D_{S}}n(D_{L})\frac{\pi r_{ E}^{2}}{t_{E}}dD_{L}, \tag{37}\]
Assuming the Einstein crossing times for all sources are identical, the result is
\[\Gamma=\frac{2N}{\pi t_{E}}\tau. \tag{38}\]
where
\[\begin{split}&\Gamma=\frac{2N}{\pi\sqrt{\frac{2\big{(}Ac^{4}D_{L} \big{(}\frac{r_{0}}{D_{L}}\big{)}^{1+\frac{1}{\eta}}+r_{0}G-Ac^{4}\big{)}}{c^ {2}v^{2}}\frac{D_{LS}D_{L}}{D_{S}}}}\times\\ &|\frac{Ac^{2}r_{0}\big{(}\frac{r_{0}}{D_{L}}\big{)}^{\frac{1}{ \eta}}\big{(}D_{L}(1+\eta)-D_{S}\big{)}}{2D_{S}D_{L}(1+\eta)}-\frac{Ac^{2} \big{(}r_{0}(1+\eta)-D_{S}\big{)}}{2D_{S}(1+\eta)}|.\end{split} \tag{39}\]
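Evaluating Eq. (39) with the SI parameter values of Appendix VI reproduces the numbers quoted below; the vacuum case must be taken as a limit, since Eq. (36) is singular at \(\eta=-1\):

```python
import numpy as np

# SI values from Appendix VI
A, c, G = 1.0, 3e8, 6.67e-11
D_L, D_LS, D_S = 2e24, 2e24, 4e24
N, r0, v = 1e6, 5e20, 3e4
YEAR = 3.15e7  # seconds

def event_rate(eta):
    """Eq. (39): Gamma = 2*N*tau / (pi*t_E), in events per second."""
    t_E = np.sqrt(2 * (A*c**4*D_L*(r0/D_L)**(1 + 1/eta) + r0*G - A*c**4)
                  / (c**2 * v**2) * D_LS * D_L / D_S)             # Eq. (34)
    tau = abs(A*c**2*r0*(r0/D_L)**(1/eta) * (D_L*(1 + eta) - D_S)
              / (2*D_S*D_L*(1 + eta))
              - A*c**2*(r0*(1 + eta) - D_S) / (2*D_S*(1 + eta)))  # Eq. (36)
    return 2 * N * tau / (np.pi * t_E)

print(event_rate(1.0) * YEAR)         # ~9e4 events/yr (Ellis-Bronnikov)
print(event_rate(-1 + 1e-9) * YEAR)   # ~3.5e2 events/yr (vacuum limit, eta -> -1)
```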
We plot the event rate against \(\eta\) in Fig. 5 and Fig. 6. We find that the event rate in the interval \(0<\eta<1\) diverges, because in this interval (which is permitted by the flaring-out condition (6)) we have \(n\rightarrow\infty\), and every image can produce an observable microlensing event. Taking the Ellis-Bronnikov wormhole (\(\eta=1\)) as an instance, we can observe about \(9\times 10^{4}\) microlensing events in one year for \(1\times 10^{6}\) sources. In vacuum (\(\eta=-1\)), we can observe about 300 events a year. We find that the Ellis-Bronnikov wormhole (\(\eta=1\), as will be shown in Sec. IV) makes the event rate much higher than the vacuum case. In addition, if two wormholes have the same \(\eta\) value, we cannot distinguish them by the event rate, because the event rate is only
Figure 5: Event rate (39) as a function of \(\eta\) over the range from 0 to 10. For \(\eta<2\), \(\Gamma\) is nearly divergent.
Figure 6: Event rate (39) as a function of \(\eta\) over the range from \(-10\) to \(-1\). \(\eta=-1\) corresponds to the vacuum case (see Sec. IV).
affected by \(\eta\) in our case.
## IV Number of images
In this section, we check the performance of our equation \(n=2+\frac{1}{\eta}\) in specific situations, including the vacuum case, the Ellis-Bronnikov wormhole, and the charged wormhole.
### Vacuum Case
When \(\eta=-1\), the metric (9) becomes a Schwarzschild-like metric:
\[ds^{2}=-Adt^{2}+dr^{2}+r^{2}d\Omega^{2}. \tag{40}\]
Our calculation shows that the energy-momentum tensor vanishes; thus we call this the vacuum case. Solving the lensing equation, we obtain
\[\theta=\frac{\pi D_{LS}}{2D_{S}\sqrt{A}}+\beta. \tag{41}\]
For comparison, we substitute \(\eta=-1\) into equation (25)
\[n=2+\frac{1}{-1}=1. \tag{42}\]
In the vacuum case, there will be at most one image, which is consistent with physical intuition.
### Ellis-Bronnikov wormhole
We first discuss the number of images with the traditional method and then compare the result with formula (25). When the redshift parameter \(A=1\) and \(\eta=1\), our metric reduces to the Ellis-Bronnikov wormhole
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\left(\frac{r_{0}}{r}\right)^{2}}+r^{2}d\Omega^ {2}. \tag{43}\]
The deflection angle from Eq. (21) with \(\eta=1\) is
\[\alpha=\frac{\pi}{4}\big{(}\frac{r_{0}}{b}\big{)}^{2}. \tag{44}\]
So, the lens equation is
\[\theta^{3}-\beta\theta^{2}-\frac{\pi D_{LS}r_{0}^{2}}{4D_{S}D_{L}^{2}}=0. \tag{45}\]
For a general cubic equation \(ax^{3}+bx^{2}+cx+d=0\), the partial discriminants are \(A=b^{2}-3ac\), \(B=bc-9ad\), and \(C=c^{2}-3bd\), and the total discriminant is \(\Delta=B^{2}-4AC\). We need \(\Delta<0\) to have three real solutions. Using equation (25),
\[n=2+\frac{1}{1}=3. \tag{46}\]
We find that the two approaches are consistent. This is not surprising, because the Ellis-Bronnikov wormhole fits our previous metric (9).
### Charged wormhole
Referring to [32], which showed that there are at most three images for the charged spherically symmetric wormhole, the corresponding metric [68] can be expressed as
\[ds^{2}=-\big{(}1+\frac{Q^{2}}{r^{2}}\big{)}dt^{2}+\big{(}1-\frac{r_{0}^{2}}{r^ {2}}+\frac{Q^{2}}{r^{2}}\big{)}^{-1}dr^{2}+r^{2}d\Omega^{2}. \tag{47}\]
where \(\frac{r_{0}^{2}}{r}\) is the mass term. We calculate the state coefficient from \(\eta=\frac{p_{r}}{\rho}\); the result is
\[\eta=-\frac{r^{4}\left(Q^{4}+Q^{2}\left(r^{2}-r_{0}^{2}\right)+r^{2}r_{0}^{2} \right)}{\left(Q^{2}+r^{2}\right)^{2}\left(Q^{2}-r_{0}^{2}\right)\left(Q^{2}+ r^{2}-r_{0}^{2}\right)}, \tag{48}\]
where \(\eta\) is not constant here, but it is approximately constant over our integration region, as shown in the numerical plot in Fig. 7. The weak-field condition with \(r_{0}\ll r\) results in \(\eta=1\); substituting into equation (25),
\[n=2+\frac{1}{1}=3. \tag{49}\]
This demonstrates consistency for charged wormholes. Equation (25) was derived under the assumption of a constant redshift parameter, but this is reasonable here because the microlensing limit gives \(\lim\limits_{r\rightarrow\infty}1+\frac{Q^{2}}{r^{2}}\approx 1\), which means that the observer is quite remote from both the source and the wormhole.
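The \(r\rightarrow\infty\) behavior of Eq. (48) can be confirmed symbolically; the limit equals \((Q^{2}+r_{0}^{2})/(r_{0}^{2}-Q^{2})\), which reduces to 1 when the charge is negligible compared with the throat radius, as evidently assumed in Fig. 7:

```python
import sympy as sp

r, r0, Q = sp.symbols('r r_0 Q', positive=True)
eta = -r**4 * (Q**4 + Q**2*(r**2 - r0**2) + r**2*r0**2) / (
      (Q**2 + r**2)**2 * (Q**2 - r0**2) * (Q**2 + r**2 - r0**2))

eta_inf = sp.simplify(sp.limit(eta, r, sp.oo))
print(eta_inf)             # (Q**2 + r_0**2)/(r_0**2 - Q**2)
print(eta_inf.subs(Q, 0))  # 1 when the charge vanishes
```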
## V Conclusion and outlook
In this paper, we investigate the microlensing effects and the multi-image problem of the static spherical wormhole (9). By introducing the so-called REoS \(\eta=\frac{p_{r}}{\rho}\), we can reformulate
the metric (9). Consequently, one can re-examine the microlensing effect of (9), including its magnification and event rate. In particular, one obtains an explicit formula \(n=2+\frac{1}{\eta}\) for the maximum number of images in the equatorial plane. Our analysis is consistent with the vacuum case, the Ellis-Bronnikov wormhole, and the charged wormhole. However, it remains unknown whether this formula holds for a quantum-corrected wormhole, whose metric is
\[ds^{2}=-(1+\frac{\hbar q}{r^{2}})dt^{2}+\frac{dr^{2}}{1-\frac{b_{0}^{2}}{r^{2}} +\frac{\hbar q}{r^{2}}}+r^{2}d\Omega^{2}, \tag{50}\]
where \(\hbar q\) comes from the quantum corrections. In the weak-field approximation, one can obtain \(\eta=\frac{b_{0}^{2}+q\hbar}{b_{0}^{2}-q\hbar}\). On the large scale (\(b_{0}^{2}\gg\hbar q\)), it recovers the Ellis-Bronnikov wormhole. On the small scale (\(b_{0}^{2}\approx\hbar q\)), \(\eta\) is a variable determined by \(b_{0}^{2}/r\) (dubbed the mass part) and \(q\) (dubbed the charge part). From another aspect, the order of the lens equation is also unknown. For us, the wormhole with quantum corrections remains mysterious. Thus, we conclude that \(n=2+\frac{1}{\eta}\) is valid for the metric (9).
We also reformulated the magnification in terms of \(\eta\) and used it to analyze the magnification of the Ellis-Bronnikov wormhole as an instance. We find that the maximum magnification of an Ellis-Bronnikov wormhole depends only on the relative position \(\frac{D_{LS}}{D_{S}}\) and the intrinsic angle \(\beta\); in our case the maximum is about 5, attained when the photon travels near the throat of the wormhole. For other types of static spherical wormholes, one can naturally extend our methods to multi-image cases, which is left for future
Figure 7: The plot shows the relationship between \(\eta\) and the radial distance. In the weak-field approximation, \(\eta\to 1\) as \(r\rightarrow\infty\).
work.
Instead of the hypothesis of small wormholes (\(r_{0}\) at \(10^{3}-10^{14}\) m), we consider large-scale wormholes (\(r_{0}\) around \(10^{20}\) m) evenly distributed throughout the universe. We calculated the event rate for the single-wormhole situation. The event rates of the Ellis-Bronnikov wormhole and the charged wormhole are equal for the same \(\eta\) under the microlensing conditions, and they are many orders of magnitude higher than the vacuum case. An interesting question is whether black holes and wormholes can be distinguished by event rates. Here, we have only treated some specific cases of the static spherically symmetric wormhole. To apply the method to a black hole, the difficulty is that we need the shape function of the black hole metric. Once the shape function in the form of (1) is found, it is possible to distinguish the wormhole from its associated black hole [31].
## Acknowledgements
We are grateful to Hai-Qing Zhang and Bi-Chu Li for many suggestions that improved this manuscript. LH and KG are funded by NSFC grant No. 12165009.
## VI Appendix
We unify all of the parameters in SI units and impose the following values:
\(A=1,\ \ c=3\times 10^{8},\ \ D_{L}=2\times 10^{24},\ \ D_{LS}=2\times 10^{24},\ \ D_{S}=4 \times 10^{24},\ \ G=6.67\times 10^{-11},\\ N=1\times 10^{6},\ \ r_{0}=5\times 10^{20},\ \ v=3\times 10^{4},\ \ b=5\times 10^{22},\ \ \beta=0.03.\)
|
2302.09452 | ALMA ACA study of the H$_2$S/OCS ratio in low-mass protostars | The identification of the main sulfur reservoir on its way from the diffuse
interstellar medium to the cold dense star-forming cores and eventually to
protostars is a long-standing problem. Despite sulfur's astrochemical
relevance, the abundance of S-bearing molecules in dense cores and regions
around protostars is still insufficiently constrained. The goal of this
investigation is to derive the gas-phase H$_2$S/OCS ratio for several low-mass
protostars, which could provide crucial information about the physical and
chemical conditions in the birth cloud of Sun-like stars. Using ALMA ACA Band 6
observations, H$_2$S, OCS, and their isotopologs are searched for in 10 Class
0/I protostars with different source properties such as age, mass, and
environmental conditions. An LTE model is used to fit synthetic spectra to the
detected lines and to derive the column densities based solely on optically
thin lines. The H$_2$S and OCS column densities span four orders of magnitude
across the sample. The H$_2$S/OCS ratio is found to be in the range from 0.2 to
above 9.7. IRAS 16293-2422 A and Ser-SMM3 have the lowest ratio, while
BHR71-IRS1 has the highest. Only the H$_2$S/OCS ratio of BHR71-IRS1 agress
within uncertainties with the ratio in comet 67P/C$-$G. The determined
gas-phase H$_2$S/OCS ratios can be below the upper limits on the solid-state
ratios by as much as an order of magnitude. The H$_2$S/OCS ratio depends
significantly on the environment of the birth cloud, such as UV-irradiation and
heating received prior to the formation of a protostar. The highly isolated
birth environment of BHR71-IRS1 is hypothesized to be the reason for its high
gaseous H$_2$S/OCS ratio due to lower rates of photoreactions and more
efficient hydrogenation reactions under such dark, cold conditions. The gaseous
inventory of S-bearing molecules in BHR71-IRS1 appears to be most similar to
that of interstellar ices. | Tanya Kushwahaa, Maria N. Drozdovskaya, Łukasz Tychoniec, Benoît Tabone | 2023-02-19T01:20:40Z | http://arxiv.org/abs/2302.09452v1 | # ALMA ACA study of the H\({}_{2}\)S/OCS ratio in low-mass protostars
###### Abstract
Context:The identification of the main sulfur reservoir on its way from the diffuse interstellar medium to the cold dense star-forming cores and eventually to protostars is a long-standing problem. Despite sulfur's astrochemical relevance, the abundance of S-bearing molecules in dense cores and regions around protostars is still insufficiently constrained.
Aims:The goal of this investigation is to derive the gas-phase H\({}_{2}\)S/OCS ratio for several low-mass protostars, which could provide crucial information about the physical and chemical conditions in the birth cloud of Sun-like stars.
Methods:Using ALMA ACA Band 6 observations, H\({}_{2}\)S, OCS, and their isotopologs are searched for in 10 Class 0/I protostars with different source properties such as age, mass, and environmental conditions. An LTE model is used to fit synthetic spectra to the detected lines and to derive the column densities based solely on optically thin lines.
Results:The H\({}_{2}\)S and OCS column densities span four orders of magnitude across the sample. The H\({}_{2}\)S/OCS ratio is found to be in the range from 0.2 to above 9.7. IRAS 16293-2422 A and Ser-SMM3 have the lowest ratio, while BHR71-IRS1 has the highest. Only the H\({}_{2}\)S/OCS ratio of BHR71-IRS1 agrees within uncertainties with the ratio in comet 67P/Churyumov-Gerasimenko.
Conclusions:The determined gas-phase H\({}_{2}\)S/OCS ratios can be below the upper limits on the solid-state ratios by as much as an order of magnitude. The H\({}_{2}\)S/OCS ratio depends significantly on the environment of the birth cloud, such as UV-irradiation and heating received prior to the formation of a protostar. The highly isolated birth environment of BHR71-IRS1 is hypothesized to be the reason for its high gaseous H\({}_{2}\)S/OCS ratio due to lower rates of photoreactions and more efficient hydrogenation reactions under such dark, cold conditions. The gaseous inventory of S-bearing molecules in BHR71-IRS1 appears to be most similar to that of interstellar ices.
## 1 Introduction
Sulfur (S) is the tenth most abundant element in the Universe (S/H\(\sim\)1.35\(\times\)10\({}^{-5}\), Yamamoto 2017). It was first detected as carbon monosulfide (CS) in the interstellar medium (Penzias et al. 1971). S-bearing species have since been detected in different regions including molecular clouds (Navarro-Almaida et al. 2020; Spezzano et al. 2022), hot cores (Blake et al. 1987; Charnley 1997; Li et al. 2015; Codella et al. 2021; Drozdovskaya et al. 2018), comets (Smith et al. 1980; Bockelee-Morvan et al. 2000; Biver et al. 2021a,b), as well as starburst galaxies (NGC 253; Martin et al. 2005). The total abundance of an element in dust, ice, and gas is its cosmic abundance, also called its elemental abundance. The gas-phase abundance of atomic sulfur in diffuse clouds is comparable to the cosmic abundance of sulfur (\(\sim\)10\({}^{-5}\); Savage & Sembach 1996; Howk et al. 2006). However, the observed abundance of S-bearing species in dense cores and protostellar environments is lower by a factor of \(\sim\)1000 (Snow et al. 1986; Tieftrunk et al. 1994; Goicoechea et al. 2006; Agundez et al. 2018) in comparison to the total S-abundance in diffuse clouds. The forms and mechanisms behind this sulfur depletion in star-forming regions are still unknown. This is often called the "missing sulfur problem".
Different chemical models have been used to investigate this unknown form of sulfur (Woods et al. 2015; Vidal et al. 2017; Semenov et al. 2018; Vidal & Wakelam 2018; Laas & Caselli 2019). Vidal et al. (2017) have proposed that a notable amount of sulfur is locked up in either HS and H\({}_{2}\)S ices or gaseous atomic sulfur in cores, depending substantially on the age of the molecular cloud. However, the only solid form of sulfur firmly detected in interstellar ices is OCS (Palumbo et al. 1995, 1997; Aikawa et al. 2012; Boogert et al. 2015) and potentially also SO\({}_{2}\)(Boogert et al. 1997; Zasowski et al. 2009; Yang et al. 2022; McClure et al. 2023). Solid state H\({}_{2}\)S detection remains tentative to date (Geballe et al. 1985; Smith 1991). The initial cloud abundance of S-bearing molecules has been shown to set the subsequent abundances of these molecules in protostellar regions, depending on the free-fall timescales (Vidal & Wakelam 2018). In surface layers of protoplanetary disks, the availability of gaseous S-bearing molecules appears to be strongly linked with the availability of oxygen (Semenov et al. 2018). Observational studies of gas-phase species claim either H\({}_{2}\)S (Holdship et al. 2016) or OCS (van der Tak et al. 2003) as the main S-carrier depending
on the environment being observed. Other possible reservoirs of sulfur have been proposed in the form of semi-refractory polymers up to S\({}_{8}\) (A'Hearn et al., 1983; Druard & Wakelam, 2012; Calmonte et al., 2016; Shingledecker et al., 2020), hydrated sulfuric acid (Scappini et al., 2003), atomic sulfur (Anderson et al., 2013), and mineral sulfides, FeS (Keller et al., 2002; Kohler et al., 2014; Kama et al., 2019). On the other hand, chemical models of the evolution from cloud to dense core with updated chemical networks suggest that sulfur is merely partitioned over a diverse set of simple organic-sulfur ices and no additional form is required (Laas & Caselli, 2019). Matching observed and modeled cloud abundances consistently for the full inventory of gaseous S-bearing molecules to better than a factor of 10 remains challenging (Navarro-Almaida et al., 2020). Laboratory experiments point to the importance of the photodissociation of H\({}_{2}\)S ice by UV photons leading to the production of OCS ice (Ferrante et al., 2008; Garozzo et al., 2010; Jimenez-Escobar & Munoz Caro, 2011; Chen et al., 2015) and S\({}_{2}\) in mixed ices (Grim & Greenberg, 1987). Calmonte et al. (2016) claim to have recovered the full sulfur inventory in comets.
Sulfur-bearing species have been proposed to probe the physical and chemical properties of star-forming regions and to even act as chemical clocks (Charnley, 1997; Hatchell et al., 1998; Viti et al., 2001; Li et al., 2015). However, it has since been shown that their abundance is sensitive to gas-phase chemistry and the availability of atomic oxygen, which puts their reliability as chemical clocks into question (Wakelam et al., 2004, 2011). Studying S-bearing molecules in young Class 0/I protostars is crucial for two reasons. Firstly, their inner hot regions thermally desorb all the volatile ices that are otherwise hidden from gas-phase observations. Consequently, it is more likely to be able to probe the full volatile inventory of S-bearing molecules and investigate the "missing sulfur" reservoir. Secondly, these targets are a window onto the materials available for the assembly of the protoplanetary disk midplane and the cometesimals therein (Aikawa & Herbst, 1999; Willacy, 2007; Willacy & Woods, 2009). This makes hot inner regions highly suitable targets for comparative studies with comets (Bockelee-Morvan et al., 2000; Drozdovskaya et al., 2019).
The main goal of this paper is to study the physical and chemical conditions in embedded protostars via the H\({}_{2}\)S/OCS ratio. A sample of 10 Class 0/I low-mass protostars with different physical properties (mass, age, environment) is considered. Such protostars are in their earliest phase of formation after collapse, with large envelope masses. In this work, Atacama Large Millimeter/submillimeter Array (ALMA) Atacama Compact Array (ACA) Band 6 observations towards these 10 protostars are utilized. The H\({}_{2}\)S/OCS ratio is calculated from the column densities of H\({}_{2}\)S, OCS, and their isotopologs. The details of the observations, model, and model parameters used for synthetic spectral fitting are introduced in Section 2. The detected lines of the major and minor isotopologs of H\({}_{2}\)S and OCS, their characteristics, and the H\({}_{2}\)S/OCS line ratios are presented in Section 3. The discussion and conclusions are presented in Sections 4 and 5, respectively.
## 2 Methods
### Observations
Two sets of observations are jointly analyzed in this paper. The first data set (project-id: 2017.1.00108.S; PI: M. N. Drozdovskaya) targeted IRAS 16293-2422, NGC 1333-IRAS4A, and RCrA IRS7B. The observations were carried out in Band 6 (211-275 GHz) with the ALMA ACA 7m dishes. The data set has a spectral resolution of 0.079-0.085 km s\({}^{-1}\) (61 kHz), and a spatial resolution of (6.5-9.0)\(\times\)(4.0-6.3)\({}^{\prime\prime}\). The second data set (project-id: 2017.1.01350.S; PI: Ł. Tychoniec) targeted Per-B1-c, BHR71, Per-emb-25, NGC 1333-IRAS4B, Ser-SMM3, and
\begin{table}
\begin{tabular}{l c c c} \hline Sky frequency & \multicolumn{2}{c}{Channel width} & Number of channels \\ (GHz) & (kHz) & (km s\({}^{-1}\)) & \\ \hline \multicolumn{4}{l}{Project-id: 2017.1.00108.S} \\ \hline … & … & … & … \\ (continuum) & 977 & 1.260 & 2 048 \\ \hline \end{tabular}
\end{table}
Table 1: Spectral settings of the data sets.
TMC1 with the ALMA ACA 7m dishes, also in Band 6. The data have a similar spatial resolution, (6.1-7.4)\(\times\)(4.5-6.4)\({}^{\prime\prime}\), but a lower spectral resolution of 0.333-0.678 km s\({}^{-1}\) (244-488 kHz). The observed frequency ranges of the data are given in Table 1. Data cubes were processed through the standard pipeline calibration with CASA 5.4.0-68. For each source, the noise level has been calculated by taking the standard deviation of the flux in the frequency ranges where no emission lines were detected, i.e., regions with pure noise, in the spectral window containing the H\({}_{2}\)S, 2\({}_{2,0}\)-2\({}_{1,1}\) line. The noise level of the first data set is 21-32 mJy beam\({}^{-1}\) channel\({}^{-1}\), and that of the second data set is 7-13 mJy beam\({}^{-1}\) channel\({}^{-1}\) (Table 3). Both data sets have a flux uncertainty of 10%. The largest resolvable scales of the first and the second data sets are 26.2-29.2\({}^{\prime\prime}\) and 24.6-29.0\({}^{\prime\prime}\), respectively.
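For reference, the noise estimate described above (and in the caption of Table 3) amounts to the following arithmetic. This is a minimal sketch, not the actual reduction script; the 0.082 km s\({}^{-1}\) channel width is taken as a representative value from the quoted 0.079\(-\)0.085 km s\({}^{-1}\) range:

```python
import numpy as np

def channel_rms(spectrum_mjy, line_free_mask):
    """RMS noise per channel (mJy/beam) over line-free channels."""
    return np.std(spectrum_mjy[line_free_mask])

def integrated_rms(sigma_chan, fwhm_kms, dv_kms):
    """Noise of a line integrated over its FWHM (mJy/beam km/s)."""
    n_chan = fwhm_kms / dv_kms            # channels spanned by the line FWHM
    return sigma_chan * np.sqrt(n_chan) * dv_kms

# IRAS 16293-2422 A: 28 mJy/beam/channel, FWHM = 4.5 km/s (cf. Table 3)
print(integrated_rms(28.0, 4.5, 0.082))   # ~17 mJy/beam km/s
```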
### Sources
The properties of the sources explored in this work are tabulated in Table 2. IRAS 16293-2422 (hereafter, IRAS 16293) is a triple protostellar source, consisting of protostars A and B, separated by 5.3\({}^{\prime\prime}\) (747 au; van der Wiel et al. [2019]) and disk-like structures around the two sources, located in the Rho Ophiuchi star-forming region at a distance of 141 pc (Dzib et al. [2018]). This source was studied thoroughly using ALMA under the Protostellar Interferometric Line Survey (PILS; Jorgensen et al. [2016]) and many preceding observational campaigns (e.g., van Dishoeck et al. [1995]; Caux et al. [2011]). Both hot corinos around A and B are rich in a diverse set of complex organic molecules (Jorgensen et al. [2018]; Manigand et al. [2020]). The source IRAS 16293 A is itself a binary composed of sources A1 and A2 with a separation of 0.38\({}^{\prime\prime}\) (54 au; Maureira et al. [2020]). IRAS4A is also a binary system, comprised of IRAS4A1 and IRAS4A2, separated by 1.8\({}^{\prime\prime}\) (540 au; Sahu et al. [2019]) in the Perseus molecular cloud, located at a distance of 299 pc (Zucker et al. [2018]) in the south-eastern edge of the complex NGC 1333 (Looney et al. [2000]). IRAS4A1 has a much higher dust density in its envelope than IRAS4A2, but both contain complex organic molecules (Sahu et al. [2019]; De Simone et al. [2020]). IRS7B is a low-mass source, with a separation of 14\({}^{\prime\prime}\) (2 000 au) from IRS7A (Brown [1987]), and \(\sim\)8\({}^{\prime\prime}\) (1 000 au) from CXO 34 (Lindberg et al. [2014]). It is situated in the Corona Australis dark cloud at a distance of 130 pc (Neuhauser & Forbrich [2008]). IRS7B has been shown to contain lower complex organic abundances as a result of being located in a strongly irradiated environment (Lindberg et al. [2015]).
From the second set of sources, IRAS4B (sometimes labeled BI) has a binary component B\({}^{\prime}\) (or BII) that is 11\({}^{\prime\prime}\) (3 300 au) away (Sakai et al. [2012]; Anderl et al. [2016]; Tobin et al. [2016]). The separation between IRAS4B and IRAS4A is 31\({}^{\prime\prime}\) (9 300 au; Coutens et al. [2013]). IRAS4B displays emission from complex organic molecules (Belloche et al. [2020]) and powers a high-velocity SiO jet (Podio et al. [2021]). B1-c is an isolated, deeply embedded protostar in the Barnard 1 clump in the western part of the Perseus molecular cloud at a distance of 301 pc (Zucker et al. [2018]). B1-c contains emission from complex organic molecules and shows a high-velocity outflow (Jorgensen et al. [2006]; van Gelder et al. [2020]). The next closest source, B1-a, is \(\sim\)100\({}^{\prime\prime}\) (\(\sim\) 29 500 au) away (Jorgensen et al. [2006]). BHR71 is a Bok globule in the Southern Coalsack dark nebula at a distance of \(\sim\)200 pc (Seidensticker & Schmidt-Kaler [1989]; Straizys et al. [1994]). It hosts the wide binary system of IRS1 and IRS2 with a separation of 16\({}^{\prime\prime}\) (3 200 au; Bourke [2001]; Parise et al. [2006]; Chen et al. [2008]; Tobin et al. [2019]). IRS1 displays pronounced emission from complex organic molecules (Yang et al. [2020]). Emb-25 is a single source located in the Perseus molecular cloud (Enoch et al. [2009]; Tobin et al. [2016]). It does not show emission from complex organic molecules (Yang et al. [2021]), but powers low-velocity CO outflows (Stephens et al. [2019]). TMC1 is a Class I binary source, located in the Taurus molecular cloud (Chen et al. [1995]; Brown & Chandler [1999]) at a distance of 140 pc (Elias [1978]; Torres et al. [2009]). The separation between the two components, TMC1E and TMC1W, is \(\sim\)0.6\({}^{\prime\prime}\) (\(\sim\)85 au), and neither of the two displays complex organic emission (van't Hoff et al. [2020]). SMM3 is a single, embedded protostar located in the southeastern part of the Serpens region, 436 pc away (Ortiz-Leon et al. [2018]). The next closest-lying source is SMM6 at a separation
\begin{table}
\begin{tabular}{l l l l l l l} \hline Source & \(d\) & \(M_{\rm env}\) & \(L_{\rm bol}\) & \(T_{\rm bol}\) & Class & \(v_{\rm LSR}\) \\ & (pc) & (M\({}_{\odot}\)) & (L\({}_{\odot}\)) & (K) & \(-\) & (km s\({}^{-1}\)) \\ \hline IRAS 16293-2422 A & 141\({}^{a}\) & 4.0\({}^{d}\) & \(\sim\)18\({}^{b}\) & \(-\) & 0 & +3.2\({}^{c}\) \\ IRAS 16293-2422 B & 141\({}^{a}\) & 4.0\({}^{d}\) & \(\sim\)3\({}^{b}\) & \(-\) & 0 & +2.7\({}^{c}\) \\ NGC 1333-IRAS4A & 299\({}^{e}\) & 5.6\({}^{f}\) & 9.1\({}^{f}\) & 29\({}^{g}\) & 0 & +7.2\({}^{g}\) \\ RCrA IRS7B & 130\({}^{f}\) & 2.2\({}^{f}\) & 4.6\({}^{f}\) & 89\({}^{g}\) & 0/I & +5.8\({}^{\prime}\) \\ Per-B1-c & 301\({}^{e}\) & 1.8\({}^{h}\) & 3.84\({}^{h}\) & 48\({}^{h}\) & 0 & +6.4\({}^{k}\) \\ BHR71-IRS1 & 200\({}^{m,p}\) & 2.7\({}^{e}\) & 15\({}^{a}\) & 44\({}^{g}\) & 0 & -4.4\({}^{g}\) \\ Per-emb-25 & 294\({}^{r}\) & 0.5\({}^{h}\) & 1.0\({}^{h}\) & 68\({}^{h}\) & 0/I & +5.8\({}^{k}\) \\ NGC 1333-IRAS4B & 299\({}^{e}\) & 3.0\({}^{d}\) & 4.4\({}^{g}\) & 28\({}^{g}\) & 0 & +7.4\({}^{g}\) \\ Ser-SMM3 & 436\({}^{i}\) & 3.2\({}^{a}\) & 5.1\({}^{a}\) & 38\({}^{a}\) & 0 & +7.6\({}^{g}\) \\ TMC1 & 140\({}^{i,a}\) & 0.2\({}^{a}\) & 0.9\({}^{a}\) & 101\({}^{a}\) & I & +5.2\({}^{g}\) \\ \hline \end{tabular} 1
[FOOTNOTE:1]Footnote 1: The columns represent 1) Source, 2) \(d\): distance to the source in pc, 3) \(M_{\rm env}\): mass of the envelope in M\({}_{\odot}\), 4) \(L_{\rm bol}\): bolometric luminosity in L\({}_{\odot}\), 5) \(T_{\rm bol}\): bolometric temperature in K, 6) Class: stage of the protostar, 7) \(v_{\rm LSR}\): local standard of rest velocity in km s\({}^{-1}\). References: [\({}^{a}\)] Dzib et al. (2018), [\({}^{b}\)] Jacobsen et al. (2018), [\({}^{c}\)] Jorgensen et al. (2011), [\({}^{d}\)] van der Wiel et al. (2019), [\({}^{e}\)] Zucker et al. (2018), [\({}^{g}\)] Taquet et al. (2015), [\({}^{g}\)] Tobin et al. (2016), [\({}^{h}\)] Enoch et al. (2009), [\({}^{h}\)] Neuhauser & Forbrich (2008), [\({}^{h}\)] Lindberg et al. (2014), [\({}^{h}\)] Stephens et al. (2019), [\({}^{h}\)] Matthews et al. (2
\end{table}
Table 2: Properties of the studied sources.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Source & RA & Dec & Pixel size & \multicolumn{2}{c}{Radius} & FWHM & \multicolumn{2}{c}{Noise level} \\ & (J2000) & (J2000) & (\({}^{\prime\prime}\)) & (au) & (\({}^{\prime\prime}\)) & (km s\({}^{-1}\)) & (mJy beam\({}^{-1}\) channel\({}^{-1}\)) & (mJy beam\({}^{-1}\) km s\({}^{-1}\)) \\ \hline IRAS 16293-2422 A & 16h 32m 22.854s & -24\({}^{\circ}\)28\({}^{\prime}\) 36.465\({}^{\prime\prime}\) & 0.8 & 522 & 3.7 & 4.5 & 28 & 17.0 \\ IRAS 16293-2422 B & 16h 32m 22.671s & -24\({}^{\circ}\)28\({}^{\prime}\) 33.145\({}^{\prime\prime}\) & 0.8 & 522 & 3.7 & 1.0 & 32 & 9.2 \\ NGC 1333-IRAS4A & 03h 29m 10.509s & +31\({}^{\circ}\)13\({}^{\prime}\) 30.918\({}^{\prime\prime}\) & 1.1 & 822 & 3.5 & 1.8 & 29 & 11.2 \\ RCrA IRS7B & 19h 01m 56.402s & -36\({}^{\circ}\)57\({}^{\prime}\) 28.276\({}^{\prime\prime}\) & 0.8 & 730 & 4.3 & 1.0 & 21 & 6.0 \\ Per-B1-c & 03h 33m 17.880s & +31\({}^{\circ}\)09\({}^{\prime}\) 31.795\({}^{\prime\prime}\) & 1.2 & 879 & 3.0 & 2.2 & 11 & 13.4 \\ BHR71-IRS1 & 12h 01m 36.516s & -65\({}^{\circ}\)08\({}^{\prime}\) 49.298\({}^{\prime\prime}\) & 1.0 & 700 & 3.5 & 2.5 & 7 & 9.2 \\ Per-emb-25 & 03h 26m 37.514s & +30\({}^{\circ}\)15\({}^{\prime}\) 27.792\({}^{\prime\prime}\) & 1.1 & 600 & 3.0 & 1.0 & 11 & 9.0 \\ NGC 1333-IRAS4B & 03h 29m 12.019s & +31\({}^{\circ}\)13\({}^{\prime}\) 08.010\({}^{\prime\prime}\) & 1.2 & 879 & 3.0 & 2.0 & 11 & 12.9 \\ Ser-SMM3 & 18h 29m 59.311s & +01\({}^{\circ}\)14\({}^{\prime}\) 00.365\({}^{\prime\prime}\) & 0.9 & 1 526 & 3.5 & 2.5 & 10 & 13.5 \\ TMC1 & 04h 41m 12.700s & +25\({}^{\circ}\)46\({}^{\prime}\) 34.800\({}^{\prime\prime}\) & 1.1 & 303 & 3.0 & 1.0 & 13 & 10.6 \\ \hline \end{tabular}
\end{table}
Table 3: Location of the center position (in RA and Dec) and radius (in au and arcseconds) of each circular region from which the spectra are extracted for the studied protostars. The pixel size (in arcseconds) for each source is also given. The noise levels of the protostars in mJy beam\({}^{-1}\) channel\({}^{-1}\) and mJy beam\({}^{-1}\) km s\({}^{-1}\) are deduced according to \(\sqrt{\sum_{j}(\text{flux in line-free channel }j)^{2}/(\text{number of line-free channels})}\) and noise (mJy beam\({}^{-1}\) channel\({}^{-1}\)) \(\times\sqrt{n}\times\) spectral resolution (km s\({}^{-1}\)), respectively, where \(n\) is the number of channels spanned by the FWHM (km s\({}^{-1}\)) of the H\({}_{2}\)S line at 216.710 GHz.
Figure 1: ALMA pipeline-produced integrated intensity maps (color scale) with the line channels excluded, which are dominated by dust emission, for the studied sample of sources. On-source spectra are extracted by averaging the flux from the pixels within the circular area centered on ‘X’.
of 20\({}^{\prime\prime}\) (\(\sim\) 8 700 au; Davis et al. 1999; Kristensen et al. 2010; Mirocha et al. 2021). SMM3 launches a powerful jet (Tychoniec et al. 2021), but does not display complex organic molecule emission, which may be obscured by the enveloping dust (van Gelder et al. 2020).
### Synthetic spectral fitting
For the spectral analysis, on-source spectra were extracted from the data cubes of the sources of the two data sets. Circular regions centered on the source positions (in RA and Dec), with the radius of spectrum extraction corresponding to one beam on-source, are given in Table 3. The number of pixels in the radius \(r\) of the circular region was computed by dividing the radius of the circular region by the size of one pixel in arcseconds. The spectroscopy used for the targeted molecules and their isotopologs stems from the Cologne Database of Molecular Spectroscopy (CDMS; Muller et al. 2001, 2005; Endres et al. 2016)1 and the Jet Propulsion Laboratory (JPL) catalog (Pickett et al. 1998)2. Line blending in the detected lines was checked with the online database Splatalogue3.
Footnote 1: [https://cdms.astro.uni-koeln.de/](https://cdms.astro.uni-koeln.de/)
Footnote 2: [https://spec.jpl.nasa.gov/](https://spec.jpl.nasa.gov/)
Footnote 3: [https://splatalogue.online/](https://splatalogue.online/)
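Such line-blending checks can also be done programmatically. The following is a hedged sketch using the astroquery interface to Splatalogue (using this package is an assumption of this sketch; it is not stated that the authors used it), listing all catalogued transitions around the H\({}_{2}\)S \(2_{2,0}\)-\(2_{1,1}\) rest frequency:

```python
from astroquery.splatalogue import Splatalogue
import astropy.units as u

# All catalogued transitions within +/-10 MHz of 216.710 GHz,
# i.e. potential blends with the targeted H2S line
lines = Splatalogue.query_lines(216.700 * u.GHz, 216.720 * u.GHz)
print(lines)
```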
Synthetic spectral fitting was performed with custom-made Python scripts based on the assumption of local thermal equilibrium (LTE). The input parameters include the full width at half-maximum (FWHM) of the line, column density (\(N\)), excitation temperature (\(T_{\rm ex}\)), source size, beam size, and spectral resolution of the observations. Line profiles are assumed to be Gaussian. Further details are provided in Section 2.3 of Drozdovskaya et al. (2022). For some sources, the number of free parameters (such as source size or \(T_{\rm ex}\)) can be reduced based on information from other observing programs. These are detailed on a source-by-source basis in the corresponding Appendices. Typically, merely two free parameters were fitted at a time by means of a visual inspection and an exploration of a grid of possible values. The considered range for \(N\) was 10\({}^{13}\)-10\({}^{19}\) cm\({}^{-2}\) in steps of \(0.1\times N\). Simultaneously, the FWHM of the synthetic fit was adjusted to match the FWHM of the detected line. For some sources (such as NGC 1333-IRAS4A and Ser-SMM3), the excitation temperature could not be constrained. Hence, a grid of excitation temperatures spanning 50 to 300 K was considered.
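The scripts themselves are not reproduced here, but the underlying single-transition LTE model follows textbook radiative transfer. A minimal sketch (assuming a Gaussian profile, a filling factor for a circular source and beam, and a user-supplied partition function \(Q(T_{\rm ex})\); this is not the authors' actual implementation) is:

```python
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def j_nu(temp, nu):
    """Planck-equivalent brightness temperature (K) at frequency nu (Hz)."""
    return (H * nu / KB) / np.expm1(H * nu / (KB * temp))

def lte_line(v_kms, n_cm2, tex, fwhm_kms, nu0, aul, gu, eup_k, q_tex,
             theta_s=2.0, theta_b=6.0, tbg=2.73):
    """Synthetic T_B(v) of one LTE line; v_kms is the offset from line centre."""
    n_up = n_cm2 * 1e4 * gu * np.exp(-eup_k / tex) / q_tex       # N_u [m^-2]
    sigma = fwhm_kms / (2 * np.sqrt(2 * np.log(2)))              # km/s
    phi_peak = 1 / (np.sqrt(2 * np.pi) * sigma * 1e3 * nu0 / C)  # [Hz^-1]
    tau0 = (C**2 / (8 * np.pi * nu0**2)) * aul * n_up \
           * np.expm1(H * nu0 / (KB * tex)) * phi_peak           # peak opacity
    tau = tau0 * np.exp(-0.5 * (v_kms / sigma) ** 2)
    ff = theta_s**2 / (theta_s**2 + theta_b**2)                  # beam dilution
    return ff * (j_nu(tex, nu0) - j_nu(tbg, nu0)) * (1 - np.exp(-tau))
```

A grid search over \(N\) (and, where needed, \(T_{\rm ex}\)) then amounts to comparing such model spectra with the extracted ones channel by channel.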
The line optical depth (\(\tau\)) was calculated for the best-fitting combination of parameters to check for optical thickness. If a transition of a certain molecule was found to be optically thick, the column density of the main species was instead computed as the average of the values derived from its minor isotopologs:
\[\overline{N(X)}=\frac{1}{n}\sum_{i=1}^{n}N(X)_{i}, \tag{1}\]
where X is H\({}_{2}\)S or OCS and \(N(X)_{i}\) is the column density of H\({}_{2}\)S or OCS derived from its \(i\)-th minor isotopolog. The adopted isotopic ratios for the derivation of the main isotopologs from the minor isotopologs are given alongside Table 4.
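In practice, Eq. (1) together with the isotope scaling is a one-liner. A sketch follows; the \({}^{32}\)S/\({}^{34}\)S = 22 value is quoted in Section 3.1, while the \({}^{32}\)S/\({}^{33}\)S, \({}^{16}\)O/\({}^{18}\)O, and \({}^{12}\)C/\({}^{13}\)C numbers below are commonly adopted local-ISM values assumed here, since the adopted table itself is not reproduced:

```python
import numpy as np

RATIO = {"34S": 22.0, "33S": 127.0, "18O": 557.0, "13C": 68.0}  # assumed values

def main_from_minor(n_minor, substitutions):
    """Scale a minor-isotopolog column density up to the main species."""
    return n_minor * np.prod([RATIO[s] for s in substitutions])

def mean_column(estimates):
    """Eq. (1): average over the optically thin isotopolog estimates."""
    return np.mean(estimates)

print(main_from_minor(4.7e14, ["18O"]))  # 18OCS -> N(OCS) ~ 2.6e17 cm^-2 (1'')
```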
## 3 Results
The spectral setup of the first data set allows the targeted sources to be probed for the emission of the main isotopologs of H\({}_{2}\)S and OCS, \(v\)=0, their minor isotopologs (HDS, HD\({}^{34}\)S, H\({}_{2}^{33}\)S, H\({}_{2}^{34}\)S, \({}^{18}\)OCS, O\({}^{13}\)CS, OC\({}^{33}\)S, \({}^{18}\)OC\({}^{34}\)S), and also the vibrationally excited states of OCS (\(v_{2}\)=1\({}^{\pm}\)). Consequently, sources IRAS 16293 A, IRAS 16293 B, IRAS4A, and IRS7B were probed for all these species.
The spectral setup of the second data set allows the other targeted sources (B1-c, BHR71-IRS1, Per-emb-25, IRAS4B, SMM3, and TMC1) to be probed for the main isotopologs of H\({}_{2}\)S and OCS, \(v\)=0, their minor isotopologs (HDS, HD\({}^{34}\)S, \({}^{18}\)OCS, OC\({}^{33}\)S, \({}^{18}\)OC\({}^{34}\)S, \({}^{18}\)O\({}^{13}\)CS), and the vibrationally excited state of OCS (\(v_{2}\)=1\({}^{\pm}\)). All the transitions of the detected molecules have \(E_{\rm up}\) in the range of \(84-123\) K and \(A_{ij}\) values of \(0.69-4.9\times 10^{-5}\) s\({}^{-1}\). The details of the targeted molecular lines are presented in Appendix A. Note that the HDS lines probed in the two data sets are not the same: the first data set was probed for the HDS, \(14_{2,12}\)-\(13_{4,9}\) transition at a rest frequency of 214.325 GHz, while the second data set was probed for the HDS, \(7_{3,4}\)-\(7_{3,5}\) and \(12_{5,7}\)-\(12_{5,8}\) transitions at 234.046 and 234.528 GHz, respectively. Nevertheless, the \(E_{\rm up}\) is high (\(>\) 400 K) for all three transitions of HDS, and it is not detected in any of these lines in any of the sources. The HD\({}^{34}\)S and OCS, \(v_{2}\)=1\({}^{\pm}\) transitions also have high \(E_{\rm up}\) (\(>\) 400 K). HD\({}^{34}\)S was not detected in any of these lines in any of the sources, but OCS, \(v_{2}\)=1\({}^{\pm}\) was detected in IRAS 16293 A, IRAS 16293 B, and IRAS4A (owing to the high OCS column densities and the higher sensitivity of the first data set).
All the main and minor isotopologs were detected in IRAS 16293 A, IRAS 16293 B, and IRAS4A, except HDS, HD\({}^{34}\)S, \({}^{18}\)OC\({}^{34}\)S, and \({}^{18}\)O\({}^{13}\)CS. In IRS7B, only H\({}_{2}\)S was detected; the rest of the molecular lines, including OCS, \(v\)=0, were undetected. In contrast, only the main S-bearing species, H\({}_{2}\)S and OCS, \(v\)=0, were detected in B1-c, BHR71-IRS1, and SMM3. IRAS4B showed the rotational transition of OC\({}^{33}\)S (\(J=18-17\)) in addition to the H\({}_{2}\)S and OCS, \(v\)=0 lines. Emb-25 and TMC1 showed no detections of the main S-bearing species and their minor isotopologs. Thus, 1-\(\sigma\) upper limits on the column densities of H\({}_{2}\)S and OCS, \(v\)=0 were derived for Emb-25 and TMC1, and an upper limit on the column density of OCS, \(v\)=0 was derived for IRS7B. Table A.1 provides the CDMS entry, transition quantum numbers, rest frequency, upper energy level, Einstein A coefficient, and the detection/non-detection of each line of all the targeted S-bearing molecules towards all of the sources in the sample.
In Figure 1, the pipeline-produced integrated intensity maps with the line channels excluded are shown for all the sources. These are dominated by the dust emission, but with some degree of contamination by line emission, especially for some of the line-rich sources. The circular regions used to extract the spectra of each individual source are also shown. The pixel size of the integrated maps of the sources varies from 0.8 to 1.2\({}^{\prime\prime}\). To match the beam size of the observations, the radius of the circular regions was also varied, from 3.0 to 4.3\({}^{\prime\prime}\). The spatial resolution of the presented ACA observations allowed the binary IRAS 16293 A and B (separated by 5.3\({}^{\prime\prime}\)) to be resolved as separate sources; however, the resolution was not high enough to disentangle the binary components A1 and A2 of IRAS 16293 A. Similarly, the binary components of IRAS4A (separation of 1.8\({}^{\prime\prime}\)) and of TMC1 (separation of 0.6\({}^{\prime\prime}\)) could not be disentangled. All other sources are either single sources or binaries separated by large distances; hence, they are spatially resolved as individual sources.
The lower and upper uncertainties on the fitted column densities are derived assuming an error of \(\pm\)20 K on the assumed excitation temperature and a 1-\(\sigma\) noise level. The analysis of the spectra extracted towards IRAS 16293-2422 B is presented in the following Section 3.1 and Appendix B. Full observed spectral
windows towards IRAS 16293 B are shown in Figure 2. For the other sources, the analysis is presented in Appendices C through K.
### Iras 16293-2422 B
Towards IRAS 16293 B, the main S-bearing species (H\({}_{2}\)S and OCS, \(v\)=0) and all the targeted minor isotopologs are securely detected, except for HDS (due to its very high \(E_{\rm up}\) value of 1 277 K) and the doubly substituted isotopologs HD\({}^{34}\)S, \({}^{18}\)OC\({}^{34}\)S, and \({}^{18}\)O\({}^{13}\)CS (due to their low abundances, and additionally the high \(E_{\rm up}\) in the case of HD\({}^{34}\)S). The detected transitions of H\({}_{2}\)S, 2\({}_{2,0}\)-2\({}_{1,1}\) and OCS, \(v\)=0, \(J\) = 19 \(-\) 18 are bright and optically thick (\(\tau\gg 1\)). The H\({}_{2}^{34}\)S line is marginally optically thick (\(\tau=0.2\)), as shown in Figure 3. The vibrationally excited OCS, \(v_{2}\)=1\({}^{\pm}\) lines are detected. The lines of the detected molecules do not suffer from blending, except the H\({}_{2}^{33}\)S, 2\({}_{2,0}\)-2\({}_{1,1}\) hyperfine component at 215.512 GHz, which is contaminated by the CH\({}_{3}\)CHO, 11\({}_{2,9}\)-10\({}_{2,8}\) transition. The HD\({}^{34}\)S, 7\({}_{3,4}\)-7\({}_{3,5}\) transition at 232.964 GHz is heavily blended with the CH\({}_{3}\)CN, \(\nu_{8}=1\), \(J=15-15\), \(K=7-5\) transition. Most likely all the emission seen around the rest frequency of HD\({}^{34}\)S comes from CH\({}_{3}\)CN, because HD\({}^{34}\)S is a minor species (\({}^{32}\)S/\({}^{34}\)S=22, Wilson 1999, and D/H \(\sim\)0.04 incl. the statistical correction by a factor of 2 to account for the two indistinguishable D atom positions, Drozdovskaya et al. 2018) and the \(E_{\rm up}\) of this transition is high (416 K). The spectra of detected and undetected lines are in Figure B.1 and Figure B.2, respectively.
For the analysis of the targeted S-bearing molecules towards IRAS 16293 B, a \(T_{\rm ex}\) of 125 K is assumed. This value has been deduced to be the best-fitting on the basis of ALMA-PILS observations at higher spatial resolution obtained with the 12m array and a full inventory of S-bearing molecules (Drozdovskaya et al. 2018). A FWHM of 1 km s\({}^{-1}\) is adopted, as it has been shown that this value consistently fits nearly all the molecules investigated towards the hot inner regions of IRAS 16293 B (e.g., Jorgensen et al. 2018). For the larger scales probed by the present ALMA ACA observations, a deviation by 2 km s\({}^{-1}\) from this FWHM can be seen for optically thick lines. This broadening in FWHM is likely due to the opacity broadening effects, which are dominant in optically thick lines, but can be neglected in optically thin lines (Hacar et al. 2016). The synthetic spectral fitting has been carried out for two potential source sizes, 1\({}^{\prime\prime}\) and 2\({}^{\prime\prime}\) (Table 4). Column densities depend on the assumed source size and are lower for the larger source size. However, the \(N\)(H\({}_{2}\)S)/\(N\)(OCS) ratio is 1.3\(\pm\)0.27 and 1.3\(\pm\)0.28 for source sizes of 1\({}^{\prime\prime}\) and 2\({}^{\prime\prime}\), respectively. Thus, the ratio is independent of the assumed source size and is robustly determined with the ALMA ACA data.
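The quoted ratios and their uncertainties follow from standard error propagation for a quotient; a minimal check with the numbers above (a sketch, with rounding differences possible in the last digit):

```python
import numpy as np

def ratio_with_error(n1, s1, n2, s2):
    """Quotient n1/n2 with propagated 1-sigma uncertainty."""
    r = n1 / n2
    return r, r * np.sqrt((s1 / n1) ** 2 + (s2 / n2) ** 2)

print(ratio_with_error(3.6e17, 0.6e17, 2.7e17, 0.3e17))  # 1'': 1.3 +/- 0.27
print(ratio_with_error(9.2e16, 1.7e16, 7.0e16, 0.8e16))  # 2'': 1.3 +/- ~0.28
```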
For a source size of 2\({}^{\prime\prime}\), the column density of the vibrationally excited state of OCS, \(v_{2}\)=1\({}^{\pm}\) derived for IRAS 16293 B (2.5\(\times\)10\({}^{16}\) cm\({}^{-2}\)) is an order of magnitude lower than the OCS, \(v_{2}=1\) column density (2.0\(\times\)10\({}^{17}\) cm\({}^{-2}\)) derived in Drozdovskaya et al. (2018). For a source size of 1\({}^{\prime\prime}\), the value obtained here (\(8.5\times 10^{16}\) cm\({}^{-2}\)) is in closer agreement with Drozdovskaya et al. (2018).
Figure 2: Observed spectral windows of IRAS 16293-2422 B (Table 3) obtained with ALMA ACA at Band 6 frequencies (Table 1). A Doppler shift by v\({}_{\rm LSR}\) = 2.7 km s\({}^{-1}\) has been applied (Table 2).
Likewise, the OCS, \(v\)=0 column density determined from the minor isotopologs of OCS for a source size of 1\({}^{\prime\prime}\) (\(2.7\times 10^{17}\) cm\({}^{-2}\)) is in closer agreement with the column density of OCS, \(v=0\) (2.8\(\times 10^{17}\) cm\({}^{-2}\)) derived in Drozdovskaya et al. (2018), also based on minor isotopologs, than for a source size of 2\({}^{\prime\prime}\) (\(7.0\times 10^{16}\) cm\({}^{-2}\)). Drozdovskaya et al. (2018) used a smaller source size (0.5\({}^{\prime\prime}\)) to constrain the column densities of OCS and H\({}_{2}\)S. These comparisons suggest that the ALMA ACA observations in this work are subject to beam dilution; hence, the column densities are likely somewhat underestimated. The column density of H\({}_{2}\)S could not be constrained to better than a factor of 10 in Drozdovskaya et al. (2018), namely \(1.6\times 10^{17}-2.2\times 10^{18}\) cm\({}^{-2}\). This was due to the fact that only deuterated isotopologs of H\({}_{2}\)S were covered by the PILS observations and the D/H ratio of H\({}_{2}\)S is only constrained to within a factor of 10. Based on the values of the H\({}_{2}\)S column densities for 1\({}^{\prime\prime}\) and 2\({}^{\prime\prime}\) source sizes obtained in this work, the lower estimate for the H\({}_{2}\)S column density in Drozdovskaya et al. (2018) seems to be more accurate. In turn, the H\({}_{2}\)S/OCS ratio obtained in this work (1.3) is closer to the lower end of the \(0.7-7\) range computed in Drozdovskaya et al. (2018).
### Line Profiles
For the synthetic spectral modeling, Gaussian line profiles are assumed (Section 2.3). However, even for optically thin lines, a deviation from Gaussian line profiles is seen in some cases. Two prominent examples are H\({}_{2}^{34}\)S and O\({}^{13}\)CS in IRAS 16293 A (Figure 15), where the high spectral resolution of the data set clearly allows multiple peaks to be spectrally resolved in these lines. Likely, the reason for this is that this source is a compact binary (Maureira et al. 2020) with multiple components within the ACA beam of these observations. Another prominent example is the OCS, \(v=0\) line in Ser-SMM3 (Figure 16), which has a double-peaked profile centered around the source velocity. Such a line profile is typical for a rotating structure around its protostar (which could be envelope or disk in nature). Detailed modeling of the line profiles is beyond the scope of this paper, as additional observations would be necessary in order to achieve meaningful results. For the purpose of studying the H\({}_{2}\)S/OCS ratio, these effects are secondary and likely do not significantly affect the calculated ratio and the conclusions of this paper. For IRAS 16293 A, the column density of H\({}_{2}^{34}\)S is not used to derive the column density of H\({}_{2}\)S, because it is computed to be partially optically thick. Meanwhile, the column density of OCS as obtained from O\({}^{13}\)CS is within a factor of 2 of what is obtained from OC\({}^{33}\)S and \({}^{18}\)OCS. For Ser-SMM3, the lack of constraints on the excitation temperature dominates the uncertainty in the H\({}_{2}\)S/OCS ratio.
### H\({}_{2}\)S/OCS ratio determination
The column densities of H\({}_{2}\)S and OCS derived on the basis of the ALMA ACA observations have been used to constrain the ratio of H\({}_{2}\)S to OCS (Table 5).
\begin{table}
\begin{tabular}{l c c c c|c c c c|c c c} \hline Species & Transition & Freq. & \(E_{\rm up}\) & \(A_{ij}\) & beam size & \multicolumn{2}{c}{\(N\)} & \multicolumn{2}{c}{Derived \(N\)} & \multicolumn{2}{c}{\(\tau\)} \\ & & (GHz) & (K) & (s\({}^{-1}\)) & (\({}^{\prime\prime}\)) & (cm\({}^{-2}\)) & & (cm\({}^{-2}\)) & & & \\ \hline & & & & & & 1\({}^{\prime\prime}\) & 2\({}^{\prime\prime}\) & 1\({}^{\prime\prime}\) & 2\({}^{\prime\prime}\) & 1\({}^{\prime\prime}\) & 2\({}^{\prime\prime}\) \\ \hline H\({}_{2}\)S & 2\({}_{2,0}\)-2\({}_{1,1}\) & 216.710 & 84 & 4.9\(\times 10^{-5}\) & 6.0 & op. thick & op. thick & \(N\)(H\({}_{2}\)S)=(3.6\(\pm\)0.6)\(\times 10^{17}\) & \(N\)(H\({}_{2}\)S)=(9.2\(\pm\)1.7)\(\times 10^{16}\) & 30.0 & 8.00 \\ H\({}_{2}\)\({}^{33}\)S & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.494 & 84 & 2.4\(\times 10^{-5}\) & 6.0 & 2.7\({}^{+2.0}_{-0.2}\times 10^{15}\) & 7.0\({}^{+1.7}_{-1.3}\times 10^{14}\) & \(N\)(H\({}_{2}\)S)=3.4\({}^{+4.8}_{-0.5}\times 10^{17}\) & \(N\)(H\({}_{2}\)S)=8.8\({}^{+2.1}_{-1.4}\times 10^{16}\) & 0.02 & 0.004 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.497 & 84 & 2.4\(\times 10^{-5}\) & & & & & & 0.02 & 0.004 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.501 & 84 & 4.9\(\times 10^{-6}\) & & & & & & 0.02 & 0.005 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.503 & 84 & 4.2\(\times 10^{-5}\) & & & & & & 0.10 & 0.030 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.504 & 84 & 1.9\(\times 10^{-5}\) & & & & & & 0.02 & 0.010 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.508 & 84 & 1.2\(\times 10^{-5}\) & & & & & & 0.02 & 0.004 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.512 & 84 & 2.8\(\times 10^{-5}\) & & & & & & 0.06 & 0.020 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.513 & 84 & 1.1\(\times 10^{-5}\) & & & & & & 0.02 & 0.010 \\ & 2\({}_{2,0}\)-2\({}_{1,1}\) & 215.513 & 84 & 9.1\(\times 10^{-6}\) & & & & & & 0.02 & 0.005 \\ H\({}_{2}\)\({}^{34}\)S & 2\({}_{2,0}\)-2\({}_{1,1}\) & 214.377 & 84 & 4.7\(\times 10^{-5}\) & 6.0 & op. thick & \(>\)1.5\(\times 10^{15}\) & & & 1.00 & 0.20 \\ OCS, \(v\)=0 & 19-18 & 231.061 & 111 & 3.6\(\times 10^{-5}\) & 5.6 & op. thick & op. thick & \(N\)(OCS)=(2.7\(\pm\)0.3)\(\times 10^{17}\) & \(N\)(OCS)=(7.0\(\pm\)0.3)\(\times 10^{16}\) & 46.0 & 11.0 \\ OC\({}^{33}\)S & 18-17 & 216.147 & 99 & 2.9\(\times 10^{-5}\) & 6.0 & \(>\)2.4\(\times 10^{15}\) & \(>\)5.6\(\times 10^{14}\) & & & 0.40 & 0.10 \\ O\({}^{13}\)CS & 19-18 & 230.318 & 110 & 3.5\(\times 10^{-5}\) & 5.7 & \(>\)3.8\(\times 10^{15}\) & \(>\)8.2\(\times 10^{14}\) & & & 0.60 & 0.10 \\ \({}^{18}\)OCS & 19-18 & 216.753 & 104 & 3.0\(\times 10^{-5}\) & 6.0 & 4.7\({}^{+0.6}_{-0.2}\times 10^{14}\) & 1.2\({}^{+0.2}_{-0.1}\times 10^{14}\) & \(N\)(OCS)=2.6\({}^{+0.4}_{-0.2}\times 10^{17}\) & & & \\ \hline \end{tabular}
\end{table}
Table 4: Targeted transitions of H\({}_{2}\)S, OCS, and their isotopologs towards IRAS 16293-2422 B, with the fitted and derived column densities and line optical depths for assumed source sizes of 1\({}^{\prime\prime}\) and 2\({}^{\prime\prime}\).
It was possible to compute this ratio for five out of the ten sources in the considered sample. Neither H\({}_{2}\)S nor OCS was detected in Emb-25 and TMC1; consequently, the H\({}_{2}\)S/OCS ratio could not be constrained for them. The non-detection of OCS in IRS7B allowed only a lower limit on the H\({}_{2}\)S/OCS ratio to be derived. Table 5 also contains the best-available estimates of the H\({}_{2}\)S/OCS ratio for the warm and cold components of B1-c, and for the cold component of BHR71-IRS1, although these numbers carry a higher level of uncertainty due to line opacity that could not be resolved on the basis of these observations. For the warm component of BHR71-IRS1, a lower limit on the H\({}_{2}\)S/OCS ratio could be computed. For further analysis, the sample has been divided into three sub-samples, compact binary, wide binary, and single, based on the separations between the components of multiple sources or to the closest neighbours.
## 4 Discussion
Figure 4 displays the derived protostellar H\({}_{2}\)S/OCS ratios, as well as the cometary (67P/Churyumov-Gerasimenko, hereafter 67P/C-G) and interstellar ice (W33A and Mon R2 IRS2) H\({}_{2}\)S/OCS ratios. The derived protostellar H\({}_{2}\)S/OCS ratios span a range from 0.2 to above 9.7. The ratios show a variation of approximately one order of magnitude, being the lowest in IRAS 16293 A and SMM3, and the highest in BHR71-IRS1.
In Sections 4.1 and 4.2, the protostellar H\({}_{2}\)S/OCS ratios are compared with this ratio in interstellar and cometary ices, respectively. Comets are thought to preserve the chemical composition of the Sun's birth cloud (Mumma & Charnley, 2011). By comparing the H\({}_{2}\)S/OCS ratio of comet 67P/C-G with the ratios in nascent solar-like protostellar systems, an assessment can be made of whether such an inheritance also holds in the case of S-bearing molecules.
### Interstellar ices
Observations towards the cold, outer protostellar envelopes of the high-mass protostars W33A and Mon R2 IRS2 are used to acquire the H\({}_{2}\)S/OCS ratio in interstellar ices. The ratio is computed based on the ice abundances of OCS detected as an absorption feature at 4.9 \(\mu\)m (Palumbo et al., 1995) using the Infrared Telescope Facility (IRTF) and upper limits on the H\({}_{2}\)S abundance derived based on the non-detection of the 3.98 \(\mu\)m band. The column density of solid OCS with respect to solid H\({}_{2}\)O is \(N_{\rm solid}\)(OCS)/\(N_{\rm solid}\)(H\({}_{2}\)O) = 4\(\times 10^{-4}\) (Palumbo et al., 1995) and \(N_{\rm solid}\)(OCS)/\(N_{\rm solid}\)(H\({}_{2}\)O) = 5.5\(\times 10^{-4}\) (Palumbo et al., 1997) for W33A and Mon R2 IRS2, respectively. Based on the non-detection of solid H\({}_{2}\)S towards W33A in the Infrared Space Observatory (ISO) spectra from the Short Wavelength Spectrometer (SWS), \(N_{\rm solid}\)(H\({}_{2}\)S)/\(N_{\rm solid}\)(H\({}_{2}\)O) \(<\)0.03 (van der Tak et al., 2003). The upper limit on solid H\({}_{2}\)S and the column density of solid H\({}_{2}\)O in Mon R2 IRS2 are \(<0.2\times 10^{17}\) and \(42.7\times 10^{17}\) cm\({}^{-2}\) (Smith, 1991), respectively, yielding \(N_{\rm solid}\)(H\({}_{2}\)S)/\(N_{\rm solid}\)(H\({}_{2}\)O) \(<\)4.7\(\times 10^{-3}\). The H\({}_{2}\)S/OCS ratio in interstellar ices is poorly constrained due to the non-detection of solid H\({}_{2}\)S to date. The upper limits on the interstellar ices ratio are within the uncertainties of the cometary ices ratio. The derived protostellar ratios for all the sources are lower than the upper limits on the H\({}_{2}\)S/OCS ratio determined for interstellar ices, except for BHR71-IRS1, whose H\({}_{2}\)S/OCS ratio exceeds the upper limit derived for Mon R2 IRS2.
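For transparency, the two solid-state upper limits quoted above reduce to simple arithmetic (a sketch using only the numbers in this paragraph):

```python
# Upper limits on the solid-state H2S/OCS ratio
w33a_limit  = 0.03 / 4.0e-4                  # H2S/OCS < 75   (W33A)
monr2_limit = (0.2e17 / 42.7e17) / 5.5e-4    # H2S/OCS < ~8.5 (Mon R2 IRS2)
print(w33a_limit, monr2_limit)
```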
### Comet 67P/Churyumov-Gerasimenko
Comets are thought to be the most unprocessed objects in the Solar System (Mumma & Charnley, 2011). Cometary chemical composition has been shown to be similar, to a degree, to that of star-forming regions (Bockelee-Morvan et al., 2000; Drozdovskaya et al., 2019). Consequently, the cometary H\({}_{2}\)S/OCS ratio is thought to provide an independent measurement of this ratio in interstellar ices. The H\({}_{2}\)S and OCS abundances from the ESA _Rosetta_ mission were used to compute the H\({}_{2}\)S/OCS ratio for the Jupiter-family comet 67P/C-G. The H\({}_{2}\)S and OCS abundances relative to H\({}_{2}\)O are 1.10\(\pm\)0.46% and 0.041\({}^{+0.082}_{-0.020}\)%, respectively (Rubin et al., 2019). These molecules are typical constituents of comets (Lis et al., 1997; Bockelee-Morvan et al., 2000; Boissier et al., 2007; Mumma & Charnley, 2011). BHR71-IRS1 is the only protostar in the sample with an H\({}_{2}\)S/OCS ratio within the uncertainties of the cometary ices ratio. The H\({}_{2}\)S/OCS ratio for the other sources is at least an order of magnitude lower than for 67P/C-G, even when considering the large uncertainties on the cometary value. The availability of H\({}_{2}\)S relative to H\({}_{2}\)O in cometary ice (0.0064 \(-\) 0.0156) appears to be higher than in the interstellar ices towards Mon R2 IRS2 (\(<0.0047\)). For W33A, the currently available upper limit (\(<0.03\)) is less constraining, and hence no conclusion can be drawn about how its ices compare to those of comet 67P/C-G. The relative ratio of H\({}_{2}\)S to OCS is only one window onto the inventory of S-bearing molecules in gas and ice at different stages of star and planet formation; meanwhile, the overall availability relative to, for example, H\({}_{2}\)O is another window that requires dedicated exploration.
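The cometary ratio and its asymmetric uncertainties follow directly from the Rubin et al. (2019) abundances by taking the extreme allowed values (a sketch; this min/max propagation is an assumption of this note, but it reproduces the tabulated bounds):

```python
h2s, dh = 1.10, 0.46                 # % relative to H2O
ocs, do_lo, do_hi = 0.041, 0.020, 0.082

r = h2s / ocs                              # 26.8
r_plus = (h2s + dh) / (ocs - do_lo) - r    # +47.5
r_minus = r - (h2s - dh) / (ocs + do_hi)   # -21.6
print(r, r_plus, r_minus)
```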
### H\({}_{2}\)S/OCS ratio as an environmental (clustered/isolated) tracer
The measured gas-phase H\({}_{2}\)S/OCS ratios in the sample of young, low-mass protostars explored in this paper are predominantly lower (by as much as an order of magnitude) than the solid-state ratio measured through direct infrared observations of interstellar ices and indirectly via comets (Figure 4). There appears to be no correlation with binarity or with the specific host cloud (Table 5). The dependence on evolutionary stage could not be properly explored, as the sample contains only one Class I source (TMC1), which did not yield a detection of either H\({}_{2}\)S or OCS.
Figure 3: H\({}_{2}^{34}\)S line detected in IRAS 16293 B. The observed spectrum (in blue), rest frequency of the detected line (brown dashed line), spectroscopic uncertainty on the rest frequency of the detected line (yellow shaded region), and fitted synthetic spectrum (in pink) for source size: 2\(\arcsec\), excitation temperature: 125 K, and FWHM: 1 km s\({}^{-1}\).
The highest ratio of \(\geq\)9.7 is found for the warm component (250 K) of BHR71-IRS1, which is a wide-binary (\(\sim 3\) 200 au; Bourke 2001; Parise et al. 2006; Chen et al. 2008; Tobin et al. 2019) Class 0 protostar. The ratio in BHR71-IRS1 resides within the uncertainty of the cometary ratio, but is in between the two upper limits derived for the interstellar ices. The overall envelope mass and bolometric luminosity of BHR71-IRS1 are comparable to those of the other compact binary and wide binary systems. The similarity of its gas-phase H\({}_{2}\)S/OCS ratio to the ratio in ices may suggest that it is displaying the most recently thermally desorbed volatiles that have not been subjected to gas-phase processing for long. However, what makes BHR71-IRS1 stand out is that it is located in an isolated cloud, i.e., it is not associated with processes typical for clustered environments such as dynamical interactions, mechanical and chemical feedback from outflows, and enhanced irradiation.
Figure 4: \(N\)(H\({}_{2}\)S)/\(N\)(OCS) of the studied sources. Different symbols represent different types of sources, i.e., 'star' for close binaries (\(<500\) au), 'square' for wide binaries (\(500-5\,000\) au), and 'circle' for single sources (no companion within 5 000 au). The upper limits on the interstellar ice (W33A and Mon R2 IRS2) ratios are shown by downward arrows. The uncertainty on the H\({}_{2}\)S/OCS ratio in comet 67P/C-G is shown by the coral shaded region. The lower limit on the ratio in IRS7B is shown by an upward arrow. The H\({}_{2}\)S/OCS ratios for the cold (cyan) and warm (orange) components of B1-c, and the cold (cyan) component of BHR71-IRS1, are the best-available estimates pending opacity issues. These latter three data points do not have error bars associated with them in the figure, to indicate that they are merely estimates.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Source & Class & Binarity & Environment & \(T_{\rm ex}\) (K) & \(N\)(H\({}_{2}\)S) (cm\({}^{-2}\)) & \(N\)(OCS) (cm\({}^{-2}\)) & \(N\)(H\({}_{2}\)S)/\(N\)(OCS) \\ \hline IRAS 16293-2422 A & 0 & CB & Clustered & 125\(\pm\)20 & (2.4\(\pm\)0.4)\(\times\)10\({}^{17}\) & (3.6\(\pm\)1.4)\(\times\)10\({}^{16}\) & 0.7\(\pm\)0.3 \\ IRAS 16293-2422 B & 0 & WB & Clustered & 125\(\pm\)20 & (9.2\(\pm\)1.7)\(\times\)10\({}^{16}\) & (7.0\(\pm\)0.8)\(\times\)10\({}^{16}\) & 1.3\(\pm\)0.3 \\ NGC 1333-IRAS4A & 0 & CB & Clustered & 150\(-\)300 & (3.4\(\pm\)0.8)\(\times\)10\({}^{16}\) & (1.8\(\pm\)0.2)\(\times\)10\({}^{16}\) & 1.9\(\pm\)0.5 \\ RCrA IRS7B & 0/I & WB & Clustered & 100\(\pm\)20 & (5.6\(\pm\)0.8)\(\times\)10\({}^{13}\) & \(\leq\)3.6\(\times\)10\({}^{13}\) & \(\geq\)1.5 \\ Per-B1-c & 0 & S & Clustered & 60 & \(>\)9.7\(\times\)10\({}^{15}\) & \(>\)5.0\(\times\)10\({}^{15}\) & (1.9) \\ & & & & 200 & \(>\)1.2\(\times\)10\({}^{16}\) & \(>\)3.6\(\times\)10\({}^{15}\) & (3.3) \\ BHR71-IRS1 & 0 & WB & Isolated & 100 & \(>\)2.4\(\times\)10\({}^{16}\) & \(>\)2.7\(\times\)10\({}^{15}\) & (8.9) \\ & & & & 250 & \(>\)3.3\(\times\)10\({}^{16}\) & (3.4\(\pm\)0.3)\(\times\)10\({}^{15}\) & \(\geq\)9.7 \\ Per-emb-25 & 0/I & S & Clustered & 50-300 & \(\leq\)8.3\(\times\)10\({}^{13}\) & \(\leq\)3.2\(\times\)10\({}^{14}\) & – \\ NGC 1333-IRAS4B & 0 & WB & Clustered & 100\(\pm\)20 & \(>\)5.8\(\times\)10\({}^{15}\) & (2.8\(\pm\)0.6)\(\times\)10\({}^{16}\) & \(\geq\)0.21 \\ Ser-SMM3 & 0 & S & Clustered & 100-250 & (5.8\(\pm\)3.2)\(\times\)10\({}^{14}\) & (8.7\(\pm\)4.9)\(\times\)10\({}^{14}\) & 0.7\(\pm\)0.5 \\ TMC1 & I & CB & Clustered & 40 & \(\leq\)1.5\(\times\)10\({}^{13}\) & \(\leq\)2.6\(\times\)10\({}^{13}\) & – \\ \hline Comet (67P/C-G) & & & & & & & 26.8\({}^{+47.5}_{-21.6}\) \\ \hline ISM ices (W33A) & & & & & & & \(\leq\)75 \\ ISM ices (Mon R2 IRS2) & & & & & & & \(\leq\)8.5 \\ \hline \end{tabular}
\end{table}
Table 5: H\({}_{2}\)S/OCS ratio for the studied sources, including their evolutionary class, binarity, environment, and the derived column densities of H\({}_{2}\)S and OCS for the stated excitation temperatures. The H\({}_{2}\)S/OCS ratios for the cold and warm components of B1-c, and the cold component of BHR71-IRS1 are the best-available estimates pending opacity issues.
Possibly, isolation resulted in lower irradiation of the ice grains during the prestellar phase in BHR71-IRS1, thus converting less H\({}_{2}\)S ice to OCS ice by photodissociation in the presence of CO ice, consequently leaving a higher H\({}_{2}\)S/OCS ratio in the ices, which after evaporation resulted in a higher H\({}_{2}\)S/OCS ratio in the gas phase. Another reason could be more efficient hydrogenation chemistry in such a cold environment. On dust grains, hydrogenation is expected to be the most effective process leading to the formation of H\({}_{2}\)S (Wakelam et al., 2011; Esplugues et al., 2014). Hence, BHR71-IRS1 may have a higher H\({}_{2}\)S content and a lower OCS content, which results in a higher H\({}_{2}\)S/OCS ratio. Water deuteration is also higher by a factor of \(2-4\) in isolated protostars such as BHR71-IRS1 in comparison to those in clustered environments such as IRAS 16293 and IRAS4A (Jensen et al., 2019).
One alternative cause of lower H\({}_{2}\)S/OCS ratios towards clustered low-mass protostars could be local temperature differences in their birth clouds, e.g., due to enhanced irradiation from the neighbouring protostars. Laboratory experiments have proven that OCS forms readily in ices when interstellar ice-analogs are irradiated by high-energy photons (Ferrante et al., 2008; Garozzo et al., 2010; Jimenez-Escobar & Munoz Caro, 2011; Chen et al., 2015). This would lead to a lower H\({}_{2}\)S/OCS ratio.
Additionally, cosmic rays and other forms of radiation (UV and X-ray photons) are a ubiquitous source of ionization of the interstellar gas. Ionization is a pivotal factor in the dynamical and chemical evolution of molecular clouds (Padovani et al., 2018, 2020). Cosmic rays are not attenuated in molecular clouds as strongly as UV photons (Ivlev et al., 2018; Padovani et al., 2018; Silsbee et al., 2018). Thus, dust grains in the interstellar medium can be heated by impinging cosmic rays, thereby heating the icy grain mantles and resulting in calamitous explosions (Leger et al., 1985; Ivlev et al., 2015) that activate chemistry in solids (Shingledecker et al., 2017). Magnetohydrodynamic simulations have shown a higher cosmic-ray production in protostars in a clustered environment (Kuffmeier et al., 2020), which would be consistent with the lower H\({}_{2}\)S/OCS ratios for such protostars found in this work. The results suggest that the H\({}_{2}\)S/OCS ratio traces the environment (isolated/clustered) of the protostellar systems. However, a follow-up study is needed, as the sample contained only one isolated source.
## 5 Conclusions
This work probed a sample of ten low-mass protostars for the presence of H\({}_{2}\)S, OCS, and their isotopologs using ALMA ACA Band 6 observations. For 5 of the 10 protostars, the H\({}_{2}\)S/OCS ratio was firmly constrained, and for an additional 3, best-possible estimates were obtained. This ratio is thought to be a potential chemical and physical clock of star-forming regions, which may shed light on the sulfur depletion that occurs between the diffuse medium and the dense core stage. The main conclusions are:
* The main S-bearing species, H\({}_{2}\)S and OCS, are detected in IRAS 16293-2422 A, IRAS 16293-2422 B, NGC 1333-IRAS4A, NGC 1333-IRAS4B, Per-B1-c, BHR71-IRS1, and Ser-SMM3. 1-\(\sigma\) upper limits on the column densities of OCS are derived for RCrA IRS7B, TMC1, and Per-emb-25. 1-\(\sigma\) upper limits on the column densities of H\({}_{2}\)S are derived for TMC1 and Per-emb-25.
* The gas-phase H\({}_{2}\)S/OCS ratio ranges from 0.2 to above 9.7, and is typically at least one order of magnitude lower than that of ices. The lowest ratio is obtained for IRAS 16293 A and Ser-SMM3, while the highest is obtained for BHR71-IRS1. The environment of the natal cloud, prior to the onset of star formation, may have played a major role in the distribution of sulfur across various S-bearing molecules, which has resulted in an order-of-magnitude spread in the H\({}_{2}\)S/OCS ratio.
* The upper limits derived for the interstellar ices (Mon R2 IRS2 and W33A) lie within the uncertainties of the cometary ices ratio, specifically that of comet 67P/Churyumov-Gerasimenko. The protostellar ratios are lower than the upper limits on the interstellar ices ratio and the cometary ices ratio by at least an order of magnitude for all sources except BHR71-IRS1.
* The lower ratio in clustered protostellar regions could be due to elevated birth cloud temperatures or due to additional radiation from nearby protostars, thereby enhancing the photodissociation pathways from H\({}_{2}\)S to OCS.
* The high H\({}_{2}\)S/OCS ratio in BHR71-IRS1 could be the result of less efficient photodissociation of H\({}_{2}\)S to OCS in the presence of CO ice in its isolated birth cloud, or of more efficient hydrogenation chemistry enhancing H\({}_{2}\)S formation.
Follow-up high spatial resolution observations are required towards several sources to better constrain the spatial distribution and excitation temperatures associated with the H\({}_{2}\)S and OCS detections. Furthermore, the more than tenfold difference in the H\({}_{2}\)S/OCS ratio between Class 0 protostars in clustered and isolated environments is a strong motivation for performing more spectroscopic observations towards such sources, in order to understand the physical and chemical differences between the two types of environments. Observations from the James Webb Space Telescope (JWST) could play an important role in constraining the H\({}_{2}\)S/OCS ice ratio in low- and intermediate-mass stars. More studies of the H\({}_{2}\)S/OCS ratio in a larger sample of Class 0 and Class I protostars in clustered and isolated environments should also be performed to further understand the sulfur chemistry in star-forming regions.
###### Acknowledgements.
The research was started as part of the Leiden/ESA Astrophysics Program for Summer Students (LEAPS) 2021. M.N.D. acknowledges support from the Swiss National Science Foundation (SNSF) Ambizione grant no. 180079, the Center for Space and Habitability (CSH) Fellowship, and the IAU Gruber Foundation Fellowship. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2017.1.00108.S and ADS/JAO.ALMA#2017.1.01350.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018)2. The authors would like to thank Prof. Dr. Ewine van Dishoeck for useful discussions about the H\({}_{2}\)S/OCS ratio and the anonymous referee for constructive feedback.
Footnote 2: [http://www.astropy.org](http://www.astropy.org)
|
2301.06231 | Different spin relaxation property observed in linearly and circularly
polarized laser induced terahertz emission from Bi/Co bilayer | Recently, helicity-dependent photocurrent was reported in Bi single thin
films. It is proposed that the origin of this photocurrent is the combination of
photo-spin conversion and spin-charge conversion effects in Bi and efficient
spin conversion in Bi is expected. In this study, we measured two types of
terahertz (THz) emissions from Bi/Co bilayer films induced by spin current
generation using laser-induced demagnetization of the Co layer and photo-spin
conversion effect in the Bi layer to investigate the spin current induced by
the two mechanisms simultaneously. We clearly observed different Bi thickness
dependence of peak intensity and that of bandwidth for THz spin current in two
experiments, i.e., spin current induced by demagnetization of Co and that by
photo-spin conversion in Bi. The different Bi thickness dependence of spin
current intensity and bandwidth in two experiments is caused by different spin
relaxation properties of optically excited spin currents in Bi layers. | Kazuaki Ishibashi, Satoshi Iihama, Shigemi Mizukami | 2023-01-16T01:47:05Z | http://arxiv.org/abs/2301.06231v2 | Different spin relaxation property observed in linearly and circularly polarized laser induced terahertz emission from Bi/Co bilayer
###### Abstract
Recently, helicity-dependent photocurrent was reported in Bi single thin films. It is proposed that the origin of this photocurrent is the combination of photo-spin conversion and spin-charge conversion effects in Bi and efficient spin conversion in Bi is expected. In this study, we measured two types of terahertz (THz) emissions from Bi/Co bilayer films induced by spin current generation using laser-induced demagnetization of the Co layer and photo-spin conversion effect in the Bi layer to investigate the spin current induced by the two mechanisms simultaneously. We clearly observed different Bi thickness dependence of peak intensity and that of bandwidth for THz spin current in two experiments, _i.e._, spin current induced by demagnetization of Co and that by photo-spin conversion in Bi. The different Bi thickness dependence of spin current intensity and bandwidth in two experiments is caused by different spin relaxation properties of optically excited spin currents in Bi layers.
## I Introduction
Conversion between electron spin and physical quantities, such as charge, light, heat, and phonons, is one of the fundamental principles that enable the generation and detection of spin currents[1; 2; 3; 4]. To enhance the conversion efficiency for future spintronic devices, numerous studies have explored various materials, such as topological materials[5; 6] and heavy metals (e.g., Pt, W, Ta, and Bi[7; 8; 9; 10; 11; 12]). In particular, Bi is the basis of several topological materials, such as Bi\({}_{0.9}\)Sb\({}_{0.1}\), PtBi\({}_{2}\), and Bi\({}_{x}\)Se\({}_{1-x}\)[13; 14; 15]. Accordingly, Bi-based alloys are among the candidate materials with good spin current generation characteristics owing to their large spin orbit coupling and unique band structure. Thus, it is important to investigate the phenomena occurring in Bi to obtain a basic understanding of the phenomena occurring in Bi-based alloys. In addition, Bi itself is expected to exhibit an efficient spin conversion effect and other interesting phenomena.
Recently, helicity-dependent (HD) photocurrent in a Bi single thin film or Bi/Cu (or Ag) bilayer films was observed via pulse laser-induced terahertz (THz) emission[16] and transport measurements using a continuous wave laser[17]. The proposed mechanism for this photocurrent in Bi is the photo-induced inverse spin-Hall effect[18]. A circularly polarized laser induces electron spin in Bi depending on optical helicity via conversion from photon spin angular momentum (SAM) to electron-SAM, which is called the photo-spin conversion effect. Subsequently, the flow of electron-SAM is converted to charge current through the inverse spin-Hall effect. Although photon-SAM driven torques were observed in heavy metal/ferromagnet bilayer films via time-resolved magneto-optical Kerr effect measurement[2; 19; 20; 21], the abovementioned HD photocurrent in a single thin film has been mostly reported in Bi-related materials[16; 17; 22]. Therefore, an efficient photo-spin conversion effect in Bi is expected, which most likely originates from the band structure inherent to semi-metallic Bi. However, the details of photo-spin conversion in Bi have not been clarified yet because the photo-spin conversion and spin-charge conversion effects are observed simultaneously in the photocurrent measurement; thus, one cannot distinguish between the two spin-related conversion effects.
In this study, to disentangle the two processes, namely the photo-spin conversion and spin-charge conversion effects, and gain insight into the underlying physics, we measured the THz emission induced by spin current generation using photo-spin conversion and laser-induced demagnetization simultaneously in Bi/Co bilayer films.
THz emission experiments with structures widely used as spintronic THz emitters[23; 24; 25], _e.g._, ferromagnet/Bi bilayers, have not been reported so far. When a femtosecond laser pulse is irradiated on the ferromagnet/Bi bilayer, spin current can be generated by the ultrafast demagnetization of the ferromagnetic layer due to the conservation of angular momentum[26; 27; 28; 29]. Then, the THz wave can be emitted owing to spin transport and spin-charge conversion in the Bi layer. Thus, we can simultaneously investigate the difference in spin current via photo-spin conversion and laser-induced demagnetization using this structure.
## II Experiment
The samples were prepared using DC/RF magnetron sputtering. The stacking structure of samples was Glass sub./Bi(\(d_{\text{Bi}}\))/Co(5)/MgO(2)/Ta(2) (thickness is in nm). The thicknesses of the Bi layers \(d_{\text{Bi}}\) were varied from 10 to 120 nm. The Co layer generates the spin current from laser-induced demagnetization. The Bi layers generate spin current via the photo-spin conversion effect and convert the spin current into charge current via the spin-charge conversion effect. The MgO and Ta layers are capping layers to prevent oxidization. The Bi film in all samples was polycrystalline with the (003) and (012) preferred orientations indexed using hexagonal notation. The saturation magnetization of Co was almost constant with respect to Bi thickness (see Appendix for details on sample information).
The laser pulse-induced THz emission from Bi/Co films was measured using THz time domain spectroscopy (THz-TDS)[30; 31; 32]. Laser pulses are generated by a Ti:Sapphire femtosecond laser with a wavelength of 800 nm, pulse duration of 160 fs, and repetition rate of 1 kHz. Pump laser pulses are modulated by a mechanical chopper at a frequency of 360 Hz. A quarter wave plate (QWP) is placed in front of samples to control pump laser polarization. The pump laser was focused on the film with a fluence of 0.62 mJ/cm\({}^{2}\). The polarization of the THz wave emitted from the sample surface was analyzed with two wire grids[33; 27]. We measured the emitted THz wave using the electro-optic (EO) sampling method[34] with a 1-mm-thick ZnTe(110) crystal. All measurements were taken at room temperature.
## III Experimental results and discussion
### Laser-induced THz emission and Bi thickness dependence
We measured two kinds of THz waves emitted from Bi/Co bilayer films induced by linearly polarized and circularly polarized lasers as shown in Figs. 1(a) and 1(b), respectively. The difference between the two measurements is the source of spin current. The first spin current source is laser-induced demagnetization. When a linearly polarized laser is irradiated on the sample, ultrafast spin current is generated from the laser-induced demagnetization of Co, where the polarization of spin is parallel to the magnetization direction of Co. This spin current flows into an adjacent Bi layer and is converted to charge current through the inverse spin-Hall effect in Bi, which causes THz emission from the film surface [Fig. 1(a)]. The second source is photo-spin injection via the photo-spin conversion effect in Bi. When a circularly polarized laser is irradiated on the sample with an oblique incidence, in-plane electron spin is injected in Bi depending on the optical helicity via the photo-spin conversion effect. The incident angle was fixed at 45\({}^{\circ}\) in our measurement. Spin current is caused by the gradient of induced spin because of the finite penetration depth of the laser. This spin current is converted into a charge current and then the THz wave is emitted [Fig. 1(b)]. In contrast to the linearly polarized laser-induced THz emission, circularly polarized laser-induced THz emission is observed in Bi single thin film[16]. We conducted THz experiments using spin currents generated by two different mechanisms shown in Figs. 1(a) and 1(b), where the THz wave polarization induced by demagnetization is orthogonal to that induced by photo-spin injection to distinguish the two contributions via THz polarization analysis with two wire grids.
A typical THz signal \(V_{\text{THz}}\) induced by linearly polarized laser with two opposite sample magnetization \(\pm M\) orientations is shown in Fig. 2(a). The THz signal is inverted when the magnetization is reversed, which is consistent with THz signals emitted from magnetic heterostructures[25; 23]. It was found that there is a contribution from the ordinary Nernst effect in Bi[35] where the THz signal is linearly changed based on the external magnetic field strength. To remove the contribution of the ordinary Nernst effect, first an external magnetic field was applied to saturate the magnetization of Co and then the THz signal was measured without an external magnetic field. Fig. 2(b) shows typical circularly polarized laser pulse-induced THz signals \(V_{\text{THz}}\) with different optical helicities \(\sigma\pm\). The red and blue circles represent the THz signals induced by a right- and left-circularly polarized laser, respectively. The sign of the THz signal is reversed when the helicity of the circularly polarized laser pulse is changed, indicating the HD-THz signal. To focus on the HD and magnetization direction dependent contributions, we considered antisymmetric signals with respect to magnetization and optical helicity, _i.e._, \((V_{\text{THz}}(+M)-V_{\text{THz}}(-M))/2\) and \((V_{\text{THz}}(\sigma+)-V_{\text{THz}}(\sigma-))/2\).
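For readers who wish to reproduce this step, the decomposition into magnetization- (or helicity-) even and odd components is a two-line operation. The following minimal Python sketch uses synthetic placeholder traces rather than measured waveforms; the pulse shape and background below are assumptions for illustration only:

```python
import numpy as np

# Two traces standing in for THz signals recorded at opposite magnetization
# polarities (or opposite optical helicities). The synthetic waveforms here
# are placeholders, not experimental data.
t = np.linspace(-2e-12, 2e-12, 400)
odd_part = np.exp(-(t / 3e-13) ** 2) * np.sin(2 * np.pi * 1e12 * t)  # M-odd component
background = 0.1 * np.cos(2 * np.pi * 0.5e12 * t)                    # M-even background
v_plus, v_minus = background + odd_part, background - odd_part

v_hd = 0.5 * (v_plus - v_minus)   # isolates the magnetization/helicity-dependent part
assert np.allclose(v_hd, odd_part)
```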
Figs. 2(c) and 2(d) show the Bi thickness \(d_{\text{Bi}}\) dependence of the linearly polarized laser-induced THz signal and HD circularly polarized laser-induced THz signal, respectively. As the Bi thickness \(d_{\text{Bi}}\) increased from 20 nm to 120 nm, the amplitude of the THz signal induced by the linearly polarized laser decreased. In contrast, the amplitude of the HD-THz signal induced by the circularly polarized laser increased. Figs. 3(a) and 3(b) exhibit the \(d_{\text{Bi}}\) dependence of the peak value of the linearly polarized laser-induced THz signal and HD circularly polarized laser-induced THz signal, respectively. Different \(d_{\text{Bi}}\) dependences were clearly observed in the two experiments. The trends observed in the two experiments, shown in Figs. 3(a) and 3(b), were consistent with those observed in previous studies[16; 25]. Moreover, the THz waveforms differed: the HD-THz signal is broader than the linearly polarized laser-induced THz signal. Those differences can be caused by different temporal dynamics of the laser-induced spin current. To obtain the temporal spin current, we analyzed the THz signals as described below; the resulting spin current dynamics are discussed in the next section.
### Theoretical analysis of spin current
To explain the \(d_{\rm Bi}\) dependence of laser-induced spin current, we performed simulation of the spin-diffusion equation. Although superdiffusive spin-transport of optically excited electrons must be considered[38; 39], the spin-diffusion equation can be easily simulated using two simple parameters. The spin-diffusion equation is described as follows[40]:
\[\frac{\partial s(z,t)}{\partial t}=D\frac{\partial^{2}s(z,t)}{\partial z^{2}}- \frac{s(z,t)}{\tau_{\rm s}}+Q_{\rm s}(z,t), \tag{4}\]
where \(s\), \(D\), and \(\tau_{\rm s}\) denote the electron-SAM density, diffusion constant, and spin relaxation time, respectively. Here, \(Q_{\rm s}\) corresponds to the source term for the spin current, and we assumed two different spin current sources in the two experiments, as described below.
Figure 2: (a) Linearly polarized laser-induced terahertz signal with different polarity of Co magnetization where laser is irradiated with normal incidence. (b) Circularly polarized laser-induced terahertz signal with different helicity of laser pulse where laser is irradiated with 45 deg. incident angle. Bi thickness \(d_{\rm Bi}\) dependence of (c) linearly polarized laser-induced terahertz signal and (d) HD circularly polarized laser-induced terahertz signal where difference between signal obtained with left- and right-circularly polarized light is taken.
First, demagnetization is considered as a spin source at the interface between the Co and Bi layers, \(Q_{\rm s}^{\rm d}\), which can be expressed as follows:
\[Q_{\rm s}^{\rm d}(z,t)=\left\{\begin{array}{cc}-\frac{dc_{\rm s}}{\gamma} \frac{d}{dt}(\Delta M_{\rm s}(t;d_{\rm Bi}))&\quad\mbox{if}\ \ z=d_{\rm Bi},\\ 0&\quad\mbox{else}.\end{array}\right., \tag{5}\]
where \(\gamma\) and \(\Delta M_{\rm s}\) denote the gyromagnetic ratio and temporal dynamics of demagnetization, respectively. \(\Delta M_{\rm s}\) was evaluated by using the time-resolved magneto-optical Kerr effect (TRMOKE) measurement[41] with a constant pump fluence of 0.62 mJ/cm\({}^{2}\). The laser-induced spin current has been considered to be inversely proportional to the total layer thicknesses for spintronic THz emitters described in a previous study[24]. In fact, laser-induced demagnetization decreased with increasing \(d_{\rm Bi}\) values (see Appendix 4 for \(d_{\rm Bi}\) dependence of demagnetization), which is consistent with the assumption that the absorbed fluence per unit thickness decreases with an increase in the metallic layer thickness. Therefore, we considered \(d_{\rm Bi}\) dependence of demagnetization \(\Delta M_{\rm s}(t;d_{\rm Bi})\), which is proportional to \((d_{\rm Bi}+d_{\rm Co})^{-1}\), _i.e._, \(\Delta M_{\rm s}(t;d_{\rm Bi})=\Delta M_{\rm s}(t;d_{\rm Bi}=15)\cdot(15+d_{ \rm Co})/(d_{\rm Bi}+d_{\rm Co})\) using the demagnetization dynamics for a 15-nm-thick Bi sample, \(\Delta M_{\rm s}(t;d_{\rm Bi}=15)\).
On the other hand, photo-spin injection is considered as a spin current source for circularly polarized laser-induced THz signals. Photo-spin conversion-induced spin density \(Q_{\rm s}^{\rm p}\) can be considered as the conversion between absorbed photon-SAM and electron-SAM, which is described by the following equation[21]:
\[Q_{\rm s}^{\rm p}(z,t)=\frac{\eta a(z)F_{\rm p}}{\omega_{\rm l}}\sin\theta_{ \rm inc}G(t), \tag{6}\]
where \(F_{\rm p},\omega_{\rm l},\theta_{\rm inc}\), and \(G(t)\) denote the fluence of pump laser, laser angular frequency, incident angle of laser and temporal profile of the Gaussian laser pulse, respectively. The laser absorption profile inside the Bi layer \(a(z)\) is calculated using the transfer matrix method[42] (see Appendix for refractive index and light absorption). Here, we assumed that photon-SAM is entirely converted into electron-SAM, _i.e._, \(\eta=1\). The obtained \(s(z,t)\) can be converted into spin current via the following relation:
\[J_{\rm s}(z,t)=D\frac{\partial s(z,t)}{\partial z}. \tag{7}\]
Using Eqs. (4)-(7), simulations with various \(D\) and \(\tau_{\rm s}\) values were performed (see Appendix for details of the simulation results), and the temporal spin current integrated across the Bi layer, namely \(\int_{0}^{d_{\rm Bi}}J_{\rm s}(z,t)\,dz\), was obtained.
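As an illustration of how such a simulation can be set up, the following Python sketch integrates Eq. (4) with a simple explicit finite-difference scheme. The quoted \(D\) and \(\tau_{\rm s}\) are the photo-spin values discussed below; the absorption length, pulse timing, and source amplitude are assumptions for demonstration only, not the parameters used in this work:

```python
import numpy as np

# Minimal explicit finite-difference integration of Eq. (4),
# s_t = D s_zz - s/tau_s + Q_s, inside the Bi layer, with a toy distributed
# source Q_s ~ a(z) G(t) in the spirit of Eq. (6).
d_bi, D, tau_s = 80e-9, 2e-3, 4e-12             # m, m^2/s, s (photo-spin case)
nz = 100
dz = d_bi / nz
dt = 0.2 * dz**2 / D                            # explicit scheme: dt <= dz^2/(2D)
nt = 40000                                      # ~2.6 ps of dynamics

z = np.linspace(0.0, d_bi, nz)
a = np.exp(-(d_bi - z) / 20e-9)                 # assumed laser absorption profile a(z)
t0, sig = 0.5e-12, 160e-15 / 2.355              # Gaussian pulse: 160 fs FWHM at 0.5 ps

s = np.zeros(nz)                                # electron-SAM density s(z, t)
js = np.zeros(nt)                               # spin current integrated over the layer
for k in range(nt):
    g = np.exp(-0.5 * ((k * dt - t0) / sig) ** 2)
    lap = np.empty(nz)
    lap[1:-1] = (s[2:] - 2.0 * s[1:-1] + s[:-2]) / dz**2
    lap[0] = 2.0 * (s[1] - s[0]) / dz**2        # zero-flux (Neumann) boundaries
    lap[-1] = 2.0 * (s[-2] - s[-1]) / dz**2
    s = s + dt * (D * lap - s / tau_s + a * g)
    js[k] = D * np.gradient(s, dz).sum() * dz   # Eq. (7) integrated across the layer

# Fourier transforming js gives the spectrum whose Lorentzian width corresponds
# to the bandwidth Delta f analyzed in Figs. 5(c)-(f).
spectrum = np.abs(np.fft.rfft(js))
freqs = np.fft.rfftfreq(nt, d=dt)               # Hz; the THz range is ~1e12
```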
Figs. 5(a) and 5(b) show the simulated temporal spin current induced by the demagnetization of the Co layer and photo-spin injection into the Bi layer, respectively. Figs. 5(c) and 5(d) show the Fourier transformation spectra for spin currents corresponding to Figs. 5(a) and 5(b), respectively. The solid curve represents the result fitted with the Lorentzian function to evaluate the bandwidth \(\Delta f\). Obtained peak values and bandwidth of the spin current plotted as a function of Bi thickness are shown in Figs. 5(e) and 5(f). Simulation results roughly agree with experimentally observed Bi thickness dependence.
### Discussion
The initial sharp increase in the demagnetization-induced spin current is due to the spin diffusion in the Bi layer. The decrease above 20 nm was mainly caused by the decline in the spin current generated by the demagnetization of the Co layer as mentioned above [depicted by the solid blue symbols in Figs. 4(e) and 5(e)]. The photo-spin conversion-induced spin current in the thin region is negligible, which is attributed to the small light absorption in the Bi layer (see Appendix 3 for the laser absorption profile).
Figure 3: Peak value of (a) linearly polarized laser-induced terahertz signal and (b) HD terahertz signal plotted as a function of Bi thickness \(d_{\rm Bi}\).
In the thick region, the peak value of the spin current induced by photo-spin conversion gradually increases, which corresponds to the spin diffusion in the Bi layer [open red symbols in Figs. 4(e) and 5(e)]. Similarly, \(\Delta f\) remains almost constant for the spin current induced by demagnetization and decreases with an increase in \(d_{\rm Bi}\) for the spin current induced by photo-spin injection, which is obtained in the simulation [Figs. 4(f) and 5(f)]. The parameters used in Figs. 5(e) and 5(f) are \(D=2\times 10^{-3}\) m\({}^{2}\)/s and \(\tau_{\rm s}=0.04\) ps for the demagnetization-induced spin current and \(D=2\times 10^{-3}\) m\({}^{2}\)/s and \(\tau_{\rm s}=4\) ps for the photo-spin conversion-induced spin current. Note that the \(\tau_{\rm s}\) values used to explain the experimental results in the two experiments differ by two orders of magnitude. In fact, the increasing slope of the peak spin current value for demagnetization is much higher than that for photo-spin conversion, which is due to different spin relaxation lengths in the two experiments. This indicates that the spin relaxation length (here we discuss \(\sqrt{D\tau_{\rm s}}\)) for demagnetization-induced spin current is one order of magnitude shorter than that for photo-spin injection-induced spin current.
The differences in the spin relaxation property should be related to the energy level of the spin transport for optically excited electron spins. The electron spin characteristics at the Fermi level are different from those in the optically excited state. The mechanism behind the spin current generated by laser-induced demagnetization is the \(s-d\) exchange coupling: the angular momentum lost by the local magnetic moment during demagnetization is transferred to mobile \(s\)-electrons owing to the \(s-d\) exchange coupling[43].
Figure 4: Temporal spin current signal obtained from (a) linearly polarized laser-induced terahertz signal and (b) circularly polarized laser-induced terahertz signal for the Bi(80)/Co(5) bilayer. (c), (d) Fourier transformation spectra for spin current induced by linearly polarized laser and circularly polarized laser, corresponding to (a) and (b). Solid curves represent Lorentzian fitting to obtain bandwidth of spectra. (e) Peak value of spin current plotted as a function of Bi thickness \(d_{\rm Bi}\). (f) Bandwidth of Fourier transformation spectrum \(\Delta f\) plotted as a function of \(d_{\rm Bi}\).
Although a shorter spin relaxation length of the laser-induced spin current in ferromagnet/nonmagnet heterostructures, which might be attributed to optically excited electrons, has been observed[44; 45; 46], the energy level of the electron spins may be close to the Fermi level. On the other hand, spin current generated by photo-spin conversion is possibly carried by optically excited electron spins in Bi. The abovementioned facts indicate that the spin relaxation length of optically excited spins in Bi is likely longer than that of spins near the Fermi level, which possibly stems from the semi-metallic characteristics of Bi.
## IV Conclusion
In this study, two kinds of THz emissions from Bi/Co bilayer films induced by the spin current generation due to demagnetization and photo-spin conversion effect with various Bi thicknesses were investigated simultaneously. The spin current peak intensity and bandwidth were discussed based on the spin-diffusion simulation with different spin current sources, namely, the demagnetization of Co and photo-spin conversion in Bi. It is revealed by the experimental and simulation results that the spin relaxation length of electron spins excited by the photo-spin conversion in Bi is much longer than that induced by the demagnetization of Co, which might be attributed to the semi-metallic characteristics of Bi.
###### Acknowledgements.
This study is partially supported by KAKENHI (19K15430, 21H05000) and X-NICS of MEXT JPJ011438. K. I. acknowledges Grant-in-Aid for JSPS Fellows (22J22178) and GP-Spin at Tohoku University, S. I. acknowledges the Murata Science Foundation, FRIS Creative Interdisciplinary Collaboration Program in Tohoku University, and JST, PRESTO Grant Number JPMJPR22B2. S. M. acknowledges CSRN in CSIS at Tohoku University.
Figure 5: Temporal spin current induced by (a) demagnetization of Co layer and (b) photo-spin injection into Bi layer, calculated by spin-diffusion equation. (c), (d) Fourier transformation spectrum for spin current corresponding to (a), (b). (e) Peak values of spin current obtained by spin-diffusion simulation plotted as a function of Bi thickness \(d_{\text{Bi}}\). (f) Bandwidth of Fourier transformation spectrum \(\Delta f\) obtained by spin-diffusion simulation plotted as a function of \(d_{\text{Bi}}\).
## Appendix
### Magnetic property
The magnetic property was evaluated with VSM measurements. Figure 6(a) illustrates the magnetic hysteresis loops for Bi(\(d_{\mathrm{Bi}}\))/Co(5) samples with varying Bi thicknesses. The saturation magnetization \(M_{\mathrm{s}}\) was evaluated from the magnetic hysteresis loops and plotted as a function of Bi thickness in Fig. 6(b). The shape of magnetic hysteresis loops and value of saturation magnetization remained approximately constant with respect to Bi thickness. The average saturation magnetization was 1.2 MA/m.
### Electrical conductivity of the sample
The electrical conductivities of thin film samples were evaluated using the four-point probe method. Figure 7 plots the electrical sheet conductivity as a function of Bi thickness \(d_{\mathrm{Bi}}\). The slope of the sheet conductivity changes at approximately \(d_{\mathrm{Bi}}=30\) nm. These data were used to calculate impedance \(Z\) [Eq. (3)]. The electrical conductivity at thicker regions was evaluated to be \(1.9\times 10^{4}\)\(\Omega^{-1}\cdot\mathrm{m}^{-1}\) based on the slope.
### Refractive index and absorption of light
Fig. 8(a) shows the experimentally obtained reflectance [blue solid circles] and transmittance [red open circles] plotted as functions of Bi thickness \(d_{\mathrm{Bi}}\). The dashed curves denote \(R\) and \(T\) values calculated using the transfer matrix method[42] with refractive indices listed in Table 1 for Glass sub./Bi2 (\(d_{\mathrm{Bi}}\))/Co/MgO/Ta thin film (Bi single layer model). The calculated results were consistent with those obtained experimentally in the thick region; however, there was a slight discrepancy in the thin region at \(d_{\mathrm{Bi}}<30\) nm. A similar trend was observed for electrical conductivity: it varied at around \(d_{\mathrm{Bi}}\) = 30 nm (see Appendix 2). To explain the discrepancy in the thin region, the Bi layer was divided into interfacial (\(<30\) nm) and bulk layers (\(>30\) nm). The refractive index of the bulk Bi layer (Bi2) was taken from literature while that of the interface Bi layer (Bi1) was obtained by fitting the experimental \(d_{\mathrm{Bi}}\) dependence of \(R\) and \(T\). The solid curves depicted in Fig. 8(a) correspond to \(R\) and \(T\) values calculated using the Bi bilayer model. \(R\) and \(T\) values in the thin region were explained well using the Bi bilayer model compared with the Bi single layer model.
Figure 6: (a) In-plane magnetic hysteresis loops of Bi(\(d_{\mathrm{Bi}}\))/Co(5)/MgO(2)/Ta(2) (in nm) with various Bi thickness \(d_{\mathrm{Bi}}\). (b) The saturation magnetization obtained from the magnetic hysteresis loops plotted as a function of \(d_{\mathrm{Bi}}\).
Figure 7: Electrical sheet conductivities of thin films plotted as a function of \(d_{\mathrm{Bi}}\). The red and black dashed lines represent linear fitting in thin region and thick region, respectively.
Fig. 8(b) shows the calculated light absorption profile \(a(z)\) for the Bi(80)/Co(5) sample. The light absorption of circularly polarized light corresponds to an average of light absorption with s- and p-polarizations, which is used to calculate the photo-spin injection [Eq. (6)].
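For reference, a minimal normal-incidence characteristic-matrix (transfer-matrix) sketch is given below. The complex refractive indices used in the example are illustrative placeholders rather than the fitted Bi1/Bi2 values, and the experiment's 45 deg incidence and the s/p averaging for circular polarization are omitted for brevity:

```python
import numpy as np

# Characteristic-matrix (transfer-matrix) calculation of R and T for a layered
# stack at normal incidence. All indices below are assumed example values.
def reflect_transmit(n_layers, d_layers, n_in, n_out, wavelength):
    M = np.eye(2, dtype=complex)                       # characteristic matrix of the stack
    for n_j, d_j in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n_j * d_j / wavelength   # phase thickness of layer j
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n_j],
                          [1j * n_j * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    t = 2.0 * n_in / (n_in * B + C)
    return abs(r) ** 2, (n_out.real / n_in) * abs(t) ** 2   # R, T (n_in real)

# Glass-side incidence onto Bi/Co/MgO/Ta, exiting into air, at 800 nm (all assumed):
R, T = reflect_transmit(
    n_layers=[2.7 + 3.5j, 2.5 + 4.9j, 1.73 + 0.0j, 1.2 + 3.6j],   # Bi, Co, MgO, Ta
    d_layers=[80e-9, 5e-9, 2e-9, 2e-9],
    n_in=1.51, n_out=1.0 + 0.0j, wavelength=800e-9)
print(f"R = {R:.3f}, T = {T:.3f}")
```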
### Ultrafast demagnetization via TRMOKE measurement
Laser-excited ultrafast demagnetization of Co was evaluated using the TRMOKE measurement. Fig. 9(a) shows normalized magnetization dynamics for the Bi(15)/Co(5) bilayer film. The red solid curve denotes the fitting result obtained with the equation[41; 51]
\[\frac{\Delta\theta_{\rm K}(t)}{\theta_{\rm K}}=\left[\left\{\frac{\Delta m_{1}}{\sqrt{1+t/\tau_{0}}}-\frac{\Delta m_{2}\tau_{\rm E}-\Delta m_{1}\tau_{\rm M}}{\tau_{\rm E}-\tau_{\rm M}}e^{-t/\tau_{\rm M}}-\frac{\tau_{\rm E}(\Delta m_{1}-\Delta m_{2})}{\tau_{\rm E}-\tau_{\rm M}}e^{-t/\tau_{\rm E}}\right\}\Theta(t)\right]*G(t), \tag{8}\]

where \(\Theta(t)\) is the step function, \(*\) denotes convolution with the Gaussian laser pulse profile \(G(t)\), and \(\Delta m_{1}\), \(\Delta m_{2}\), \(\tau_{0}\), \(\tau_{\rm M}\), and \(\tau_{\rm E}\) are fitting parameters.
### Details of the spin-diffusion simulation

Figures 10 and 11 show the peak value and bandwidth of the spin current generated by laser-induced demagnetization and by photo-spin injection in the Bi layer, respectively, plotted as a function of \(d_{\rm Bi}\) with different \(D\) and \(\tau_{\rm s}\) values. \(D\) can be obtained from the Wiedemann-Franz law, \(D=L\sigma/\gamma_{\rm e}\), where \(L\), \(\sigma\), and \(\gamma_{\rm e}\) are the Lorenz number, electrical conductivity, and electronic heat capacity, respectively. When we use \(\sigma=1.9\times 10^{4}\) \((\Omega\cdot{\rm m})^{-1}\) evaluated by the four-point probe measurement and \(\gamma_{\rm e}=0.37\)\(\rm{J\cdot m^{-3}\cdot K^{-2}}\) for Bi taken from the literature[52], \(D=1.2\times 10^{-3}\) m\({}^{2}\)/s is obtained.
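As a quick arithmetic check of this estimate, assuming the standard Lorenz number \(L=2.44\times 10^{-8}\) W\(\Omega\)K\({}^{-2}\) (a textbook value, not stated explicitly above):

```python
# Wiedemann-Franz estimate of the diffusion constant D = L*sigma/gamma_e.
L = 2.44e-8          # Lorenz number (W*Ohm/K^2), assumed standard value
sigma = 1.9e4        # electrical conductivity ((Ohm*m)^-1), four-point probe result
gamma_e = 0.37       # electronic heat capacity coefficient (J m^-3 K^-2), literature value for Bi
D = L * sigma / gamma_e
print(f"D = {D:.2e} m^2/s")   # ~1.25e-3, consistent with the quoted 1.2e-3 m^2/s
```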
Figure 9: (a) Demagnetization dynamics for the Bi(15)/Co(5) bilayer film. The solid curve is the fitting result with Eq. (8). (b) Magnitude of demagnetization plotted as a function of Bi thickness. The solid curve represents a fit using a function inversely proportional to the total thickness of Co and Bi.
Figure 11: Spin current (a) peak intensity and (b) bandwidth induced by photo-spin injection into Bi plotted as a function of Bi thickness \(d_{\rm Bi}\) with different diffusion constant \(D\) and spin relaxation time \(\tau_{\rm s}\).
Figure 10: Spin current (a) peak intensity and (b) bandwidth induced by laser-induced demagnetization plotted as a function of Bi thickness \(d_{\rm Bi}\) with different diffusion constant \(D\) and spin relaxation time \(\tau_{\rm s}\). |
2301.00898 | Permutation Statistics in Conjugacy Classes of the Symmetric Group | We introduce the notion of a weighted inversion statistic on the symmetric
group, and examine its distribution on each conjugacy class. Our work
generalizes the study of several common permutation statistics, including the
number of inversions, the number of descents, the major index, and the number
of excedances. As a consequence, we obtain explicit formulas for the first
moments of several statistics by conjugacy class. We also show that when the
cycle lengths are sufficiently large, the higher moments of arbitrary
permutation statistics are independent of the conjugacy class. Fulman (J. Comb.
Theory Ser. A., 1998) previously established this result for major index and
descents. We obtain these results, in part, by generalizing the techniques of
Fulman (ibid.), and introducing the notion of permutation constraints. For
permutation statistics that can be realized via symmetric constraints, we show
that each moment is a polynomial in the degree of the symmetric group. | Jesse Campion Loth, Michael Levet, Kevin Liu, Eric Nathan Stucky, Sheila Sundaram, Mei Yin | 2023-01-02T23:07:48Z | http://arxiv.org/abs/2301.00898v2 | # Permutation Statistics in Conjugacy Classes
###### Abstract
We introduce the notion of a _weighted inversion statistic_ on the symmetric group, and examine its distribution on each conjugacy class. Our work generalizes the study of several common permutation statistics, including the number of inversions, the number of descents, the major index, and the number of excedances. As a consequence, we obtain explicit formulas for the first moments of several statistics by conjugacy class. We also show that when the cycle lengths are sufficiently large, the higher moments of arbitrary permutation statistics are independent of the conjugacy class. Fulman (_J. Comb. Theory Ser. A._, 1998) previously established this result for major index and descents. We obtain these results, in part, by generalizing the techniques of Fulman (ibid.), and introducing the notion of _permutation constraints_. For permutation statistics that can be realized via _symmetric_ constraints, we show that each moment is a polynomial in the degree of the symmetric group.
**Keywords.** permutation statistics, inversions, descents, excedances, weighted inversion statistic, moments, permutation constraints
**2020 AMS Subject Classification.** 05A05, 05E05, 60C05
Introduction
Let \(S_{n}\) denote the symmetric group of permutations on \([n]=\{1,2,\ldots,n\}\). A statistic on \(S_{n}\) is a map \(X:S_{n}\to\mathbb{R}\). The _distribution_ of \(X\) on \(S_{n}\) is the function \((x_{k})_{k\in\mathbb{R}}\), where \(x_{k}\) is the number of permutations \(\omega\in S_{n}\) such that \(X(\omega)=k\), i.e., \(x_{k}=|X^{-1}(k)|\). Perhaps the best known statistics are the number of descents, the major index, and the inversion number of a permutation (see [10, 11]).
We study the distributions of statistics on fixed conjugacy classes of \(S_{n}\). These distributions are known exactly for some classical statistics: Gessel and Reutenauer [11, Theorems 5.3, 5.5, 6.1] gave a generating function for the joint distribution of descents and major index by conjugacy class. Brenti [1] gave the generating function by conjugacy class for the excedance statistic in terms of the Eulerian polynomials. Some asymptotic results are also known: Fulman [14] showed that descents and major index exhibit an asymptotically normal distribution on conjugacy classes with sufficiently large cycles. Kim and Lee [13] subsequently extended this result to any conjugacy class of \(S_{n}\).
We focus on the properties of the moments of these distributions. Fulman [14] showed that for partitions \(\lambda\vdash n\) with each \(\lambda_{i}>2\ell\), the \(\ell\)th moment for descents of the conjugacy class \(C_{\lambda}\) is the same as for the entire symmetric group. In particular, this implies that the moments for descents and major index on a conjugacy class \(C_{\lambda}\) are dependent only on the smaller part sizes of \(\lambda\). Fulman provided two proofs of this - one using generating functions and the other a purely combinatorial proof that leveraged the structure of descent sets. This paper will establish similar dependence results for all permutation statistics, not just those with special descent structure.
Inspired by the combinatorial proof of [14, Theorem 3], we define a framework that allows us to calculate the first moment for multiple families of permutation statistics. It turns out that the first moment for all these statistics is only dependent on the number of parts of size one and two in \(\lambda\). The higher moments of these statistics are, in general, difficult to calculate explicitly. Remarkably, this framework allows us to show that the higher moments of all permutation statistics depend only on the small part sizes of \(\lambda\).
Finally, we show that for a natural class of permutation statistics (see Theorem 7.26) that includes inversions, permutation patterns, and excedances, these moments are polynomial in \(n\). Using these polynomiality results and data for small values of \(n\), we can explicitly calculate some higher moments of some permutation statistics. Gaetz and Pierson [12] established the analogous result for a different generalization of permutation patterns. While our generalization and that of Gaetz and Pierson [12] agree for permutation patterns on certain conjugacy classes, it is not clear that they both capture the same family of permutation statistics.
**Main results.** In this paper, we study the uniform distribution of various permutation statistics on individual conjugacy classes. Our analysis of the uniform distribution of a very large class of permutation statistics is accomplished by the introduction of two notions: _weighted inversion statistics_ (Section 4) and _(symmetric) permutation constraints_ (Section 7) on \(S_{n}\). In fact, the classically defined inversions, descents, and major index are specific instances of weighted inversion statistics. While the notion of a weighted inversion statistic is new, the notion of a permutation constraint can be traced back to [14, Theorem 3]. The notion of a permutation constraint is quite powerful, allowing us to reason about arbitrary permutation statistics. Although symmetric constraints do not appear to include all weighted inversion statistics, they are still quite general, capturing inversions, permutation pattern statistics, and excedances.
We first examine the expected values of weighted inversion statistics on individual conjugacy classes, obtaining the following independence result.
**Theorem 1.1**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\). The expected value of any weighted inversion statistic in the conjugacy class \(C_{\lambda}\) indexed by \(\lambda\) depends only on \(n\), \(a_{1}\), and \(a_{2}\)._
In the process of proving Theorem 1.1, we are able to derive explicit formulas for the expected values for several permutation statistics in individual conjugacy classes. See Table 1 for a summary of our results, as well as a comparison to the first moments of these statistics on the entire symmetric group.
**Remark 1.2**.: The generating function, expected value, and variance of des appear in Riordan [11, p. 216], while the generating function and expected value of inv are due to Rodrigues ([11, p. 237], [10, Notes for Chapter 1]).
The Mahonian statistics maj and inv are equidistributed over \(S_{n}\) by MacMahon [14], with a bijective proof via Foata's second fundamental transformation [10].
The Eulerian statistics exc and des are equidistributed over \(S_{n}\)[14], [15, Proposition 1.4.3] with a bijective proof via the first fundamental transformation [13, 16, 17].
When considering conjugacy classes where all cycles have length at least \(3\), we generalize the combinatorial algorithm of Fulman [12, Theorem 3]. Precisely, we consider the notion of a _permutation constraint_, which allows us to specify values of a permutation for certain elements of the domain. We then analyze the structure of the corresponding directed graph (see Section 7). Remarkably, the notion of permutation constraint allows us to reason about arbitrary permutation statistics.
We now turn our attention to the higher moments of arbitrary permutation statistics. For a permutation statistic \(X\) and a partition \(\lambda\vdash n\), denote \(\mathbb{E}_{\lambda}[X]\) to be the expected value of \(X\) taken over the conjugacy class \(S_{n}\) indexed by \(\lambda\).
**Theorem 1.3**.: _Let \(X\) be a permutation statistic that is realizable over a constraint set of size \(m\), and let \(k\geq 1\). If \(\lambda\vdash n\) has all parts of size at least \(mk+1\), then \(\mathbb{E}_{\lambda}[X^{k}]\) is independent of \(\lambda\)._
**Remark 1.4**.: As descents are weighted permutation statistics of size \(2\), our results in Table 1 and Theorem 1.3 imply [12, Theorem 2] as a corollary.
In Section 7 we consider the class of permutation statistics realizable over symmetric constraint sets. Starting with a single symmetric constraint statistic on \(S_{n_{0}}\), one can construct its _symmetric extensions_ to \(S_{n}\) with \(n\geq 1\). This class of permutation statistics is quite broad - including a number of well-studied statistics such as \(\widetilde{\mathrm{exc}},\mathrm{exc},\mathrm{aexc}\) which have size \(1\); \(\mathrm{inv},\mathrm{cdasc},\mathrm{cddes},\mathrm{cval},\mathrm{cpk}\) which have size \(2\); and lie which has size \(\leq 3\). For a full account of these statistics, see Sections 4, 5, and 7, as well as [1].
**Theorem 1.5**.: _Fix \(k,m\geq 1\). Let \((\lambda_{n})\) be a sequence of partitions, where \(\lambda_{n}\vdash n\) and all parts of \(\lambda_{n}\) have size at least \(mk+1\). Let \((X_{n})\) be a symmetric extension of a symmetric permutation statistic \(X=X_{n_{0}}\) induced by a constraint set of size \(m\). There exists a polynomial \(p_{X}(n)\) depending only on \(X\) such that \(p_{X}(n)=\mathbb{E}_{\lambda_{n}}[X_{n}^{k}]\)._
**Remark 1.6**.: In the proof of Theorem 1.5 (see Theorem 7.26), we are able to control both the degree and leading coefficient of these polynomials.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline statistic & \(\lambda=(1^{a_{1}}2^{a_{2}}\ldots)\vdash n\) & \(\lambda_{i}\geq 3\ \forall i\) & \(\lambda=(1^{a_{1}}2^{a_{2}})\) & \(\lambda=(2^{a_{2}})\) & All of \(S_{n}\) \\ \hline des & \(\frac{n^{2}-n+2a_{2}-a_{1}^{2}+a_{1}}{2n}\) & \(\frac{n-1}{2}\) & \(\frac{n^{2}-a_{1}^{2}}{2n}\) & \(\frac{n}{2}\) & \(\frac{n-1}{2}\) \\ \hline maj & \(\frac{n^{2}-n+2a_{2}-a_{1}^{2}+a_{1}}{4}\) & \(\frac{n(n-1)}{4}\) & \(\frac{n^{2}-a_{1}^{2}}{4}\) & \(\frac{n^{2}}{4}\) & \(\frac{n^{2}-n}{4}\) \\ \hline inv & \(\frac{3n^{2}-n+2a_{2}-a_{1}^{2}+a_{1}-2na_{1}}{12}\) & \(\frac{n(3n-1)}{12}\) & \(\frac{(3n+a_{1})(n-a_{1})}{12}\) & \(\frac{n^{2}}{4}\) & \(\frac{n^{2}-n}{4}\) \\ \hline baj & \(\frac{(n+1)(n^{2}-n+2a_{2}-a_{1}^{2}+a_{1})}{12}\) & \(\frac{n(n^{2}-1)}{12}\) & \(\frac{(n+1)(n^{2}-a_{1}^{2})}{12}\) & \(\frac{n^{2}(n+1)}{12}\) & \(\frac{1}{4}\binom{n+1}{3}\) \\ \hline baj \(-\)inv & \(\frac{(n-2)(n^{2}-n+2a_{2}-a_{1}^{2}+a_{1})}{12}\) & \(\frac{n(n-1)(n-2)}{12}\) & \(\frac{(n-2)(n^{2}-a_{1}^{2})}{12}\) & \(\frac{n^{2}(n-2)}{12}\) & \(\frac{1}{4}\binom{n}{3}\) \\ \hline cdes & \(\frac{n^{2}-n+2a_{2}-a_{1}^{2}+3a_{1}-2}{2(n-1)}\) & \(\frac{(n+1)(n-2)}{2(n-1)}\) & \(\frac{n^{2}-a_{1}^{2}+2a_{1}-2}{2(n-1)}\) & \(\frac{n^{2}-2}{2(n-1)}\) & \(\frac{n}{2}\) \\ \hline \(\widetilde{\mathrm{exc}}\) & \(\frac{n+a_{1}}{2}\) & \(\frac{n}{2}\) & \(\frac{n+a_{1}}{2}=a_{1}+a_{2}\) & \(\frac{n}{2}=a_{2}\) & \(\frac{n+1}{2}\) \\ \hline exc, aexc & \(\frac{n-a_{1}}{2}\) & \(\frac{n}{2}\) & \(\frac{n-a_{1}}{2}=a_{2}\) & \(\frac{n}{2}=a_{2}\) & \(\frac{n-1}{2}\) \\ \hline cdasc, cddes & \(\frac{n-a_{1}-2a_{2}}{6}\) & \(\frac{n}{6}\) & \(0\) & \(0\) & \(\frac{n-2}{6}\) \\ \hline cval, cpk & \(\frac{n-a_{1}+a_{2}}{3}\) & \(\frac{n}{3}\) & \(\frac{n-a_{1}}{2}=a_{2}\) & \(\frac{n}{2}=a_{2}\) & \(\frac{2n-1}{6}\) \\ \hline \end{tabular}
\end{table}
Table 1: Expected values of various statistics in the conjugacy class \(C_{\lambda}\) and in \(S_{n}\).
**Remark 1.7**.: After proving Theorem 1.5, we came across a result for permutation patterns due to Gaetz and Pierson [10, Theorem 1.2], who generalized a previous result of Gaetz and Ryba [10, Theorem 1.1(a)]. While Gaetz and Ryba utilized partition algebras and character polynomials to obtain their result, the proof technique employed by Gaetz and Pierson was purely combinatorial. In particular, the method of Gaetz and Pierson is quite similar to our techniques for establishing Theorem 1.5.
We show in Section 7 that permutation pattern statistics (in which we track the number of occurrences of a given permutation pattern within a specified permutation) are a special case of symmetric permutation constraint statistics - in fact, for infinitely many \(m\), there exists a permutation pattern that can be realized by a symmetric constraint set of size \(m\) - but the latter is a more general class of statistics. Permutation patterns require that the constraints induce permutations on the occurrences of the pattern. For instance, an occurrence of the 213-pattern in the permutation \(\omega\) is a triple \(x,y,z\) that occurs in the order \(x\cdots y\cdots z\), with \(y<x<z\).
Our more general symmetric permutation constraint statistics, however, need not induce sub-permutations. For instance, we are able to specify triples \(x,y,z\) such that \(y<x<z\) and \(y\) appears before both \(x\) and \(z\), without specifying the relative ordering of \(x\) and \(z\). With this in mind, a comparison of Theorem 1.5 and [10, Theorem 1.2] shows that these two results agree on permutation pattern statistics for conjugacy classes \(C_{\lambda}\) where all parts have sufficiently large size.
**Remark 1.8**.: Theorem 1.5 has practical value in explicitly computing higher moments for individual conjugacy classes. Namely, if we compute \(\mathbb{E}_{(n)}[X^{k}]\) for the class of \(n\)-cycles in \(S_{n}\), taken over \(\deg(\mathbb{E}_{(n)}[X^{k}])+1\) terms starting from \(n=mk+1\), then we can use polynomial interpolation to obtain a closed form solution for \(\mathbb{E}_{(n)}[X^{k}]\). Moreover, in light of Theorem 1.3, this moment for full cycles is identical to \(\mathbb{E}_{\lambda}[X^{k}]\), provided all parts of \(\lambda\) are at least \(mk+1\).
**Further related work.** There has been considerable work on constructing generating functions for permutation statistics.
It is well known, for instance, that the inversion and major index statistics admit the same distribution on the entire symmetric group, with the \(q\)-factorial as the generating function. Permutations with the \(q\)-factorial as their generating function are called _Mahonian_. A general account of Mahonian statistics can be found here [11]. It is known that Mahonian statistics are asymptotically normal with mean \(\binom{n}{2}/2\) and variance \([n(n-1)(2n+5)]/72\)[11].
For a permutation \(\omega\), let \(\mathrm{Des}(\omega)\) be the set of descents in \(\omega\) (that is, the set of indices \(i\) such that \(\omega(i)>\omega(i+1)\)). Let \(d(\omega):=|\mathrm{Des}(\omega)|+1\). The Eulerian polynomials as defined in [11] serve as the generating functions for \(d(\omega)\) (see [12, 13]). See [10] for a detailed treatment of the properties of Eulerian polynomials. It is known that \(d(\omega)\) is asymptotically normally distributed on \(S_{n}\), with mean \((n+1)/2\) and variance \((n-1)/12\) under the condition that the number of \(i\)-cycles vanishes asymptotically for all \(i\) (an early reference is [13, p. 216]; see also Fulman [12], who in turn cites unpublished notes of Diaconis and Pitman [10]). We note that descents also have connections to sorting and the theory of runs in permutations [12, Section 5], as well as to models of card shuffling [1, 1, 13].
**Outline of paper.** We start in Section 2 by outlining necessary definitions and notation. In Section 3, we establish some results on the first moments of descents and major index that demonstrate some of the techniques that we apply in conjugacy classes of the symmetric group. In Sections 4 and 5, we establish results on first moments in conjugacy classes of the symmetric group, including Theorem 1.1 and Table 1. We then apply these results to the entire symmetric group in Section 6. We conclude in Section 7 by defining permutation constraint statistics and establishing general results on their moments in conjugacy classes.
## 2 Preliminaries
We outline some definitions and results that will be used throughout our work. We start with three well-known statistics.
**Definition 2.1**.: Let \(\omega\) be a permutation in the symmetric group \(S_{n}\).
1. A _descent_ of \(\omega\) is an index \(i\in[n-1]\), such that \(\omega(i)>\omega(i+1).\) We write \[\mathrm{Des}(\omega)=\{i:\omega(i)>\omega(i+1)\}\]
for the set of descents. We write \(\operatorname{des}(\omega):=|\operatorname{Des}(\omega)|\) for the number of descents of \(\omega\). Following [12], we also denote \(d(\omega):=\operatorname{des}(\omega)+1\).
2. The _major index_\(\operatorname{maj}(\omega)\) of \(\omega\) is the sum of its descents: \[\operatorname{maj}(\omega):=\sum_{i\in\operatorname{Des}(\omega)}i.\]
3. An _inversion_ of \(\omega\) is a pair of indices \((i,j)\) such that \(1\leq i<j\leq n\) and \(\omega(i)>\omega(j)\). We write \[\operatorname{Inv}(\omega)=\{(i,j):i<j,\text{ but }\omega(i)>\omega(j)\}\] for the set of inversions. The _inversion number_\(\operatorname{inv}(\omega):=|\operatorname{Inv}(\omega)|\) is the number of inversions of \(\omega\).
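As a concrete illustration of Definition 2.1, the following short Python sketch computes all three statistics for a permutation given in one-line notation:

```python
from itertools import combinations

# The three statistics of Definition 2.1 for w in one-line notation (w[i] = w(i+1)).
def des(w):
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def maj(w):  # descent positions are 1-indexed, matching the definition
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def inv(w):
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

w = (3, 1, 4, 2)                 # Des(w) = {1, 3}
assert (des(w), maj(w), inv(w)) == (2, 4, 3)
```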
Denote by \(C_{\lambda}\) the conjugacy class of the symmetric group \(S_{n}\) indexed by the integer partition \(\lambda\) of \(n\). The following fact is well known, e.g., [13] (or [1]).
**Proposition 2.2**.: _The order of the centralizer of an element of cycle type \(\lambda\) is \(z_{\lambda}=\prod_{i}i^{a_{i}}a_{i}!\), where \(\lambda\) has \(a_{i}\) parts equal to \(i\), \(i\geq 1\). For \(\lambda\vdash n\), the order of the conjugacy class \(C_{\lambda}\) is thus \(\frac{n!}{z_{\lambda}}\)._
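Proposition 2.2 is easy to verify by brute force for small \(n\); the following Python sketch checks it for \(\lambda=(3,2)\vdash 5\):

```python
from itertools import permutations
from math import factorial, prod

# Brute-force check of Proposition 2.2 in S_5 for lambda = (3, 2):
# z_lambda = 3^1 * 1! * 2^1 * 1! = 6, so |C_lambda| should be 5!/6 = 20.
def cycle_type(w):
    seen, parts = set(), []
    for start in range(1, len(w) + 1):
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = w[x - 1]        # w is a tuple of images of 1..n
                length += 1
            parts.append(length)
    return tuple(sorted(parts, reverse=True))

def z(lam):
    mult = {}
    for part in lam:
        mult[part] = mult.get(part, 0) + 1
    return prod(i ** m * factorial(m) for i, m in mult.items())

count = sum(1 for w in permutations(range(1, 6)) if cycle_type(w) == (3, 2))
assert count == factorial(5) // z((3, 2)) == 20
```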
Throughout this paper, we will use \(\operatorname{Pr}_{S_{n}}\) and \(\operatorname{Pr}_{\lambda}\) to denote probabilities in \(S_{n}\) and \(C_{\lambda}\) (with respect to the uniform measure). We similarly use \(\mathbb{E}_{S_{n}}\) and \(\mathbb{E}_{\lambda}\) for expected values on the corresponding probability spaces.
## 3 Warm-up: first moments of descents and major index
Fulman [12] previously determined the expected number of descents for _all_ conjugacy classes of \(S_{n}\) without restriction to cycle types. In this section, we give an elementary, bijective proof for the expected number of descents in conjugacy classes where each cycle has length at least 3. While our result does not fully encompass that of Fulman, our technique of conjugating by an involution provides a much simpler bijective proof. Furthermore, we will employ this technique in subsequent sections (see Section 4.1).
**Definition 3.1**.: Let \(\lambda\vdash n\) have all parts of size at least 2. Define:
\[\tau_{i,j}:C_{\lambda}\to C_{\lambda}\] \[\tau_{i,j}(\omega)=(i\,j)\omega(i\,j).\]
**Lemma 3.2**.: _For any fixed \(i,j\in[n]\) and \(\lambda\), \(\tau_{i,j}\) is an involution on \(C_{\lambda}\)._
Proof.: Since \(C_{\lambda}\) is closed under conjugation by permutations, the map is certainly well defined. Also, applying it twice to any \(\omega\) gives \((i\,j)(i\,j)\omega(i\,j)(i\,j)=\omega\).
Fulman previously established the following.
**Theorem 3.3** ([12, Theorem 2]).: _For a partition \(\lambda\) of \(n\) with \(n_{i}\)\(i\)-cycles, let \(C_{\lambda}\) be the conjugacy class corresponding to \(\lambda\). Then_
1. \(\mathbb{E}_{\lambda}[\operatorname{des}]=\frac{n-1}{2}+\frac{n_{2}-\binom{n_{1}}{2}}{n}\)_;_
2. _Fix_ \(k\geq 0,\) _and assume all parts of_ \(\lambda\) _have size at least_ \(2k+1\)_. Then the_ \(k\)_th moments of_ \(\operatorname{des}(\omega)\) _over_ \(C_{\lambda}\) _and over the full symmetric group_ \(S_{n}\) _are equal, i.e._ \[\mathbb{E}_{\lambda}[\operatorname{des}^{k}]=\mathbb{E}_{S_{n}}[ \operatorname{des}^{k}].\]
**Remark 3.4**.: In [12, Theorem 2], Fulman considered \(\operatorname{des}(\omega)\) for part (1) and \(d(\omega)=\operatorname{des}(\omega)+1\) for part (2). This differs from Theorem 3.3, where we consider \(\operatorname{des}(\omega)\) in both parts (1) and (2).
Lemma 3.2 gives the following simple proof of the following restricted case of Theorem 3.3 (1). In fact, we will actually obtain the entirety of Theorem 3.3 (1) using generalizations of this technique in Section 4.
**Observation 3.5**.: Suppose that all part sizes of \(\lambda\) are at least \(3\). Then applying \(\tau_{i,i+1}\) gives a bijection between permutations in \(C_{\lambda}\) with a descent at position \(i\), and those without.
**Corollary 3.6**.: _Let \(\lambda\vdash n\) such that each \(\lambda_{i}\geq 3\). We have that:_
\[\mathbb{E}_{\lambda}[\mathrm{des}]=\frac{n-1}{2}.\]
Proof.: The previous observation gives us that the probability of having a descent at any position \(i\) is \(1/2\). There are \(n-1\) possible positions for a descent, so the result follows.
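Corollary 3.6 can be checked empirically for small cases; the sketch below enumerates the class of \(6\)-cycles in \(S_{6}\), where every part has size at least \(3\):

```python
from itertools import permutations

# Empirical check of Corollary 3.6 for lambda = (6) in S_6:
# the average number of descents over C_(6) should equal (6 - 1)/2.
def des(w):
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def is_full_cycle(w):
    # w is an n-cycle iff the orbit of 1 under w has length n
    x, k = 1, 0
    while True:
        x = w[x - 1]
        k += 1
        if x == 1:
            return k == len(w)

cls = [w for w in permutations(range(1, 7)) if is_full_cycle(w)]
assert sum(des(w) for w in cls) * 2 == 5 * len(cls)   # mean des = 5/2
```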
## 4 Weighted inversion statistics
In this section, we consider _weighted inversion statistics_, which contain descents, major index, and the usual inversions as special cases. We will give an explicit formula for the mean on \(C_{\lambda}\) of the indicator function of \((i,j)\) being an inversion. We then use this to derive a general formula for the expected value of any weighted inversion statistic on \(C_{\lambda}\). We start with definitions.
**Definition 4.1**.: Let \(\omega\in S_{n}\), and let \(1\leq i<j\leq n\). Define \(I_{i,j}\) to be the indicator function for an inversion at \((i,j)\), i.e., \(I_{i,j}(\omega)=1\) if \(\omega(i)>\omega(j)\) and \(I_{i,j}(\omega)=0\) otherwise.
A _weighted inversion statistic_ in \(S_{n}\) is any statistic that can be expressed in the form \(\sum_{1\leq i<j\leq n}\mathrm{wt}(i,j)I_{i,j}\), where \(\mathrm{wt}(i,j)\in\mathbb{R}\) for all \(i,j\).
**Remark 4.2**.: Observe that descents, major index, and inversions are three examples of weighted inversion statistics. These can respectively be expressed as \(\mathrm{des}(\omega)=\sum_{i=1}^{n-1}I_{i,i+1}(\omega)\), \(\mathrm{maj}(\omega)=\sum_{i=1}^{n-1}i\cdot I_{i,i+1}(\omega)\), and \(\mathrm{inv}(\omega)=\sum_{1\leq i<j\leq n}I_{i,j}(\omega)\). In general, if \(X=\sum_{1\leq i<j\leq n}\mathrm{wt}(i,j)I_{i,j}\) is a weighted inversion statistic, we can use linearity to express
\[\mathbb{E}_{\lambda}[X]=\sum_{1\leq i<j\leq n}\mathrm{wt}(i,j)\mathbb{E}_{ \lambda}[I_{i,j}]=\sum_{1\leq i<j\leq n}\mathrm{wt}(i,j)\Pr_{\lambda}[I_{i,j }=1]. \tag{4.1}\]
Hence, if we can explicitly formulate \(\mathbb{E}_{\lambda}[I_{i,j}]=\Pr_{\lambda}[I_{i,j}=1]\), then we can calculate \(\mathbb{E}_{\lambda}[X]\). This approach also allows us to obtain similar results for other permutation statistics, such as excedances and cyclic descents.
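The decomposition (4.1) is straightforward to verify computationally; the following sketch checks it for maj on the class of \(4\)-cycles in \(S_{4}\):

```python
from itertools import permutations
from fractions import Fraction

# Check of Eq. (4.1) for X = maj on the class of 4-cycles in S_4:
# E[maj] must equal sum_i wt(i, i+1) * Pr[I_{i,i+1} = 1] with wt(i, i+1) = i.
def is_full_cycle(w):
    x, k = 1, 0
    while True:
        x = w[x - 1]
        k += 1
        if x == 1:
            return k == len(w)

cls = [w for w in permutations(range(1, 5)) if is_full_cycle(w)]
lhs = Fraction(sum(sum(p + 1 for p in range(3) if w[p] > w[p + 1]) for w in cls), len(cls))
rhs = sum(Fraction(p + 1) * Fraction(sum(1 for w in cls if w[p] > w[p + 1]), len(cls))
          for p in range(3))
assert lhs == rhs
```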
### Inversion indicator functions
In this subsection, we consider the expected value of \(I_{i,j}\) in \(C_{\lambda}\) for any \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\). Our main result will be an explicit formula in terms of \(n\), \(a_{1}\), \(a_{2}\), and the difference \(j-i-1\). Surprisingly, the expected value of \(I_{i,j}\) depends on \(a_{1}\) and \(a_{2}\) but is independent of \(a_{3},\ldots,a_{n}\), and depends on \(i\) and \(j\) through their difference \(j-i\) but not the actual values of \(i\) and \(j\) themselves.
One of our main tools will be applying the map \(\tau_{ij}\), as introduced in Section 3. Observe that for \(\omega\in C_{\lambda}\),
\[\tau_{ij}(\omega)(i)=\begin{cases}\omega(j)&\text{ if }\omega(j)\notin\{i,j\}\\ j&\text{ if }\omega(j)=i\\ i&\text{ if }\omega(j)=j\end{cases}\qquad\tau_{ij}(\omega)(j)=\begin{cases}\omega(i)&\text{ if }\omega(i)\notin\{i,j\}\\ i&\text{ if }\omega(i)=j\\ j&\text{ if }\omega(i)=i.\end{cases}\]
Motivated by the above cases, we partition \(C_{\lambda}\) into five sets based on \(i\) and \(j\):
\[\Omega_{1}^{ij} =\{\omega\in C_{\lambda}:\omega(i),\omega(j)\notin\{i,j\}\}, \tag{4.2}\] \[\Omega_{2}^{ij} =\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)=i\},\] \[\Omega_{3}^{ij} =\{\omega\in C_{\lambda}:\omega(i)=i,\omega(j)=j\},\] \[\Omega_{4}^{ij} =\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)\neq i\}\cup\{\omega \in C_{\lambda}:\omega(i)\neq j,\omega(j)=i\},\] \[\Omega_{5}^{ij} =\{\omega\in C_{\lambda}:\omega(i)=i,\omega(j)\neq j\}\cup\{ \omega\in C_{\lambda}:\omega(i)\neq i,\omega(j)=j\}.\]
Using the Law of Total Probability, we can decompose
\[\Pr_{\lambda}[I_{i,j}=1]=\sum_{k=1}^{5}\Pr_{\lambda}[\omega\in\Omega_{k}^{ij}] \cdot\Pr_{\lambda}[I_{i,j}(\omega)=1\mid\omega\in\Omega_{k}^{ij}]. \tag{4.3}\]
We can explicitly compute the quantities in this sum.
**Lemma 4.3**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), fix \(i<j\) in \([n]\), and define \(\Omega_{k}=\Omega_{k}^{ij}\) as in (4.2). Then_
1. \(\Pr_{\lambda}[\omega\in\Omega_{2}]=\frac{2a_{2}}{n(n-1)},\)__
2. \(\Pr_{\lambda}[\omega\in\Omega_{3}]=\frac{a_{1}(a_{1}-1)}{n(n-1)},\)__
3. \(\Pr_{\lambda}[\omega\in\Omega_{4}]=\frac{2}{n-1}\cdot\left(1-\frac{a_{1}}{n}- \frac{2a_{2}}{n}\right),\) _and_
4. \(\Pr_{\lambda}[\omega\in\Omega_{5}]=\frac{2a_{1}}{n}\cdot\left(1-\frac{a_{1}-1 }{n-1}\right).\)__
Proof.: We proceed as follows.
1. We first note that if \(a_{2}=0\), then \(\omega\) has no 2-cycles. As \(\Omega_{2}^{ij}\) is precisely the set of permutations of \(C_{\lambda}\) containing the 2-cycle \((ij)\), we have that \(\Pr_{\lambda}[\omega\in\Omega_{2}]=0\), which agrees with the formula given. If instead \(a_{2}>0\), then \((ij)\) forming a cycle implies that the remaining \(n-2\) elements have cycle type \((1^{a_{1}},2^{a_{2}-1},\ldots,n^{a_{n}})\). Then the probability that \((ij)\) forms a 2-cycle is given by: \[\frac{|C_{(1^{a_{1}},2^{a_{2}-1},\ldots,n^{a_{n}})}|}{|C_{(1^{a_{1}},2^{a_{2} },\ldots,n^{a_{n}})}|}=\frac{2a_{2}}{n(n-1)},\] recalling that the formulas for the centralizer sizes are given by Proposition 2.2.
2. By definition, \(\Omega_{3}^{ij}\) contains the permutations of \(C_{\lambda}\) with fixed points at positions \(i\) and \(j\). Thus, if \(a_{1}\in\{0,1\}\), then \(\Pr_{\lambda}[\omega\in\Omega_{3}]=0\), which agrees with the formula given. If instead \(a_{1}>1\), then the probability that \((i)\) and \((j)\) form 1-cycles is given by \[\frac{|C_{(1^{a_{1}-2},2^{a_{2}},\ldots,n^{a_{n}})}|}{|C_{(1^{a_{1}},2^{a_{2} },\ldots,n^{a_{n}})}|}=\frac{a_{1}(a_{1}-1)}{n(n-1)}.\]
3. We first consider \(\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)\neq i\}\). Using the Law of Total Probability, we decompose \(\Pr_{\lambda}[\omega(i)=j,\omega(j)\neq i]\) into the sum of the following terms: \[\Pr_{\lambda}[i\text{ is in a 1 cycle of }\omega]\cdot\Pr_{\lambda}[\omega(i)=j,\omega(j)\neq i|i\text{ is in a 1 cycle of }\omega],\] \[\Pr_{\lambda}[i\text{ is in a 2 cycle of }\omega]\cdot\Pr_{\lambda}[\omega(i)=j,\omega(j)\neq i|i\text{ is in a 2 cycle of }\omega],\] \[\Pr_{\lambda}[i\text{ is not in a 1 or 2 cycle of }\omega]\cdot\Pr_{\lambda}[\omega(i)=j,\omega(j)\neq i|i\text{ is not in a 1 or 2 cycle of }\omega].\] The first two terms are 0, and hence we need only compute the third term. Observe that \[\Pr_{\lambda}[i\text{ is in a 1 cycle of }\omega]=\frac{|C_{(1^{a_{1}-1},2^{a_{2}},\ldots,n^{a_{n}})}|}{|C_{(1^{a_{1} },2^{a_{2}},\ldots,n^{a_{n}})}|}=\frac{a_{1}}{n}.\] Using our result from (1), \[\Pr_{\lambda}[i\text{ is in a 2 cycle of }\omega]=\sum_{k\neq i}\Pr_{\lambda}[\omega(i)=k, \omega(k)=i]=\frac{2a_{2}}{n}.\] Hence, \(\Pr_{\lambda}[i\text{ is not in a 1 or 2 cycle of }\omega]=1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}.\) Finally, consider conjugation by \(\rho=(i)(1,2,\ldots,i-1,i+1,\ldots,n)\) on the elements in \(\Omega_{4}\). Since \(\rho\) acts by replacing each element of a cycle by its image under \(\rho\), it induces bijections among the sets
\[\{\omega\in C_{\lambda}:\omega(i)=k,i\text{ is not in a 1 or 2 cycle of }\omega\}\] for \(k\in[n]\setminus\{i\}\). Hence, \(\{\omega\in C_{\lambda}:i\text{ is not in a 1 or 2 cycle of }\omega\}\) decomposes into \(n-1\) sets of the same size based on the image of \(i\). We conclude that \[\Pr_{\lambda}[\omega(i)=j,\omega(j)\neq i|i\text{ is not in a 1 or 2 cycle of }\omega]=\frac{1}{n-1}.\] Combined, we have that \[\Pr_{\lambda}[\omega(i)=j,\omega(j)\neq i]=\frac{1}{n-1}\cdot\left(1-\frac{a _{1}}{n}-\frac{2a_{2}}{n}\right).\] Repeating this argument over \(\{\omega\in C_{\lambda}:\omega(i)\neq j,\omega(j)=i\}\) and adding the two terms implies (3).
4. We similarly first consider \(\{\omega\in C_{\lambda}:\omega(i)=i,\omega(j)\neq j\}\). Then \[\Pr_{\lambda}[\omega(i)=i,\omega(j)\neq j] =\Pr_{\lambda}[\omega(i)=i]\cdot\Pr_{\lambda}[\omega(j)\neq j| \omega(i)=i]\] \[=\frac{a_{1}}{n}\cdot\left(1-\Pr_{\lambda}[\omega(j)=j|\omega(i) =i]\right)\] \[=\frac{a_{1}}{n}\cdot\left(1-\frac{a_{1}-1}{n-1}\right).\] Repeating this argument over \(\{\omega\in C_{\lambda}:\omega(i)\neq i,\omega(j)=j\}\) and adding this to the expression above implies the result.
**Remark 4.4**.: The preceding lemma gives an explicit formula for \(\Pr_{\lambda}[\omega\in\Omega_{1}]\) using \(1-\sum_{k=2}^{5}\Pr_{\lambda}[\omega\in\Omega_{k}]\). We will not need this explicit formulation.
**Lemma 4.5**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), fix \(i<j\) in \([n]\), and define \(\Omega_{k}=\Omega_{k}^{ij}\) as in (4.2). Then_
1. \(\Pr_{\lambda}[(i,j)\in\mathrm{Inv}(\omega)|\omega\in\Omega_{1}]=\frac{1}{2}\)_,_
2. \(\Pr_{\lambda}[(i,j)\in\mathrm{Inv}(\omega)|\omega\in\Omega_{2}]=1\)_,_
3. \(\Pr_{\lambda}[(i,j)\in\mathrm{Inv}(\omega)|\omega\in\Omega_{3}]=0\)_,_
4. \(\Pr_{\lambda}[(i,j)\in\mathrm{Inv}(\omega)|\omega\in\Omega_{4}]=\frac{1}{2}+ \frac{j-i-1}{2(n-2)}\)_, and_
5. \(\Pr_{\lambda}[(i,j)\in\mathrm{Inv}(\omega)|\omega\in\Omega_{5}]=\frac{1}{2}- \frac{j-i-1}{2(n-2)}\)_._
**Remark 4.6**.: A priori, it was not intuitively clear to us why:
\[\Pr_{\lambda}[(i,j)\in\mathrm{Inv}(\omega)\mid\omega\in\Omega_{4}]+\Pr_{ \lambda}[(i,j)\in\mathrm{Inv}(\omega)\mid\omega\in\Omega_{5}]=1.\]
Prior to proving Lemma 4.5, we first highlight our intuition here. If \(k<i\) or \(k>j\), then conjugating by \((ij)\) interchanges permutations that have \((i,j)\) as an inversion with permutations that do not. If \(i<k<j\), then we have to track choices for \(k\) and "adjust" the probability from \(1/2\); the \((j-i-1)/[2(n-2)]\) term accounts for this. Precisely, in \(\Omega_{4}\), conjugating by \((ij)\) interchanges permutations that both have an inversion at \((i,j)\), and in \(\Omega_{5}\), conjugating by \((ij)\) interchanges permutations that both do not have an inversion at \((i,j)\).
Proof of Lemma 4.5.:
1. Note that the map \(\tau_{ij}\) induces a bijection between the sets \(\{\omega\in\Omega_{1}:\omega(i)>\omega(j)\}\) and \(\{\omega\in\Omega_{1}:\omega(i)<\omega(j)\}\) that partition \(\Omega_{1}\). Hence, these two sets must have the same size, and we conclude (1).
2. This follows immediately from the definition of inversion and the images of \(i\) and \(j\) in the set \(\Omega_{2}\).
3. This follows immediately from the definition of inversion and the images of \(i\) and \(j\) in the set \(\Omega_{3}\).
4. Observe that we can partition \[\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)\neq i\}=\bigsqcup_{k\notin\{i,j\}}\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)=k\}.\] Now consider conjugation by \[(i)(j)(1,2,\ldots,i-1,i+1,\ldots,j-1,j+1,\ldots,n)\] on \(\Omega_{4}\). As in the proof of Lemma 4.3, this induces bijections among the sets \(\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)=k\}\) for each \(k\in[n]\setminus\{i,j\}\), and hence each of these disjoint sets has the same size. Additionally, \(\tau_{ij}\) induces a bijection between \(\{\omega\in C_{\lambda}:\omega(i)=j,\omega(j)=k\}\) and \(\{\omega\in C_{\lambda}:\omega(i)=k,\omega(j)=i\}\). Combining these two observations, we see that grouping elements by the images of \(i\) and \(j\) partitions \(\Omega_{4}\) into \(2(n-2)\) sets of the same size. Observe that the images of \(i\) and \(j\) are sufficient for determining if \((i,j)\in\operatorname{Inv}(\omega)\). When \(\omega(i)=j\), \(\omega(j)\) must be in \(\{1,2,\ldots,j-1\}\setminus\{i\}\) to have an inversion at \((i,j)\). When \(\omega(j)=i\), \(\omega(i)\) must be in \(\{i+1,\ldots,n\}\setminus\{j\}\) to have an inversion at \((i,j)\). Hence, \[\Pr_{\lambda}[(i,j)\in\operatorname{Inv}(\omega)|\omega\in\Omega_{4}]=\frac{(j-2)+(n-i-1)}{2(n-2)}=\frac{(n-2)+(j-i-1)}{2(n-2)}=\frac{1}{2}+\frac{j-i-1}{2(n-2)}.\]
5. We can again partition \(\Omega_{5}\) into \(2(n-2)\) sets of the same size based on the image of \(i\) and \(j\). If \(\omega(i)=i\), \(\omega(j)\) must be in \(\{1,2,\ldots,i-1\}\) to produce an inversion at \((i,j)\). If \(\omega(j)=j\), then \(\omega(i)\) must be in \(\{j+1,\ldots,n\}\) to produce an inversion at \((i,j)\). Hence, \[\Pr_{\lambda}[(i,j)\in\operatorname{Inv}(\omega)|\omega\in\Omega_{5}]=\frac{ (i-1)+(n-j)}{2(n-2)}=\frac{(n-2)+(1+i-j)}{2(n-2)}=\frac{1}{2}-\frac{j-i-1}{2 (n-2)}.\qed\]
We have now established explicit formulas for all of the quantities in (4.3). Combining these, we compute the expected value of \(I_{i,j}\) on \(C_{\lambda}\).
**Lemma 4.7**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\). For any \(i<j\) in \([n]\),_
\[\Pr_{\lambda}[I_{i,j}=1]=\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1) }{2n(n-1)}+(j-i-1)\cdot\frac{n-na_{1}-a_{1}+a_{1}^{2}-2a_{2}}{n(n-1)(n-2)}.\]
Proof.: Define \(\Omega_{k}=\Omega_{k}^{ij}\) as in (4.2). Starting with (4.3) and using Lemma 4.5, \(\Pr_{\lambda}[I_{i,j}=1]\) can be expressed as a sum of the following five terms:
1. \(\Pr_{\lambda}[\omega\in\Omega_{1}]\cdot\frac{1}{2}\),
2. \(\Pr_{\lambda}[\omega\in\Omega_{2}]\cdot\big(\frac{1}{2}+\frac{1}{2}\big)\),
3. \(\Pr_{\lambda}[\omega\in\Omega_{3}]\cdot\big(\frac{1}{2}-\frac{1}{2}\big)\),
4. \(\Pr_{\lambda}[\omega\in\Omega_{4}]\cdot\Big(\frac{1}{2}+\frac{j-i-1}{2(n-2)}\Big)\), and
5. \(\Pr_{\lambda}[\omega\in\Omega_{5}]\cdot\Big(\frac{1}{2}-\frac{j-i-1}{2(n-2)}\Big)\).
We group terms with positive \(1/2\) coefficients, use the fact that \(C_{\lambda}\) is a disjoint union of \(\{\Omega_{k}\}_{k=1}^{5}\), and apply Lemma 4.3 to obtain
\[\frac{1}{2}\sum_{k=1}^{5}\Pr_{\lambda}[\omega\in\Omega_{k}]+\frac{1}{2}\Pr_{\lambda}[\omega\in\Omega_{2}]-\frac{1}{2}\Pr_{\lambda}[\omega\in\Omega_{3}]+\frac{j-i-1}{2(n-2)}\Pr_{\lambda}[\omega\in\Omega_{4}]-\frac{j-i-1}{2(n-2)}\Pr_{\lambda}[\omega\in\Omega_{5}]\] \[=\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1)}{2n(n-1)}+\frac{j-i-1}{(n-1)(n-2)}\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}\right)-\frac{a_{1}(j-i-1)}{n(n-2)}\left(1-\frac{a_{1}-1}{n-1}\right)\] \[=\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1)}{2n(n-1)}+(j-i-1)\cdot\frac{n-na_{1}-a_{1}+a_{1}^{2}-2a_{2}}{n(n-1)(n-2)}.\qed\]
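As a sanity check (not part of the proof), the formula of Lemma 4.7 can be verified by exhaustive enumeration for small \(n\). The following minimal Python sketch does so for \(\lambda=(1^{2},2^{2})\vdash 6\); the helper names are ours and purely illustrative.

```python
# Brute-force check of Lemma 4.7 on C_lambda for lambda = (1,1,2,2) |- 6.
from fractions import Fraction
from itertools import permutations

def cycle_lengths(p):
    # Cycle type of p, given in one-line notation with 1-indexed images.
    seen, out = set(), []
    for i in range(1, len(p) + 1):
        c, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j - 1]
            c += 1
        if c:
            out.append(c)
    return sorted(out)

def conjugacy_class(n, lam):
    return [p for p in permutations(range(1, n + 1))
            if cycle_lengths(p) == sorted(lam)]

n, lam = 6, [1, 1, 2, 2]          # a_1 = 2, a_2 = 2
a1, a2 = lam.count(1), lam.count(2)
C = conjugacy_class(n, lam)
for i, j in [(1, 2), (2, 5), (1, 6)]:
    empirical = Fraction(sum(1 for p in C if p[i - 1] > p[j - 1]), len(C))
    predicted = (Fraction(1, 2)
                 + Fraction(a2, n * (n - 1))
                 - Fraction(a1 * (a1 - 1), 2 * n * (n - 1))
                 + (j - i - 1) * Fraction(n - n * a1 - a1 + a1**2 - 2 * a2,
                                          n * (n - 1) * (n - 2)))
    assert empirical == predicted
```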
### First moment
We now apply our results on \(\mathbb{E}_{\lambda}[I_{i,j}]\) to calculate \(\mathbb{E}_{\lambda}[X]\) for any weighted inversion statistic. We start with our main theorem on weighted inversion statistics.
**Theorem 4.8**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), and let \(X=\sum_{1\leq i<j\leq n}\text{wt}(i,j)I_{i,j}\) be a weighted inversion statistic. Also set \(\alpha_{n}(X):=\sum_{1\leq i<j\leq n}\text{wt}(i,j)\), and \(\beta_{n}(X):=\sum_{1\leq i<j\leq n}(j-i-1)\text{wt}(i,j)\). Then_
\[\mathbb{E}_{\lambda}[X]=\left(\frac{1}{2}+\frac{a_{2}}{n(n-1)}- \frac{a_{1}(a_{1}-1)}{2n(n-1)}\right)\cdot\alpha_{n}(X)+\left(\frac{n-na_{1}-a _{1}+a_{1}^{2}-2a_{2}}{n(n-1)(n-2)}\right)\cdot\beta_{n}(X).\]
Proof.: Note that \(\alpha_{n}(X)\) and \(\beta_{n}(X)\) are independent of the partition \(\lambda\). We start with (4.1) and apply Lemma 4.7 to see that \(\mathbb{E}_{\lambda}[X]\) is given by
\[\sum_{1\leq i<j\leq n}\text{wt}(i,j)\Pr_{\lambda}[I_{i,j}(\omega) =1]\] \[=\sum_{1\leq i<j\leq n}\text{wt}(i,j)\left(\frac{1}{2}+\frac{a_{2 }}{n(n-1)}-\frac{a_{1}(a_{1}-1)}{2n(n-1)}+(j-i-1)\cdot\frac{n-na_{1}-a_{1}+a_{1 }^{2}-2a_{2}}{n(n-1)(n-2)}\right)\] \[=\left(\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1)}{2n( n-1)}\right)\cdot\sum_{1\leq i<j\leq n}\text{wt}(i,j)+\left(\frac{n-na_{1}-a_{1}+a_{1 }^{2}-2a_{2}}{n(n-1)(n-2)}\right)\cdot\sum_{1\leq i<j\leq n}\text{wt}(i,j)(j- i-1).\]
**Corollary 4.9**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\). The expected value of any weighted inversion statistic in \(S_{n}\) is independent of \(a_{3},\ldots,a_{n}\)._
We can apply the preceding theorem to obtain the expected values of some common statistics. Note that part (1) of the following corollary was previously established by Fulman [11].
**Corollary 4.10**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), \(n\geq 2\). Then_
1. \(\mathbb{E}_{\lambda}[\mathrm{des}]=\frac{1}{2n}\left(n^{2}-n+2a_{2}-a_{1}^{2}+ a_{1}\right)\)_,_
2. \(\mathbb{E}_{\lambda}[\mathrm{maj}]=\frac{1}{4}\left(n^{2}-n+2a_{2}-a_{1}^{2}+a_{1 }\right),\)__
3. \(\mathbb{E}_{\lambda}[\mathrm{inv}]=\frac{1}{12}\left(3n^{2}-n+2a_{2}-a_{1}^{2} +a_{1}-2na_{1}\right).\)__
_In particular, in the case that \(a_{1}=a_{2}=0\), we have that \(\mathbb{E}_{\lambda}[\mathrm{des}]=\frac{n-1}{2}\), \(\mathbb{E}_{\lambda}[\mathrm{maj}]=\frac{n(n-1)}{4}\), and \(\mathbb{E}_{\lambda}[\mathrm{inv}]=\frac{3n^{2}-n}{12}\)._
Proof.: We use Theorem 4.8 for all three statistics \(X\).
1. The descent statistic des is defined by \(\text{wt}(i,i+1)=1\) for \(i\in\{1,2,\ldots,n-1\}\), and \(\text{wt}(i,j)=0\) otherwise. Hence \(\alpha_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)=(n-1)\) and \(\beta_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)(j-i-1)=0\). Then \[\mathbb{E}_{\lambda}[\mathrm{des}]=\left(\frac{1}{2}+\frac{a_{2}}{n(n-1)}- \frac{a_{1}(a_{1}-1)}{2n(n-1)}\right)\cdot(n-1)=\frac{1}{2n}\big{(}n^{2}-n+2a _{2}-a_{1}^{2}+a_{1}\big{)}.\]
2. The major index is defined by \(\text{wt}(i,i+1)=i\) and \(\text{wt}(i,j)=0\) otherwise. Now \(\alpha_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)=\binom{n}{2}\) and \(\beta_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)(j-i-1)=0\). Then \[\mathbb{E}_{\lambda}[\mathrm{maj}]=\left(\frac{1}{2}+\frac{a_{2}}{n(n-1)}- \frac{a_{1}(a_{1}-1)}{2n(n-1)}\right)\cdot\binom{n}{2}=\frac{1}{4}\left(n^{2}- n+2a_{2}-a_{1}^{2}+a_{1}\right).\]
3. Finally, the inversion statistic is defined by \(\text{wt}(i,j)=1\) for all \(1\leq i<j\leq n\). Then \(\alpha_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)=\binom{n}{2}\), and using the substitution \(k=j-i-1\), we find that \(\beta_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)(j-i-1)\) is given by \[\sum_{1\leq i<j\leq n}(j-i-1)=\sum_{i=1}^{n-1}\sum_{k=0}^{n-i-1}k=\sum_{i=1}^{n-1}\binom{n-i}{2}=\binom{n}{3}.\]
Combined, we see that
\[\mathbb{E}_{\lambda}[\operatorname{inv}] =\left(\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1)}{2n(n- 1)}\right)\cdot\binom{n}{2}+\left(\frac{n-na_{1}-a_{1}+a_{1}^{2}-2a_{2}}{n(n-1) (n-2)}\right)\cdot\binom{n}{3}\] \[=\frac{1}{12}\left(3n^{2}-n+2a_{2}-a_{1}^{2}+a_{1}-2na_{1}\right).\qed\]
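Similarly, Corollary 4.10 can be confirmed directly from the definitions of the three statistics. A minimal, purely illustrative Python sketch for \(\lambda=(2,4)\vdash 6\) (helper names ours):

```python
# Brute-force check of Corollary 4.10 on C_lambda for lambda = (2,4) |- 6.
from fractions import Fraction
from itertools import permutations

def cycle_lengths(p):
    seen, out = set(), []
    for i in range(1, len(p) + 1):
        c, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j - 1]
            c += 1
        if c:
            out.append(c)
    return sorted(out)

n, lam = 6, [2, 4]                # a_1 = 0, a_2 = 1
a1, a2 = lam.count(1), lam.count(2)
C = [p for p in permutations(range(1, n + 1)) if cycle_lengths(p) == sorted(lam)]

def des(p):
    return sum(1 for i in range(n - 1) if p[i] > p[i + 1])

def maj(p):
    return sum(i + 1 for i in range(n - 1) if p[i] > p[i + 1])

def inv(p):
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def avg(f):
    return Fraction(sum(f(p) for p in C), len(C))

assert avg(des) == Fraction(n * n - n + 2 * a2 - a1 * a1 + a1, 2 * n)
assert avg(maj) == Fraction(n * n - n + 2 * a2 - a1 * a1 + a1, 4)
assert avg(inv) == Fraction(3 * n * n - n + 2 * a2 - a1 * a1 + a1 - 2 * n * a1, 12)
```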
### Baj
In this subsection, we consider the curious permutation statistic \(\operatorname{baj}\) that was introduced by Zabrocki [23].
**Definition 4.11** ([23]).: Let \(\omega\in S_{n}\). Define
\[\operatorname{baj}(\omega)\coloneqq\sum_{i\in\operatorname{Des}(\omega)}i(n -i).\]
The statistic \(\operatorname{baj}-\operatorname{inv}\) is the Coxeter length function restricted to coset representatives of the extended affine Weyl group of type \(A_{n-1}\) modulo translations by coroots. It has a nice generating function over the symmetric group, due to Stembridge and Waugh [22]. Furthermore, in [11], using this generating function, a formula for the \(d\)th cumulant is given [11, Corollary 3.4], and it is shown that the asymptotic distribution of \(\operatorname{baj}-\operatorname{inv}\) on \(S_{n}\) is normal.
Observe that \(\operatorname{baj}\) is a weighted inversion statistic for the choice \(\operatorname{wt}(i,i+1)=i(n-i)\) and \(\operatorname{wt}(i,j)=0\) for \(j\neq i+1\). Using Theorem 4.8, we obtain the following.
**Proposition 4.12**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), \(n\geq 2\). Then_
\[\mathbb{E}_{\lambda}[\operatorname{baj}]=\frac{1}{12}(n+1)(n^{2}-n+2a_{2}-a_{1 }^{2}+a_{1})=\frac{1}{3}(n+1)\mathbb{E}_{\lambda}[\operatorname{maj}].\]
### Cyclic descents
Cyclic descents were introduced by Paola Cellini [10]. While these are not weighted inversion statistics, a small adjustment of the methods of the previous subsections allows us to compute the first moment of cyclic descents on \(C_{\lambda}\).
**Definition 4.13** ([10]).: The _cyclic descent set_ of a permutation \(\omega\in S_{n}\) is defined to be the set
\[\operatorname{cDes}(\omega):=\{1\leq i\leq n:\omega(i)>\omega(i+1)\}\subseteq[n],\]
with the convention \(\omega(n+1):=\omega(1)\). Let \(\operatorname{cdes}(\omega):=|\operatorname{cDes}(\omega)|\).
**Theorem 4.14**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), \(n\geq 2\). Then_
\[\mathbb{E}_{\lambda}[\operatorname{cdes}]=\frac{n}{2}+\frac{a_{2}-\binom{a_{1 }}{2}}{n-1}+\frac{a_{1}-1}{n-1},\]
_and hence the expected value of cyclic descents is independent of the conjugacy class if \(a_{1}=a_{2}=0\)._
Proof.: Writing \(J_{n}\) for the random variable which equals \(1\) if \(n\in\operatorname{cDes}(\omega)\) and \(0\) otherwise, we have
\[\mathbb{E}_{\lambda}[\operatorname{cdes}]=\sum_{1\leq i\leq n-1}\Pr_{\lambda}[ I_{i,i+1}=1]+\Pr_{\lambda}[J_{n}=1]=\mathbb{E}_{\lambda}[\operatorname{des}]+ \Pr_{\lambda}[J_{n}=1]. \tag{4.4}\]
From Lemma 4.7 we have
\[\Pr_{\lambda}[I_{1,n}=1]=\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1) }{2n(n-1)}+\frac{n-na_{1}-a_{1}+a_{1}^{2}-2a_{2}}{n(n-1)}=\frac{1}{2}+\frac{ \binom{a_{1}}{2}-a_{2}-n(a_{1}-1)}{n(n-1)}.\]
Now \(n\) is a cyclic descent if and only if \(\omega(n)>\omega(1)\), i.e., if and only if \((1,n)\) is _not_ an inversion. Hence we have
\[\Pr_{\lambda}[J_{n}=1]=1-\Pr_{\lambda}[I_{1,n}=1]=\frac{1}{2}+\frac{a_{2}-{a_{1 }\choose 2}+n(a_{1}-1)}{n(n-1)}. \tag{4.5}\]
From Corollary 4.10, we have
\[\mathbb{E}_{\lambda}[\mathrm{des}]=\frac{n-1}{2}+\frac{a_{2}-{a_{1}\choose 2 }}{n}. \tag{4.6}\]
Equation (4.4) now gives the result.
## 5 Cyclic permutation statistics
In this section, we apply the techniques from Section 4 to several other permutation statistics that are not weighted inversion statistics; such statistics include cyclic descents and excedances. We call these cyclic permutation statistics, to reflect the fact that, in general, the value of the statistic can be read directly from the cycle decomposition of a permutation.
In particular, we show that, once again, the expected values depend on at most the number of fixed points and 2-cycles in the cycle type.
### Excedances
An _excedance_ of \(\omega\) is any index \(i\in[n]\) such that \(\omega(i)>i\). A _weak excedance_ of \(\omega\) is any index \(i\in[n]\) such that \(\omega(i)\geq i\). An _anti-excedance_[2] of \(\omega\) is any index \(i\in[n]\) such that \(\omega(i)<i\). Clearly \(i\) is an excedance of \(\omega\) if and only if \(\omega(i)\) is an anti-excedance of \(\omega^{-1}\), and conjugacy classes in \(S_{n}\) are closed with respect to taking inverses, so for any fixed conjugacy class, excedance and anti-excedance are equidistributed.
Let \(\mathrm{exc}(\omega)\) (respectively \(\widetilde{\mathrm{exc}}(\omega)\), \(\mathrm{aexc}(\omega)\)) denote the number of excedances (respectively weak excedances, anti-excedances) of the permutation \(\omega\). While these are not weighted inversion statistics, the methods of Section 4 can be adapted to calculate their expected values in \(C_{\lambda}\).
**Theorem 5.1**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\). Then_
\[\mathbb{E}_{\lambda}[\mathrm{exc}]=\frac{1}{2}(n-a_{1})=\mathbb{E}_{\lambda}[ \mathrm{aexc}]\text{ and }\mathbb{E}_{\lambda}[\widetilde{\mathrm{exc}}]=\frac{1}{2}(n+a_{1}).\]
Proof.: Express \(\mathrm{exc}(\omega)=\sum_{j=1}^{n}I_{j}(\omega)\), where \(I_{j}\) is the indicator random variable of an excedance at position \(j\). Fixing \(j\), partition \(C_{\lambda}\) into the two sets \(\Omega_{1}=\{\omega\in C_{\lambda}:\omega(j)=j\}\) and \(\Omega_{2}=\{\omega\in C_{\lambda}:\omega(j)\neq j\}\). Then
\[\Pr_{\lambda}[I_{j}=1]=\Pr_{\lambda}[\omega\in\Omega_{1}]\cdot\Pr_{\lambda}[I _{j}(\omega)=1|\omega\in\Omega_{1}]+\Pr_{\lambda}[\omega\in\Omega_{2}]\cdot \Pr_{\lambda}[I_{j}(\omega)=1|\omega\in\Omega_{2}].\]
Observe that \(\Pr_{\lambda}[\omega\in\Omega_{1}]=\frac{a_{1}}{n}\) and \(\Pr_{\lambda}[I_{j}(\omega)=1|\omega\in\Omega_{1}]=0\). For \(\Pr_{\lambda}[I_{j}(\omega)=1|\omega\in\Omega_{2}]\), we can partition
\[\Omega_{2}=\bigsqcup_{k\neq j}\{\omega\in\Omega_{2}:\omega(j)=k\}.\]
Conjugation by \((j)(1,2,\ldots,j-1,j+1,\ldots,n)\) induces bijections among these sets, and thus they all must have the same size. Observe that in \(n-j\) of the \(n-1\) sets, an excedance at \(j\) occurs. Hence,
\[\Pr_{\lambda}[I_{j}=1]=\Pr_{\lambda}[\omega\in\Omega_{2}]\cdot\Pr_{\lambda}[ I_{j}(\omega)=1|\omega\in\Omega_{2}]=\left(1-\frac{a_{1}}{n}\right)\cdot\frac{n-j}{ n-1}.\]
For the excedance statistic, we conclude that
\[\mathbb{E}_{\lambda}[\mathrm{exc}]=\sum_{j=1}^{n}\Pr_{\lambda}[I_{j}=1]=\sum_ {j=1}^{n}\left(1-\frac{a_{1}}{n}\right)\cdot\frac{n-j}{n-1}=\left(\frac{n-a_{1 }}{n}\right)\cdot\frac{1}{n-1}\cdot{n\choose 2}=\frac{1}{2}(n-a_{1}).\]
We have already noted that for every fixed conjugacy class \(C\), excedance and anti-excedance are equidistributed on \(C\). For the weak excedance statistic \(\widetilde{\operatorname{exc}}(\omega)\), by definition, the only change in the above argument is that \(\Pr_{\lambda}[\widetilde{I}_{j}(\omega)=1|\omega\in\Omega_{1}]=1\) where \(\widetilde{I}_{j}\) is the weak excedance indicator function. Hence
\[\Pr_{\lambda}[\widetilde{I}_{j}(\omega)=1]=\Pr_{\lambda}[I_{j}(\omega)=1]+ \frac{a_{1}}{n}, \tag{5.1}\]
and
\[\mathbb{E}_{\lambda}[\widetilde{\operatorname{exc}}]=\mathbb{E}_{\lambda}[ \operatorname{exc}]+a_{1}=\frac{1}{2}(n+a_{1}).\qed\]
**Corollary 5.2**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\). Then the expected values of \(\operatorname{exc},\widetilde{\operatorname{exc}}\) and \(\operatorname{aexc}\) are independent of \(a_{2},\ldots,a_{n}\). In particular, when \(a_{1}=0\), we have that \(\mathbb{E}_{\lambda}[\operatorname{exc}]=\mathbb{E}_{\lambda}[\operatorname{ aexc}]=\mathbb{E}_{\lambda}[\widetilde{\operatorname{exc}}]=\frac{n}{2}\)._
### Cyclic double ascents and cyclic valleys
Several recent papers [2, 3] consider statistics derived from the excedance statistic. In [2], the following statistics are defined for \(\omega\in S_{n}\). The element \(i\in[n]\) is a
1. _cyclic valley_ of \(\omega\) if \(\omega^{-1}(i)>i<\omega(i)\);
2. _cyclic peak_ of \(\omega\) if \(\omega^{-1}(i)<i>\omega(i)\);
3. _cyclic double ascent_ of \(\omega\) if \(\omega^{-1}(i)<i<\omega(i)\); and
4. _cyclic double descent_ of \(\omega\) if \(\omega^{-1}(i)>i>\omega(i)\).
A cyclic double ascent (respectively, cyclic double descent) coincides with the _linked excedance_ (respectively, _linked anti-excedance_) defined in [3]. We follow the notation of [2], and write \(\operatorname{cval}(\omega)\) (respectively, \(\operatorname{cpk}(\omega)\)) for the number of cyclic valleys (respectively, cyclic peaks) of \(\omega\). Also write \(\operatorname{Cval}(\omega)\) (respectively, \(\operatorname{Cpk}(\omega)\)) for the _set_ of cyclic valleys (respectively, cyclic peaks) of \(\omega\). Clearly \(i\) is a cyclic valley of \(\omega\) if and only if either \(i\) is the smaller letter in a 2-cycle, or the cycle of \(\omega\) containing \(i\) has length at least 3 and is of the form \((\ldots j\,i\,k\ldots)\) with \(j>i<k\). Let \(\rho\) be the reversing involution defined by \(\rho(i)=n+1-i\). Since the corresponding cycle of \(\rho\,\omega\rho^{-1}\) is \((\ldots,n+1-j,\,n+1-i,\,n+1-k,\ldots)\), it follows that
\[i\in\{1,\ldots,n-1\}\text{ is a cyclic valley of }\omega\iff n+1-i\in\{2, \ldots,n\}\text{ is a cyclic peak of }\rho\,\omega\rho^{-1},\]
and hence cyclic valleys and cyclic peaks are equidistributed over a fixed conjugacy class. The same argument shows that cyclic double descents and cyclic double ascents are equidistributed over a fixed conjugacy class.
The number of cyclic double ascents (respectively cyclic double descents) in a permutation \(\omega\) is denoted \(\operatorname{cdasc}(\omega)\) (respectively, \(\operatorname{cddes}(\omega)\)). Also, the _set_ of cyclic double ascents (respectively cyclic double descents) in a permutation \(\omega\) is denoted \(\operatorname{Cdasc}(\omega)\) (respectively, \(\operatorname{Cddes}(\omega)\)).
Now observe that our methods apply to the statistics \(\operatorname{cdasc}(\omega)\), \(\operatorname{cval}(\omega)\) and \(\operatorname{cddes}(\omega)\), \(\operatorname{cpk}(\omega)\) as well. Let \(I_{j}\) be the indicator function for a cyclic double ascent at index \(j\) and decompose \(\operatorname{cdasc}(\omega)=\sum_{j=2}^{n-1}I_{j}(\omega)\). Let \(I_{j}^{v}\) be the indicator function for a cyclic valley at \(j\), and write \(\operatorname{cval}(\omega)=\sum_{j=1}^{n-1}I_{j}^{v}(\omega)\). Define the sets
\[\begin{split}\Omega_{1}^{j}&=\{\omega\in C_{\lambda }:\text{ $j$ is in a 1-cycle}\},\\ \Omega_{2}^{j}&=\{\omega\in C_{\lambda}:\text{ $j$ is in a 2-cycle}\},\\ \Omega_{3}^{j}&=\{\omega\in C_{\lambda}:\text{ $j$ is not in a 1- cycle or 2-cycle}\}.\end{split} \tag{5.2}\]
Similar arguments as before imply the following results. First, we have the analogue of Lemma 4.3.
**Lemma 5.3**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), fix \(j\in[n]\), and define \(\Omega_{k}=\Omega_{k}^{j}\) as in (5.2). Then_
1. \(\Pr_{\lambda}[\omega\in\Omega_{1}]=\frac{a_{1}}{n}\)_,_
2. \(\Pr_{\lambda}[\omega\in\Omega_{2}]=\frac{2a_{2}}{n}\)_, and_
3. \(\Pr_{\lambda}[\omega\in\Omega_{3}]=1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}.\)__
Proof.: The proof follows the same arguments as Lemma 4.3.
**Theorem 5.4**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\). Then_
1. \(\mathbb{E}_{\lambda}[\mathrm{cdasc}]=\frac{n-a_{1}-2a_{2}}{6}=\mathbb{E}_{ \lambda}[\mathrm{cddes}]\) _and_
2. \(\mathbb{E}_{\lambda}[\mathrm{cval}]=\frac{n-a_{1}+a_{2}}{3}=\mathbb{E}_{ \lambda}[\mathrm{cpk}].\)__
Proof.: Fix \(j\) and observe that if \(\omega\in\Omega_{1}^{j}\cup\Omega_{2}^{j}\), then \(j\) is not a cyclic double ascent of \(\omega\). Also, \(j\) is a cyclic valley of \(\omega\) only if \(\omega\in\Omega_{2}^{j}\cup\Omega_{3}^{j}\). Hence, by the Law of Total Probability, we have
\[\mathbb{E}_{\lambda}[I_{j}] =\sum_{k=1}^{3}\Pr_{\lambda}[\omega\in\Omega_{k}^{j}]\Pr_{ \lambda}[I_{j}(\omega)=1|\omega\in\Omega_{k}^{j}]=\Pr_{\lambda}[\omega\in \Omega_{3}^{j}]\Pr_{\lambda}[I_{j}(\omega)=1|\omega\in\Omega_{3}^{j}],\] \[\mathbb{E}_{\lambda}[I_{j}^{v}] =\sum_{k=1}^{3}\Pr_{\lambda}[\omega\in\Omega_{k}^{j}]\Pr_{ \lambda}[I_{j}^{v}(\omega)=1|\omega\in\Omega_{k}^{j}]\] \[=\Pr_{\lambda}[\omega\in\Omega_{3}^{j}]\Pr_{\lambda}[I_{j}^{v}( \omega)=1|\omega\in\Omega_{3}^{j}]+\Pr_{\lambda}[\omega\in\Omega_{2}^{j}]\Pr _{\lambda}[I_{j}^{v}(\omega)=1|\omega\in\Omega_{2}^{j}].\]
If we fix distinct \(i,k\in[n]\setminus\{j\}\), then conjugation by appropriate elements implies \(\Pr_{\lambda}[\omega(i)=j|\omega\in\Omega_{2}^{j}]=\Pr_{\lambda}[\omega(i)=j |\omega\in\Omega_{3}^{j}]=\frac{1}{n-1}\) and \(\Pr_{\lambda}[\omega(i)=j\wedge\omega(j)=k|\omega\in\Omega_{3}^{j}]=\frac{1}{( n-1)(n-2)}.\)
Now let \(i,j,k\) be elements appearing in succession in a cycle of length at least \(3\). A cyclic double ascent at \(j\) occurs if and only if \(i<j<k\), and hence there are a total of \((j-1)(n-j)\) choices of \((i,k)\) that result in a cyclic double ascent at \(j\neq 1,n\). A cyclic valley occurs if \(i>j<k\), and thus there are a total of \((n-j)(n-j-1)\) choices of \((i,k)\) that result in a cyclic valley at \(j\neq n\). However, a cyclic valley also occurs at \(j\) when \((i,j)\) is a \(2\)-cycle with \(i>j\). There are \(n-j\) choices for \(i\) in this case.
Combined with the preceding lemma, we see that
\[\mathbb{E}_{\lambda}[I_{j}] =\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}\right)\cdot\frac{(j-1)(n- j)}{(n-1)(n-2)},\] \[\mathbb{E}_{\lambda}[I_{j}^{v}] =\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}\right)\cdot\frac{(n-j-1 )(n-j)}{(n-1)(n-2)}+\frac{2a_{2}}{n}\cdot\frac{n-j}{n-1}.\]
Summing over all \(j\) gives
\[\mathbb{E}_{\lambda}[\mathrm{cdasc}] =\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}\right)\cdot\frac{1}{(n- 1)(n-2)}\cdot\sum_{j=2}^{n-1}(j-1)(n-j)=\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{ n}\right)\cdot\frac{n}{6},\] \[\mathbb{E}_{\lambda}[\mathrm{cval}] =\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}\right)\cdot\frac{1}{(n- 1)(n-2)}\cdot\sum_{j=1}^{n-1}(n-j-1)(n-j)+\frac{2a_{2}}{n(n-1)}\cdot\sum_{j=1 }^{n-1}(n-j)\] \[=\left(1-\frac{a_{1}}{n}-\frac{2a_{2}}{n}\right)\cdot\frac{n}{3}+ a_{2},\]
using the facts that \(\sum_{j=2}^{n-1}(j-1)(n-j)=\binom{n}{3}\) and \(\sum_{j=1}^{n-1}(n-j-1)(n-j)=2\binom{n}{3}.\) This finishes the proof.
These results are consistent with the identity \(\mathrm{exc}(\omega)=\mathrm{cval}(\omega)+\mathrm{cdasc}(\omega)\), which holds pointwise for every \(\omega\in S_{n}\).
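As a numerical sanity check of Theorems 5.1 and 5.4 and of this identity, the following minimal Python sketch (illustrative only; helper names ours) verifies all three expectations, and the identity pointwise, for \(\lambda=(1,2,3)\vdash 6\).

```python
# Brute-force check of Theorems 5.1 and 5.4 on C_lambda for lambda = (1,2,3) |- 6.
from fractions import Fraction
from itertools import permutations

def cycle_lengths(p):
    seen, out = set(), []
    for i in range(1, len(p) + 1):
        c, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j - 1]
            c += 1
        if c:
            out.append(c)
    return sorted(out)

n, lam = 6, [1, 2, 3]             # a_1 = 1, a_2 = 1
a1, a2 = lam.count(1), lam.count(2)
C = [p for p in permutations(range(1, n + 1)) if cycle_lengths(p) == sorted(lam)]

tot_exc = tot_cdasc = tot_cval = 0
for p in C:
    q = [0] * n                   # one-line notation of the inverse permutation
    for i, v in enumerate(p, start=1):
        q[v - 1] = i
    e = sum(1 for i in range(1, n + 1) if p[i - 1] > i)
    da = sum(1 for i in range(1, n + 1) if q[i - 1] < i < p[i - 1])
    va = sum(1 for i in range(1, n + 1) if q[i - 1] > i < p[i - 1])
    assert e == da + va           # exc = cdasc + cval holds pointwise
    tot_exc, tot_cdasc, tot_cval = tot_exc + e, tot_cdasc + da, tot_cval + va

N = len(C)
assert Fraction(tot_exc, N) == Fraction(n - a1, 2)             # Theorem 5.1
assert Fraction(tot_cdasc, N) == Fraction(n - a1 - 2 * a2, 6)  # Theorem 5.4(1)
assert Fraction(tot_cval, N) == Fraction(n - a1 + a2, 3)       # Theorem 5.4(2)
```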
## 6 First moments on \(S_{n}\) from conjugacy classes
In this section, we consider connections between the first moments on conjugacy classes and those on all of \(S_{n}\). Observe that the expected value of a statistic \(X\) on individual conjugacy classes is related to its expected value on the entire symmetric group by the formula
\[\mathbb{E}_{S_{n}}[X]=\sum_{\lambda\vdash n}z_{\lambda}^{-1}\mathbb{E}_{ \lambda}[X], \tag{6.1}\]
since the order of the conjugacy class indexed by \(\lambda\) is \(n!/z_{\lambda}\).
In this section we analyse Equation (6.1) more carefully. The following identities will be useful.
**Lemma 6.1**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\). The following identities hold:_
1. \(\sum_{\lambda\vdash n}z_{\lambda}^{-1}=1\)_,_
2. \(\sum_{\lambda\vdash n}z_{\lambda}^{-1}a_{1}=1\)_,_
3. \(\sum_{\lambda\vdash n}z_{\lambda}^{-1}a_{1}^{2}=2\)_, and_
4. \(\sum_{\lambda\vdash n}z_{\lambda}^{-1}a_{2}=1/2\)_._
Proof.:
1. This is the class equation for \(S_{n}\)[10], a consequence of the fact that \(n!=\sum_{\lambda\vdash n}|C_{\lambda}|\).
2. This is Burnside's lemma for the symmetric group [10, 11].
3. Here we consider \(S_{n}\) acting on \(2\)-subsets of \([n]\). There is only one orbit, and a permutation fixes a \(2\)-subset \(\{i,j\}\) if and only if either \(i,j\) are both fixed points, or \(i,j\) form a \(2\)-cycle. Hence the number of \(2\)-subsets fixed by a permutation of cycle type \(\lambda\) with \(a_{k}\) parts of length \(k\) is \(\binom{a_{1}}{2}+a_{2}\), and Burnside's lemma gives \[\sum_{\lambda\vdash n}z_{\lambda}^{-1}\left(\binom{a_{1}}{2}+a_{2}\right)=1. \tag{6.2}\] Similarly, by applying Burnside's lemma to the action of \(S_{n}\) on the set \([n]\times[n]\) of ordered pairs \((i,j)\), which has two orbits \(\{(i,i):1\leq i\leq n\}\) and \(\{(i,j):1\leq i,j\leq n,i\neq j\}\), and counting fixed points, we obtain \[\sum_{\lambda\vdash n}z_{\lambda}^{-1}\left(a_{1}+2\binom{a_{1}}{2}\right)=2=\sum_{\lambda\vdash n}z_{\lambda}^{-1}a_{1}^{2}. \tag{6.3}\]
4. Combining (6.3) with the second identity gives \[\sum_{\lambda\vdash n}2z_{\lambda}^{-1}\binom{a_{1}}{2}=1. \tag{6.4}\] The last identity now follows from (6.2) and (6.4).
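These identities can also be double-checked computationally by summing over all partitions of small \(n\). A minimal, purely illustrative sketch (the partition generator and \(z_{\lambda}\) helper are ours):

```python
# Check of Lemma 6.1: sums over all partitions of n, weighted by 1/z_lambda.
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    # Yield partitions of n as weakly decreasing tuples.
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z(lam):
    # z_lambda = prod_k k^{a_k} * a_k! for lam with a_k parts equal to k.
    out = 1
    for k in set(lam):
        a = lam.count(k)
        out *= k ** a * factorial(a)
    return out

for n in range(2, 10):
    sums = [Fraction(0)] * 4
    for lam in partitions(n):
        a1, a2 = lam.count(1), lam.count(2)
        w = Fraction(1, z(lam))
        sums[0] += w
        sums[1] += w * a1
        sums[2] += w * a1 * a1
        sums[3] += w * a2
    assert sums == [1, 1, 2, Fraction(1, 2)]
```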
It is now easy to compute the first moments of the preceding statistics over the whole symmetric group; see Table 1 for an overview of our results, as well as a comparison to the literature. Note that we are able to obtain the first moment over the whole symmetric group without knowledge of the generating function for the statistic. Recall the definitions of \(\alpha_{n}(X)=\sum_{1\leq i<j\leq n}\text{wt}(i,j)\) and \(\beta_{n}(X)=\sum_{1\leq i<j\leq n}(j-i-1)\text{wt}(i,j)\) for a weighted inversion statistic \(X=\sum_{1\leq i<j\leq n}\text{wt}(i,j)I_{i,j}\) from Theorem 4.8.
**Proposition 6.2**.: _Let \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}})\vdash n\), and let \(X=\sum_{1\leq i<j\leq n}\text{wt}(i,j)I_{i,j}\) be a weighted inversion statistic. Then_
1. \(\mathbb{E}_{S_{n}}[X]=\frac{\alpha_{n}(X)}{2}\)_, and_
2. \(\mathbb{E}_{\lambda}[X]=\mathbb{E}_{S_{n}}[X]+f_{n}^{X}(a_{1},a_{2}),\) _where_ \(f_{n}^{X}\) _is a polynomial of degree at most_ \(2\) _in_ \(a_{1}\) _and_ \(a_{2}\) _such that_ \[\sum_{\lambda\vdash n}z_{\lambda}^{-1}f_{n}^{X}(a_{1},a_{2})=0.\]
Proof.: Note first that \(\Pr_{S_{n}}[I_{i,j}=1]=1/2\) for \(1\leq i<j\leq n\). The decomposition \(X=\sum_{1\leq i<j\leq n}\text{wt}(i,j)I_{i,j}\) implies
\[\mathbb{E}_{S_{n}}[X]=\frac{1}{2}\sum_{1\leq i<j\leq n}\text{wt}(i,j).\]
Part (2) could now be deduced directly as well, but it is instructive to examine the different contributions to our expression for \(\mathbb{E}_{\lambda}[X]\) more carefully. Since \(\beta_{n}(X)=\sum_{1\leq i<j\leq n}(j-i-1)\text{wt}(i,j)\), from Theorem 4.8 we obtain
\[\mathbb{E}_{\lambda}[X] =\left(\frac{1}{2}+\frac{a_{2}}{n(n-1)}-\frac{a_{1}(a_{1}-1)}{2n( n-1)}\right)\alpha_{n}(X)+\left(\frac{n-na_{1}-a_{1}+a_{1}^{2}-2a_{2}}{n(n-1)(n-2)} \right)\beta_{n}(X)\] \[=\frac{\alpha_{n}(X)}{2}+\frac{1}{n(n-1)}\left(a_{2}-\binom{a_{1} }{2}\right)\alpha_{n}(X)+\frac{1}{n(n-1)(n-2)}\left(n(1-a_{1})+2\binom{a_{1}} {2}-2a_{2}\right)\beta_{n}(X).\]
The polynomial \(f_{n}^{X}\) is given by
\[f_{n}^{X}(a_{1},a_{2})=\frac{1}{n(n-1)}\left(a_{2}-\binom{a_{1}}{2}\right)\alpha_{n}(X)+\frac{1}{n(n-1)(n-2)}\left(n(1-a_{1})+2\binom{a_{1}}{2}-2a_{2}\right)\beta_{n}(X).\]
Now Lemma 6.1 guarantees that the two sums
\[\sum_{\lambda\vdash n}z_{\lambda}^{-1}(1-a_{1}),\ \ \sum_{\lambda\vdash n}z_{ \lambda}^{-1}\left(a_{2}-\binom{a_{1}}{2}\right)\]
vanish identically. Since \(\alpha_{n}(X)\) and \(\beta_{n}(X)\) are independent of \(\lambda\), we obtain
\[\sum_{\lambda\vdash n}z_{\lambda}^{-1}f_{n}^{X}(a_{1},a_{2})=0\quad\text{and}\quad\sum_{\lambda\vdash n}z_{\lambda}^{-1}\mathbb{E}_{\lambda}[X]=\frac{\alpha_{n}(X)}{2},\]
as claimed.
Now let \(Y\) be any of the cyclic permutation statistics considered in Section 5. Arguments analogous to the above give us the following.
**Proposition 6.3**.: _For any of the cyclic statistics \(Y\) from Section 5, the first moment on the conjugacy class \(C_{\lambda}\) for each \(\lambda=(1^{a_{1}},2^{a_{2}},\ldots)\vdash n\) is of the form_
\[\mathbb{E}_{\lambda}[Y]=\mathbb{E}_{S_{n}}[Y]+g_{n}(Y),\]
_where \(g_{n}(Y)\) is some polynomial of degree at most \(1\) in \(a_{1}\) and \(a_{2}\) such that \(\sum_{\lambda\vdash n}z_{\lambda}^{-1}g_{n}(Y)=0\). We have_
1. \(\mathbb{E}_{S_{n}}[\mathrm{exc}]=\frac{n-1}{2}=\mathbb{E}_{S_{n}}[\mathrm{aexc}]\)_,_ \(\mathbb{E}_{S_{n}}[\widetilde{\mathrm{exc}}]=\frac{n+1}{2}\)_,_
2. \(\mathbb{E}_{S_{n}}[\mathrm{cdasc}]=\frac{n-2}{6}=\mathbb{E}_{S_{n}}[\mathrm{cddes}]\)_, and_
3. \(\mathbb{E}_{S_{n}}[\mathrm{cval}]=\frac{2n-1}{6}=\mathbb{E}_{S_{n}}[\mathrm{cpk}]\)_._
Proof.: These follow as in Proposition 6.2, from Theorem 5.1 and Theorem 5.4, using Lemma 6.1.
We conclude this section by noting that we can now also compute the variance of the statistic exc, thanks to the following generating function derived in [10].
Recall that \(C_{\lambda}\) denotes the conjugacy class in \(S_{n}\) indexed by the partition \(\lambda\).
**Proposition 6.4**.: _[_10_, Corollary 7]_ _Let \(\lambda\) be a partition of \(n\) with \(a_{1}\) parts of size 1. Then_
\[\sum_{w\in C_{\lambda}}t^{\mathrm{exc}(w)}=\sum_{i=0}^{\lfloor(n-a_{1})/2 \rfloor}\gamma_{i}t^{i}(1+t)^{n-a_{1}-2i},\]
_where \(\gamma_{i}=2^{-n+a_{1}+2i}|\{w\in C_{\lambda}:\mathrm{cval}(w)=i\}|\)._
From this we can compute, essentially by differentiating twice to get the generating function for \(\operatorname{exc}^{2}\), the second moment over the conjugacy class \(C_{\lambda}\):
\[\mathbb{E}_{\lambda}[\operatorname{exc}^{2}]=\frac{(n-a_{1})(n-a_{1}+1)}{4}- \frac{1}{2}\mathbb{E}_{\lambda}[\operatorname{cval}]=\frac{(n-a_{1})^{2}}{4}+ \frac{n-a_{1}}{4}-\frac{n-a_{1}+a_{2}}{6},\]
and therefore the variance
\[\operatorname{Var}_{\lambda}[\operatorname{exc}]=\mathbb{E}_{\lambda}[ \operatorname{exc}^{2}]-\frac{(n-a_{1})^{2}}{4}=\frac{n-a_{1}-2a_{2}}{12}.\]
Hence we obtain, using Lemma 6.1, the second moment over all of \(S_{n}\),

\[\mathbb{E}_{S_{n}}[\operatorname{exc}^{2}]=\frac{3n^{2}-5n+4}{12},\]

and the variance over all of \(S_{n}\),

\[\operatorname{Var}_{S_{n}}[\operatorname{exc}]=\mathbb{E}_{S_{n}}[\operatorname{exc}^{2}]-\left(\frac{n-1}{2}\right)^{2}=\frac{n+1}{12}.\]

Note that this is not the \(z_{\lambda}^{-1}\)-weighted sum of the class variances, which equals \(\frac{n-2}{12}\); by the law of total variance, the fluctuation of the class means \(\frac{n-a_{1}}{2}\) contributes the additional \(\frac{1}{4}\operatorname{Var}[a_{1}]=\frac{1}{4}\), since \(\operatorname{Var}[a_{1}]=1\) by Lemma 6.1.
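Both displays are easy to confirm by brute force over all of \(S_{n}\) for small \(n\); a minimal, purely illustrative Python sketch:

```python
# Brute-force check of E[exc^2] and Var[exc] over all of S_n for small n.
from fractions import Fraction
from itertools import permutations

for n in range(2, 8):
    vals = [sum(1 for i in range(1, n + 1) if p[i - 1] > i)
            for p in permutations(range(1, n + 1))]
    N = len(vals)
    m1 = Fraction(sum(vals), N)
    m2 = Fraction(sum(v * v for v in vals), N)
    assert m1 == Fraction(n - 1, 2)
    assert m2 == Fraction(3 * n * n - 5 * n + 4, 12)
    assert m2 - m1 * m1 == Fraction(n + 1, 12)
```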
## 7 Permutation constraints and higher moments
In this section, we examine permutation statistics that track permutations respecting a specified partial function. Somewhat surprisingly, this notion captures the entire class of permutation statistics. This formulation allows us to extend a technique of Fulman [15, Theorem 3] to establish an independence result for the \(k\)th moment (\(k\geq 1\)) across individual conjugacy classes of arbitrary permutation statistics, provided each part of the indexing partition is sufficiently large. Fulman [15, Corollary 5] established the analogous result for \(d(\omega)\) and maj. In the symmetric case, we also show that these higher moments are polynomials in \(n\).
We first start by defining the notion of a permutation constraint statistic.
**Definition 7.1**.: Suppose we have a set of pairs \(K:=\{(i_{1},j_{1}),(i_{2},j_{2}),\ldots,(i_{\ell},j_{\ell})\}\) with each \(i_{t}\in[n],j_{t}\in[n]\). We call this a _(permutation) constraint_ and say it has _size_\(m\) if \(K\) contains \(m\) pairs. Note that since \(K\) is a set, repeated pairs are not allowed. We say \(\omega\in S_{n}\)_satisfies_\(K\) if for each \((i_{t},j_{t})\in K\), \(\omega(i_{t})=j_{t}\). We say that \(K\) is _well-defined_ if all the \(i_{t}\in[n]\) are distinct and all the \(j_{t}\in[n]\) are distinct; note that some \(i_{t}\) may be equal to some \(j_{s}\). Define the _support_ of a constraint \(K\) to be the set of all (distinct) \(i_{t}\) and \(j_{s}\).
Given a constraint \(K\), construct the directed graph \(G(K)\) on vertex set \([n]\) by drawing an edge from \(i_{t}\) to \(j_{t}\) for each pair \((i_{t},j_{t})\in K\). We say that \(K\) is _acyclic of size_ \(m\) if \(K\) is well-defined and \(G(K)\) is acyclic with \(m\) edges. Note that the graph constructed from an acyclic constraint is a disjoint union of directed paths.
**Example 7.2**.: Consider the constraint \(K=\{(1,2),(2,3)\}\) of size \(2\). The permutation \((1234)\) satisfies \(K\), as \((1234)\) maps \(1\mapsto 2\) (specified by \((1,2)\in K\)) and \(2\mapsto 3\) (specified by \((2,3)\in K\)). Intuitively, permutations that satisfy \(K\) contain \(1,2,3\) consecutively within the _same_ cycle.
**Example 7.3**.: Consider the acyclic constraint \(K=\{(1,2),(2,3),(3,4)\}\) of size \(3\). The permutation \((1234)\) satisfies \(K\). Now the graph arising from \(K\) as in Definition 7.1 is acyclic; in particular, observe that the pair \((4,1)\notin K\). Nonetheless, \((1234)\) is a closed cycle. Thus, there may be cycles in the support of a constraint \(K\), even when \(K\) is itself acyclic.
Permutation constraints induce statistics on \(S_{n}\), which we formalize as follows.
**Definition 7.4**.: Let \(\mathcal{C}\) be a set of permutation constraints. The _size_ of \(\mathcal{C}\), denoted \(\operatorname{size}(\mathcal{C})\), is the maximum of the sizes of the constraints in \(\mathcal{C}\). Note that while the size of a single constraint \(K\in\mathcal{C}\) is simply its size as a set, this is not true for a set of constraints \(\mathcal{C}\).
**Definition 7.5**.: A _weighted constraint statistic_\(X\) is any statistic which can be expressed in the form \(\sum_{K\in\mathcal{C}}\operatorname{wt}(K)I_{K}\) where \(\mathcal{C}\) is a set of constraints, \(I_{K}\) is the indicator function that a permutation satisfies the constraint \(K\), and weights \(\operatorname{wt}(K)\in\mathbb{R}\setminus\{0\}\) for all \(K\). In this case, we say \(X\) is _realizable_ over \(\mathcal{C}\). If \(X\) can be
expressed in this form with \(\operatorname{wt}(K)=1\) for all \(K\in\mathcal{C}\), then \(X\) is the _unweighted constraint statistic_ induced by \(\mathcal{C}\).
Note that in general, the decomposition \(\sum_{K\in\mathcal{C}}\operatorname{wt}(K)I_{K}\) is not unique. The _size_ of a weighted constraint statistic \(X\) is defined as
\[\operatorname{size}(X)=\min\left\{\operatorname{size}(\mathcal{C})\,\middle| \,X=\sum_{K\in\mathcal{C}}\operatorname{wt}(K)I_{K}\text{ for }\operatorname{wt}(K)\in \mathbb{R}\setminus\{0\}\right\}.\]
**Remark 7.6**.: It turns out that the class of weighted constraint statistics actually captures all permutation statistics. Fix \(n\geq 1\). For a permutation \(\omega\in S_{n}\), consider its graph \(\mathcal{G}_{\omega}=\{(i,\omega(i)):i\in[n]\}\). The indicator function of the constraint \(\mathcal{G}_{\omega}\) is precisely the indicator function of the single permutation \(\omega\). The class of weighted constraint statistics therefore includes the indicator function of any single permutation, as well as all \(\mathbb{R}\)-linear combinations of these, and this in turn captures the entire algebra of functions \(S_{n}\to\mathbb{R}\).
In this section, we will establish independence results for higher moments of permutation statistics on individual conjugacy classes, provided all parts of the indexing partition are sufficiently large compared to the size of the statistic. Thus, when investigating an individual permutation statistic \(X\), it is of interest to exhibit _small_ constraint sets that realize \(X\).
**Remark 7.7**.: Any unweighted constraint statistic \(X\) can also be considered as a weighted constraint statistic. In general, the size of \(X\) as an unweighted permutation constraint statistic may be different than when viewing it as a weighted constraint statistic, though we only consider the notion of size for weighted constraint statistics.
The above definitions are a little abstract and very general, so we first give a few familiar examples.
**Example 7.8**.: The number of fixed points is a constraint statistic of size \(1\). To see this, let \(\mathcal{C}_{\operatorname{fix}}\) be the set of all constraints \(\{\{(i,i)\}:i=1,\ldots,n\}\). Then we have
\[\operatorname{Fix}(\omega)=\sum_{K\in\mathcal{C}_{\operatorname{fix}}}I_{K}( \omega).\]
**Example 7.9**.: Let \(\mathcal{C}_{i,j}\) be the set of all constraints \(\{\{(i,a),(j,b)\}:\,a>b\}\). Then we may express \(\operatorname{des},\operatorname{maj}\), and \(\operatorname{inv}\) in terms of these, meaning that these are weighted constraint statistics of size at most \(2\) (and indeed, des and \(\operatorname{inv}\) are unweighted). In particular define the following:
* \(\mathcal{C}_{\operatorname{inv}}=\cup_{1\leq i<j\leq n}\mathcal{C}_{i,j}\), and
* \(\mathcal{C}_{\operatorname{des}}=\cup_{1\leq i\leq n-1}\mathcal{C}_{i,i+1}\).
Then setting \(\operatorname{wt}(\{(i,a),(j,b)\}):=i\), we obtain
\[\operatorname{maj}(\omega)=\sum_{K\in\mathcal{C}_{\operatorname{des}}} \operatorname{wt}(K)I_{K}(\omega).\]
Similar formulas exist for \(\operatorname{des}\) and \(\operatorname{inv}\). We can also obtain more general statistics such as cyclic descents. For example,
\[\mathcal{C}_{\operatorname{cdes}}=\mathcal{C}_{\operatorname{des}}\cup \mathcal{C}_{n,1}.\]
Then in a similar manner to before we have that
\[\operatorname{cdes}(\omega)=\sum_{K\in\mathcal{C}_{\operatorname{cdes}}}I_{K}( \omega).\]
Note that these statistics actually have size equal to \(2\). This fact follows from our work on first moments, combined with Corollary 7.17 below.
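To make this concrete, the following minimal Python sketch (all names ours) materializes \(\mathcal{C}_{\operatorname{des}}\) with weights \(\operatorname{wt}(\{(i,a),(i+1,b)\})=i\) and checks that the resulting weighted constraint statistic agrees with \(\operatorname{maj}\) computed directly, over all of \(S_{5}\).

```python
# maj as a weighted constraint statistic of size 2 (Example 7.9), n = 5.
from itertools import permutations

n = 5
# Each constraint is a pair {(i, a), (i+1, b)} with a > b, carrying weight i.
weighted = [(((i, a), (i + 1, b)), i)
            for i in range(1, n)
            for a in range(1, n + 1)
            for b in range(1, n + 1) if a > b]

def satisfies(p, K):
    # p satisfies K iff p(i) = j for every pair (i, j) in K.
    return all(p[i - 1] == j for (i, j) in K)

for p in permutations(range(1, n + 1)):
    via_constraints = sum(w for (K, w) in weighted if satisfies(p, K))
    maj = sum(i + 1 for i in range(n - 1) if p[i] > p[i + 1])
    assert via_constraints == maj
```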
We give another example of a weighted constraint statistic that is not a weighted inversion statistic: excedance.
**Example 7.10**.: Recall that an excedance is defined as an \(i\in[n]\) with \(\omega(i)>i\). We can define the corresponding set of constraints as follows:
\[\mathcal{C}_{\mathrm{exc}}=\cup_{1\leq i<j\leq n}\{\{(i,j)\}\}.\]
Then we have that
\[\mathrm{exc}(\omega)=\sum_{K\in\mathcal{C}_{\mathrm{exc}}}I_{K}(\omega).\]
**Remark 7.11**.: Note that weighted constraint statistics, even those that are realizable over constraints of size \(2\), are more general than weighted inversion statistics. Furthermore, permutation statistics realizable over constraints of size \(3\) already capture all \(14\) of the statistics from [1]. For instance, we will see below that the number of inversions between excedances where the greater excedance is _linked_ (denoted \(\mathrm{ile}\)) is realizable over symmetric constraints of size \(3\).
**Example 7.12**.: The number of inversions between excedances where the greater excedance is _linked_ is defined [1] by
\[\mathrm{ile}(\omega):=\#\{(i,j)\in[n]\times[n]:i<j<\omega(j)<\omega(i)\text{ and }\omega^{-1}(j)<j\}.\]
(Recall from Section 5.1 that the linked excedances of [3] coincide with the cyclic double ascents of [2].)
We are therefore counting occurrences of \(i<j\) with \(\omega^{-1}(j)<j<\omega(j)<\omega(i)\). This means we can define the following set of all constraints:
\[\mathcal{C}_{\mathrm{ile}}:=\cup_{1\leq i<j\leq n}\{\{(i,a),(j,b),(k,j)\}:k<j< b<a\}.\]
In a similar manner to before this gives
\[\mathrm{ile}(\omega)=\sum_{K\in\mathcal{C}_{\mathrm{ile}}}I_{K}(\omega).\]
**Example 7.13**.: [15] The _Denert_ statistic is defined by
\[\mathrm{den}(\omega) :=\#\{1\leq i<j\leq n:\omega(j)<\omega(i)\leq j\}\] \[+\#\{1\leq i<j\leq n:\omega(i)\leq j<\omega(j)\}\] \[+\#\{1\leq i<j\leq n:j<\omega(j)<\omega(i)\}\]
The statistic \(\mathrm{den}\) has the property that the joint distributions of the pairs \((\mathrm{exc},\mathrm{den})\) and \((\mathrm{des},\mathrm{maj})\) coincide. Such pairs are called Euler-Mahonian in the literature [15].
Observe that \(\mathrm{den}\) may be realized as an unweighted constraint statistic induced by a constraint set of size \(2\), since we have
\[\mathrm{den}(\omega)=\sum_{K\in\mathcal{C}_{\mathrm{den}}}I_{K}(\omega)\]
for the set of constraints
\[\begin{aligned}\mathcal{C}_{\mathrm{den}}:=&\bigcup_{1\leq i<j\leq n}\{\{(i,a),(j,b)\}:b<a\leq j\}\\ \cup&\bigcup_{1\leq i<j\leq n}\{\{(i,a),(j,b)\}:a\leq j<b\}\\ \cup&\bigcup_{1\leq i<j\leq n}\{\{(i,a),(j,b)\}:j<b<a\}.\end{aligned}\]
We now give a relatively simple observation.
**Proposition 7.14**.: _Let \(K\) be a well-defined constraint of size \(m\). Then we have:_
\[\Pr_{S_{n}}[\omega\text{ satisfies }K]=\frac{1}{n(n-1)(n-2)\ldots(n-m+1)}.\]
Proof.: Let \(K:=\{(i_{1},j_{1}),(i_{2},j_{2}),\ldots,(i_{m},j_{m})\}\), and suppose \(\omega\) satisfies \(K\). This means that we have \(\omega(i_{t})=j_{t}\) for \(t=1,\ldots,m\), which is possible since the constraint is well-defined. The number of permutations which satisfy these \(m\) values is just the number of permutations on the remaining \(n-m\) symbols, which is \((n-m)!\). Therefore the probability of a random permutation satisfying \(K\) is \((n-m)!/n!\) as required.
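A minimal illustrative check of Proposition 7.14 in \(S_{6}\) (the specific constraint below is our own example):

```python
# Check of Proposition 7.14: a well-defined constraint of size m is satisfied
# by exactly a 1/(n(n-1)...(n-m+1)) fraction of S_n.
from fractions import Fraction
from itertools import permutations
from math import factorial

n = 6
K = [(1, 3), (3, 4), (5, 2)]      # well-defined: distinct inputs, distinct outputs
count = sum(1 for p in permutations(range(1, n + 1))
            if all(p[i - 1] == j for (i, j) in K))
assert Fraction(count, factorial(n)) == Fraction(1, n * (n - 1) * (n - 2))
```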
We are interested in the behavior of certain constraint statistics on fixed conjugacy classes. The key result of this section is the following, which says that for \(\lambda\) with all parts "large," the probability of a permutation in \(C_{\lambda}\) satisfying a constraint depends only on the size of the constraint and on whether it is acyclic.
**Lemma 7.15**.: _Let \(\lambda\) have all parts of size at least \(m+1\), and let \(K\) be a constraint of size \(m\). If \(K\) is acyclic then we have_
\[\Pr_{\lambda}[\omega\text{ satisfies }K]=\frac{1}{(n-1)(n-2)\ldots(n-m)}.\]
_If \(K\) is not acyclic then we have_
\[\Pr_{\lambda}[\omega\text{ satisfies }K]=0.\]
Proof.: We first note that if \(K\) is not acyclic, then in order for \(\omega\) to satisfy \(K\), \(\omega\) must contain a cycle induced by constraints in \(K\). Since \(K\) has size \(m\), then this cycle is of length at most \(m\). However we assumed \(\omega\) is of cycle type \(\lambda\) with all cycles of length at least \(m+1\), so this is not possible.
Now suppose \(K\) is acyclic. We fix \(n\) and then prove this lemma by induction on \(m\). For \(m=1\), we will show that
\[\Pr_{\lambda}[\omega(i_{1})=j_{1}]=\frac{1}{n-1}.\]
This follows from the fact that conjugating by \((j_{1}\,k)\) for any \(k\neq i_{1},j_{1}\) maps from the set of \(\omega\) with \(\omega(i_{1})=j_{1}\) to those with \(\omega(i_{1})=k\). Therefore this probability is the same for each \(j_{1}\neq i_{1}\), and is zero for \(i_{1}=j_{1}\) since \(\lambda\) is fixed point free. Therefore the probability is \(1/(n-1)\) as required.
Assume the statement is true for \(m-1\). Let \(A=\{(i_{1},j_{1}),\ldots,(i_{m},j_{m})\}\) be an acyclic constraint of size \(m\). Let \(\lambda\vdash n\) have all parts of size at least \(m+1\), and label the cycles of any permutation in \(C_{\lambda}\) by \(c_{1},\ldots,c_{t}\). By Definition 7.1, we have
\[\begin{aligned}\Pr_{\lambda}[\omega\text{ satisfies }A]&=\Pr_{\lambda}\left[\bigwedge_{\ell=1}^{m}\omega(i_{\ell})=j_{\ell}\right]\\ &=\Pr_{\lambda}\left[\bigwedge_{\ell=1}^{m-1}\omega(i_{\ell})=j_{\ell}\,\middle|\,\omega(i_{m})=j_{m}\right]\cdot\Pr_{\lambda}[\omega(i_{m})=j_{m}]\\ &=\frac{1}{n-1}\sum_{h=1}^{t}\Pr_{\lambda}\left[\left(\bigwedge_{\ell=1}^{m-1}\omega(i_{\ell})=j_{\ell}\right)\wedge i_{m}\in c_{h}\,\middle|\,\omega(i_{m})=j_{m}\right]\\ &=\frac{1}{n-1}\sum_{h=1}^{t}\Pr_{\lambda}\left[\bigwedge_{\ell=1}^{m-1}\omega(i_{\ell})=j_{\ell}\,\middle|\,i_{m}\in c_{h}\wedge\omega(i_{m})=j_{m}\right]\cdot\Pr_{\lambda}[i_{m}\in c_{h}\mid\omega(i_{m})=j_{m}].\end{aligned}\]
Notice that \(A^{\prime}:=\{(i_{1},j_{1}),\ldots,(i_{m-1},j_{m-1})\}\) is an acyclic constraint of size \(m-1\). Let \(\lambda^{\prime}(h)\) be the partition obtained by reducing the size of the \(h^{\text{th}}\) part of \(\lambda\) by one. This is a partition of an \((n-1)\)-element set (though perhaps not \([n-1]\)) with all parts of size at least \(m\). It is then fairly straightforward to see that
\[\Pr_{\lambda}\left[\bigwedge_{\ell=1}^{m-1}\omega(i_{\ell})=j_{\ell}\,\middle|\,i_{m}\in c_{h}\wedge\omega(i_{m})=j_{m}\right]=\Pr_{\lambda^{\prime}(h)}[\omega\text{ satisfies }A^{\prime}]=\frac{1}{(n-2)(n-3)\ldots(n-m)},\]
where the last equality follows by the induction hypothesis. Note that the first factor in the denominator is \(n-2\), as the probability is computed in a symmetric group on \(n-1\) elements. Putting this all together gives
\[\Pr_{\lambda}[\omega\text{ satisfies }A] =\frac{1}{n-1}\sum_{h=1}^{t}\frac{1}{(n-2)(n-3)\ldots(n-m)}\frac {\lambda_{h}}{n}\] \[=\frac{1}{(n-1)(n-2)\ldots(n-m)}.\]
This completes the inductive step and the proof.
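Lemma 7.15 is likewise easy to confirm by enumeration. In the sketch below (helper names ours), \(\lambda=(3,4)\vdash 7\) has all parts of size at least \(3=m+1\) for constraints of size \(m=2\):

```python
# Check of Lemma 7.15 on C_lambda for lambda = (3,4) |- 7.
from fractions import Fraction
from itertools import permutations

def cycle_lengths(p):
    seen, out = set(), []
    for i in range(1, len(p) + 1):
        c, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j - 1]
            c += 1
        if c:
            out.append(c)
    return sorted(out)

n = 7
C = [p for p in permutations(range(1, n + 1)) if cycle_lengths(p) == [3, 4]]

def prob(K):
    return Fraction(sum(1 for p in C if all(p[i - 1] == j for (i, j) in K)), len(C))

assert prob([(1, 2), (2, 3)]) == Fraction(1, (n - 1) * (n - 2))  # acyclic, size 2
assert prob([(1, 2), (2, 1)]) == 0                               # G(K) has a cycle
```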
As a consequence, we obtain that for each \(k\), the \(k\)th moment of these statistics is independent of conjugacy class, as long as the cycles are sufficiently long.
**Theorem 7.16**.: _Let \(X\) be a permutation statistic that is realizable over a constraint set of size \(m\), and fix \(k\geq 1\). If \(\lambda\vdash n\) has all parts of size at least \(mk+1\), then \(\mathbb{E}_{\lambda}[X^{k}]\) is independent of \(\lambda\)._
Proof.: Express \(X=\sum_{P\in\mathcal{C}}\operatorname{wt}(P)I_{P}\), where \(\operatorname{size}(\mathcal{C})=m\). We start by decomposing the variable \(X^{k}\) into random indicator variables.
\[\mathbb{E}_{\lambda}[X^{k}]=\sum_{P_{1}\in\mathcal{C}}\sum_{P_{2}\in\mathcal{C}}\dots\sum_{P_{k}\in\mathcal{C}}\left(\prod_{i=1}^{k}\operatorname{wt}(P_{i})\right)\mathbb{E}_{\lambda}\left[\prod_{i=1}^{k}I_{P_{i}}\right]=\sum_{P_{1}\in\mathcal{C}}\sum_{P_{2}\in\mathcal{C}}\dots\sum_{P_{k}\in\mathcal{C}}\left(\prod_{i=1}^{k}\operatorname{wt}(P_{i})\right)\Pr_{\lambda}\left[\bigwedge_{i=1}^{k}\omega\text{ satisfies }P_{i}\right].\]
We therefore continue by evaluating each of the individual probabilities in the sum.
Fix some tuple \(P_{1},P_{2},\dots,P_{k}\), and let \(Y\) be the union of all of these constraints excluding repeats. Write \(Y=\{(i_{1},j_{1}),\dots,(i_{s},j_{s})\}\), noting that all the pairs are distinct. We split into three cases.
* **Case 1**: Suppose first that \(Y\) is not well defined. Then there must be some repeated \(i_{t}\) or \(j_{t}\). Since we excluded repeats, there must be pairs of the form \(\{(i_{t},a),(i_{t},b)\}\) or \(\{(a,j_{t}),(b,j_{t})\}\). However \(\omega(i_{t})\) and \(\omega^{-1}(j_{t})\) can only take one value, so the probability of \(Y\) being satisfied is zero.
* **Case 2**: Suppose instead that \(Y\) is not acyclic. Then by Lemma 7.15, we have that \(\operatorname{Pr}_{\lambda}[w\text{ satisfies }Y]=0\).
* **Case 3**: Suppose \(Y\) is well defined and acyclic, so that no subset of the conditions \(\omega(i_{1})=j_{1},\omega(i_{2})=j_{2},\dots,\omega(i_{s})=j_{s}\) forms a cycle. Then \(Y\) is a single acyclic constraint of size \(s\leq mk\). By Lemma 7.15 we therefore have that \[\Pr_{\lambda}[\omega\text{ satisfies }Y]=\frac{1}{(n-1)(n-2)\dots(n-s)}.\]
In particular, none of these probabilities depend on the choice of \(\lambda\), so the result follows.
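As an illustration of Theorem 7.16, the following sketch (helpers ours) takes \(X=\operatorname{exc}\), realizable over constraints of size \(m=1\) as in Example 7.10, and \(k=2\); the second moment must then agree across all \(\lambda\vdash 7\) whose parts have size at least \(mk+1=3\), namely \((7)\) and \((3,4)\).

```python
# Check of Theorem 7.16 with X = exc (size m = 1) and k = 2 in S_7.
from fractions import Fraction
from itertools import permutations

def cycle_lengths(p):
    seen, out = set(), []
    for i in range(1, len(p) + 1):
        c, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j - 1]
            c += 1
        if c:
            out.append(c)
    return sorted(out)

n = 7

def second_moment(lam):
    C = [p for p in permutations(range(1, n + 1)) if cycle_lengths(p) == sorted(lam)]
    return Fraction(sum(sum(1 for i in range(1, n + 1) if p[i - 1] > i) ** 2
                        for p in C), len(C))

# Both partitions of 7 with all parts >= 3, so the moments must agree.
assert second_moment([3, 4]) == second_moment([7])
```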
**Corollary 7.17**.: _Let \(X\) be a permutation statistic, and let \(\lambda\vdash n\). Suppose that \(\mathbb{E}_{\lambda}[X]\) depends on the number of parts of \(\lambda\) of size \(m\). Then any constraint set realizing \(X\) must have size at least \(m\)._
**Remark 7.18**.: Let \(\mathcal{C}\) be a constraint set of size \(m\). Clearly, if we can express \(X=\sum_{P\in\mathcal{C}}I_{P}\), then any minimum-sized constraint set realizing \(X\) has size at most \(m\).
The above corollary shows that calculating the first moment of \(X\) even just on specific conjugacy classes allows us to obtain a lower bound on the size of \(X\). This approach allows us to explicitly calculate the size for many statistics.
Once we have determined the size of \(X\), we can then apply Theorem 7.16, so we see that information on the higher moments of \(X\) can be obtained from the first moment, further highlighting the importance of the latter.
**Remark 7.19**.: It will be useful later to write the expectation \(\mathbb{E}_{\lambda}[X^{k}]\) from Theorem 7.16 more explicitly in the unweighted case, so we do this.
Let \(\mathcal{A}\) be the set of all the acyclic constraints from amongst the tuples \(P_{1},\dots,P_{k}\) in the sum. Let \(\mathcal{A}_{t}\) be the set of all the acyclic constraints in \(\mathcal{A}\) of size \(t\). Using the three previous cases, we may write the required expectation as
\[\mathbb{E}_{\lambda}[X^{k}] =\sum_{P\in\mathcal{A}}\operatorname{Pr}_{\lambda}[\omega\text{ satisfies }P]\] \[=\sum_{t}\frac{|\mathcal{A}_{t}|}{(n-1)(n-2)\dots(n-t)}.\]
This number is independent of the choice of \(\lambda\) as long as it has parts of size at least \(mk+1\). Observe that taking \(X(\omega)=\operatorname{maj}(\omega)\) or \(X(\omega)=d(\omega)=1+\operatorname{des}(\omega)\) yields [12, Theorem 2].
We continue by showing that when a statistic is _symmetric_, these moments are polynomial in \(n\). We now define this precisely.
**Definition 7.20**.: Let \(a_{1},\ldots,a_{n_{0}}\in[n]\). A function \(f:\{a_{1},\ldots,a_{n_{0}}\}\to[n]\) is _order-preserving_ when \(a_{i}<a_{j}\) if and only if \(f(a_{i})<f(a_{j})\) for all \(i,j\in[n_{0}]\). Note that any such function must be injective.
**Definition 7.21**.: Let \(\mathcal{C}\) be a set of permutation constraints, and let \(X\) be the unweighted constraint statistic induced by \(\mathcal{C}\). Take some \(P=\{(i_{1},j_{1}),\ldots,(i_{\ell},j_{\ell})\}\in\mathcal{C}\). Let the distinct symbols amongst the \(i_{1},\ldots,i_{\ell},j_{1},\ldots,j_{\ell}\) be \(1\leq a_{1}<a_{2}<\cdots<a_{n_{0}}\leq n\). If \(f(P):=\{(f(i_{1}),f(j_{1})),\ldots,(f(i_{\ell}),f(j_{\ell}))\}\in\mathcal{C}\) for all such choices of \(P\in\mathcal{C}\) and order-preserving \(f:\{a_{1},\ldots,a_{n_{0}}\}\to[n]\), then we say that \(X\) is _symmetric_.
We start by examining how this definition relates to some familiar statistics.
* Inversions are symmetric: take any \(P=\{(a,b),(c,d)\}\in\mathcal{C}_{\operatorname{inv}}\) and any order preserving injection \(f:\{a,b,c,d\}\to[n]\). Then we must have \(a<c,b>d\), so \(f(a)<f(c),f(b)>f(d)\). Therefore \(f(P)=\{(f(a),f(b)),(f(c),f(d))\}\in\mathcal{C}_{\operatorname{inv}}\).
* Descents cannot be realized as symmetric constraint statistics using constraints of size \(2\). Let \(\mathcal{C}_{i,j}\) be as defined in Example 7.9. For example, take \(P=\{(1,5),(2,4)\}\in\mathcal{C}_{1,2}\subseteq\mathcal{C}_{\operatorname{des}}\), and define the order-preserving map \(f:\{1,2,4,5\}\to[n]\) by \(f(1)=1\), \(f(2)=3\), \(f(4)=4\), \(f(5)=5\). Then \(f(P)=\{(1,5),(3,4)\}\in\mathcal{C}_{1,3}\not\subseteq\mathcal{C}_{\operatorname{des}}\). We may iterate on this argument, replacing \((1,5),(2,4)\) with arbitrary values respecting the same relative ordering. It is not clear whether descents can be realized using a symmetric constraint set of larger size.
* The number of inversions between excedances, as defined in [1], is symmetric. This is because the constraints for this statistic are exactly the \((a,b),(c,d)\) with \(a<c<d<b\), so the images of these elements under an order-preserving \(f\) will give another valid constraint.
Given a symmetric permutation constraint statistic on \(S_{n_{0}}\), there is also a natural way of extending this statistic to any \(S_{n}\).
**Definition 7.22**.: Let \(X\) be a symmetric permutation constraint statistic on \(S_{n_{0}}\) induced by some \(\mathcal{C}\) supported on \([n_{0}]\). Then for any \(S_{n}\), we can define a symmetric permutation constraint statistic \(X_{n}\) on \(S_{n}\) by starting with the set of constraints \(\mathcal{C}\) for \(X\) and constructing the following set of constraints \(\mathcal{C}_{n}\) for \(X_{n}\).
* If \(n\leq n_{0}\), then let \(\mathcal{C}_{n}\) contain all \(P\in\mathcal{C}\) with support contained in \([n]\).
* If \(n>n_{0}\), then let \(\mathcal{C}_{n}\) contain all \(P\in\mathcal{C}\), as well as all \(f(P)\) for all order-preserving functions \(f:[n_{0}]\to[n]\). Note that we exclude repeated constraints in \(\mathcal{C}_{n}\).
Then by construction each \(X_{n}\) is symmetric. We call \((X_{n})\) a _symmetric extension of \(X\)_.
**Example 7.23**.: While the previous definition seems technical, there are several natural examples.
* Consider the constraint \(K=\{(1,2)\}\), and define the statistic \(X\) on \(S_{2}\) by \(X=I_{K}\). Then the \((X_{n})\) are the excedance statistics.
* Fix \(\omega\in S_{m}\). Let \(\mathcal{C}\) be the constraints of size \(m\) in \(S_{2m}\) that induce the permutation pattern statistic for \(\omega\) in \(S_{2m}\). Then each statistic in \((X_{n})\) is the number of appearances of the permutation pattern \(\omega\) for a given element in \(S_{n}\). Note that choosing \(\omega=(12)\in S_{2}\) results in the usual inversion statistics on \(S_{n}\).
**Remark 7.24**.: The preceding examples show that symmetric permutation constraint statistics are more general than permutation pattern statistics, as excedances cannot be expressed as a permutation pattern. See Remark 1.7 for more discussion, as well as a comparison of our work with that of Gaetz and Pierson [13].
**Remark 7.25**.: In general, it is necessary to consider symmetric extensions starting from some sufficiently large \(n_{0}\). Observe that both \(\{(1,2)\}\) and \(\{(1,2),(2,1)\}\) induce inv on \(S_{2}\). However, the symmetric extension of \(\{(1,2)\}\) yields the excedance statistic, while the symmetric extension of \(\{(1,2),(2,1)\}\) realizes transpositions. In the preceding example, we see that the symmetric extension starting with the inversion statistic on \(S_{4}\) results in the inversion statistics on all \(S_{n}\).
With this definition in hand, we now show that when all parts of a partition are sufficiently large, the moments of any statistic constructed in this manner are given by a single polynomial dependent only on \(n\).
**Theorem 7.26**.: _Fix \(k,m\geq 1\). Let \((\lambda_{n})\) be a sequence of partitions, where \(\lambda_{n}\vdash n\) and all parts of \(\lambda_{n}\) have size at least \(mk+1\). Let \((X_{n})\) be a symmetric extension of a symmetric permutation statistic \(X=X_{n_{0}}\) induced by a constraint set of size \(m\). There exists a polynomial \(p_{X}(n)\) depending only on \(X\) such that \(p_{X}(n)=\mathbb{E}_{\lambda_{n}}[X_{n}^{k}]\)._
Proof.: As in Theorem 7.16, it suffices to consider \(\mathcal{A}_{n}=\bigcup_{i}P_{n,i}\), where the union runs over all well-defined acyclic \(k-\)tuples of constraints in \(X_{n}\). Let \(\mathcal{A}_{n,t}\subseteq\mathcal{A}_{n}\) be the constraints of size \(t\). Note that each constraint \(P\in\mathcal{A}_{n}\) is a tuple of constraints, and multiple constraints may involve the same elements. Recall that the support of a constraint \(P=\{(i_{1},j_{1}),\ldots,(i_{t},j_{t})\}\in\mathcal{A}_{n}\) is the set of distinct elements among the \(i_{1},\ldots,i_{t},j_{1},\ldots,j_{t}\). Define \(\mathcal{A}_{n,t,s}\subseteq\mathcal{A}_{n,t}\) to be the constraints of size \(t\) with support on \(s\) elements, where acyclicity of elements in \(\mathcal{A}_{n,t}\) implies \(t+1\leq s\leq 2t\). Then we have from Remark 7.19 that
\[\mathbb{E}_{\lambda_{n}}[X_{n}^{k}] =\sum_{t=1}^{mk}\frac{|\mathcal{A}_{n,t}|}{(n-1)(n-2)\ldots(n-t)} \tag{7.1}\] \[=\sum_{t=1}^{mk}\left(\frac{1}{(n-1)(n-2)\ldots(n-t)}\sum_{s=t+1 }^{2t}|\mathcal{A}_{n,t,s}|\right).\]
Now let \(\mathcal{A}^{\prime}_{n,t,s}\subseteq\mathcal{A}_{n,t,s}\) be the constraints that are supported on \([s]\). Observe that when \(n<s\), \(\mathcal{A}^{\prime}_{n,t,s}=\emptyset\), and since \(X_{n}\) is formed as the symmetric extension of \(X_{n_{0}}\), this \(\mathcal{A}^{\prime}_{n,t,s}\) is independent of \(n\) for \(n\geq s\), so we call this common set \(\mathcal{A}_{t,s}\). Furthermore, since \(X_{n}\) is symmetric, for \(n\geq s\), we can express
\[\mathcal{A}_{n,t,s}=\bigcup_{f}\bigcup_{P\in\mathcal{A}_{t,s}}f(P),\]
where the first union is over all order-preserving \(f\). Now as each \(P\) uses all elements of \([s]\) and each \(f\) is determined by its image in \([n]\), we have that \(f_{1}(P_{1})=f_{2}(P_{2})\) can only occur if \(f_{1}=f_{2}\) and \(P_{1}=P_{2}\). Then letting \(a_{t,s}=|\mathcal{A}_{t,s}|\), we have that for \(n\geq s\),
\[|\mathcal{A}_{n,t,s}|=\binom{n}{s}a_{t,s},\]
as there are \(\binom{n}{s}\) order-preserving functions \(f:[s]\to[n]\). Letting \(I_{s}(n)\) be the indicator function for \(n\geq s\), we see that (7.1) can be rewritten as
\[\mathbb{E}_{\lambda_{n}}[X_{n}^{k}] =\sum_{t=1}^{mk}\left(\frac{1}{(n-1)(n-2)\ldots(n-t)}\sum_{s=t+1}^ {2t}\binom{n}{s}a_{t,s}I_{s}(n)\right) \tag{7.2}\] \[=\sum_{t=1}^{mk}\sum_{s=t+1}^{2t}\binom{n}{s}\frac{a_{t,s}I_{s}( n)}{(n-1)(n-2)\ldots(n-t)}.\]
Observe that \(s\geq t+1\), and when \(n\geq s\), we have that
\[\binom{n}{s}\cdot\frac{1}{(n-1)(n-2)\ldots(n-t)} =\frac{1}{s!}\cdot\frac{n(n-1)(n-2)\ldots(n-s+1)}{(n-1)(n-2) \ldots(n-t)}\] \[=\frac{1}{s!}\cdot n(n-t-1)\ldots(n-s+1)\]
is a polynomial in \(n\) of degree \(s-t\). Furthermore, \(\lambda_{n}\) has all parts of size at least \(mk+1>t\), so \(n\geq mk+1>t\). When values of \(n\) with \(t<n<s\) are substituted, the above polynomial vanishes. Hence, we can rewrite (7.2) and omit the \(I_{s}\) indicator function to obtain
\[\mathbb{E}_{\lambda_{n}}[X_{n}^{k}]=\sum_{t=1}^{mk}\sum_{s=t+1}^{2t}\frac{a_{t, s}}{s!}\cdot n(n-t-1)\ldots(n-s+1). \tag{7.3}\]
We conclude that (7.3) is a polynomial in \(n\) of degree
\[\max_{P\in\mathcal{A}_{n}}(|\operatorname{supp}(P)|-\operatorname{size}(P))\leq mk.\qed\]
**Remark 7.27**.: The proof of the preceding result gives a method for finding \(p_{X}(n)\), which we illustrate with an example. Consider the mean of the inversion statistic on conjugacy classes \(\lambda_{n}\) with cycle lengths of at least \(3\), so that \(m=2\) and \(k=1\) in (7.3). In the summation of (7.2), the only nonzero values involve \(t=2\) (since each inversion constraint has size exactly \(2\)), which implies \(s\in\{3,4\}\). Of the constraints in \(\operatorname{inv}\) using only values in the sets \([3]\) and \([4]\), we see that the acyclic ones that use all values are
\[\mathcal{A}_{2,3}=\left\{\{(1,3),(2,1)\},\{(1,2),(3,1)\},\{(1,3),(3,2)\},\{(2,3),(3,1)\}\right\},\]
\[\mathcal{A}_{2,4}=\left\{\{(1,4),(2,3)\},\{(1,4),(3,2)\},\{(1,3),(4,2)\},\{(2,4),(3,1)\},\{(2,3),(4,1)\},\{(3,2),(4,1)\}\right\}.\]
Then (7.3) becomes
\[\mathbb{E}_{\lambda_{n}}[\operatorname{inv}]=\frac{4}{3!}\cdot n+\frac{6}{4!} \cdot n(n-3)=\frac{3n^{2}-n}{12},\]
which agrees with our Corollary 4.10. For higher moments, explicit description of acyclic constraints in terms of \(k\)-tuples becomes significantly more complex, and this method becomes computationally very difficult.
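As a sanity check on this computation, the polynomial \((3n^{2}-n)/12\) can be verified by brute force on a small conjugacy class with all parts of size at least \(3\). The sketch below (our own illustration) does this for \(\lambda=(3,3)\vdash 6\), where the formula predicts \(102/12=17/2\).

```python
from itertools import permutations
from fractions import Fraction

def inv(p):
    # number of inversions of p, where p[i] = sigma(i + 1)
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def cycle_type(p):
    # cycle lengths of the permutation p, sorted decreasingly
    seen, lengths = set(), []
    for start in range(1, len(p) + 1):
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x, length = p[x - 1], length + 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

cls = [p for p in permutations(range(1, 7)) if cycle_type(p) == (3, 3)]
print(Fraction(sum(inv(p) for p in cls), len(cls)))  # 17/2, as predicted
```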
In the case of certain statistics such as inversions, we can determine much more about the structure of this polynomial.
**Proposition 7.28**.: _Let \(\lambda\) be a partition of \(n\) with all parts of size at least \(2k+1\). Then \(\mathbb{E}_{\lambda}[\operatorname{inv}^{k}]\) is a polynomial in \(n\) of degree \(2k\) with leading coefficient \(2^{-2k}\)._
Proof.: The polynomiality follows from Theorem 7.26. From the proof of this Theorem we also have that
\[\mathbb{E}_{\lambda}[\operatorname{inv}^{k}]=\sum_{t=1}^{2k}\sum_{s=t+1}^{2t} \frac{a_{t,s}}{s!}\cdot n(n-t-1)\ldots(n-s+1). \tag{7.4}\]
Recall that \(a_{t,s}\) is the number of \(k\)-tuples of constraints \(\{(a,b),(c,d)\}\) with \(a<c,b>d\) that together consist of \(t\) distinct pairs and use exactly the elements in \([s]\). The degree of this polynomial corresponds to when \(s-t\leq 2k\) is maximal. Note that \(s-t=2k\) can occur only when \(s=4k\) and \(t=2k\), so it suffices to show that \(a_{2k,4k}\) is nonzero. Hence, we consider \(2k\) distinct pairs using all elements in \([4k]\).
There are \(\binom{4k}{4,4,\ldots,4}\) ways to partition \([4k]\) into \(k\) sets of four symbols. For each set of four symbols \(\{a,b,c,d\}\) suppose that \(a<b<c<d\). Then there will be \(6\) ways to put this set into two pairs which relate to an inversion constraint, which are
\[\{(a,c),(d,b)\},\,\{(a,d),(b,c)\},\,\{(a,d),(c,b)\},\,\{(b,c),(d,a)\},\,\{(c,b ),(d,a)\},\,\{(b,d),(c,a)\}.\]
Therefore in total we have \(a_{2k,4k}=\binom{4k}{4,4,\ldots,4}6^{k}=(4k)!/4^{k}\). Substituting this back into (7.4) gives a leading coefficient of \(1/4^{k}\) for the \(n^{2k}\) term as required.
As an application, we can use polynomial interpolation on \(2k+1\) values of \(n\) to explicitly compute \(\mathbb{E}_{\lambda}[\operatorname{inv}^{k}]\) when all parts of \(\lambda\) have size at least \(2k+1\). The case of the second moment of \(\operatorname{inv}\) is given below.
**Corollary 7.29**.: _Let \(\lambda\) be a partition of \(n\) with all parts of size at least \(5\). Then_
\[\mathbb{E}_{\lambda}[\mathrm{inv}^{2}]=\frac{1}{16}n^{4}-\frac{1}{72}n^{3}-\frac {1}{80}n^{2}-\frac{49}{360}n,\]
_and consequently,_
\[\mathrm{Var}_{\lambda}[\mathrm{inv}]=\frac{1}{36}n^{3}-\frac{7}{360}n^{2}- \frac{49}{360}n.\]
Proof.: We consider the conjugacy class \(C_{(n)}\) corresponding to full cycles in \(S_{n}\). Using code, we find the following values:
\[\mathbb{E}_{(5)}[\mathrm{inv}^{2}] =109/3,\] \[\mathbb{E}_{(6)}[\mathrm{inv}^{2}] =1151/15,\] \[\mathbb{E}_{(7)}[\mathrm{inv}^{2}] =2156/15,\] \[\mathbb{E}_{(8)}[\mathrm{inv}^{2}] =247,\] \[\mathbb{E}_{(9)}[\mathrm{inv}^{2}] =3977/10,\]
The result for \(\mathbb{E}_{\lambda}[\mathrm{inv}^{2}]\) follows by polynomial interpolation, and \(\mathrm{Var}_{\lambda}[\mathrm{inv}]=\mathbb{E}_{\lambda}[\mathrm{inv}^{2}]-( \mathbb{E}_{\lambda}[\mathrm{inv}])^{2}\) then follows by direct calculation.
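The enumeration behind these values is feasible by brute force for \(n\leq 9\), and the interpolation can be carried out exactly over the rationals. The following sketch (our own illustration) reproduces the computation.

```python
from itertools import permutations
from fractions import Fraction

def inv(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def is_full_cycle(p):
    # p is an n-cycle iff the orbit of 1 has length n
    length, x = 1, p[0]
    while x != 1:
        x, length = p[x - 1], length + 1
    return length == len(p)

def second_moment(n):
    vals = [inv(p)**2 for p in permutations(range(1, n + 1)) if is_full_cycle(p)]
    return Fraction(sum(vals), len(vals))

points = [(n, second_moment(n)) for n in range(5, 10)]  # the five values above

def interpolate(pts, x):
    # exact Lagrange interpolation through pts, evaluated at x
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# interpolate(points, n) agrees with n**4/16 - n**3/72 - n**2/80 - 49*n/360
print(interpolate(points, 10))
```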
**Remark 7.30**.: We compare Corollary 7.29 with Feller's corresponding result for the full \(S_{n}\)[12, p. 257, equations (6.1)-(6.3)]:
\[\mathbb{E}_{S_{n}}[\mathrm{inv}]=\frac{1}{4}n(n-1),\]
\[\mathbb{E}_{S_{n}}[\mathrm{inv}^{2}]=\frac{1}{16}n^{4}-\frac{7}{72}n^{3}+\frac {5}{48}n^{2}-\frac{5}{72}n,\]
\[\mathrm{Var}_{S_{n}}[\mathrm{inv}]=\frac{1}{72}(2n^{3}+3n^{2}-5n).\]
We note that leading terms coincide.
## 8 Conclusion
In this paper, we investigated the distributions of various permutation statistics on individual conjugacy classes. We first introduced general notions of permutation statistics, including (i) weighted inversion statistics, which generalized inversions, major index, descents, and baj, and (ii) permutation constraints. We utilized the notion of permutation constraints to reason about arbitrary permutation statistics. Precisely, we showed that the higher moments are independent of the conjugacy class indexed by the partition \(\lambda\vdash n\), provided all parts of \(\lambda\) are sufficiently large. For permutation statistics realizable over symmetric constraints, we were further able to establish polynomiality for the higher moments on individual conjugacy classes indexed by \(\lambda\vdash n\), again provided that all parts of \(\lambda\) are sufficiently large. Our work leaves open several questions.
In Proposition 6.2, we showed that for any conjugacy class \(\lambda\) and a weighted inversion statistic \(X\), \(\mathbb{E}_{\lambda}[X]\) can be written as \(\mathbb{E}_{S_{n}}[X]\) plus some error term \(f_{n}^{X}(a_{1},a_{2})\), which is a degree \(2\) polynomial depending only on \(X\) and \(a_{i}\) (\(i=1,2\)), the number of cycles of size \(i\) in \(\lambda\). As our independence results in Section 7 require that all parts of \(\lambda\) be sufficiently large, we suspect that Proposition 6.2 can be extended in the following manner.
**Problem 8.1**.: Show that \(\mathbb{E}_{\lambda}[X^{k}]=\mathbb{E}_{S_{n}}[X^{k}]+f_{n}^{X^{k}}(a_{1}, \ldots,a_{2k})\), where \(a_{i}\) is the number of cycles of length \(i\) in \(\lambda\), and \(f_{n}^{X^{k}}\) is a polynomial of degree at most \(2k\), (necessarily) satisfying the condition
\[\sum_{\lambda\vdash n}z_{\lambda}^{-1}f_{n}^{X^{k}}(a_{1},\ldots,a_{2k})=0.\]
Our technique in establishing Proposition 6.2 required detailed case analysis. Moving to even the second moment, the number of cases grows substantially. It would be of interest to find a tractable technique that easily extends to higher moments.
As we have not only an independence result, but also polynomiality on the higher moments of permutation statistics realizable over symmetric constraint sets, it seems plausible that such statistics admit a nice asymptotic distribution. In particular, a central limit theorem for descents on individual conjugacy classes is known [11, 12, 13]. We thus ask the following.
**Problem 8.2**.: Fix \(k,m\geq 1\). Let \((X_{n})\) be a symmetric extension of a symmetric permutation statistic of size \(m\). Let \(\lambda_{n}\) be a partition of \(n\), with each part of size at least \(mk+1\). Establish a central limit theorem for \((X_{n})\) on \(\lambda_{n}\).
While we have established that a number of statistics such as \(\operatorname{inv}\), exc, aexc, cdasc, and cddes are symmetric, we have been unable to show that any of the statistics in this paper are _not_ symmetric. In particular, we do not have tractable conditions to show that a permutation statistic is not symmetric. Thus, we ask the following.
**Problem 8.3**.: Provide a characterization of when a permutation statistic is realizable over a symmetric constraint set.
In light of Theorem 7.26 and the fact that the first moment of cdes is a rational function on any individual conjugacy class (Theorem 4.14), we have that the family \((\operatorname{cdes}_{n})\) cannot be realized as the symmetric extension of any permutation statistic \(X\). We conjecture that no individual \(\operatorname{cdes}_{m}\) is itself symmetric. However, it is not clear how to establish this. Furthermore, we conjecture that \(\operatorname{des},\operatorname{maj},\operatorname{baj}\), and \(\operatorname{baj}-\operatorname{inv}\) are not realizable over any symmetric permutation constraints or as the symmetric extensions of any permutation statistic.
Since our work in this paper establishes results for the Coxeter group of type \(A\), it is natural to ask the following.
**Problem 8.4**.: Extend the results of this paper to other Coxeter groups.
It is likely that the calculations would need to be updated to the setting of the given family of Coxeter groups being considered, but that the techniques in this paper might still apply. Ideally, one might hope for a general technique that can handle all Coxeter groups without redoing the calculations for each such family.
Given a statistic \(X\) on the symmetric group \(S_{n}\), the first moments \(\mathbb{E}_{\lambda}[X]\) are class functions, and may thus be interpreted as the character of a possibly virtual representation of \(S_{n}\). Equation (6.1), which gives the first moment of \(X\) on all of \(S_{n}\), is then precisely the multiplicity of the trivial module in some (virtual) representation of \(S_{n}\). Thus one could ask if there is a representation-theoretic interpretation of our results, beyond the connection with character polynomials as in [10].
**Problem 8.5**.: Investigate representation-theoretic interpretations of these results.
|
2302.10103 | The Cosmic Timeline Implied by the JWST High-redshift Galaxies | The so-called `impossibly early galaxy' problem, first identified via the
Hubble Space Telescope's observation of galaxies at redshifts z > 10, appears
to have been exacerbated by the more recent James Webb Space Telescope (JWST)
discovery of galaxy candidates at even higher redshifts (z ~ 17) which,
however, are yet to be confirmed spectroscopically. These candidates would have
emerged only ~ 230 million years after the big bang in the context of LCDM,
requiring a more rapid star formation in the earliest galaxies than appears to
be permitted by simulations adopting the concordance model parameters. This
time-compression problem would therefore be inconsistent with the age-redshift
relation predicted by LCDM. Instead, the sequence of star formation and galaxy
assembly would confirm the timeline predicted by the R_h=ct universe, a
theoretically advanced version of LCDM that incorporates the `zero active mass'
condition from general relativity. This model has accounted for many
cosmological data better than LCDM, and eliminates all of its inconsistencies,
including the horizon and initial entropy problems. The latest JWST discoveries
at z > 14, if confirmed, would add further support to the idea that the R_h=ct
universe is favored by the observations over the current standard model. | Fulvio Melia | 2023-02-20T17:06:28Z | http://arxiv.org/abs/2302.10103v1 | # The Cosmic Timeline Implied by the _JWST_ High-redshift Galaxies
###### Abstract
The so-called 'impossibly early galaxy' problem, first identified via the _Hubble Space Telescope_'s observation of galaxies at redshifts \(z>10\), appears to have been exacerbated by the more recent _James Webb Space Telescope_ (_JWST_) discovery of galaxy candidates at even higher redshifts (\(z\sim 17\)) which, however, are yet to be confirmed spectroscopically. These candidates would have emerged only \(\sim 230\) million years after the big bang in the context of \(\Lambda\)CDM, requiring a more rapid star formation in the earliest galaxies than appears to be permitted by simulations adopting the concordance model parameters. This time-compression problem would therefore be inconsistent with the age-redshift relation predicted by \(\Lambda\)CDM. Instead, the sequence of star formation and galaxy assembly would confirm the timeline predicted by the \(R_{\rm h}=ct\) universe, a theoretically advanced version of \(\Lambda\)CDM that incorporates the 'zero active mass' condition from general relativity. This model has accounted for many cosmological data better than \(\Lambda\)CDM, and eliminates all of its inconsistencies, including the horizon and initial entropy problems. The latest _JWST_ discoveries at \(z\gtrsim 14\), if confirmed, would add further support to the idea that the \(R_{\rm h}=ct\) universe is favored by the observations over the current standard model.
keywords: cosmology: observations - cosmology: theory - large-scale structure of the Universe - stars: Population III - galaxies high-redshift
## 1 Introduction
A surprising number of high-redshift galaxy candidates (\(z>12\)) have already been discovered by the _James Webb Space Telescope_ (_JWST_) in just the first few weeks of operation (see Table 1). Identified through the Early Release Observations (ERO) (Pontoppidan et al., 2022), the Cosmic Evolution Early Release Science (CEERS) (Finkelstein et al., 2022) and Through the Looking GLASS (GLASS-_JWST_) (Treu et al., 2022) science programs, many of them surpass the distance record previously set at \(z=11.1\) by the _Hubble Space Telescope_ (_HST_) (Oesch et al., 2016). This is certainly true of candidates up to \(z\sim 13\), whose redshift has been confirmed spectroscopically (Robertson et al., 2022).
But the fact that some of these well-formed \(\sim 10^{9}\)\(M_{\odot}\) structures (at \(z\sim 16-17\)) appear to have emerged only \(\sim 230\) Myr after the big bang contrasts with their predicted formation in the standard model of cosmology, which we here take to be \(\Lambda\)CDM with the _Planck_ optimized parameters: a Hubble constant, \(H_{0}=67.4\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\), a matter density \(\Omega_{\rm m}=0.315\pm 0.007\), scaled to today's critical density (\(\equiv 3c^{2}H_{0}^{2}/8\pi G\)), and a spatial curvature constant, \(k\approx 0\)(Planck Collaboration et al., 2020). In discussing this 'impossibly early galaxy' problem (Melia, 2014, 2020), two principal issues typically emerge. The first is whether the gas budget in the early Universe, notably the fraction of baryons condensed within an assumed dark-matter halo distribution, was sufficient to account for this high-\(z\) galaxy demographic (Behroozi & Silk, 2018). The answer could be yes (Donnan et al., 2022), as long as all of the available baryonic gas in halos was converted into stars. The \(z\sim 16-17\) galaxy candidates fall close to the \(\Lambda\)CDM limit, but do not exceed it.
The second concerns whether the dynamics of structure formation could account for the highly compressed timeline implied by these discoveries (for the most recent work on this topic, see Yajima et al., 2022; Keller et al., 2022; Kannan et al., 2022; Inayoshi et al., 2022; Haslbauer et al., 2022; Mirocha & Furlanetto, 2023; Whitler et al., 2023). It is the dynamics, of course, coupled to the physical processes responsible for cooling the gas, that would have governed how quickly stars could condense and assemble into billion solar-mass structures. One must also fold into this discussion how the 'stellar age' of the high-\(z\) sources (column 5 in Table 1) should be interpreted. An examination of the galaxy age versus star formation activity at \(z>8\) (Furtak et al., 2023; Whitler et al., 2023) suggests that the young stellar populations producing much of the current luminosity are built upon older components that formed at \(z>15\), and are being observed during bursts of star formation.
## 2 High-\(z\) Galaxies in \(\Lambda\)CDM
A more indicative evolutionary history for these galaxies is therefore provided by the broad range of simulations tracing the growth of initial perturbations consistent with the measured
anisotropies in the cosmic microwave background. This study has evolved considerably over the past decade, as each new set of observations has pushed the formation of galaxies to progressively higher redshifts. The first generation of simulations (Barkana & Loeb, 2001; Miralda-Escude, 2003; Bromm & Larson, 2004; Ciardi & Ferrara, 2005; Glover, 2005; Greif et al., 2007; Wise & Abel, 2008; Salvaterra et al., 2011; Greif et al., 2012; Jaacks et al., 2012) began to elucidate how the first (Pop III) stars probably formed by redshift \(z\sim 20\), nestled in the core of dark-matter halos with mass \(M_{\rm halo}\sim 10^{6}\,M_{\odot}\) (Haiman et al., 1996; Tegmark et al., 1997; Abel et al., 2002; Bromm et al., 2002). This delay after the big bang resulted from the combined influence of several processes, including the initial gravitational collapse of the dark-matter perturbations and the subsequent inefficient, radiative cooling of the primordial gas. The baryonic matter cooled and condensed into stars only after enough molecular hydrogen accumulated to enhance the energy loss rate (Galli & Palla, 1998; Omukai & Nishi, 1998). In the standard model, the Universe would have been \(\sim 180\) Myr old at redshift 20. But not all of the halos and their baryonic content would necessarily have taken this long to condense. The more recent simulations (Yajima et al., 2022; Keller et al., 2022; Kannan et al., 2022; Inayoshi et al., 2022; Haslbauer et al., 2022; Mirocha & Furlanetto, 2023; Whitler et al., 2023), in particular, show that the halos could have been distributed across this age, some appearing perhaps as early as \(\sim 120\) Myr after the big bang (more on this below).
A typical baryonic gas cloud could subsequently have formed a protostar at the center of its halo host, eventually growing to become a \(>100\,M_{\odot}\) main sequence (Pop III) star (Kroupa, 2002; Chabrier, 2003). This is where a second major difficulty emerges, however. The UV radiation from such massive stars would have destroyed all of the \(H_{2}\) in the original condensation, suggesting that most of the minihalos likely contained--at most--only a handful of Pop III stars (Yoshida et al., 2008). Thus, many of these early structures were probably not the galaxies we see today. Nevertheless, a sufficiently broad distribution of Pop III masses could have included many below \(\sim 100\,M_{\odot}\), whose impact on the star formation would have been less severe (Yajima et al., 2022; Keller et al., 2022; Kannan et al., 2022; Inayoshi et al., 2022; Haslbauer et al., 2022; Mirocha & Furlanetto, 2023; Whitler et al., 2023).
The outpouring of radiative and mechanical energy from this first phase of stellar formation would have reheated and expelled the surrounding gas, further delaying the formation of additional stars until the plasma had time to cool again and condense to high densities. The time for this gas re-incorporation would have been another \(\sim 100\) Myr, i.e., roughly the dynamical time for a first-galaxy halo to assemble (Yoshida et al., 2004; Johnson et al., 2007). And now the Universe was almost 300 Myr old.
To its credit, this scenario does provide a plausible explanation for the size of the observed galaxies. It suggests that a critical mass of \(>10^{8}\,M_{\odot}\), along with an implied virial temperature \(>10^{4}\) K, would have allowed atomic line emission to cool the condensing gas (Wise & Abel, 2007), and recollect and mix all components of the previously shocked plasma (Greif et al., 2007). In other words, at least the mass inferred for the earliest _JWST_ galaxies (Table 1) is consistent with this theoretical understanding.
But in the context of these simulations, there's no getting around the fact that the appearance of \(\sim 10^{9}\,M_{\odot}\) galaxies at \(z\sim 16-17\), if confirmed spectroscopically, would create a significant problem. One needs a billion new stars to have formed in only \(\sim 70-90\) Myr by the time the Universe was only \(t\sim 230\) Myr old. None of the calculations to date have been able to account for the formation of galaxies at this redshift, when the expelled, hot gas hadn't re-cooled and re-condensed yet.
This tension between the theoretical and observational times has motivated the introduction of additional features and
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Name & \(z^{a}\) & log(\(M/M_{\odot}\)) & SFR & Stellar Age\({}^{b}\) & Reference \\ & & & (\(M_{\odot}\) yr\({}^{-1}\)) & (Myr) & \\ \hline
1. S5-z17-1 & 16.66\({}^{+1.36}_{-0.34}\) & 8.8\({}^{+0.8}_{-0.5}\) & 9.7\({}^{+30.7}_{-62}\) & - & Harikane et al. (2022) \\
2. CEERS-93316 & 16.4\({}^{+0.1}_{-0.36}\) & 9.0\({}^{+0.4}_{-0.4}\) & 10.0\({}^{+0.10}_{-0.10}\) & - & Donnan et al. (2022); Naidu et al. (2022); Harikane et al. (2022) \\
3. S5-z12-1 & 13.72\({}^{+1.92}_{-1.92}\) & 8.1\({}^{+0.3}_{-0.3}\) & 2.2\({}^{+1.8}_{-0.1}\) & - & Harikane et al. (2022) \\
4. WHL0137-5021 & 12.8\({}^{+1.3}_{-1.5}\) & 8.53\({}^{+0.18}_{-0.32}\) & 5.1\({}^{+1.9}_{-1.4}\) & 58\({}^{+3.5}_{-35}\) & Bradley et al. (2022) \\
5. WHL0137-5124 & 12.8\({}^{+1.2}_{-1.4}\) & 8.65\({}^{+0.30}_{-0.30}\) & 6.9\({}^{+1.9}_{-1.9}\) & 59\({}^{+3.3}_{-1.9}\) & Bradley et al. (2022) \\
6. GLASS-z13 & 12.4 \(\pm\) 0.2 & 9.0\({}^{+0.4}_{-0.4}\) & 7.3\({}^{+1.7}_{-1.3}\) & - & Naidu et al. (2022); Harikane et al. (2022) \\
7. GLASS-21-2 & 12.22\({}^{+0.64}_{-0.11}\) & 8.6\({}^{+0.8}_{-0.49}\) & 2.0\({}^{+0.13}_{-0.6}\) & - & Harikane et al. (2022); Donnan et al. (2022) \\
8. Maisie’s Galaxy & 11.8\({}^{+0.11}_{-0.31}\) & 8.5\({}^{+0.44}_{-0.44}\) & 2.1\({}^{+2.0}_{-2.0}\) & 18\({}^{+18}_{-2.0}\) & Finkelstein et al. (2022); Harikane et al. (2022) \\
9. GN-z11\({}^{c}\) & 11.09\({}^{+0.68}_{-0.12}\) & 9.0 \(\pm\) 0.4 & 24 \(\pm\) 10 & 40\({}^{+60}_{-34}\) & Oesch et al. (2016) \\
10. GLASS-z11 & 10.6 \(\pm\) 0.3 & 9.4 \(\pm\) 0.3 & 12\({}^{+9}_{-1}\) & 111\({}^{+34}_{-34}\) & Naidu et al. (2022); Harikane et al. (2022); Donnan et al. (2022) \\
11. WHL0137-3407 & 10.5\({}^{+1.0}_{-10.5}\) & 8.78\({}^{+0.17}_{-0.33}\) & 7.3\({}^{+2.3}_{-1.2}\) & 70\({}^{+30}_{-44}\) & Bradley et al. (2022) \\
12. WHL0137-5347 & 10.2\({}^{+0.9}_{-1.7}\) & 9.01\({}^{+0.21}_{-0.32}\) & 14.6\({}^{+5.8}_{-3.5}\) & 62\({}^{+43}_{-43}\) & Bradley et al. (2022) \\
13. WHL0137-5330 & 10.0\({}^{+7.9}_{-7.9}\) & 8.77\({}^{+0.26}_{-0.26}\) & 6.4\({}^{+1.8}_{-1.8}\) & 83\({}^{+48}_{-48}\) & Bradley et al. (2022) \\ \hline \end{tabular}
\({}^{a}\) Photometric redshift (with \(2\sigma\) uncertainties) for the WHL sources calculated using the Calzetti et al. (2000) dust law. GLASS, CEERS and Maisie’s Galaxy redshifts (with \(1\sigma\) uncertainties) were fit with the EAZY code (Brammer et al., 2008).
\({}^{b}\) Mass-weighted age of the current luminous stars, when available.
\({}^{c}\) Discovered by _HST_ prior to _JWST_.
\end{table}
Table 1: _JWST_ highest-redshift galaxies and their derived properties
physical processes designed to mitigate the disagreement as much as possible (Yajima et al., 2022; Keller et al., 2022; Kannan et al., 2022; Inayoshi et al., 2022; Haslbauer et al., 2022; Mirocha & Furlanetto, 2023; Whitler et al., 2023). For example, in their most detailed simulations to date, Yajima et al. (2022) and Keller et al. (2022) have demonstrated that the large scatter in cooling times and the presence of systems with weaker Pop III supernovae that expel far less of the condensed baryonic gas (Kitayama & Yoshida, 2005; Frebel & Norris, 2015) would have allowed galaxies observed by _JWST_ at \(z\lesssim 14\) to still have formed in the context of \(\Lambda\)CDM.
Kannan et al. (2022) have shown that a variable stellar initial mass function may also have produced some galaxies earlier than previously thought. A top-heavy stellar mass distribution appears to have a similar effect (Inayoshi et al., 2022), while different star formation histories could reduce the actual stellar masses of the galaxy candidates, thereby partially alleviating the tension (Haslbauer et al., 2022). Mirocha & Furlanetto (2023) suggest that at least three modifications to the _HST_-calibrated models would help lessen the tension: (i) the adoption of halo mass-independent star formation (SFR) efficiencies; (ii) a substantial scatter in galaxy SFRs at fixed halo masses; and (iii) the non-trivial effects of dust, both on the inferred _JWST_ colours and on the produced stellar masses and ages. Finally, Whitler et al. (2023) conclude that the tension may be eased if young stellar populations formed in these early galaxies on top of older stellar populations.
Nevertheless, even with all of these modifications, the predicted galaxy masses at \(z\lesssim 14\) appear to fall short of those observed by factors of a few. And they are significantly smaller than those of the galaxy candidates at \(z\sim 16-17\). Thus, if the _JWST_ highest redshift sources are eventually confirmed, even the more optimistic recent simulations would be unable to explain their origin.
The four galaxy growth curves in Figure 1 compare the time required to reach the inferred stellar mass of GLASS-z13 (the sixth entry in Table 1) by \(t\sim 345\) Myr (corresponding to its redshift \(z\sim 12.4\)), based on four of the main simulations discussed above. For the sake of giving \(\Lambda\)CDM the most optimal evolutionary outcome, we shall adopt the Salvaterra et al. (2011) and Jaacks et al. (2012) frameworks which, though not as detailed and nuanced as the more recent work, predict a shorter growth time once star formation is initiated, thus making it easier to fit the observed galaxies within the \(\Lambda\)CDM timeline.
The various factors discussed above may now be seen more quantitatively with the simulated galaxy growth trajectories shown in Figure 2, which also includes several critical epochs in \(\Lambda\)CDM. We assume that the _JWST_ galaxy candidates in Table 1 followed a history of growth like those of Salvaterra et al. (2011) and Jaacks et al. (2012) at \(z\sim 8\), except that their cosmic time \(t\) is suitably translated to match the redshift at which they are observed. This is justified by the fact that \(t\) is actually the proper time in the comoving frame, and the Birkhoff theorem ensures that the local growth rate was not overly affected by the cosmic expansion exterior to the bound system (Weinberg, 1972; Melia, 2020). In other words, once a galaxy halo becomes gravitationally bound and star formation is initiated, its evolution thereafter should be roughly translationally invariant in \(t\).
In the Salvaterra et al. (2011) simulations, the ratio of the doubling time (i.e., the inverse of the specific star formation rate sSFR, defined as the stellar mass created per unit time per billion solar-masses) to the evolutionary time at which the galaxy is observed appears to be universally equal to \(\sim 0.1-0.3\). By comparison,
Figure 1: Comparison of galaxy growth curves with four different simulations for GLASS-z13 (line 6 in Table 1), with a final stellar mass \(M_{*}\sim 10^{9}\ M_{\odot}\): Salvaterra [6] corresponding to trajectory [6] in Figure 2; FOREVER22 (Yajima et al., 2022); SIMBA (Keller et al., 2022); and OBELISK (Keller et al., 2022). The vertical dashed line indicates the age of this galaxy (\(\sim 345\) Myr) in the context of _Planck_-\(\Lambda\)CDM.
Figure 2: Growth of stellar mass in the high-\(z\) galaxy candidates discovered by _JWST_ at \(10<z<16\), as a function of cosmic time \(t\), in \(\Lambda\)CDM. The principal epochs are (i) the initial halo condensation and cooling due to molecular hydrogen. This epoch typically extended over the period \(0.4\lesssim t\lesssim 180\) Myr, but could have been as short as \(\sim 120\) Myr for some of the objects; (ii) the formation of the first Pop III stars at \(t\sim 120-180\) Myr, i.e., \(z\gtrsim 20\) in this model, (iii) the transition to Pop II star formation at \(t\lesssim 280\) Myr, and the observed Epoch of Reionization (EoR) from \(z\sim 15\) down to 6 (i.e., \(280<t<927\) Myr). These galaxy growth trajectories are primarily based on the observed SFRs and the hydrodynamical simulations in Jaacks et al. (2012), cross-checked with independent and alternative calculations in Salvaterra et al. (2011). The labels on the curves correspond to the catalog listings in Table 1.
the Jaacks et al. (2012) calculations show that the SFR for high-\(z\) galaxies is 'bursty', with an average value between \(z\sim 15\) and \(z\sim 6\) following an exponentially increasing function with characteristic timescale \(t_{\rm c}\sim 70-200\) Myr, scaling with stellar mass in the range \(10^{6}<M_{\rm*}<10^{10}\,M_{\odot}\). The trajectories plotted in Figure 2 follow this exponential growth, using the observed galaxy mass and redshift to fix the end points. Given that the sSFRs probably fluctuated during their evolution (Furtak et al., 2023; Whitler et al., 2023), we take an average of the star formation rates quoted in Table 1, i.e., \(\langle\)sSFR\(\rangle\sim 12.4\) Gyr\({}^{-1}\), as a fiducial value for each galaxy.
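To illustrate how such a trajectory is drawn, the short sketch below back-extrapolates a stellar mass exponentially at a constant specific star formation rate; the end point (GLASS-z13, with \(M_{*}\sim 10^{9}\,M_{\odot}\) at \(t\sim 345\) Myr) and the fiducial \(\langle\)sSFR\(\rangle=12.4\) Gyr\({}^{-1}\) are taken from the text, while the time grid is an arbitrary choice.

```python
import numpy as np

SSFR = 12.4    # fiducial specific star formation rate, Gyr^-1
M_OBS = 1e9    # observed stellar mass of GLASS-z13, solar masses
T_OBS = 0.345  # LCDM age at z ~ 12.4, Gyr

def stellar_mass(t):
    # dM/dt = sSFR * M  =>  M(t) = M_obs * exp(sSFR * (t - t_obs))
    return M_OBS * np.exp(SSFR * (t - T_OBS))

for t in np.linspace(0.15, T_OBS, 5):  # from Pop III onset to observation
    print(f"t = {t * 1e3:5.0f} Myr  ->  M* ~ {stellar_mass(t):.2e} M_sun")
```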
The overall impression one gets from this illustration is that the new _JWST_ high-\(z\) galaxies, if confirmed, would not be consistent with the standard picture in \(\Lambda\)CDM. The previously growing tension developing at \(z\sim 12\) has now become a more serious discordance at \(z\sim 16-17\). There does not appear to be any possibility with our current physical theories of explaining how a billion-solar mass aggregate of stars could have condensed even before the primordial gas was allowed to cool and form most of the very first Pop III, and any of the Pop II, populations. In this regard, our conclusion concerning the implausibility of forming the _JWST_ high-\(z\) galaxies in the context of \(\Lambda\)CDM is fully consistent with the findings of an alternative approach to this problem (Boylan-Kolchin, 2022), based on the use of the Sheth-Tormen (Sheth & Tormen, 1999) mass function to determine the abundance of massive halos at high redshifts (Wang et al., 2022).
## 3 The Timeline in \(R_{\rm h}=ct\)
But previous comparative tests between \(\Lambda\)CDM and a theoretically more advanced version, known as the \(R_{\rm h}=ct\) universe (Melia & Shevchuk, 2012; Melia, 2020), have already hinted at the possibility that the age-redshift relation in the latter may be a better match to the Universe's evolutionary history than that in the former. For example, \(\Lambda\)CDM has considerable difficulty accounting for the seeding and growth of billion-solar mass black holes by redshift \(z\sim 7-8\), while the timeline in \(R_{\rm h}=ct\) matches them very well (Melia, 2013, 2018b). A more complete description of this model, including its theoretical foundation and a comparison of its predictions with the data, may be seen in Melia (2018a, 2020). A review of the current problems with the standard model, pointing to a need for further development, e.g., as suggested by \(R_{\rm h}=ct\), is provided in Melia (2022).
One of the essential features of \(R_{\rm h}=ct\) that distinguishes it from \(\Lambda\)CDM is its expansion factor, \(a(t)\propto t\), which results in the simple age-redshift relation \(1+z=t_{0}/t\) in terms of the current age, \(t_{0}\) of the Universe. In this cosmology, the gravitational radius \(R_{\rm h}\) is equivalent to the Hubble radius \(c/H(t)\)(Melia, 2018c), so \(t_{0}=1/H_{0}\). Thus, if for simplicity we use the same Hubble constant as \(\Lambda\)CDM, we find that \(t_{0}\approx 14.5\) Gyr. The _JWST_ galaxy trajectories recalculated with these relations are displayed in Figure 3, along with the correspondingly adjusted temporal phases. The EoR redshift range \(6<z<15\) here corresponds to 906 Myr \(<t<2.07\) Gyr. That is, the Dark Ages ended at \(t\sim 906\) Myr, providing ample time for the Universe to assemble billion-solar mass galaxies once Pop III and Pop II stars started forming in numbers. Very tellingly, all of the high-\(z\) galaxy candidates discovered so far appear towards the end of the dark ages, where one would expect them to be if they contributed--perhaps even dominated--the reionization process. In this model, one would need to find a billion-solar mass galaxy at \(z\sim 50\) to run into a similar age-redshift inconsistency as that in \(\Lambda\)CDM.
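The age-redshift relations of the two models are easy to compare numerically. The sketch below (our own illustration, using the _Planck_ parameters quoted in Section 1 and the standard conversion \(1/H_{0}\simeq(977.8/H_{0})\) Gyr for \(H_{0}\) in km s\({}^{-1}\) Mpc\({}^{-1}\)) recovers the ages cited in the text, e.g., \(\sim 230\) Myr at \(z\sim 17\) in \(\Lambda\)CDM versus \(\sim 0.8\) Gyr in \(R_{\rm h}=ct\).

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.4              # km s^-1 Mpc^-1 (Planck)
OM, OL = 0.315, 0.685  # flat LCDM densities
T_H = 977.8 / H0       # Hubble time 1/H0 in Gyr

def age_lcdm(z):
    # t(z) = int_z^inf dz' / [(1 + z') H(z')], in Gyr
    integrand = lambda zp: 1.0 / ((1 + zp) * np.sqrt(OM * (1 + zp)**3 + OL))
    t, _ = quad(integrand, z, np.inf)
    return T_H * t

def age_rhct(z):
    # R_h = ct: a(t) is linear in t, so 1 + z = t0 / t with t0 = 1/H0
    return T_H / (1 + z)

for z in (17.0, 12.4, 6.0):
    print(f"z = {z:4.1f}: LCDM {age_lcdm(z) * 1e3:6.0f} Myr,"
          f" R_h=ct {age_rhct(z) * 1e3:6.0f} Myr")
```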
## 4 Conclusion
The time compression problem in the standard model has been worsening for several years. Attempts at remedying the situation with the premature formation of supermassive black holes have focused on two principal modifications: (i) the creation of massive (i.e., \(\sim 10^{5}\,M_{\odot}\)) seeds; and (ii) super-Eddington accretion rates. The first of these is still speculative because it requires the collapse of an essentially zero angular momentum, optically-thick, radiation-dominated plasma, which would have experienced substantial support from its internal pressure (Melia, 2009); the second appears to have been ruled out by measurements suggesting that the most distant quasars are accreting at or below their Eddington rate (Mortlock et al., 2011; De Rosa et al., 2011; Willott et al., 2010).
The _JWST_ discovery of high-\(z\) galaxy candidates may have worsened this timing problem considerably if their redshifts are confirmed spectroscopically, because billion-solar mass structures must have formed in only \(\sim 70-90\) Myr in some cases, and even prior to formation of the very first stars in others. The simulations completed to date in the context of \(\Lambda\)CDM have difficulty accounting for this outcome at \(z\sim 17\), reinforcing the view that the standard model may not be able to account for the formation of structure at cosmic dawn, if the redshifts of these candidate galaxies are confirmed spectroscopically.
Instead, the timeline predicted by the \(R_{\rm h}=ct\) cosmology would fit the birth and growth of both high-\(z\) quasars and galaxies very well, adding to the growing body of evidence supporting the introduction of the zero active mass condition from general relativity as an indispensable modification to \(\Lambda\)CDM.
Of course, there are still at least two ways out of this dilemma. First, the _JWST_ candidate galaxies at \(z\gtrsim 14\) may simply be
Figure 3: Same as Figure 2, except now for the \(R_{\rm h}=ct\) Universe. The EoR here corresponds to 906 Myr \(<t<2.07\) Gyr, and the Dark Ages extend up to \(\sim 906\) Myr. The first Pop III stars emerged at \(z\gtrsim 79\) and the transition to Pop II stars occurred at \(z\sim 51\). The extended period between the onset of Pop II star formation and the appearance of the first _JWST_ galaxies (shown here in red) is absent in Figure 2. In this cosmology, the _JWST_ galaxy candidates are seen towards the end of the dark ages, where one would expect them to be if they were responsible for re-ionizing the intergalactic medium. Most importantly, all of these primordial galaxies would have started their growth _well after_ the transition from Pop III to Pop II star formation had been completed at \(\sim 280\) Myr.
mis-identified sources at lower redshifts. According to Furtak et al. (2023), only about half of the high-\(z\) galaxy photometric redshifts may be safely ruled out as low-\(z\) interlopers. The other half, including those of the most distant candidates, still await spectroscopic confirmation. Second, it is possible that we may be missing something in the basic theory, and this caveat cannot be ignored. The initial cooling of the primordial gas may have been due to something other than molecular hydrogen. An unknown process may have permitted the plasma to cool more efficiently, allowing Pop III stars to form even earlier than \(t\sim 120-180\) Myr.
Certainly, the most recent simulations of Yajima et al. (2022) and Keller et al. (2022) indicate that such a process permitting the formation of Pop III stars as early as \(\lesssim 100\) Myr would lessen the tension with the \(z\sim 16-17\) galaxies in \(\Lambda\)CDM. This may not completely resolve the problem, but would go a long way to mitigating the overall disagreement between the _JWST_ observations and the standard model. Future simulations will probe such possibilities in even greater detail, perhaps uncovering a solution based on new physics in \(\Lambda\)CDM. As of today, however, the _JWST_ discoveries--if confirmed--would support the timeline in \(R_{\rm h}=ct\), but not in \(\Lambda\)CDM.
## Acknowledgments
I am very grateful to the anonymous referee for a detailed and very constructive review, which has led to significant improvements in the manuscript.
## Data Availability Statement
No new data were generated or analysed in support of this research.
|
2304.02851 | N$_c$-mixture occupancy model | A class of occupancy models for detection/non-detection data is proposed to
relax the closure assumption of N$-$mixture models. We introduce a community
parameter $c$, ranging from $0$ to $1$, which characterizes a certain portion
of individuals being fixed across multiple visits. As a result, when $c$ equals
$1$, the model reduces to the N$-$mixture model; this reduced model is shown to
overestimate abundance when the closure assumption is not fully satisfied.
Additionally, by including a zero-inflated component, the proposed model can
bridge the standard occupancy model ($c=0$) and the zero-inflated N$-$mixture
model ($c=1$). We then study the behavior of the estimators for the two extreme
models as $c$ varies from $0$ to $1$. An interesting finding is that the
zero-inflated N$-$mixture model can consistently estimate the zero-inflated
probability (occupancy) as $c$ approaches $0$, but the bias can be positive,
negative, or unbiased when $c>0$ depending on other parameters. We also
demonstrate these results through simulation studies and data analysis. | Huu-Dinh Huynh, Wen-Han Hwang | 2023-04-06T04:00:51Z | http://arxiv.org/abs/2304.02851v1 | # N\({}_{c}\)-mixture occupancy model
###### Abstract
A class of occupancy models for detection/non-detection data is proposed to relax the closure assumption of N\(-\)mixture models. We introduce a community parameter \(c\), ranging from 0 to 1, which characterizes a certain portion of individuals being fixed across multiple visits. As a result, when \(c\) equals 1, the model reduces to the N\(-\)mixture model; this reduced model is shown to overestimate abundance when the closure assumption is not fully satisfied. Additionally, by including a zero-inflated component, the proposed model can bridge the standard occupancy model (\(c=0\)) and the zero-inflated N\(-\)mixture model (\(c=1\)). We then study the behavior of the estimators for the two extreme models as \(c\) varies from 0 to 1. An interesting finding is that the zero-inflated N\(-\)mixture model can consistently estimate the zero-inflated probability (occupancy) as \(c\) approaches 0, but the bias can be positive, negative, or unbiased when \(c>0\) depending on other parameters. We also demonstrate these results through simulation studies and data analysis.
**Keywords:** Community parameter; N\(-\)mixture models; Zero-inflated model
## 1 Introduction
Estimating the occupancy and abundance of species in a specific region, despite imperfect detection, is a crucial problem in ecological conservation and management. In reality, a species may exist at a survey site but go undetected due to limitations in the survey method or timing. Zero counts in a site sampling survey can be caused by the species not being present or by detection errors. If this issue is not addressed, the occupancy rate will be underestimated. Site occupancy models (MacKenzie _et al._, 2002), based on multiple-visit detection/non-detection (occurrence, presence/absence) data, can estimate the occurrence rate by accounting for detection errors. These models, which can be based on temporal or spatial replication surveys, are cost-effective as they do not require individual identification or marking. They are widely used in species distribution modeling, and various extensions, such as multi-season open models, multi-species models, dynamic models, and spatial-temporal models, have been developed (MacKenzie _et al._, 2017; Hogg _et al._, 2021; MacKenzie _et al._, 2009; Johnson _et al._, 2013). This study focuses on single-species, single-season occupancy models.
In the standard occupancy model (MacKenzie _et al._, 2002), detection probability and species abundance are confounded and cannot be distinguished from one another. To overcome this limitation, Royle and Nichols (2003); Royle (2004) introduced N\(-\)mixture occupancy models that allow for the separation of detection probability and species abundance. These models enable the estimation of species abundance through multiple-visit occurrence or count data. N\(-\)mixture models have received a lot of attention in the literature (Haines, 2016a,b; Joseph _et al._, 2009; Gomez _et al._, 2018) as they can estimate population size like capture-recapture models, but without the need for capturing and marking individuals. However, the performance of these models depends on the assumptions made (Dennis _et al._, 2015; Link _et al._, 2018; Barker _et al._, 2018). Therefore, our goal is to improve these models by relaxing the closure model assumption, which is often violated even in single-season surveys (Kendall _et al._, 2013; Otto _et al._, 2013).
The closure assumption states that the number of individuals at a site remains constant during multiple visits. Surveys are often conducted through temporal or spatial replication, and sometimes with the use of multiple detectors or a combination of these methods (MacKenzie _et al._, 2017; Kendall and White, 2009). Temporal replication involves surveying the same sites at different times, while spatial replication involves selecting random sampling units within a larger area at a single site. While this closure assumption is reasonable for surveys conducted in the same locations over a short period, it may not be appropriate for highly mobile species (Hayes and Monfils, 2015). We also note that the study in Royle (2004) is an example of temporal replication, despite the paper's title referencing spatial replication to indicate the distribution of sites.
The inference of the N\(-\)mixture occupancy model has been shown to produce biased point estimates and incorrect interval estimates when the closure assumption is violated. While a few studies have highlighted these effects (Denes _et al._, 2015; Duarte _et al._, 2018; Dail and Madsen, 2011; Ke _et al._, 2022), most evidence is derived from simulation studies. To the best of our knowledge, there is a lack of theoretical results that explain the behavior of estimation bias under the N\(-\)mixture model. However, we have recently provided theoretical evidence under the proposed N\({}_{c}-\)mixture model, which is an extension of the N\(-\)mixture model. It is important to note that Dail and Madsen (2011) also proposes a class of generalized N\(-\)mixture models, which allows for the immigration and emigration of species populations and estimates year-to-year immigration and emigration rates, providing valuable insights for conservation managers. However, this model is a multi-season open
population model that goes in a different direction than our extension.
The N\({}_{c}-\)mixture model is designed to create a framework that can unify both temporal and spatial replicating surveys. For example, consider a triple-visit survey conducted at a site where \(N_{1}\), \(N_{2}\), and \(N_{3}\) represent the number of observable individuals during each visit. These variables are assumed to be identically distributed random variables. In the case of temporal replication, where the surveys are conducted in a short period, the three \(N_{j}\) variables can be considered equal and meet the closure assumption of the N\(-\)mixture model. In contrast, for spatial replication, the \(N_{j}\) variables are treated as independent. To account for both scenarios, we decompose the \(N_{j}\) variables into two components: \(N_{j}=K+M_{j}\) for \(j=1,2,3\), where \(K\) represents the number of common individuals during the triple-visit survey, and \(M_{j}\) represents the number of non-common individuals. We assume that \(K\) and \(M_{j}\) are independent, with \(E(K)=cE(N_{j})\) for some \(0\leq c\leq 1\). This parameter \(c\), referred to as the _community parameter_, indicates the proportion of individuals who are residents and remain fixed during the triple-visit. It also allows us to easily let \(K\) degenerate to \(0\) if \(c=0\), or let \(M_{j}\) degenerate to \(0\) if \(c=1\). Figure 1 provides an illustration of this decomposition.
Under the N\({}_{c}-\)mixture model, we are able to demonstrate that if the community parameter \(c\) is incorrectly specified as \(1\) (i.e., the N\(-\)mixture model is used instead), the mean abundance will be overestimated. Additionally, our results indicate that the bias increases as the community parameter \(c\) decreases, reaching infinity as \(c\) approaches \(0\).
We propose an extension to the N\({}_{c}-\)mixture model by incorporating a zero-inflated component. This allows us to bridge the standard occupancy model when \(c=0\)(MacKenzie _et al._, 2002) and the zero-inflated N\(-\)mixture model when \(c=1\)(Haines, 2016a). We then investigate the behavior of estimators for these two extreme models as the community parameter \(c\) ranges from \(0\) to \(1\). Our findings reveal that the standard occupancy model underestimates the zero-inflated probability (occupancy), and the bias increases as the community parameter \(c\) increases. However, an interesting finding is that the zero-inflated N\(-\)mixture model can estimate the occupancy consistently as \(c\) approaches \(0\), but the bias can be positive or negative depending on the other parameters.
The paper is organized as follows: In Section 2, we develop the N\({}_{c}-\)mixture model and present estimation methods. In Section 3, we consider the zero-inflated N\({}_{c}-\)mixture model. Section 4 includes simulation studies to evaluate estimator performance. Two real data
Figure 1: An illustration of the N\({}_{c}-\)mixture model. In a triple-visit survey of a site, some individuals are present in the site for each of the three visits (denoted as \(v_{1}\), \(v_{2}\), and \(v_{3}\)). The figure illustrates the situations corresponding to the community parameter \(c=0\) (left), \(c=0.5\) (middle), and \(c=1\) (right). The colored circles represent those individuals who are residents (or, equivalently, are fixed) in the site during the three visits. For \(c=0\), the number of individuals can differ from visit to visit, mimicking the scenario of spatial replication. For \(c=1\), the number of individuals is constant, and the closure assumption of the N\(-\)mixture model is satisfied. For \(c=0.5\), around half of the individuals were residents during the survey.
examples are presented in Section 5, and a discussion is in Section 6. Web Appendix A includes proofs for all propositions.
## 2 N\({}_{c}-\)mixture model
### Notation and estimation
Consider a multiple-visit sampling survey consisting of \(n\) sites and \(T\) visits. Let \(N_{ij}\) be the number of individuals at site \(i\) during the \(j\)-th visit. Following the motivation in the Introduction (see Figure 1), we decompose \(N_{ij}\) into two independent Poisson distributed random variables: \(K_{i}\) and \(M_{ij}\). The expected value of \(K_{i}\) is \(c\mu\) and the expected value of \(M_{ij}\) is \((1-c)\mu\), where \(0\leq c\leq 1\) and \(\mu>0\) are constant parameters. Therefore, the number of individuals at each site, \(N_{ij}\), follows an identical Poisson distribution with mean \(\mu\), representing the abundance parameter over sites (per each visit). However, the numbers of individuals from multiple visits at a site, \(N_{ij},j=1,\cdots,T\), are not independent as they share a common variable \(K_{i}\). The community parameter \(c\) characterizes the degree of dependence between visits, as it equals the correlation between the counts \(N_{ij}\) of any two visits. Note that \(M_{ij}\) and \(K_{i}\) are degenerate to \(0\) when \(c=1\) and \(c=0\), respectively.
To model the data observation process, we assume that each individual is independently detectable with a probability of detection \(r\). If the species is detected at site \(i\) during visit \(j\), let \(Y_{ij}=1\), otherwise \(Y_{ij}=0\). We denote the probability of detection at site \(i\) during visit \(j\) as \(P(Y_{ij}=1)=p_{ij}\), then \(p_{ij}=1-(1-r)^{N_{ij}}\) when \(N_{ij}\) is known. This forms an N\({}_{c}-\)mixture model.
It is clear that \(Y_{ij},j=1,\cdots,T\), are exchangeable variables, though not independent unless \(c=0\). Let \(Y_{i}=\sum_{j=1}^{T}Y_{ij}\) be the frequency of occurrence at site \(i\). In Web Appendix A, we show that the probability function of \(Y_{i}\) under the N\({}_{c}-\)mixture model is given by
\[f(y_{i};\boldsymbol{\theta})=\binom{T}{y_{i}}\exp(-d\mu rT-c\mu)\sum_{k=0}^{y _{i}}\binom{y_{i}}{k}(-1)^{k}\exp\left\{c\mu(1-r)^{T-y_{i}+k}+d\mu r(y_{i}-k) \right\}, \tag{1}\]
where \(\boldsymbol{\theta}=(\mu,r,c)\) is the vector of model parameters and \(d=1-c\). Equation (1) is also referred to as the N\({}_{c}-\)mixture model.
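For numerical work, equation (1) can be evaluated directly; the following is a minimal sketch in Python (the function name and the sanity check are our own illustration).

```python
from math import comb, exp

def ncmix_pmf(y, T, mu, r, c):
    # P(Y_i = y) under the N_c-mixture model, equation (1)
    d = 1 - c
    s = sum((-1)**k * comb(y, k)
            * exp(c * mu * (1 - r)**(T - y + k) + d * mu * r * (y - k))
            for k in range(y + 1))
    return comb(T, y) * exp(-d * mu * r * T - c * mu) * s

# sanity check: probabilities over y = 0, ..., T sum to one
print(sum(ncmix_pmf(y, 3, 2.0, 0.4, 0.5) for y in range(4)))  # ~1.0
```

Setting \(c=1\) or \(c=0\) in this function reproduces the reduced models (2) and (3) below.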
When the community parameter \(c=1\), the model in equation (1) simplifies to
\[f(y_{i};\boldsymbol{\theta}_{1})=\binom{T}{y_{i}}\exp(-\mu)\sum_{k=0}^{y_{i}} \binom{y_{i}}{k}(-1)^{k}\exp\left\{\mu(1-r)^{T-y_{i}+k}\right\}, \tag{2}\]
where \(\boldsymbol{\theta}_{1}=\boldsymbol{\theta}\) with \(c=1\). This is the probability function of the N\(-\)mixture model (Royle and Nichols, 2003), but the explicit form (2) is firstly given in Haines (2016a).
Similarly, when the community parameter \(c=0\), the model (1) reduces to
\[f(y_{i};\boldsymbol{\theta}_{0})=\binom{T}{y_{i}}\left\{\exp(-\mu r)\right\}^ {T-y_{i}}\left\{1-\exp(-\mu r)\right\}^{y_{i}}, \tag{3}\]
where \(\boldsymbol{\theta}_{0}=\boldsymbol{\theta}\) with \(c=0\). The reduced model (3) is a binomial distribution denoted as \(Y_{i}\sim\text{Bino}(T,p)\), where \(p=1-\exp(-\mu r)\). We also note that the binary variables \(Y_{ij}\) are independently and identically distributed with a mean of \(E\{1-(1-r)^{N_{ij}}\}\) when \(c=0\). A calculation using the Poisson moment generating function shows that
\(E\{1-(1-r)^{N_{ij}}\}=1-\exp(-\mu r)=p\), which leads to the same result as (3). However, the model parameters \(\mu\) and \(r\) in (3) cannot be separated, making the model unidentifiable, and only their product \(\mu\times r\) is identifiable.
When the community parameter \(0<c<1\), the N\({}_{c}-\)mixture model (1) is identifiable for \(T\geq 3\). The log-likelihood function of the parameter vector \(\boldsymbol{\theta}\) is given by \(\ell(\boldsymbol{\theta})=\sum_{i=1}^{n}\log\{f(y_{i};\boldsymbol{\theta})\}\), and the maximum likelihood estimation is straightforward. For later use, the likelihood function can be represented in the framework of the multinomial model. Let \(m_{j}=\sum_{i=1}^{n}\mathbf{I}(y_{i}=j)\) for \(j=0,\cdots,T\), where \(\mathbf{I}(\cdot)\) is the indicator function. These statistics \(m_{j}\) can be viewed as a result of the multinomial model with cell probabilities \(f(j;\boldsymbol{\theta})\). Therefore, we can write \(\ell(\boldsymbol{\theta})=\sum_{j=0}^{T}m_{j}\log\left\{f(j;\boldsymbol{\theta })\right\}\), and the score function for \(\boldsymbol{\theta}\) is
\[S(\boldsymbol{\theta})=\sum_{j=0}^{T}\frac{\partial f(j;\boldsymbol{\theta})}{ \partial\boldsymbol{\theta}}\left\{\frac{m_{j}-nf(j;\boldsymbol{\theta})}{f( j;\boldsymbol{\theta})}\right\}. \tag{4}\]
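Given this multinomial representation, maximum likelihood estimation reduces to a three-parameter numerical optimization. A minimal sketch (reusing ncmix_pmf from the previous snippet; the unconstrained reparameterization and the toy counts are our own choices) is:

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, m, T):
    # params = (log mu, logit r, logit c) keeps mu > 0 and r, c in (0, 1)
    mu = np.exp(params[0])
    r = 1.0 / (1.0 + np.exp(-params[1]))
    c = 1.0 / (1.0 + np.exp(-params[2]))
    f = np.array([ncmix_pmf(j, T, mu, r, c) for j in range(T + 1)])
    return -np.sum(m * np.log(f))

m = np.array([40, 25, 20, 15])  # hypothetical m_j for T = 3 visits
fit = minimize(negloglik, x0=np.zeros(3), args=(m, 3), method="Nelder-Mead")
mu_hat = np.exp(fit.x[0])
r_hat, c_hat = 1 / (1 + np.exp(-fit.x[1])), 1 / (1 + np.exp(-fit.x[2]))
```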
### Occupancy rate
Royle and Nichols (2003) define the occupancy rate \(\psi\) as a derived parameter under the N\(-\)mixture model. Specifically, \(\psi=P(K_{i}>0)=1-\exp(-\mu)\) in terms of our notations. Under the N\({}_{c}-\)mixture framework, the occupancy rate per each visit can be defined as \(P(N_{ij}>0)=1-\exp(-\mu)\). Alternatively, the rate can be defined as \(1-P(N_{ij}=0,\ \forall\ 1\leq j\leq T)\), where if some \(N_{ij}>0\), the site \(i\) is considered occupied by the species. In this way, the occupancy rate is \(1-\exp\left\{-\mu-(T-1)d\mu\right\}\), which depends on the number of visits \(T\) and converges to \(1\) when \(T\) increases infinitely. Both definitions, therefore, differ from the current concept of site occupancy. The problem is that the number of individuals at site \(i\) may vary from visit to visit because \(M_{ij},j=1,\ldots,T\), are random in the N\({}_{c}-\)mixture model. Therefore, rather than directly defining occupancy, we would like to include a zero-inflated parameter to determine site occupancy under the N\({}_{c}-\)mixture model. This extension will be addressed in Section 3.
### Behavior of the N\(-\)mixture model estimators
In this subsection, we examine the behavior of N\(-\)mixture model estimators for \(\mu\) and \(r\) when the number of individuals at a site during multiple visits is not a fixed constant (i.e., \(c<1\)).
First, we examine the scenario where the community parameter \(c\) is known and \(T=2\) (double-visit). In this scenario, we use the notation \(\tilde{\mu}_{c}\) and \(\tilde{r}_{c}\) to represent the (restricted) maximum likelihood estimators (MLEs) of \(\mu\) and \(r\), respectively. As shown in Web Lemma 1 of Web Appendix A, the MLEs have closed forms when \(T=2\), which are given by \(\tilde{\mu}_{c}=cz_{1}^{2}/(2z_{1}+z_{2})\) and \(\tilde{r}_{c}=(2z_{1}+z_{2})/(cz_{1})\), where \(z_{1}=\log\left\{2n/(2m_{0}+m_{1})\right\}\) and \(z_{2}=\log(m_{0}/n)\). These closed forms allow us to easily determine the limits of the MLEs of the N\(-\)mixture model, \(\tilde{\mu}_{1}\) and \(\tilde{r}_{1}\), as given in the following proposition.
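These closed forms are easy to verify by simulation from the N\({}_{c}-\)mixture data-generating process; the sketch below (our own illustration) also exhibits the limits stated in Proposition 1 below when \(c\) is misspecified as \(1\).

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate_Y(n, T, mu, r, c):
    # N_ij = K_i + M_ij with K_i ~ Poisson(c mu), M_ij ~ Poisson((1 - c) mu)
    K = rng.poisson(c * mu, size=n)
    M = rng.poisson((1 - c) * mu, size=(n, T))
    N = K[:, None] + M
    det = rng.binomial(1, 1 - (1 - r)**N)  # per-visit detection indicators
    return det.sum(axis=1)                 # Y_i, detections out of T visits

def restricted_mle_T2(y, c):
    # closed-form restricted MLEs for T = 2 (Web Lemma 1)
    n = len(y)
    m0, m1 = np.sum(y == 0), np.sum(y == 1)
    z1 = np.log(2 * n / (2 * m0 + m1))
    z2 = np.log(m0 / n)
    return c * z1**2 / (2 * z1 + z2), (2 * z1 + z2) / (c * z1)

y = simulate_Y(200_000, 2, mu=2.0, r=0.3, c=0.5)
print(restricted_mle_T2(y, 0.5))  # close to (mu, r) = (2.0, 0.3)
print(restricted_mle_T2(y, 1.0))  # close to (mu/c, c r) = (4.0, 0.15)
```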
**Proposition 1**.: _Under the N\({}_{c}-\)mixture model with a double-visit survey, as the number of sites increases to infinity, the estimators \(\tilde{\mu}_{1}\) and \(\tilde{r}_{1}\) converge to \(\mu/c\) and \(cr\) respectively with probability one, for all \(0<c\leq 1\)._
Proposition 1 delivers insights into the behavior of N\(-\)mixture model estimators when the community parameter is less than \(1\). The results show that the MLEs are consistent when \(c=1\), which is a common property of the maximum likelihood approach. Additionally, the results indicate that the N\(-\)mixture model overestimates the abundance parameter \(\mu\), and the bias increases as the community parameter \(c\) decreases. In contrast, the N\(-\)mixture model exhibits the opposite bias behavior for the detection probability \(r\). Despite these biases, the estimator \(\tilde{\mu}_{1}\times\tilde{r}_{1}\) can consistently estimate \(\mu\times r\) under the framework of the N\({}_{c}-\)mixture model, which is noteworthy.
We initially believed that the results of Proposition 1 would hold for surveys with more than two visits (\(T>2\)), but further investigation revealed that this is only partially true. The correct part is that there are moment estimators of the N\(-\)mixture model that exhibit similar behaviors to those described in Proposition 1. To demonstrate this, we derived the moments of \(Y_{i}\) under the N\(-\)mixture model and defined the resulting estimators as \(\tilde{\mu}_{\rm 1M}\) and \(\tilde{r}_{\rm 1M}\). The following proposition summarizes the results of this analysis.
**Proposition 2**.: _The method of moment estimators of the N\(-\)mixture model, \(\tilde{\mu}_{\rm 1M}\) and \(\tilde{r}_{\rm 1M}\), are obtained by solving the equations_
\[\bar{Y}=Tp\ \ \mbox{and}\ \ \overline{Y^{2}}=Tp+T(T-1)\{2p-1+(1-p)^{2-r}\},\]

_where \(\bar{Y}\) is the sample average and \(\overline{Y^{2}}=\sum_{i=1}^{n}Y_{i}^{2}/n\) is the sample second moment. Furthermore, if the N\({}_{c}-\)mixture model is true, as the number of sites increases to infinity, the estimators \(\tilde{\mu}_{\rm 1M}\) and \(\tilde{r}_{\rm 1M}\) converge to \(\mu/c\) and \(cr\) respectively with probability one for all \(0<c\leq 1\)._
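The two moment equations can in fact be inverted in closed form: the first gives \(p\), the second then gives \(r\), and \(\mu\) follows from \(p=1-\exp(-\mu r)\). A sketch (our own naming; the intermediate quantity must lie in \((0,1)\) for a real solution):

```python
import numpy as np

def nmix_moment_estimators(y, T):
    # Method-of-moments estimators of Proposition 2, y = (Y_1, ..., Y_n).
    y = np.asarray(y, dtype=float)
    p = y.mean() / T                 # from  E[Y] = T p
    # Rearranging the second equation:  (1-p)^(2-r) = target
    target = ((y**2).mean() - T * p) / (T * (T - 1)) - 2 * p + 1
    r = 2.0 - np.log(target) / np.log(1.0 - p)
    mu = -np.log(1.0 - p) / r        # since p = 1 - exp(-mu * r)
    return mu, r
```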
In the case of multiple-visit surveys, our simulation study (Section 4) found that the limit of the MLE of the N\(-\)mixture model, \(\tilde{\mu}_{1}\), exhibits a similar pattern to \(\mu/c\) when \(0<c<1\). However, there is a discrepancy between them, and we can only determine the behavior when the community parameter \(c\) is close to 0.
**Proposition 3**.: _Under the N\({}_{c}-\)mixture model with \(c\) approaching zero, as the number of sites \(n\) increases to infinity, the estimators \(\tilde{\mu}_{1}\) and \(\tilde{r}_{1}\) converge such that \(\tilde{\mu}_{1}\to\infty\), \(\tilde{r}_{1}\to 0\), and \(\tilde{\mu}_{1}\tilde{r}_{1}\to\mu r\) with probability one._
Based on the results presented, we suggest that when \(c<1\), the MLE of the N\(-\)mixture model tends to overestimate the parameter \(\mu\) and underestimate the parameter \(r\), and the bias increases as \(c\) moves further away from 1. However, the estimator \(\tilde{\mu}_{1}\times\tilde{r}_{1}\) may still be able to consistently estimate \(\mu\times r\) at certain ranges of \(c\), such as when \(c\) is close to 0, within the framework of the N\({}_{c}-\)mixture model.
## 3 Zero-inflated N\({}_{c}-\)mixture model
To account for the species occupancy, we extend the model (1) by incorporating a zero-inflation component. Following MacKenzie _et al._ (2002), let \(\psi\) be the site occupancy probability, then the probability of a zero count (\(Y_{i}=0\)) is \((1-\psi)+\psi f(0;\boldsymbol{\theta})\). The likelihood function thus has an additional parameter \(\psi\) so that we may write
\[L(\boldsymbol{\theta},\psi)=\prod_{i=1}^{n}\left\{(1-\psi)\mathbf{I}(y_{i}=0)+ \psi f(y_{i};\boldsymbol{\theta})\right\}. \tag{5}\]
We refer the model (5) as a zero-inflated N\({}_{c}-\)mixture (ZIN\({}_{c}\)) model.
When \(c=1\), the model (5) becomes a zero-inflated N\(-\)mixture (ZIN) model, as previously described in Haines (2016a). In the case of \(c=0\), it simplifies to a zero-inflated binomial
(ZIB) model with the detection probability \(p=1-\exp(-\mu r)\), which is also known as the first occupancy model (MacKenzie _et al._, 2002). Thus, the ZIN\({}_{c}\) model unifies both the ZIB and ZIN models into a single framework, with the special cases of \(c=0\) and \(c=1\) corresponding to ZIB and ZIN, respectively.
### Estimation
When the community parameter \(0<c<1\), the ZIN\({}_{c}\) model is identifiable for \(T\geq 4\). By defining \(p_{0}(\boldsymbol{\theta},\psi)=(1-\psi)+\psi f(0;\boldsymbol{\theta})\) and \(f(+;\boldsymbol{\theta})=1-f(0;\boldsymbol{\theta})\), the likelihood function can be written as
\[L(\boldsymbol{\theta},\psi)=L_{0}(\boldsymbol{\theta},\psi)\times L_{1}( \boldsymbol{\theta}),\]
where \(L_{0}(\boldsymbol{\theta},\psi)=\{p_{0}(\boldsymbol{\theta},\psi)\}^{m_{0}}\{1-p_{0}(\boldsymbol{\theta},\psi)\}^{n-m_{0}}\) and \(L_{1}(\boldsymbol{\theta})=\prod\limits_{j=1}^{T}\left\{f(j;\boldsymbol{\theta})/f(+;\boldsymbol{\theta})\right\}^{m_{j}}.\) Note that \(L_{0}\) reflects the probability function of \(\mathbf{I}(Y_{i}>0)\) while \(L_{1}\) is the conditional likelihood based on \(Y_{i}>0\). As the conditional likelihood function is independent of the occupancy probability \(\psi\), it allows us to estimate \(\boldsymbol{\theta}\) without the confounding of \(\psi\) (Karavarsamis and Huggins, 2020). Specifically, we can find the conditional MLE of \(\boldsymbol{\theta}\), denoted as \(\hat{\boldsymbol{\theta}}\), by solving the score function of \(L_{1}(\boldsymbol{\theta})\). Additionally, by maximizing the profile likelihood \(L_{0}(\hat{\boldsymbol{\theta}},\psi)\) in \(\psi\), we find \(\hat{\psi}=(n-m_{0})/\{nf(+;\hat{\boldsymbol{\theta}})\}\). As shown in Web Appendix A, Web Lemma 2 confirms that the estimators \(\hat{\boldsymbol{\theta}}\) and \(\hat{\psi}\) resulting from this method are also the usual MLEs based on (5). The asymptotic variances of \(\hat{\boldsymbol{\theta}}\) and \(\hat{\psi}\) can be derived in the usual way.
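The profile step for \(\hat{\psi}\) is short enough to record here: since \(1-p_{0}(\hat{\boldsymbol{\theta}},\psi)=\psi f(+;\hat{\boldsymbol{\theta}})\), maximizing the binomial likelihood \(L_{0}(\hat{\boldsymbol{\theta}},\psi)\) in \(\psi\) sets \(1-p_{0}\) equal to its empirical frequency,

\[1-p_{0}(\hat{\boldsymbol{\theta}},\hat{\psi})=\frac{n-m_{0}}{n}\quad\Longrightarrow\quad\hat{\psi}=\frac{n-m_{0}}{n\,f(+;\hat{\boldsymbol{\theta}})}.\]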
In the zero-inflated type models, the main focus is on the estimation of the occupancy probability. Let \(\tilde{\psi}_{c}\) be the MLE of \(\psi\) for the ZIN\({}_{c}\) model, given that the community parameter \(c\) is known. We next examine the behavior of \(\tilde{\psi}_{0}\) and \(\tilde{\psi}_{1}\), which correspond to the occupancy estimators for the ZIB and ZIN models, respectively.
### Behaviors of \(\tilde{\psi}_{0}\) and \(\tilde{\psi}_{1}\)
Like the N\({}_{c}-\)mixture model, the ZIN\({}_{c}\) model also has an identifiability issue for \(\mu\) and \(r\) when \(c=0\). As a result, under the ZIB model, only the parameter \(p\) (i.e., \(1-\exp(-\mu r)\)) and the occupancy probability \(\psi\) can be estimated. In practice, to fit ZIB models, it is common to set \(r=1\) to estimate \(\mu\times r\) with the resulting abundance estimator (\(\tilde{\mu}_{0}\)).
**Proposition 4**.: _Under the ZIN\({}_{c}\) model with \(c>0\), the ZIB occupancy estimator \(\tilde{\psi}_{0}\) shows an underestimation of \(\psi\) with probability one as \(n\) increases to infinity. A linear approximation of this underestimation can be represented as \(\tilde{\psi}_{0}\approx\frac{p}{p+\Delta}\psi\) where_
\[\Delta=\frac{\{1-(1-p)^{T}\}\left\{f(0;\boldsymbol{\theta})-(1-p)^{T}\right\} }{\left[\frac{1}{p}\{1-(1-p)^{T}\}-T(1-p)^{T-1}\right]\left\{1-f(0;\boldsymbol {\theta})\right\}}, \tag{6}\]
_and \(\Delta\) increases as \(c\) increases, when \(c=0\) then \(\Delta=0\)._
We notice that as either \(T\) or \(\mu\times r\) increases, the value of \(\Delta\) decreases to zero. This result is expected, as when there are more visits or when the species abundance is high, the observed occupancy approaches \(\psi\). The linear approximation bias provides a reasonable representation of the trend of \(\tilde{\psi}_{0}\) in various aspects, depicting that as \(c\) moves away from \(0\) (or as the correlation between visits increases), the underestimation becomes more significant.
**Remark 1.** As a direct consequence of Proposition 4, we can also see that the corresponding abundance estimator \(\tilde{\mu}_{0}\) increases as \(c\) increases, and that \(\tilde{\mu}_{0}\) at \(c=0\) is a consistent estimator of the product of species abundance and detection probability (\(\mu\times r\)).
Estimators of the ZIN model tend to have bias when the community parameter \(c\) is less than 1. However, the behavior of these biases is complex. For example, the bias of the occupancy estimator \(\tilde{\psi}_{1}\) does not vary monotonically with decreasing \(c\). Despite this, when \(c\approx 0\), the estimators of the ZIN model (except \(\tilde{\psi}_{1}\)) behave similarly to those of the N\(-\)mixture model, as shown in Proposition 3. Interestingly, \(\tilde{\psi}_{1}\) can consistently estimate \(\psi\) at \(c=0\), as shown in the next proposition. For clarity, if there is no confusion, \(\tilde{\mu}_{1}\) and \(\tilde{r}_{1}\) in this proposition also refer to the MLE of the ZIN model (or the restricted MLE of the ZIN\({}_{c}\) model).
**Proposition 5**.: _Under the N\({}_{c}-\)mixture model with \(c\) approaching zero, as the number of sites \(n\) increases, the estimators \(\tilde{\mu}_{1}\) and \(\tilde{r}_{1}\) converge such that \(\tilde{\mu}_{1}\to\infty\), \(\tilde{r}_{1}\to 0\), and \(\tilde{\mu}_{1}\tilde{r}_{1}\to\mu r\) with probability one. Additionally, \(\tilde{\psi}_{1}\) is a consistent estimator for \(\psi\) when \(c=0\)._
The estimator \(\tilde{\psi}_{1}\) is also a consistent estimator under the ZIN (or ZIN\({}_{c}\) with \(c=1\)) model. Therefore, Proposition 5 shows that \(\tilde{\psi}_{1}\) behaves like a bridge with both ends (at \(c=0\) and 1) at the same level; however, it is not clear whether the bridge deck is always above or below this level, or if it varies in different sections. We will further investigate this behavior through a simulation study.
### Tests for ZIN and ZIB
The null hypothesis of the ZIN or ZIB model can be tested within the framework of the ZIN\({}_{c}\) model, as it is equivalent to testing the values \(\{c=1\}\) or \(\{c=0\}\). A simple method to justify the hypothesis is to use a Wald-type confidence interval based on the estimate of \(c\). A more formal approach is to use the likelihood ratio test, as the null hypothesis is a submodel of the full ZIN\({}_{c}\) model. However, it is important to note that the asymptotic distribution of the likelihood ratio test under the null hypothesis is a mixture of 0 and chi-square distributions, rather than the usual chi-square distribution (Self and Liang, 1987). In practice, we also suggest generating bootstrap samples under the null hypothesis to find the \(p-\)value. This is similar to the conclusion of Dail and Madsen (2011).
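A schematic of the suggested bootstrap test is given below; `fit_null`, `fit_full`, and `simulate_null` are placeholders for the model-specific likelihood maximization and data-generation routines, not functions from any particular package.

```python
import numpy as np

def bootstrap_lrt_pvalue(y_obs, fit_null, fit_full, simulate_null, B=1000, seed=0):
    # Parametric-bootstrap p-value for H0: c = 0 (or c = 1).
    # fit_null / fit_full return maximized log-likelihoods; simulate_null(rng)
    # draws one data set from the null model fitted to y_obs.
    rng = np.random.default_rng(seed)
    lrt_obs = 2.0 * (fit_full(y_obs) - fit_null(y_obs))
    exceed = 0
    for _ in range(B):
        y_b = simulate_null(rng)
        exceed += 2.0 * (fit_full(y_b) - fit_null(y_b)) >= lrt_obs
    return exceed / B
```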
## 4 Simulation study
We conducted simulations to evaluate the performance of proposed models and estimators. We considered two scenarios: the N\({}_{c}-\)mixture model and the ZIN\({}_{c}\) model. In the first scenario, we computed the maximum likelihood estimators for both the N\(-\)mixture model and N\({}_{c}-\)mixture model. In the second scenario, we calculated the maximum likelihood estimators for the ZIB, ZIN, and ZIN\({}_{c}\) models. All estimates were calculated by using the optim function in the R software (R Core Team, 2022).
We specified the true parameter values as \(\mu=1,2\), \(r=0.25,0.5\), and \(c=0,0.05,\ldots,1\) for the simulations. For both scenarios, the number of sites was set at \(n=200,500,1000\), and the number of visits was set at \(T=5,7,10\). In the second scenario, we also set the occupancy probability to \(\psi=0.7\). We generated 1,000 data sets for each parameter setting and calculated the estimates and associated standard error estimates for each data set.
To account for outliers in some of the simulated data sets, we present the median of the parameter estimates (Med), the median of the estimated standard error (Med.se), and the median absolute deviation (MAD) scaled to align with the normal distribution. Additionally, we report the coverage percentage (CP) of the nominal 95% Wald-type confidence intervals. We note that in some instances, the numerical methods utilized for estimating the parameters for each model did not converge, with a higher frequency of non-convergence observed for the ZIN model and a lower frequency for the ZIN\({}_{c}\) model. However, in most cases, the percentage of failures was minor and not reported.
### Simulation study A: N\({}_{c}-\)mixture model
The median estimates of \(\mu\) for \(T=7\) and \(n=500\) are displayed in Figure 2. A comprehensive examination of the simulation results for \(c=(0.25,0.5,0.75)\) can be found in Web Tables 1-6 in Web Appendix B.
Figure 2 illustrates the behavior of the N\(-\)mixture abundance estimator \(\tilde{\mu}_{1}\) as a function of the community parameter \(c\). The figure shows that the estimator exhibits a positive bias for all values of \(c\) less than 1. This bias follows a monotonically decreasing pattern and only approaches the true value when \(c\) equals 1. When \(c\) is close to 0, the bias can be substantial but has been removed from the figure for ease of visualization. Additionally, we observe that \(\tilde{\mu}_{1}\) is greater than \(\mu/c\) for all values of \(c\) less than 1, although the difference has not been explicitly explored. In a separate simulation study (not reported), we calculated the moment estimator for the N\(-\)mixture model and found that its pattern closely resembled the reference curve \(\mu/c\). These simulation results agree with the findings outlined in Propositions 1-3. Corresponding conclusions can also be drawn from the results of the estimator \(\tilde{r}_{1}\) shown in Web Figure 1 of Web Appendix B. The MLE of the N\({}_{c}-\)mixture model can consistently estimate \(\mu\), but it also frequently exhibits bias around \(c=0\). This is mainly because the parameters \(\mu\) and \(r\) are almost unidentifiable when \(c\approx 0\). The bias is more pronounced when \(\mu\) is large and \(r\) is small, but it becomes smaller when \(n\) or \(T\) increases; see Web Table 6 (for \(n=1000\) and \(T=10\) cases).

Figure 2: Median estimates of abundance \(\mu\) for N\(-\)mixture and N\({}_{c}-\)mixture models (Simulation study A) as a function of the community parameter \(c\), with the number of visits \(T=7\) and sites \(n=500\). The sub-graphs correspond to four combinations of \(\mu=1,2\) and individual detection probabilities \(r=0.25,0.5\). Data points with high values were removed for clarity.
In Web Tables 1-6, it can be seen that the N\(-\)mixture abundance estimator has a relative bias ranging from 40% to 400% when \(c\) varies from 0.75 to 0.25. The bias is more pronounced when \(T\) or \(r\) increases but less severe when \(\mu\) increases. On the other hand, the N\({}_{c}-\)mixture model shows nearly unbiased estimates in all cases, except when \(n=200,\ T=5\), and \(r=0.25\), where the relative bias can reach up to 7.5%-16% when \(\mu\) varies from 2 to 1; see Web Tables 1 and 4.
In terms of mean absolute deviation (MAD), both models show a decrease in variation as \(c\) increases, with the N\(-\)mixture model showing a much steeper decrease compared to the N\({}_{c}-\)mixture model. Specifically, when \(c=0.25\), the MAD of \(\tilde{\mu}_{1}\) can be 2 to 4 times that of \(\tilde{\mu}\), but the former is usually smaller than the latter when \(c=0.75\). The asymptotic standard error estimates, as measured by Med.se, generally match the results of the corresponding MAD, making them reasonably reliable for the scenarios considered. The Wald-type confidence interval estimator of the N\({}_{c}-\)mixture model performs well when data information is sufficient, with a close match to the nominal 95% confidence level at \(n\geq 500\) and \(T\geq 7\). However, in some cases with small \(r\) and \(c\), the coverage probability (CP) can be lower than 80% (as seen in Web Tables 1, 4, and 5), indicating an unsatisfactory performance. The N\(-\)mixture model estimator often reaches 0 for the coverage probability due to the severe bias problem of \(\tilde{\mu}_{1}\) when \(c\leq 0.75\).
The results from Web Tables 1-6 on the detection probability parameter \(r\) are similar to those for \(\mu\), with the only difference being that the N\(-\)mixture model estimator's bias is in the opposite direction. Finally, it is found that the median of the product \(\tilde{\mu}_{1}\times\tilde{r}_{1}\) presents nearly unbiased results for estimating \(\mu\times r\) in all cases. Note that this property is only proven for the range \(c\approx 0\) (Proposition 3), but the simulation results suggest that it holds over a wider range of \(c\).
### Simulation study B: ZIN\({}_{c}\) model
In Figures 3 and 4, we present the median estimate results for the parameters \(\mu\) and \(\psi\) when \(T=7\) and \(n=500\). The detailed simulation results for \(c=(0.25,0.5,0.75)\) can be found in Web Tables 7-12 of the Web Appendix B. In the ZIB model, we have fixed the value of \(r=1\). This is because the parameter \(\mu\times r\) is non-separable in this case.
In Figure 3, we can see that the ZIB estimator \(\tilde{\mu}_{0}\) consistently underestimates \(\mu\). In fact, \(\tilde{\mu}_{0}\) is approximately equal to \(\mu\times r\) when \(c\) is close to 0 and increases as \(c\) increases (as stated in Remark 1). The ZIN estimator \(\tilde{\mu}_{1}\) has a similar trend as the N\(-\)mixture model when \(c\) is close to 0, but it falls off quickly, rises slowly, and becomes consistent with \(\mu\) at \(c=1\). In general, \(\tilde{\mu}_{1}\) mainly exhibits positive bias but occasionally also shows negative bias at some \(0<c<1\). The ZIN\({}_{c}\) estimator \(\hat{\mu}\) typically exhibits some bias at both ends of the range of \(c\), which can be very large at \(c\approx 0\) when \(r\) is small. As expected, increasing \(n\) and \(T\) can
mitigate bias issues of the maximum likelihood estimator.
In Figure 4, the ZIB estimator \(\tilde{\psi}_{0}\) was found to underestimate \(\psi\) when \(c>0\), with a greater bias observed when \(\mu=1\) compared to \(\mu=2\). The relative bias reached a maximum of \(-35\%\) when \(c=1\) and \(\mu=1\). The approximate formula of Proposition 4 was found to be consistent with the behavior trend of \(\tilde{\psi}_{0}\), although it is not shown in the figure. The ZIN occupancy estimator \(\tilde{\psi}_{1}\) was found to fit well with \(\psi\) at both ends of the \(c\) range, as stated in Proposition 5. A positive bias was observed around \(c\approx 0\) and a negative bias around \(c\approx 1\). The partial derivative \(\frac{\partial}{\partial c}\tilde{\psi}_{1}\) was proven to be positive when \(c\approx 0\), indicating overestimation of \(\psi\) in that region. However, at \(c\approx 1\), the partial derivative was found to be positive in most situations, with some negative signs observed in some instances, such as when \(\mu=2\) and \(r=0.15\). In this case, the plot of \(\tilde{\psi}_{1}\) against \(c\) exhibited behavior similar to an arch bridge, with only positive biases observed for all \(0<c<1\). The ZIN\({}_{c}\) estimator \(\hat{\psi}\) was found to be unbiased in general, except when \(c\approx 1\). The bias was found to be smaller when \(\mu\) or \(r\) increases.
Figure 3: Median estimates of abundance \(\mu\) for ZIB, ZIN\({}_{c}\), and ZIN models (Simulation study B) as a function of the community parameter \(c\), with the number of visits \(T=7\) and sites \(n=500\). The sub-graphs correspond to four combinations of \(\mu=1,2\) and individual detection probabilities \(r=0.25,0.5\). Data points with high values were removed for clarity.

Next, we present a more detailed summary of the performance of the three occupancy estimators based on the results in Web Tables 7-12. The upper half of Web Table 7 (\(\mu\times r=0.25\) and \(T=5\)) shows that the ZIB occupancy estimator \(\tilde{\psi}_{0}\) has a relative bias of \(-11\%\) at \(c=0.25\), which becomes more pronounced as \(c\) increases, reaching a maximum of \(-31\%\) at \(c=0.75\). However, increasing \(T\) or \(\mu\times r\) can reduce the bias of \(\tilde{\psi}_{0}\) as seen in Web Tables 7-12, which is consistent with Proposition 4. In Web Tables 7-9 (\(\mu=1\)), the ZIN occupancy estimator \(\tilde{\psi}_{1}\) has a relative bias of around 28%-40% at \(c=0.25\). Specifically, the ZIN model often estimated \(\psi\) as one at \(c=0.25\) when \(T\geq 7\) and \(r=0.5\) (Web Tables 8-9). The bias of \(\tilde{\psi}_{1}\) decreases as \(c\) increases, showing a small negative bias at \(c=0.75\). When \(\mu\) is increased to \(\mu=2\), Web Tables 10-12 show that \(\tilde{\psi}_{1}\) overestimates \(\psi\) in all cases, with the most significant bias around 10%-13% at \(c=0.5\), and small or even negligible bias at \(c=0.25,0.75\). Web Tables 7-12 also reveal that the bias of the ZIN\({}_{c}\) occupancy estimator \(\hat{\psi}\) is generally close to 0, except for a few cases of \(r=0.25\), \(T=5\), and \(c=0.75\) (Web Table 7).
The ZIB occupancy estimator has the lowest MAD, indicating that it is the most stable of the three methods. When \(\mu=1\), the MAD of the ZIN estimator decreases with increasing \(c\); however, when \(\mu=2\), the MAD is more prominent at \(c=0.5\) due to the significant bias of the ZIN estimator at this point. In contrast, the MAD of the ZIN\({}_{c}\) estimator increases with increasing \(c\) values; when \(\mu=1\), the MAD at \(c=0.75\) can be 1.5 to 3 times the MAD at \(c=0.25\), but the increase is much smaller when \(\mu=2\). Generally, the stabilities of all three estimators improve with increasing values of \(n,\ T,\ r\), and \(\mu\).
Figure 4: Median estimates of occupancy rate \(\psi\) for ZIB, ZIN\({}_{c}\) and ZIN models (Simulation study B) plotted against the community parameter \(c\), with the number of visits \(T=7\) and sites \(n=500\). The sub-graphs correspond to four combinations of \(\mu=1,2\) and individual detection probabilities \(r=0.25,0.5\).

The Med.se of the ZIB occupancy estimator fits the corresponding MAD reasonably well, except for some negative biases observed at \(c=0.75\). However, the resulting interval estimator is only reliable when \(\mu=2\) and \(c=0.25\); in other cases, the corresponding CP may even reach zero, indicating very poor performance. For the ZIN occupancy estimator, Med.se often overestimates MAD, particularly with increasing \(c\), but this is less of an issue when \(r\) or \(\mu\) are increased. The resulting CP also shows many abnormal values, even when it appears to be close to the nominal level in some cases. For example, Web Tables 8-9 show that the CP can reach about 99% in the upper panels but may drop to zero in the bottom panels. As a result, the interval estimates for the ZIN model are not considered reliable. The ZIN\({}_{c}\) estimator, being a maximum likelihood estimate, can perform well on Med.se and CP measures when the data information is sufficient, but its performance is not guaranteed otherwise.
Lastly, we note that abundance estimators can be similarly affected by issues such as violation of model assumptions or a lack of sufficient data, which can result in substantial bias in some instances, such as when \(c=0.25\) and \(r=0.25\). In general, compared to abundance estimators, the corresponding occupancy estimators are relatively less affected by these issues and their performance is relatively robust.
## 5 Examples
### Example 1. Fisher data
The fisher (*Martes pennanti*) is a carnivorous mammal native to the boreal forests of North America. In 2000, a survey program using noninvasive methods was conducted to collect data on fisher species distribution in northern and central California (Zielinski _et al._, 2005; Royle and Dorazio, 2008). The data consist of multiple-visit occurrences at \(n=464\) sites, each visited \(T=8\) times; see Web Table 13 for the data. Across the eight visits, 400 sites recorded zero counts, resulting in a sample occupancy rate of \(64/464=13.8\%\). We analyzed the fisher data using the models presented in Sections 2 and 3. The results are summarized in the top panel of Table 1, where we also calculated the Akaike information criterion (AIC) value for each model to evaluate model fit.
The AIC values for the ZIN and ZIN\({}_{c}\) models were significantly lower than those of the N\(-\)mixture and N\({}_{c}-\)mixture models, respectively, indicating that including zero-inflated probabilities improves the fit of the model to the fisher data. Among the three zero-inflated models, the ZIN\({}_{c}\) model had the smallest AIC value. Furthermore, the ZIN\({}_{c}\) model estimated the community parameter \(c\) to be in the middle of \((0,1)\), and the associated Wald-type confidence interval (Web Table 14) provided strong evidence against the null hypotheses \(c=0\) and \(c=1\). The likelihood ratio statistics for the tests \(c=0\) and \(c=1\) were 40.21 and 8.23, respectively, and the \(p-\)values based on 10,000 bootstrap samples were 0 and 0.006, respectively, leading to the same conclusion of rejecting the ZIB and ZIN models. To sum up, the ZIN\({}_{c}\) model fits the fisher data better than the other models; however, the occupancy estimates for the three zero-inflated models are similar.
It is worth noting that the ZIN model estimated \(\psi\) to be 0.175, which appears larger than the estimates from the other models, but the ZIN model has an unoccupied probability of \((1-\psi)+\psi\exp(-\mu)\). Therefore, to compare occupancy rates with other models, we should use \(\psi\{1-\exp(-\mu)\}\) with an estimate of \(0.175\{1-\exp(-1.79)\}=0.146\) (called the occupied probability of the ZIN model), which is similar to the other estimates of \(\psi\). In contrast, the resulting abundance estimates are quite different, with ZIN reporting a higher value and ZIB having the lowest value. The standard error estimates for the three models also follow the same order, with the parameter estimates for ZIN showing the greatest estimated variation. This phenomenon is similar to what was observed in Web Table 8 of Simulation Study B.
### Example 2: Breeding Bird Survey (BBS) data
This example aims to understand how the community parameter \(c\) may vary among different species in data from the same survey. We consider the data used in Royle (2006), originally from the North American Bird Breeding Survey (BBS) program, which includes records of occurrence data for five species of birds - blue jay (*Cyanocitta cristata*), catbird (*Dumetella carolinensis*), common yellow-throat (*Geothlypis trichas*), tree swallow (*Tachycineta bicolor*), and song sparrow (*Melospiza melodia*) - from 50 locations with 11 repeated visits. The sample occupancy rates for the five species (in the above order) are 66%, 38%, 72%, 58%, and 52%; see Web Table 13 for the data.
\begin{table}
\begin{tabular}{l l l l l l} \hline
Model & \(\mu\) & \(r\) & \(c\) & \(\psi\) & AIC \\ \hline
& \multicolumn{5}{c}{**Fisher**} \\
ZIN\({}_{c}\) & 0.93 (0.21) & 0.49 (0.10) & 0.51 (0.13) & 0.154 (0.020) & 626.9 \\
ZIN & 1.79 (0.68) & 0.22 (0.05) & \(1^{*}\) & 0.175 (0.030) & 633.2 \\
ZIB & 0.51 (0.04) & \(1^{*}\) & \(0^{*}\) & 0.140 (0.016) & 663.1 \\
N\({}_{c}-\)mix. & 0.13 (0.02) & 0.46 (0.03) & 0.93 (0.02) & & 636.8 \\
N\(-\)mix. & 0.16 (0.02) & 0.37 (0.02) & \(1^{*}\) & 0.150 (0.017) & 650.6 \\ \hline
& \multicolumn{5}{c}{**Blue Jay**} \\
ZIN & 48.96 (427.97) & 0 (0.04) & \(1^{*}\) & 0.729 (0.091) & 170.1 \\
ZIB & 0.22 (0.03) & \(1^{*}\) & \(0^{*}\) & 0.723 (0.077) & 168.1 \\
N\(-\)mix. & 1.64 (0.56) & 0.09 (0.03) & \(1^{*}\) & 0.806 (0.109) & 169.5 \\ \hline
& \multicolumn{5}{c}{**Catbird**} \\
ZIN\({}_{c}\) & 0.60 (0.32) & 0.42 (0.21) & 0.19 (0.14) & 0.421 (0.079) & 139.1 \\
ZIN & 0.85 (1.31) & 0.16 (0.09) & \(1^{*}\) & 0.740 (0.691) & 137.8 \\
ZIB & 0.26 (0.04) & \(1^{*}\) & \(0^{*}\) & 0.403 (0.074) & 138.5 \\
N\({}_{c}-\)mix. & 0.52 (0.17) & 0.19 (0.06) & 0.97 (0.12) & & 137.9 \\
N\(-\)mix. & 0.55 (0.14) & 0.18 (0.04) & \(1^{*}\) & 0.421 (0.079) & 135.9 \\ \hline
& \multicolumn{5}{c}{**Common Yellow-Throat**} \\
ZIN\({}_{c}\) & 0.93 (0.23) & 0.47 (0.09) & 0.56 (0.10) & 0.775 (0.075) & 222.9 \\
ZIN & 2.06 (0.81) & 0.20 (0.05) & \(1^{*}\) & 0.851 (0.123) & 227.8 \\
ZIB & 0.49 (0.04) & \(1^{*}\) & \(0^{*}\) & 0.723 (0.064) & 254.5 \\
N\({}_{c}-\)mix. & 1.09 (0.38) & 0.31 (0.09) & 0.88 (0.15) & & 226.7 \\
N\(-\)mix. & 1.45 (0.25) & 0.23 (0.03) & \(1^{*}\) & 0.765 (0.059) & 226.8 \\ \hline
& \multicolumn{5}{c}{**Tree Swallow**} \\
ZIN\({}_{c}\) & 0.75 (0.22) & 0.45 (0.08) & 0.61 (0.12) & 0.682 (0.106) & 196.4 \\
ZIN & 1.81 (0.75) & 0.18 (0.05) & \(1^{*}\) & 0.726 (0.135) & 204.0 \\
ZIB & 0.40 (0.04) & \(1^{*}\) & \(0^{*}\) & 0.587 (0.071) & 228.0 \\
N\({}_{c}-\)mix. & 0.48 (0.11) & 0.46 (0.07) & 0.72 (0.08) & & 198.4 \\
N\(-\)mix. & 1.02 (0.19) & 0.22 (0.03) & \(1^{*}\) & 0.641 (0.069) & 203.9 \\ \hline
& \multicolumn{5}{c}{**Song Sparrow**} \\
ZIN\({}_{c}\) & 1.01 (0.59) & 0.37 (0.08) & 0.90 (0.09) & 0.678 (0.240) & 184.7 \\
ZIN & 2.23 (0.98) & 0.21 (0.06) & \(1^{*}\) & 0.573 (0.105) & 187.9 \\
ZIB & 0.54 (0.05) & \(1^{*}\) & \(0^{*}\) & 0.501 (0.072) & 210.5 \\
N\({}_{c}-\)mix. & 0.58 (0.13) & 0.42 (0.05) & 0.93 (0.04) & & 183.4 \\
N\(-\)mix. & 0.81 (0.16) & 0.31 (0.04) & \(1^{*}\) & 0.555 (0.071) & 190.5 \\ \hline
\end{tabular}
\end{table}
Table 1: Parameter estimates (standard errors in parentheses) and AIC values for each model fitted to the fisher data and the five bird species data of the BBS survey. Note that \({}^{*}\) indicates that the parameter value is a fixed constant without estimation. The occupancy rate \(\psi\) of the N\(-\)mixture model is a derived parameter defined as \(\psi=1-\exp(-\mu)\); there is no occupancy rate for the N\({}_{c}-\)mixture model (Section 2.2). For the blue jay data, the results for the ZIN\({}_{c}\) and N\({}_{c}-\)mixture models are omitted because they reduce to the ZIB and N\(-\)mixture models, respectively. The five bird species are sorted according to the estimate of \(c\) from the ZIN\({}_{c}\) model.
Results are reported in Table 1, where the species are sorted according to the estimated value of \(c\) in the ZIN\({}_{c}\) model. The first two species show small estimates of \(c\), with the ZIN\({}_{c}\) model even degenerating to the ZIB model for the blue jay data due to an estimate of \(c=0\). All the considered models showed comparable AIC values for both species, indicating a similar level of model fit. However, the parameter estimates of \(\mu,\ r,\) and \(c\) for each model produced conflicting results. We believe this is a problem of model identifiability, similar to the one described in Royle (2006) and Link (2003). Royle (2006) concluded that this problem is more pronounced in occupancy models when the probability of detection is low, which is the case for these two species.
The next two species, common yellow-throat and tree swallow, show intermediate estimates of \(c\), and neither 95% Wald-type confidence interval includes 0 or 1 (Web Table 14). The model comparison results here are similar to those in Example 1, particularly for the zero-inflated models. In terms of AIC, the ZIN\({}_{c}\) model performs better than the other models.
The last species, song sparrow, has a high estimate of \(c\) of 0.90, which is close to 1. In fact, the upper limit of its 95% confidence interval exceeds the upper bound of 1. In this case, the standard errors of \(\hat{\mu}\) and \(\hat{\psi}\) for the ZIN\({}_{c}\) model are relatively high, indicating that estimation uncertainty increases as \(c\) approaches 1. This phenomenon was also observed in Simulation Study B.
In Web Table 14, we also report bootstrap \(p-\)values for the likelihood ratio tests to the hypotheses of ZIB and ZIN models, which suggest significant evidence against the null hypotheses for the last three species of BBS data.
## 6 Discussion
The N\(-\)mixture model has been extended to allow for the number of individuals at a sample site to vary with each visit. In this extension, the number of individuals at a site per visit is treated as a latent variable that is decomposed into two components, with only one of these components being considered in the original N\(-\)mixture model. It should be noted that the extended N\({}_{c}-\)mixture model is only applicable to closed populations (single season) due to the assumption of constant \(\mu\). While it is possible to relax this assumption using regression models with the site and/or visit covariates, it is unclear whether this approach is effective for some multiseasonal data.
Unlike the N\(-\)mixture model, the occupancy of the N\({}_{c}-\)mixture model may not be well defined by the abundance parameter. To address this, we propose a zero-inflated N\({}_{c}-\)mixture model that uses a zero-inflated probability to define occupancy explicitly. This new model unifies the commonly used standard occupancy and zero-inflated N\(-\)mixture models into one framework. As a result, we discover interesting properties of both models, such as the use of standard occupancy estimators under the zero-inflated N\(-\)mixture model or vice versa.
Our extension models offer greater flexibility in fitting data, but they also increase the complexity of the model. These models may also experience issues such as numerical instability, parameter identifiability, and model identification problems, particularly when data is sparse, such as in cases of low detection probability, low abundance, few replicates of visits, or a small sample of sites. These issues have been observed in both simulation studies and data analysis. Indeed, these problems were initially raised in N\(-\)mixture models (Dennis _et al._, 2015; Barker _et al._, 2018; Link _et al._, 2018) as a caution against their use. Our extension models require an additional community parameter, which may exacerbate
these problems. However, these issues may be mitigated by incorporating additional capture-recapture data (Barker _et al._, 2018) or including covariates in the models (Link _et al._, 2018). Further research is necessary to improve the estimation in these contexts.
There are several potential avenues for extending this work, each of which would require further implementation and investigation.
* Using the negative binomial distribution to model abundance instead of the Poisson distribution. This distribution is often more appropriate for biological and ecological data analysis, but it also includes an additional aggregation parameter, increasing model complexity and estimation difficulty.
* Examining count data instead of occurrence data. This would require a likelihood function without an explicit form, making estimation computationally intensive, particularly for large datasets.
* Incorporating covariates into the models. Including covariates is important for real-world applications. This can be done by using a log link for abundance and a logit link for detection and occupancy probability. Incorporating species covariates for the community parameter in a joint species model may also be meaningful.
* Extending to time-to-detection data. Time-to-detection occupancy models have been developed recently, which also show some advantages over occurrence data models (Priyadarshani _et al._, 2022). Strebel _et al._ (2021) propose an N\(-\)mixture time-to-detection occupancy model that enables estimating the abundance without marking individuals. An analogous N\({}_{c}-\)mixture occupancy model for time-to-detection data is currently under development.
## Acknowledgements
This work was supported by the National Science and Technology Council of Taiwan.
|
2301.02057 | TextDescriptives: A Python package for calculating a large variety of
metrics from text | TextDescriptives is a Python package for calculating a large variety of
metrics from text. It is built on top of spaCy and can be easily integrated
into existing workflows. The package has already been used for analysing the
linguistic stability of clinical texts, creating features for predicting
neuropsychiatric conditions, and analysing linguistic goals of primary school
students. This paper describes the package and its features. | Lasse Hansen, Ludvig Renbo Olsen, Kenneth Enevoldsen | 2023-01-05T13:19:17Z | http://arxiv.org/abs/2301.02057v3 | # TextDescriptives: A Python package for calculating a large variety of metrics from text
###### Abstract
TextDescriptives is a Python package for calculating a large variety of statistics from text. It is built on top of spaCy and can be easily integrated into existing workflows. The package has already been used for analysing the linguistic stability of clinical texts, creating features for predicting neuropsychiatric conditions, and analysing linguistic goals of primary school students. This paper describes the package and its features.
Python · natural language processing · spaCy · feature extraction
## 1 Summary
Natural language processing (NLP) tasks often require a thorough understanding and description of the corpus. Document-level metrics can be used to identify low-quality data, assess outliers, or understand differences between groups. Further, text metrics have long been used in fields such as the digital humanities where e.g. metrics of text complexity are commonly used to analyse, understand and compare text corpora. However, extracting complex metrics can be an error-prone process and is rarely rigorously tested in research implementations. This can lead to subtle differences between implementations and reduces the reproducibility of scientific results.
TextDescriptives offers a simple and modular approach to extracting both simple and complex metrics from text. It achieves this by building on the spaCy framework (Honnibal et al. 2020). This means that TextDescriptives can easily be integrated into existing workflows while leveraging the efficiency and robustness of the spaCy library. The package has already been used for analysing the linguistic stability of clinical texts (Hansen et al. 2022), creating features for predicting neuropsychiatric conditions (Hansen et al. 2023), and analysing linguistic goals of primary school students (Tannert 2023).
## 2 Statement of need
Computational text analysis is a broad term that refers to the process of analyzing and understanding text data. This often involves calculating a set of metrics that describe relevant properties of the data. Dependent on the task at hand, this can range from simple descriptive statistics related to e.g. word or sentence length to complex measures of text complexity, coherence, or quality. This often requires drawing on multiple libraries and frameworks or writing custom code. This can be time-consuming and prone to bugs, especially with more complex metrics.
TextDescriptives seeks to unify the extraction of document-level metrics, in a modular fashion. The integration with spaCy allows the user to seamlessly integrate TextDescriptives in existing pipelines as well as giving the TextDescriptives package access to model-based metrics such as dependency graphs and part-of-speech tags. The ease of use and the variety of available metrics allows researchers and practitioners to extend the granularity of their analyses within a tested and validated framework.
Implementations of the majority of the metrics included in TextDescriptives exist, but none is as feature-complete. The textstat library [20] implements the same readability metrics; however, each metric has to be extracted one at a time, with no interface for multiple extractions. spacy-readability [14] adds readability metrics to spaCy pipelines, but does not work for new versions of spaCy (>=3.0.0). The textacy [15] package has some overlap with TextDescriptives, but with a different focus. TextDescriptives focuses on document-level metrics, and includes a large number of metrics not included in textacy (dependency distance, coherence, and quality), whereas textacy includes components for preprocessing, information extraction, and visualization that are outside the scope of TextDescriptives. What sets TextDescriptives apart is the easy access to document-level metrics through a simple user-facing API and exhaustive documentation.
## 3 Features & Functionality
TextDescriptives is a Python package and provides the following spaCy pipeline components:

* textdescriptives.descriptive_stats: Calculates the total number of tokens, number of unique tokens, number of characters, and the proportion of unique tokens, as well as the mean, median, and standard deviation of token length, sentence length, and the number of syllables per token.
* textdescriptives.readability: Calculates the Gunning-Fog index, the SMOG index, Flesch reading ease, Flesch-Kincaid grade, the Automated Readability Index, the Coleman-Liau index, the Lix score, and the Rix score.
* textdescriptives.dependency_distance: Calculates the mean and standard deviation of the dependency distance (the average distance between a word and its head word), and the mean and standard deviation of the proportion of adjacent dependency relations on the sentence level.
* textdescriptives.pos_proportions: Calculates the proportions of all part-of-speech tags in the documents.
* textdescriptives.coherence: Calculates the first- and second-order coherence of the document based on word embedding similarity between sentences.
* textdescriptives.quality: Calculates the text-quality metrics proposed in Rae et al. (2022) and Raffel et al. (2020). These measures can be used for filtering out low-quality text prior to model training or text analysis. They include heuristics such as the number of stop words, the ratio of words containing alphabetic characters, the proportion of lines ending with an ellipsis, the proportion of lines starting with a bullet point, the ratio of symbols to words, and whether the document contains a specified string (e.g. "lorem ipsum"), as well as repetitious-text metrics such as the proportion of lines that are duplicates, the proportion of paragraphs in a document that are duplicates, the proportion of n-gram duplicates, and the proportion of characters in a document that are contained within the top n-grams.
All the components can be added to an existing spaCy pipeline with a single line of code, and jointly extracted to a dataframe or dictionary with a single call to textdescriptives.extract_{df|dict}(doc).
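A minimal usage sketch is shown below. The documented extraction call is `textdescriptives.extract_{df|dict}(doc)`; the exact pipe-name string and the spaCy model are assumptions that may vary across package versions.

```python
import spacy
import textdescriptives as td

nlp = spacy.load("en_core_web_sm")
# One line per metric component; the pipe-name string is assumed here and
# may differ between TextDescriptives versions.
nlp.add_pipe("textdescriptives/readability")

doc = nlp("The world is changed. I feel it in the water. I feel it in the earth.")
df = td.extract_df(doc)  # one row per document, one column per metric
```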
## 4 Example Use Cases
Descriptive statistics can be used to summarize and understand data, such as by exploring patterns and relationships within the data, getting a better understanding of the data set, or identifying any changes in the distribution of the data. Readability metrics, which assess the clarity and ease of understanding of written text, have a variety of applications, including the design of educational materials and the improvement of legal or technical documents [21]. Dependency distance can be used as a measure of language comprehension difficulty or of sentence complexity and has been used for analysing properties of natural language or for similar purposes as readability metrics [19, 10]. The proportions of different parts of speech in a document have been found to be predictive of certain mental disorders and can also be used to assess the quality and complexity of text [16]. Semantic coherence, or the logical connection between sentences, has primarily been used in the field of computational psychiatry to predict the onset of psychosis or schizophrenia [15, 17], but it also has other applications in the digital humanities. Measures of text quality are useful for cleaning and identifying low-quality data [18, 19].
## 5 Target Audience
The package is mainly targeted at NLP researchers and practitioners. In particular, researchers from fields new to NLP, such as the digital humanities and the social sciences, might benefit from the readability metrics as well as the more complex, but highly useful, metrics such as coherence and dependency distance.
## 6 Acknowledgements
The authors thank the contributors of the package, including Martin Bernstorff for his work on the part-of-speech component, and Frida Hestrup and Roberta Rocca for important fixes. The authors would also like to thank Dan Sattrup Nielsen for helpful reviews of early iterations of the text quality implementations.
|
2310.14174 | An In-Context Schema Understanding Method for Knowledge Base Question
Answering | The Knowledge Base Question Answering (KBQA) task aims to answer natural
language questions based on a given knowledge base. Recently, Large Language
Models (LLMs) have shown strong capabilities in language understanding and can
be used to solve this task. In doing so, a major challenge for LLMs is to
overcome the immensity and heterogeneity of knowledge base schemas.Existing
methods bypass this challenge by initially employing LLMs to generate drafts of
logic forms without schema-specific details.Then, an extra module is used to
inject schema information to these drafts.In contrast, in this paper, we
propose a simple In-Context Schema Understanding (ICSU) method that enables
LLMs to directly understand schemas by leveraging in-context learning.
Specifically, ICSU provides schema information to LLMs using schema-related
annotated examples. We investigate three example retrieval strategies based on
raw questions, anonymized questions, and generated SPARQL queries. Experimental
results show that ICSU demonstrates competitive performance compared to
baseline methods on both the KQA Pro and WebQSP datasets. | Yantao Liu, Zixuan Li, Xiaolong Jin, Yucan Guo, Long Bai, Saiping Guan, Jiafeng Guo, Xueqi Cheng | 2023-10-22T04:19:17Z | http://arxiv.org/abs/2310.14174v2 | # An In-Context Schema Understanding Method for Knowledge Base Question Answering
###### Abstract
The Knowledge Base Question Answering (KBQA) task aims to answer natural language questions based on given knowledge bases. Among the common methods for this task, semantic parsing-based ones first convert natural language questions to logical forms (e.g., SPARQL queries) and then execute them on knowledge bases to get answers. Recently, Large Language Models (LLMs) have shown strong abilities in language understanding and may be adopted as semantic parsers in such methods. However, in doing so, a great challenge for LLMs is to understand the schema of knowledge bases. Therefore, in this paper, we propose an In-Context Schema Understanding (ICSU) method that facilitates using LLMs as semantic parsers in KBQA. Specifically, ICSU adopts the In-context Learning mechanism to instruct LLMs to generate SPARQL queries with examples. In order to retrieve appropriate examples from annotated question-query pairs, which contain comprehensive schema information related to questions, ICSU explores four different retrieval strategies. Experimental results on the largest KBQA benchmark, KQA Pro, show that ICSU with all these strategies outperforms that with a random retrieval strategy significantly (from 12% to 78.76% in accuracy).
## 1 Introduction
The Knowledge Base Question Answering (KBQA) task has been a challenging problem in the Natural Language Processing (NLP) field, which aims to answer natural language questions based on given knowledge bases (KBs). A common kind of method for this task is the semantic parsing-based one, where natural language questions are first converted into logical forms, such as SPARQL queries, and then executed on knowledge bases to retrieve answers. As shown in Figure 1, the question, "Among the constitutional monarchies with an inflation rate less than 6500 percentage, which one has the smallest unemployment rate?", is parsed into the corresponding SPARQL query, which is then executed on the KB to obtain its answer "Eswatini".
Recently, LLMs (e.g., ChatGPT) have shown impressive natural language understanding abilities and perform well on a variety of NLP tasks Hendrycks et al. (2021). Moreover, LLMs also demonstrate strong abilities to generate formal languages, such as programming languages Chen et al. (2021); Nijkamp et al. (2023). Therefore, we would like to know whether LLMs can be used as semantic parsers for KBQA by combining the above two abilities. Previous work Tan et al. (2023) claims that LLMs are not capable of generating accurate SPARQL queries because they are not aware of the schemas of the KB (e.g., entity and relation types). In this paper, we intend to re-examine this issue.
Actually, enabling LLMs to generate accurate SPARQL queries involves two steps. First, LLMs should align the semantic elements in the questions to the elements in the KB. Second, LLMs compose the aligned elements to generate proper SPARQL queries. The first step poses the major challenge for LLMs because they cannot align some semantic elements to the schema-related elements in the KB. For example, as shown in Figure 1, schema-related elements such as the relation "<pred:unit>" should be aligned with the corresponding semantic elements "6500 percentage" in the question. One straightforward approach is to attach all schema-related elements to the prompt. However, this approach is not suitable since the number of such elements is usually quite large. Considering that each question only relates to a few schema-related elements and the annotated queries contain the corresponding schema-related elements about the question, we can make LLMs aware of such elements by using a few annotated examples (e.g., question-SPARQL pairs) over the given KB as prompts.

Figure 1: An illustrative diagram of LLM-based KBQA.
Motivated by this, we propose an In-Context Schema Understanding (ICSU) method, which leverages LLMs to conduct semantic parsing for KBQA. First, ICSU explores three different example retrieval strategies based on raw questions, anonymized questions, and SPARQL queries, and further proposes a hybrid strategy that mixes the above three. Following that, ICSU adopts the In-context Learning mechanism to instruct LLMs to generate SPARQL queries with the examples retrieved via the above four strategies. Compared with a random retrieval strategy, ICSU shows a significant improvement in accuracy (from 12% to 70%+ on text-davinci-003), which demonstrates the effectiveness of the proposed method.
## 2 Related Work
### Knowledge Base Question Answering
KBQA task methodologies are primarily divided into two categories: semantic parsing-based and information retrieval-based Herzig et al. (2021); Nie et al. (2022); Miller et al. (2016); Saxena et al. (2020); Schlichtkrull et al. (2018); Zhou et al. (2018); Qiu et al. (2020). Semantic parsing-based methods, such as STAGG Yih et al. (2015), convert natural language questions into formal language queries for execution on knowledge bases, offering precise answers and interpretability. However, these methods, recently formulated as a Seq2Seq generation task within an encoder-decoder framework, face semantic and structural discrepancies between natural language questions and logical forms Wu et al. (2021); Shin et al. (2021); Xu et al. (2020); Wu et al. (2021). In contrast, information retrieval-based methods form a question-specific graph from the knowledge base, ranking entities based on their relevance to the question. Compared with the latter, semantic parsing-based methods can provide more accurate answers and better interpretability.
### Large Language Models and In-Context Learning
Large Language Models, such as GPT-3 Brown et al. (2020), LLaMA Touvron et al. (2023), and Alpaca Taori et al. (2023), have become noteworthy due to their impressive performance on a variety of downstream tasks without the need for fine-tuning. In-Context Learning is a key technique that allows LLMs to make predictions based on contexts augmented with a few examples.
Recently, some methods have adopted LLMs for KBQA tasks. Tan et al. (2023) evaluates the performance of LLMs on KBQA tasks by directly using LLMs as knowledge bases. Baek et al. (2023) retrieves relevant facts from the knowledge base and attaches them to the input question to enhance the ability of LLMs on KBQA.
## 3 The Proposed ICSU method
### LLM Prompting for KBQA
To enable ICSU to generate accurate SPARQL queries, we design a prompt \(x\) for each question. Specifically, \(x=(i,e,q)\) contains three elements: 1) A task instruction \(i\) that provides a brief overview of the semantic parsing task in KBQA; 2) An example set \(e\) containing several examples that provide schema information for the given question; 3) An input question \(q\) for which LLMs are required to provide the corresponding SPARQL query.
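A minimal sketch of how such a prompt might be assembled is given below; the instruction wording, the layout, and the example SPARQL are illustrative placeholders, not taken from the paper.

```python
def build_prompt(instruction, examples, question):
    # Assemble the ICSU prompt x = (i, e, q): instruction, in-context
    # (question, SPARQL) examples, then the input question.
    parts = [instruction]
    for ex_question, ex_sparql in examples:
        parts.append(f"Question: {ex_question}\nSPARQL: {ex_sparql}")
    parts.append(f"Question: {question}\nSPARQL:")
    return "\n\n".join(parts)

# Hypothetical retrieved example; a real one would come from Section 3.2.
examples = [("How many countries share a border with Germany?",
             "SELECT (COUNT(?c) AS ?n) WHERE { ?c <pred:shares_border_with> ?g . }")]
prompt = build_prompt(
    "Translate the question into a SPARQL query over the knowledge base.",
    examples,
    "Which movie is shorter, The Greatest Story Ever Told or Rhinestone?",
)
```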
The key problem for ICSU is how to retrieve proper examples that contain comprehensive schema information related to the questions. In the following, we introduce the details of the proposed example retrieval strategies.
### Example Retrieval Strategies
ICSU investigates three different strategies for retrieving examples from the annotated question-query pairs. Furthermore, ICSU also contains a hybrid strategy that combines the three.
#### 3.2.1 Raw-Question Based Strategy
Driven by the intuition that the more similar an example is, the more schema information it can provide, ICSU computes similarities between the input question \(q\) and the annotated questions (i.e., questions in the training set), and retrieves the top-\(k\) annotated question-query pairs with the highest similarity scores. Specifically, ICSU adopts MPNet Song et al. (2020), a sentence embedding method, to get the embeddings of the questions, and calculates the similarities as the negative Euclidean distance.
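A sketch of this retrieval step is shown below, using the `sentence-transformers` implementation of MPNet; the specific checkpoint name is an assumption, as the paper only names MPNet.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# "all-mpnet-base-v2" is an assumed MPNet checkpoint.
encoder = SentenceTransformer("all-mpnet-base-v2")

def retrieve_top_k(question, annotated_pairs, k=6):
    # annotated_pairs: list of (question, SPARQL) pairs from the training set.
    q_emb = encoder.encode([question])[0]
    cand_embs = encoder.encode([q for q, _ in annotated_pairs])
    scores = -np.linalg.norm(cand_embs - q_emb, axis=1)  # negative Euclidean distance
    top = np.argsort(-scores)[:k]                        # highest scores first
    return [annotated_pairs[i] for i in top]
```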
#### 3.2.2 Anonymized-Question Based Strategy
Specific entities usually cannot be shared across different questions. Besides, schema information concerns entity types and relation types rather than specific entities. Therefore, we propose to anonymize questions by replacing the entities with their corresponding entity types. More specifically, we use FLERT Schweter and Akbik (2020), an 18-class NER model, to recognize the types of entities. For different entities of the same type in the question, a number suffix is added for distinction.
For example, given a question "Which movie is shorter, The Greatest Story Ever Told or Rhinestone?", it will be transformed to "Which movie is shorter, [WORK_OF_ART_0] or [WORK_OF_ART_1]?". The similarity between the anonymized questions is then calculated in the same manner as Section 3.2.1.
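A sketch of this anonymization step using the flair library (which hosts FLERT models) follows; the exact model identifier is an assumption.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Assumed identifier for the 18-class (OntoNotes) FLERT tagger.
tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

def anonymize(question):
    sentence = Sentence(question)
    tagger.predict(sentence)
    spans = sorted(sentence.get_spans("ner"), key=lambda s: s.start_position)
    counts, replacements = {}, []
    for span in spans:  # number entities of each type left to right
        label = span.get_label("ner").value
        idx = counts.get(label, 0)
        counts[label] = idx + 1
        replacements.append((span.start_position, span.end_position,
                             f"[{label}_{idx}]"))
    for start, end, tag in reversed(replacements):
        # Replace right to left so earlier character offsets stay valid.
        question = question[:start] + tag + question[end:]
    return question
```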
#### 3.2.3 SPARQL Based Strategy
The above two strategies are based on the natural language questions rather than SPARQL, so the annotated queries are not fully utilized. Since the corresponding SPARQL query of the input question is not available, we propose to generate a draft SPARQL query for the input question 1 and then retrieve the examples according to the similarity between SPARQL queries. The similarity is calculated in the same manner as in Section 3.2.1.
Footnote 1: The draft SPARQL query can be generated using the method described above. We use the anonymized-question based strategy to ensure better quality of the draft SPARQL queries.
#### 3.2.4 Hybrid Strategy
Each of these three strategies has different retrieval preferences. To increase the diversity of the retrieved examples, we combine these three strategies to form a hybrid strategy. In particular, the order in which we combine the three strategies is as follows: the anonymized-question based strategy, the SPARQL based strategy, and the raw-question based strategy.
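The paper specifies the combination order but not the exact merging rule; the sketch below assumes a round-robin interleaving with de-duplication, where `anonymized_retrieve`, `sparql_retrieve`, and `raw_retrieve` stand for the three strategies above.

```python
def hybrid_retrieve(question, k=6):
    # Pools in the stated order: anonymized-question, SPARQL, raw-question.
    pools = [anonymized_retrieve(question, k),
             sparql_retrieve(question, k),
             raw_retrieve(question, k)]
    chosen, seen, i = [], set(), 0
    while len(chosen) < k and any(pools):
        pool = pools[i % 3]
        if pool:
            cand = pool.pop(0)
            if cand[0] not in seen:  # de-duplicate on the example question
                seen.add(cand[0])
                chosen.append(cand)
        i += 1
    return chosen
```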
## 4 Experiment
### Datasets and Metric
KQA Pro Cao et al. (2022) is selected as the benchmark to evaluate the performance of ICSU. As the largest KBQA benchmark, KQA Pro contains 117,790 natural questions along with their corresponding SPARQL queries. We adopt _execution accuracy_ as our metric following Nie et al. (2022), which is determined by whether the generated logical form queries return correct answers. For queries with multiple valid answers, we require the execution results to exactly match all ground-truth answers.
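A sketch of this metric is given below; `execute` is a placeholder for a call to a SPARQL engine over the KB that returns the answer set.

```python
def execution_accuracy(predicted_queries, gold_answers, execute):
    # A prediction counts as correct only if executing it returns exactly
    # the full ground-truth answer set.
    correct = 0
    for query, gold in zip(predicted_queries, gold_answers):
        try:
            correct += execute(query) == set(gold)
        except Exception:
            pass  # a malformed query counts as incorrect
    return correct / len(predicted_queries)
```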
### Experimental Setup
Our experiments are conducted using four different LLMs: LLaMA-7B Touvron et al. (2023), Alpaca-7B Taori et al. (2023), text-davinci-003, and ChatGPT Ouyang et al. (2022). We adopt ICSU without any examples and with randomly retrieved examples as our baselines. The number of in-context examples is experimentally set to \(6\).
### Experimental Results
As illustrated in Table 1, ICSU (w.o. Examples) and ICSU (Random) denote ICSU without examples and with randomly retrieved examples, respectively.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Models** & **LLaMA-7B** & **Alpaca-7B** & **ChatGPT** & **text-davinci-003** \\ \hline
**ICSU (w.o. Examples)** & 0.00 & 0.00 & 0.00 & 0.00 \\
**ICSU (Random)** & 1.21 & 2.87 & 5.01 & 12.72 \\ \hline
**ICSU (Raw)** & 14.90 & 40.80 & 68.97 & 71.58 \\
**ICSU (Anonymized)** & 20.50 & 52.27 & 73.32 & 75.32 \\
**ICSU (SPARQL)** & **23.50** & **54.82** & 73.66 & 76.37 \\
**ICSU (Hybrid)** & 21.51 & 52.88 & **76.16** & **78.76** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The experimental results on KQA Pro.
spectively. ICSU (Raw), ICSU (Anonymized), ICSU (SPARQL) and ICSU (Hybrid) denote the proposed ICSU with example retrieval strategies mentioned in Section 3.2. The results of ICSU (w.o. Examples) demonstrate that LLMs totally failed to be a semantic parser because they are not aware of any schema information of the given questions. All the four strategies outperform ICSU (Random) drastically. Notably, the proposed strategies with text-davinci-003 achieve improvement of more than 58.86%, which verifies the effectiveness of the proposed ICSU. Moreover, ICSU (Anonymized) demonstrates a significant performance improvement compared to the ICSU (Raw), which verifies that the entity-specific information in the questions misleads the model to select entity-specific examples instead of examples providing more useful schema information. Furthermore, ICSU (SPARQL) also offers satisfactory improvement compared to the ICSU (Anonymized). ICSU (SPARQL) utilizes the useful information in annotated queries more effectively and thus achieves better performance.
Compared to the other LLMs, text-davinci-003 achieves the best results across all strategies. Considering that text-davinci-003 is a hundred-billion-parameter LLM, far larger than the other models, this result may indicate that increasing the model scale enhances the semantic parsing ability of LLMs. ICSU (Hybrid) achieves the best performance with ChatGPT and text-davinci-003, while it underperforms ICSU (SPARQL) with LLaMA-7B and Alpaca-7B. ICSU (Hybrid) increases the diversity of examples, and powerful LLMs like ChatGPT or text-davinci-003 achieve the best results due to their ability to exploit the richer schema information contained in diverse examples. However, as the ICSU (Raw) results show, LLaMA-7B and Alpaca-7B cannot capture useful schema information from the entity-specific examples returned by the raw-question based strategy, so the performance of ICSU (Hybrid) with these two models is also dragged down by such less helpful examples.
However, the significant performance gap between OpenAI's LLMs (text-davinci-003 and ChatGPT) and those from the open-source community (LLaMA-7B, Alpaca-7B) cannot be ignored. It indicates that the open-source community still has a long way to go to match OpenAI's pace.
### Detailed Analysis
To study how the schema information contributes to the final results, we conduct a detailed analysis of ICSU with ChatGPT on the validation set. We use the relation coverage (the percentage of relations in the ground-truth SPARQL that are mentioned in the retrieved examples) to reflect the question-related schema information contained in the examples. Specifically, we collect statistics on (relation coverage, accuracy) pairs of the models with the four proposed retrieval strategies under \(k\) \((k=1,\ldots,6)\) examples and plot them in Figure 2. It can be observed that the accuracy increases as the relation coverage grows, which verifies the motivation that LLMs become more capable semantic parsers as the retrieved examples contain more schema information.
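The relation coverage statistic can be computed along the following lines; the regular expression for extracting relations is a stand-in of ours and would need to follow the actual surface form of KQA Pro's SPARQL queries.

```python
import re

# Stand-in pattern: treat <...> IRIs as relations; adapt to the surface form
# of KQA Pro's SPARQL queries.
REL_PATTERN = re.compile(r"<([^>]+)>")

def relation_coverage(gold_sparql, example_sparqls):
    """Fraction of relations in the ground-truth SPARQL that are mentioned
    anywhere in the retrieved examples' queries."""
    gold = set(REL_PATTERN.findall(gold_sparql))
    if not gold:
        return 1.0
    seen = set()
    for s in example_sparqls:
        seen |= set(REL_PATTERN.findall(s))
    return len(gold & seen) / len(gold)
```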
## 5 Conclusion
This research has demonstrated the efficacy of In-Context Schema Understanding (ICSU) in augmenting Large Language Models' (LLMs) proficiency for Knowledge Base Question Answering (KBQA), thus providing an innovative strategy for semantic parsing that enables LLMs to generate SPARQL queries from examples and better understand knowledge base schemas. A significant leap in accuracy, from about 12% to over 70%, was achieved by implementing four retrieval strategies: raw-question based, anonymized-question based, SPARQL based, and hybrid, all outperforming random retrieval and showing promising prospects for incorporating LLMs into KBQA.
Figure 2: Detailed analysis of the correlation between relation coverage and accuracy.
### Limitations
Our study investigated the application of Large Language Models (LLMs) as semantic parsers in Knowledge Base Question Answering (KBQA) tasks using the In-Context Schema Understanding (ICSU) method. Experimental findings indicated performance enhancements, though certain inherent constraints should be acknowledged. A significant constraint is the dependency on examples derived from annotated question-query pairs for in-context learning, which improves the LLMs' ability to create SPARQL queries but assumes the availability of high-quality annotated pairs. This presumption could hinder the model's generalization capabilities, particularly in situations where such pairs are either lacking or not diverse enough to encapsulate the spectrum of potential queries.
|
2310.03408 | Disentangling the Effects of Structure and Lone-Pair Electrons in the
Lattice Dynamics of Halide Perovskites | Metal halide perovskites have shown great performance as solar energy
materials, but their outstanding optoelectronic properties are paired with
unusually strong anharmonic effects. It has been proposed that this intriguing
combination of properties derives from the "lone pair" 6$s^2$ electron
configuration of the Pb$^{2+}$ cations, and associated weak pseudo-Jahn-Teller
effect, but the precise impact of this chemical feature remains unclear. Here
we show that in fact an $ns^2$ electron configuration is not a prerequisite for
the strong anharmonicity and low-energy lattice dynamics encountered in this
class of materials. We combine X-ray diffraction, infrared and Raman
spectroscopies, and first-principles molecular dynamics calculations to
directly contrast the lattice dynamics of CsSrBr$_3$ with those of CsPbBr$_3$,
two compounds which bear close structural similarity but with the former
lacking the propensity to form lone pairs on the 5$s^0$ octahedral cation. We
exploit low-frequency diffusive Raman scattering, nominally symmetry-forbidden
in the cubic phase, as a fingerprint to detect anharmonicity and reveal that
low-frequency tilting occurs irrespective of octahedral cation electron
configuration. This work highlights the key role of structure in perovskite
lattice dynamics, providing important design rules for the emerging class of
soft perovskite semiconductors for optoelectronic and light-harvesting devices. | Sebastián Caicedo-Dávila, Adi Cohen, Silvia G. Motti, Masahiko Isobe, Kyle M. McCall, Manuel Grumet, Maksym V. Kovalenko, Omer Yaffe, Laura M. Herz, Douglas H. Fabini, David A. Egger | 2023-10-05T09:28:40Z | http://arxiv.org/abs/2310.03408v2 | # Disentangling the Effects of Structure and Lone-Pair Electrons in the Lattice Dynamics of Halide Perovskites
###### Abstract
Metal halide perovskites have shown great performance as solar energy materials, but their outstanding optoelectronic properties are paired with unusually strong anharmonic effects. It has been proposed that this intriguing combination of properties derives from the "lone pair" electrons of the octahedral metal cations, but the precise impact of this chemical feature remains unclear. Here we show that in fact a lone pair of electrons is not a prerequisite for the strong anharmonicity and low-energy lattice dynamics encountered in this class of materials. We combine X-ray diffraction, infrared and Raman spectroscopies, and first-principles molecular dynamics calculations to directly contrast the lattice dynamics of CsSrBr\({}_{3}\) with those of CsPbBr\({}_{3}\), two compounds which bear close structural similarity but with the former lacking lone pairs on the octahedral metal. We exploit low-frequency diffusive Raman scattering, nominally symmetry-forbidden in the cubic phase, as a fingerprint to detect anharmonicity and reveal that low-frequency tilting occurs irrespective of lone pair presence. This work highlights the key role of structure in perovskite lattice dynamics, providing important design rules for the emerging class of soft perovskite semiconductors for optoelectronic and light-harvesting devices.
Halide perovskites (HaPs) with formula AMX\({}_{3}\) have generated enormous research interest because of their outstanding performance in optoelectronic devices, most notably in efficient solar cells.[1; 2; 3] These compounds are highly unusual among the established semiconductors because they feature an intriguing combination of properties. Strong anharmonic fluctuations[4; 5; 6] in these soft materials appear together with optoelectronic characteristics that are favorable for technological applications.[7; 8] This confluence raised puzzling questions regarding the microscopic characteristics of the materials and the compositional tuning of their properties alike. On the one hand, the soft anharmonic nature of the HaP structure may be beneficial in self-healing mechanisms of the material,[9; 10; 11] allowing for low-energy synthesis routes in their fabrication. On the other hand, the pairing of anharmonic fluctuations and optoelectronic processes in key quantities of HaPs, _e.g._, band gaps,[12; 13; 14; 15] optical absorption profiles,[16; 17; 18] and charge-carrier mobilities,[19; 20; 21; 22; 23; 24; 25] exposed incomplete microscopic rationales for the fundamental physical processes involved in solar-energy conversion. Established materials design rules are now being challenged by these observations, opening a gap in our protocols for making improved compounds.
Significant efforts are now underway to discern the chemical effects giving rise to these remarkable properties of HaPs. Because lattice dynamical and optoelectronic properties appear both to be special and coupled in unusual ways, a common origin in chemical bonding could underlie these phenomena. In this context, an interesting chemical feature is that the octahedral cations in these compounds often bear an ns\({}^{2}\) electron configuration (_e.g._, Pb\({}^{2+}\) with configuration [Xe]\(6s^{2}\)), which is not present in many other semiconductors.[26] This particular aspect of HaPs can lead to formation of a lone pair of electrons (LPE) that impacts crystal structures and lattice dynamics[27; 28; 29; 30] as well as ionic dielectric responses.[26; 28; 31; 32] At the same time, the LPE plays a role in optoelectronic properties of these materials: its influence on the dielectric function can modify the Coulomb screening that is relevant for small exciton binding energies, reduced recombination rates and other key properties of HaPs.[33; 34]
Confluences of the LPE with structural and lattice-dynamical properties were investigated in previous work exploring the chemical space of HaPs. Gao _et al.[30]_ found an inverse relationship between the Goldschmidt tolerance factor, \(t\),[35] and anharmonic octahedral tilting motions. Similarly, Huang _et al._ varied the A-site cation
to explore interrelations of chemical, structural, and dynamical effects in HaPs,[32] reporting \(t\)-induced modulations of octahedral tiltings and LPE stereoactivity. A recent study by several of the present authors found that Cs\({}_{2}\)AgBiBr\({}_{6}\) lacks some expressions of lattice anharmonicity found in other HaP variants.[36] Because every other octahedral cation (Ag\({}^{+}\)) cannot form a LPE in this compound, changing the electron configuration of the cations may also suppress certain aspects of the lattice dynamics in HaPs. Taken together, previous work assigned a central role of the LPE in the anharmonic lattice dynamics of HaPs in addition to its established effect on the electronic structure and dielectric screening. However, exploring the chemical space of HaPs in this way simultaneously changes their structures. Therefore, isolating the convoluted occurrences of LPE- and purely structurally-determined changes in the lattice dynamics of HaPs remained challenging, making an assessment of the precise impact of LPE on anharmonicity in these soft semiconductors largely inaccessible.
Here, we address this issue and show that the presence of a LPE is not required for the strong anharmonicity in the low-energy lattice dynamics of soft HaP semiconductors. We disentangle structural and chemical effects in the lattice dynamics of HaPs by comparing the well-known CsPbBr\({}_{3}\) with the far less studied CsSrBr\({}_{3}\). Both exhibit almost identical geometrical and structural parameters, but CsSrBr\({}_{3}\) cannot form a LPE at the M-site owing to the noble gas electron configuration of Sr\({}^{2+}\), allowing separation of the effects of the LPE and the geometry on the lattice dynamics in a direct manner. Combining electronic structure and molecular dynamics (MD) calculations with X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies, we assess a key fingerprint of vibrational anharmonicity, _i.e._, the Raman central peak, which is a broad peak towards zero frequency in the Raman spectrum resulting from diffusive inelastic scattering.[37; 38; 39; 26; 30; 31; 32] While the electronic structure and dielectric properties of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) are very different, their vibrational anharmonicities are found to be remarkably similar. In particular, the crucial dynamic octahedral tiltings giving rise to the Raman central peak are still present even in the absence of the LPE in CsSrBr\({}_{3}\). Our results provide microscopic understanding of precisely how the LPE influences the anharmonic octahedral tiltings that dynamically break the average cubic symmetry in both compounds, and rule out the LPE stereoactivity as the sole reason for the appearance of such anharmonicity in soft HaPs. These findings are important for chemical tuning of HaPs needed for new materials design.
## Results
### Electronic structure and bonding
We first investigate the electronic structure and bonding of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) using density-functional theory (DFT). Figure 1 shows their band structure, total and projected density of states (DOS), as well as the total and projected crystal-orbital Hamilton population (COHP) of the high-temperature cubic phases of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\). The electronic band structure and bonding of CsPbBr\({}_{3}\) were extensively investigated before:[40] the conduction band minimum (CBM) is formed by anti-bonding interactions (positive COHP in Figure 1c) between Pb-6\(p\) and Br-4\(p\)/Br-4\(s\) orbitals, while the valence band maximum (VBM) is formed by anti-bonding interactions between Br-4\(p\) and Pb-6\(s\) orbitals, where the nominally filled 6\(s\) orbital of Pb\({}^{2+}\) allows the possible formation of a LPE.
The electronic structure of CsSrBr\({}_{3}\) exhibits entirely different characteristics,[41; 26] especially a much larger band gap and weaker covalent interactions. Notably, the magnitude of the COHP is significantly reduced with respect to that of CsPbBr\({}_{3}\), indicating much greater ionicity, and the COHP is almost entirely recovered by interactions between Cs and Br. Importantly, all bands derived from antibonding interactions between Sr-5\(s\) and Br-4\(p\)/Br-4\(s\)
Figure 1: **Electronic structure.** DFT-computed electronic band structure of cubic CsPbBr\({}_{3}\) (panel a) and corresponding total and projected density of states (DOS, panel b) and crystal-orbital Hamilton population (COHP, panel c). Panels d—f show the same data for CsSrBr\({}_{3}\).
are empty due to the electron configuration of Sr\({}^{2+}\) ([Kr]), and there is no potential for lone pair formation on Sr\({}^{2+}\). A manifestation of the lack of a lone-pair-compatible electron configuration in CsSrBr\({}_{3}\) is that there is no cross-gap hybridization of the halide valence orbitals. By contrast, Br-4\(p\) orbitals hybridize with Pb-6\(p\) across the gap of CsPbBr\({}_{3}\) (see the pCOHP in Figure 1c). This leads to large Born effective charges, _i.e._, large changes in the macroscopic polarization upon ionic displacements [42; 43; 44] reported in Table 1, which for CsPbBr\({}_{3}\) are more than double the formal charge of Pb (+2) and Br (-1) and much larger than the corresponding values for CsSrBr\({}_{3}\). Similarly, there is also a larger electronic contribution to the dielectric response in CsPbBr\({}_{3}\) and it features a larger value of the dielectric function at the high-frequency limit (\(\varepsilon_{\infty}\)) compared to CsSrBr\({}_{3}\).
### Structural properties and phase transitions
In spite of the markedly different electronic structure and bonding characteristics, CsSrBr\({}_{3}\) and CsPbBr\({}_{3}\) exhibit the same high-temperature cubic crystal structure (\(Pm\bar{3}m\)) and very similar lattice parameters (see Supplementary Information). One can rationalize this through the nearly identical ionic radii of Pb\({}^{2+}\) and Sr\({}^{2+}\) (119 and 118 pm) and the resulting Goldschmidt factors for the compounds (0.862 and 0.865). Furthermore, both materials exhibit the same sequence of structural phase transitions from the high-temperature cubic to the low-temperature orthorhombic phase (with an intermediate tetragonal phase), as shown by temperature-dependent lattice parameters in Figure 2 that were determined _via_ XRD. The cubic-to-tetragonal phase transition temperature of CsSrBr\({}_{3}\) (\(\sim\)520 K) is noticeably higher than that of CsPbBr\({}_{3}\) (\(\sim\)400 K) [46; 47] and slightly higher (\(\sim\)10 K) than that reported for Eu-doped CsSrBr\({}_{3}\):Eu 5%. [48] The volumetric thermal expansion coefficient (\(\alpha_{V}\)) of CsSrBr\({}_{3}\) (\(\sim\)1.32 \(\times\) 10\({}^{-4}\) K\({}^{-1}\) at 300 K) is large and similar to that of CsPbBr\({}_{3}\) (\(\sim\)1.29 \(\times\) 10\({}^{-4}\) K\({}^{-1}\), see the Supplementary Information for details), in good agreement with the one reported for CsSrBr\({}_{3}\):Eu. [48] Just as for other inorganic HaPs, \(\alpha_{V}\) of CsSrBr\({}_{3}\) slightly decreases with temperature. [49; 50] The similarity of geometric factors and structural phase transitions suggests that the octahedral tilting dynamics in CsSrBr\({}_{3}\) might be similar to those in CsPbBr\({}_{3}\), which contrasts with their markedly different electronic structure, and prompts a deeper investigation of the impact of the LPE on structural dynamics.
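For reference, the volumetric thermal expansion coefficient quoted above follows from the fitted lattice parameters as \(\alpha_{V}=(1/V)\,\mathrm{d}V/\mathrm{d}T\); a minimal numerical sketch (ours, with the temperature-dependent lattice parameters assumed as input arrays) is:

```python
import numpy as np

def alpha_V(T, a, b, c):
    """Volumetric thermal expansion coefficient alpha_V = (1/V) dV/dT.

    T, a, b, c are 1-D arrays over temperature (e.g. from Pawley fits);
    centered finite differences handle non-uniform temperature steps.
    """
    V = a * b * c                 # (pseudo-)orthorhombic cell volume
    return np.gradient(V, T) / V

# e.g. alpha_V(T, a, b, c) evaluated near 300 K should give ~1.3e-4 K^-1
# for both CsSrBr3 and CsPbBr3, as quoted above.
```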
### Lower-temperature lattice dynamics
We conduct IR and Raman spectroscopy at different temperatures as well as DFT-based harmonic-phonon calculations. The measured IR spectra show that the dominant CsSrBr\({}_{3}\) features are blue-shifted compared to those of CsPbBr\({}_{3}\) (see Figure 3a). Indeed, our DFT calculations of IR activities find a significant softening of the infrared-active TO modes in CsPbBr\({}_{3}\) compared to those in CsSrBr\({}_{3}\) (see Figure 3b): the most prominent IR-active TO mode in CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) appears at \(\sim\)68 and 146 cm\({}^{-1}\), respectively, corresponding to the same irreducible representation (\(B3u\)) with similar eigenvectors (see Supplementary Information) in each system. This is in line with the presence of the LPE in CsPbBr\({}_{3}\) leading to a softening of particularly those phonon modes associated with the changes in dielectric properties that we reported above (_cf._ Table 1), which is more relevant for the observed shifts than the difference in the atomic masses (see Supplementary Information).
Moreover, the LO/TO splitting is enhanced in CsPbBr\({}_{3}\) compared to CsSrBr\({}_{3}\) and the LO phonon modes are hardened. Related to this, the CsPbBr\({}_{3}\) IR spectrum exhibits a broad feature which is known as the Reststrahlen band as has been reported before for
Figure 2: **Structural properties.** Temperature-dependent lattice parameters of CsPbBr\({}_{3}\) (panel a) and CsSrBr\({}_{3}\) (panel b) determined by XRD throughout the orthorhombic—tetragonal—cubic phases. We show reduced lattice parameters \(\tilde{a}\), \(\tilde{b}\) and \(\tilde{c}\) for better visualization. Dashed vertical lines indicate phase-transition temperatures. Error bars from Pawley fitting are smaller than the markers and are omitted.
\begin{table}
\begin{tabular}{c|c|c|c|c}
Compound & \(\varepsilon_{\infty}\) & \(Z_{\text{Cs}}^{*}\) & \(Z_{\text{M-site}}^{*}\) & \(Z_{\text{Br}}^{*}\) \((xx,yy,zz)\) \\ \hline \hline
CsPbBr\({}_{3}\) & 5.39 & 1.38 & 4.33 & (-0.63, -0.63, -4.46) \\
CsSrBr\({}_{3}\) & 3.02 & 1.35 & 2.43 & (-0.91, -0.91, -1.97) \\
\end{tabular}
\end{table}
Table 1: **Dielectric properties of cubic CsMBr\({}_{3}\).** Dielectric constant in the high-frequency limit with respect to the optical phonon mode frequencies, \(\varepsilon_{\infty}\), and Born effective charges, \(Z_{i}^{*}\), of cubic CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) as calculated by DFT. We report \(Z_{\text{Br}}^{*}\) for the Br bonded to Pb/Sr along the \(z\) axis.
MA-based HaPs.[51] This particular effect results in near-zero transmission through the material in a frequency range between the TO and LO modes, represented by high IR intensity values, and occurs in polar materials with larger Born-effective charges. Because the TO modes are softened and the LO modes hardened in CsPbBr\({}_{3}\) compared to CsSrBr\({}_{3}\), and because the latter is less polar (_cf._ Table 1), the absence of the LPE leads to a much less pronounced, blue-shifted Reststrahlen band appearing in a smaller frequency window in CsSrBr\({}_{3}\) (see Figure 3a, and Supplementary Information).
Figure 3c shows the 80 K Raman spectra of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\), which are in good agreement with the Raman activities calculated for harmonic phonons (Figure 3d). Unlike IR, the Raman spectrum of CsSrBr\({}_{3}\) exhibits no substantial energy shifts with respect to CsPbBr\({}_{3}\). Computing the phonon DOS for the orthorhombic phase of both compounds with DFT (see Supplementary Information), we find that they exhibit similar phonon DOS below 100 cm\({}^{-1}\), _i.e._, in the region of most of the Raman-active modes. The similar phonon DOS and the contributions of the M-site at low frequencies explain the limited shift of the CsSrBr\({}_{3}\) Raman spectrum. Above this range, CsPbBr\({}_{3}\) exhibits few vibrational states while CsSrBr\({}_{3}\) shows its most pronounced phonon DOS peaks, which correspond well with the strongest IR mode calculated from the harmonic approximation.
### High-temperature lattice dynamics
A key signature of vibrational anharmonicity in HaPs at higher temperatures is the Raman central peak.[30; 31; 32; 37; 38; 39; 26] We use this feature that is nominally symmetry-forbidden in the cubic phase as a fingerprint to directly investigate how the presence of the LPE determines anharmonicity in these materials, using Raman spectroscopy and DFT-based MD simulations. Interestingly, a central peak also appears in the high-temperature Raman spectrum of the LPE-absent CsSrBr\({}_{3}\) (see Figure 4 and Supplementary Information for full temperature range). The presence of this feature in CsSrBr\({}_{3}\) shows that LPE-induced distortions are not required for the low-frequency diffusive Raman scattering and anharmonicity to occur. This result, together with the identical phase-transition sequences of both materials (see Figure 2), led us to investigate the role of tilting instabilities in CsSrBr\({}_{3}\) and CsPbBr\({}_{3}\).
We first calculate harmonic phonon dispersions of both compounds (see Figure 5) and find these to be remarkably similar for cubic CsSrBr\({}_{3}\) and CsPbBr\({}_{3}\) in the low frequency region, in line with the aforementioned similarities in the phonon DOS of the orthorhombic phase. Specifically, both compounds exhibit the same dynamic tilting instabilities at the edge of the Brillouin zone (BZ),
Figure 3: **Lattice dynamics at lower temperatures.** a) IR-reflectivity spectra (dashed curves) and fitted imaginary part of the dielectric function (solid curves, see Supplemental Information for details) of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) measured at room temperature. b) DFT-calculated IR-absorption spectra within the harmonic approximation for the orthorhombic phases. c) Raman spectra of orthorhombic CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) measured at 80 K. d) DFT-calculated Raman spectra of both compounds within the harmonic approximation for the orthorhombic phases.
governed by in-phase (M point) and three degenerate out-of-phase (R point) rotations. These rotation modes are not only active in the phase transitions, but have also been discussed as driving the dynamic disorder of halide perovskites.[4; 52; 53; 54; 14; 55]
We perform MD calculations of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) in the cubic phase to calculate the frequency-resolved dynamic changes of octahedral rotation angles, \(\Phi_{\alpha}(\omega)\) (see Figure 6 and Equation 1 in the Methods Section). Figure 6b shows \(\Phi_{\alpha}(\omega)\) for CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) and indicates strong low-frequency tilting components in both compounds. Recently, a phenomenological model for the description of the temperature-dependent Raman spectra of cubic HaPs proposed the inclusion of a low-frequency anharmonic feature, which was associated with transitions between minima of a double-well potential energy surface[39] that correspond to different octahedral tiltings.[56; 57; 58; 54; 59; 60] Our results confirm that substantial octahedral dynamics correspond to low-frequency features dynamically breaking the cubic symmetry in CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\).[4; 59; 60; 41; 48] Interestingly, this low-frequency component appears irrespective of the presence of the LPE and induces the formation of relatively long-lived (tens of ps) structural distortions (see Supplementary Information), which strongly deviate from the average cubic symmetry. This suggests that the dynamic deviations from the long-range, crystallographic structure enable the low-frequency Raman response without violating the selection rules.
We investigate the impact of the LPE on octahedral tilting tendencies[30] by computing the Fourier transform of cross-correlations between rotation angles and M-site displacements, \(C_{\alpha\beta}(\omega)\) (see Equation 2 in the Methods section). Larger values of \(C_{\alpha\beta}\) generally indicate stronger coupling between octahedral rotations and M-site displacements. Absence of the LPE becomes evident in the low intensity of \(C_{\alpha\beta}(\omega)\) for CsSrBr\({}_{3}\) (Figure 6c), which is less than half of that of CsPbBr\({}_{3}\), especially at low frequencies relevant for the slow, anharmonic, symmetry-breaking rotational features. This suggests that the presence of the LPE in CsPbBr\({}_{3}\) enhances the low-frequency octahedral tilting, in line with the literature.[30] However, the non-zero \(C_{\alpha\beta}\) for CsSrBr\({}_{3}\) shows that the presence of a LPE is not necessary to couple octahedral rotations and M-site displacements. We speculate that the LPE-enhanced tilting could contribute to the fact that CsPbBr\({}_{3}\) has a lower tetragonal-to-cubic phase transition temperature compared to the LPE-absent CsSrBr\({}_{3}\).
## Discussion
HaPs are promising solar materials showing unconventional combinations of favorable optoelectronic properties and anharmonicity in their soft structures. In the search for explanations of this confluence, the presence and stereochemical activity of the LPE have been proposed as possible chemical influences. However, when controlling these effects _via_ composition of HaPs one also changes ionic sizes and with them the energy scale of octahedral tilting, which inherently modulates the extent of anharmonicity. This complication has to date hindered a precise separation of structural and chemical effects in the lattice dynamics of these materials.
Here, we directly disentangled these effects by comparing CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\), two HaP compounds with similar ionic radii and structural properties but entirely different orbital interactions that leave CsSrBr\({}_{3}\) without the ability to form a LPE. Growth of CsSrBr\({}_{3}\) and CsPbBr\({}_{3}\) single crystals, and the combination of XRD, IR and Raman spectroscopy with first-principles calculations, enabled
Figure 4: **Lattice dynamics at higher temperature.** Raman spectra of CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) in the high-temperature cubic phase. The central peak appears for both compounds irrespective of the presence of LPEs.
Figure 5: **Dynamic instabilities in the lattice dynamics.** Harmonic phonon dispersion of cubic CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\) showing the dynamic instabilities in the high-temperature, cubic phase of both compounds. The imaginary modes at the M and R points are the in-phase and out-of-phase tilting depicted on the right panels. The tilting modes are almost identical for CsSrBr\({}_{3}\) and CsPbBr\({}_{3}\).
us to precisely investigate the role of the LPE in the anharmonic effects. We found that the electronic structure associated with the formation of a LPE is paramount for the optoelectronic properties of HaPs, because its absence resulted in large changes in dielectric screening, the direct nature of the band gap, and the carrier effective masses.[26] However, using the Raman central peak at higher temperatures as a fingerprint to detect anharmonicity, we found it to appear also for the LPE-absent CsSrBr\({}_{3}\). Our MD calculations showed that the presence of the Raman central peak correlates with the occurrence of prominent low-frequency features of slow, anharmonic rotations of the octahedra. Altogether, these findings demonstrate that the perovskite structure allows for anharmonic vibrational dynamics to occur, irrespective of the presence of the LPE, which establishes this somewhat unusual behavior as a generic effect in this material class. Since these octahedral dynamics impact the optoelectronic characteristics of these systems, our results have implications for synthesis of new HaPs with improved properties for technological applications. For instance, Pb-Sr alloying has been proposed as a method to tune the band gap of HaPs for light emission and absorption applications.[41] Our work implies that such Sr alloying for tuning electronic and dielectric properties preserves the strongly anharmonic lattice dynamics.
The relevance of these findings for material design strategies of HaP compounds is seen when putting our results in the context of previous work discussing anharmonic effects in this class of materials. Specifically, cubic CsPbBr\({}_{3}\), CsSnBr\({}_{3}\), CsGeBr\({}_{3}\), (CH\({}_{3}\)NH\({}_{3}\))\({}_{0.13}\)(CH\({}_{3}\)CH\({}_{2}\)NH\({}_{3}\))\({}_{0.87}\)PbBr\({}_{3}\), CH(NH\({}_{2}\))\({}_{2}\)PbBr\({}_{3}\), and, here, CsSrBr\({}_{3}\) are all reported to exhibit dynamic hopping between low symmetry minima on the potential energy surface.[30; 31; 32; 61] By contrast, the high symmetry phase of Cs\({}_{2}\)AgBiBr\({}_{6}\) is anharmonically stabilized and exhibits well-defined normal modes and a soft-mode transition on cooling.[36] Cs\({}_{2}\)SnBr\({}_{6}\), on the other hand, lacks any phase transitions and similarly exhibits well-defined normal modes.[62] Where previously the strength of the LPE distortion or the density of cations with LPE appeared to be a plausible predictor of broad, nominally symmetry-forbidden Raman scattering resulting in a central peak, our work suggests that instead the differing symmetry in both the structure and the chemical bonding of metal halide perovskites and double-perovskites may be a controlling factor.
In conclusion, the ns\({}^{2}\) electron configuration in HaPs that can result in a LPE is crucial to several favorable electronic features[26; 33; 40] and gives rise to the elevated ionic dielectric response _via_ enhancement of Born effective charges.[33; 42] However, we found that the presence of a LPE is not necessary to produce dynamic symmetry-breaking of the sort that gives rise to broad, intense Raman scattering in the high temperature phases of HaPs and that has been associated with the unique optoelectronic properties in these compounds such as long charge-carrier lifetimes and photostability. Instead, such dynamic symmetry breaking is common to all cubic bromide and iodide (single-)perovskites thus far studied to the best of our knowledge. These results highlight the key role of structural chemistry in the anharmonic dynamics of halide perovskites, providing a new criterion for the design of soft optoelectronic semiconductors.
Figure 6: **Impact of LPE on octahedral dynamics at higher temperature.** a) Schematic representation of the MBr\({}_{6}\) octahedron aligned along the \(z\) Cartesian axis. The octahedral rotation angle around \(z\), \(\phi_{z}\), is defined as the average of the angles formed by the \(x/y\) Cartesian axis and the vector connecting two in-plane Br atoms at opposing edges of the octahedron (\(\phi_{z}^{(x)}\) in red and \(\phi_{z}^{(y)}\) in blue). Note that a clockwise rotation is defined as positive and counter-clockwise as negative. b) Fourier transform of the octahedral rotation angle, \(\Phi_{\alpha}(\omega)\), and c) cross-correlation between rotation angle and M-site displacement, \(C_{\alpha\beta}(\omega)\), calculated using DFT-MD trajectories of cubic CsPbBr\({}_{3}\) (upper panels, 525 K) and CsSrBr\({}_{3}\) (lower panels, 570 K).
## Methods
### Electronic Structure Calculations
DFT calculations were performed with the Vienna ab-initio simulation package (VASP) code [63] using the projector-augmented wave (PAW) method [64]. We employed the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [65] and the Tkatchenko-Scheffler (TS) scheme [66] - using an iterative Hirshfeld partitioning of the charge density [67; 68] - to account for dispersive interactions. This setup has been shown to accurately describe the structure of HaPs [69; 70]. All static calculations used an energy convergence threshold of \(10^{-6}\) eV, a plane-wave cutoff of 500 eV, and a \(\Gamma\)-centered \(k\)-grid of \(6\times 6\times 6\) (\(6\times 4\times 6\)) for the \(Pm\bar{3}m\) (\(Pnma\)) structures. Lattice parameters were optimized by a fitting procedure using the Birch-Murnaghan equation of state [71; 72]. The final structures used in all subsequent calculations were obtained by relaxing the ionic degrees of freedom until the maximum residual force was below \(10^{-4}\) eV/Å. The total and projected electronic DOS and COHP were calculated by partitioning the DFT-calculated band structure into bonding and antibonding contributions using the LOBSTER code [73; 74]. For this task, the DFT-computed electronic wave functions were projected onto Slater-type orbitals (basis set name: "pbeVaspFit2015") [73] including Cs 6s, 5p and 5s, Pb 6s and 6p, and Br 4p and 4s states. The maximum charge spilling in this procedure was 1.3%. Spin-orbit coupling was not included in our calculations, since it is currently not supported by the LOBSTER code. We emphasize that our focus is on the orbital contributions to the (anti)bonding interactions, rather than on a quantitative description of the energy.
### Phonon Calculations
Phonon dispersions and DOSs were obtained _via_ the finite displacements method implemented in the phonopy package [75]. For these calculations, we used \(2\times 2\times 2\) supercells with 40 (160) atoms of the \(Pm\bar{3}m\) (\(Pnma\)) CsMBr\({}_{3}\) structures reducing \(k\)-space sampling accordingly. IR and Raman spectra were computed with the phonopy-spectroscopy package [76], using zone-center phonon modes, Born-effective charges and polarizabilities, calculated with density functional perturbation theory (DFPT) [77].
### First-principles Molecular Dynamics
DFT-based MD calculations were performed for \(2\times 2\times 2\) supercells of the \(Pm\bar{3}m\) structures using a Nose-Hoover thermostat within the canonical ensemble (NVT), as implemented in VASP [78]. The simulation temperature was set to \(T\)=525 and 570 K for CsPbBr\({}_{3}\) and CsSrBr\({}_{3}\), respectively. An 8 fs timestep, reduced \(k\)-grid of \(3\times 3\times 3\), and energy convergence threshold of \(10^{-5}\) eV were used for the 10 ps equilibration and 115 ps production runs.
### Octahedral Rotation Dynamics and Cross-correlations
We quantified the octahedral dynamics using the rotation angles, \(\phi_{\alpha}\), around a given Cartesian axis \(\alpha\) (see Figure 6a). The frequency-resolved rotational dynamics were calculated as the Fourier transform of \(\phi_{\alpha}\):
\[\Phi_{\alpha}(\omega)=\frac{1}{N_{\rm steps}}\int_{0}^{\infty}\phi_{\alpha}(t)e^{-i\omega t}\,dt, \tag{1}\]
where \(N_{\rm steps}\) is the number of snapshots. To compute the angles we selected 1000 equally spaced snapshots. We calculated the frequency-resolved cross-correlation between octahedral rotation angles (around a Cartesian direction \(\alpha\)) and the displacements (along a Cartesian direction \(\beta\)) of the corresponding M-site, \(d_{\beta}^{\rm M}(t)\), as:
\[C_{\alpha\beta}(\omega)=\frac{1}{N_{\rm steps}}\int_{0}^{\infty}\frac{\langle \phi_{\alpha}(t+\delta t)\cdot d_{\beta}^{\rm M}(t)\rangle}{\langle\phi_{ \alpha}(t)\cdot d_{\beta}^{\rm M}(t)\rangle}e^{-i\omega t}dt. \tag{2}\]
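A discrete implementation of Equation (1), and of the normalized cross-correlation entering Equation (2), can be sketched as follows; the extraction of the rotation angles and M-site displacements from the MD snapshots is assumed to have been done already, and the function names are ours.

```python
import numpy as np

def phi_spectrum(phi, dt):
    """Discrete analogue of Eq. (1): Fourier transform of the rotation angle
    phi(t), normalized by the number of snapshots (1000 equally spaced frames
    were used in this work)."""
    n = len(phi)
    freqs = np.fft.rfftfreq(n, d=dt)     # frequency axis, omega/(2*pi)
    return freqs, np.abs(np.fft.rfft(phi) / n)

def cross_spectrum(phi, d_M, dt):
    """Sketch of Eq. (2): spectrum of the cross-correlation between the
    rotation angle and the M-site displacement along one Cartesian axis,
    normalized by its zero-lag value as in the denominator of Eq. (2)."""
    n = len(phi)
    corr = np.correlate(phi - phi.mean(), d_M - d_M.mean(), mode="full")[n - 1:]
    corr = corr / corr[0]                # normalize by <phi(t) d(t)>
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, np.abs(np.fft.rfft(corr) / n)
```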
### Polycrystalline Sample Preparation
CsBr (Alfa Aesar, 99.9%), anhydrous SrBr\({}_{2}\) (Alfa Aesar, 99%), Cs\({}_{2}\)CO\({}_{3}\), PbO, and concentrated aqueous HBr were purchased and used as received. Guided by the reported pseudo-binary phase diagram [79], polycrystalline CsSrBr\({}_{3}\) for X-ray powder diffraction and Raman spectroscopy was prepared by a solid-state reaction at 600 \({}^{\circ}\)C. CsBr (5 mmol, 1064 mg) and SrBr\({}_{2}\) (5 mmol, 1237 mg) were ground and pressed into a 5 mm diameter pellet, placed in an alumina crucible, and flame-sealed under \(\sim\)1/3 atmosphere of argon in a fused silica ampoule. The reaction yields a porous, colorless pellet which is easily separated from the crucible and ground in an inert atmosphere. Polycrystalline CsPbBr\({}_{3}\) for X-ray powder diffraction was prepared in ambient atmosphere by precipitation from aqueous hydrobromic acid. PbO (2 mmol, 446.4 mg) was dissolved in 2 mL hot concentrated HBr under stirring. Cs\({}_{2}\)CO\({}_{3}\) (1 mmol, 325.8 mg) was added slowly, resulting in an immediate bright orange precipitate. An additional 13 mL of HBr was added and the mixture was left to stir. After an hour, stirring was stopped and the mixture was allowed to cool to room temperature. Excess solution was decanted, and the remaining mixture was evaporated to dryness on a hotplate and ground. Phase purity of all prepared compounds was established by powder XRD.
### Single Crystal Preparation
Single crystals of CsSrBr\({}_{3}\) were grown by the Bridgman method from a stoichiometric mixture of the binary metal bromides in a 10 mm diameter quartz ampoule. CsSrBr\({}_{3}\) was pulled at 0.5 mm/h through an 800 \({}^{\circ}\)C hot zone, yielding a multi-crystalline rod from which several-mm single crystal regions could be cleaved.
CsSrBr\({}_{3}\) is extremely hygroscopic and all preparation and handling was performed in an inert atmosphere.
The vertical Bridgman method was used to grow large, high-quality single crystals of CsPbBr\({}_{3}\). After synthesis and purification (see Supplemental Information for details), the ampoule was reset to the hot zone for the Bridgman growth. The zone 1 temperature was set to 650 \({}^{\circ}\)C with a 150 \({}^{\circ}\)C/h ramp rate, and held for 12 h to ensure a full melt before sample motion occurred. The zone 2 and 3 temperatures were set to 375 \({}^{\circ}\)C. These temperatures were held for 350 h while the ampoule was moved through the furnace at a rate of 0.9 mm/h under 0.3 rpm rotation. After the motion had ceased, the zone 1 temperature was ramped to 375 \({}^{\circ}\)C to make the temperature profile in the furnace uniform. The cooling program was set to slow during the phase transitions occurring near 120 and 90 \({}^{\circ}\)C, with a 10 \({}^{\circ}\)C/h cooling rate from 375 \({}^{\circ}\)C to 175 \({}^{\circ}\)C, a 2.5 \({}^{\circ}\)C/h slow cooling rate from 175 \({}^{\circ}\)C to 75 \({}^{\circ}\)C, and a 10 \({}^{\circ}\)C/h rate to 30 \({}^{\circ}\)C. The resulting CsPbBr\({}_{3}\) ingot was orange-red and had large (\(\geq\)5 mm) transparent single-crystalline domains, though the edges of some portions exhibited twinning.
### X-ray Diffraction
Polycrystalline samples were ground with silicon powder (as an internal standard and diluent) and packed in borosilicate glass capillaries. Powder XRD patterns were measured in Debye\(-\)Scherrer geometry using a STOE Stadi P diffractometer (Mo K\({}_{\alpha 1}\) radiation, Ge-(111) monochromator, Mythen 1K Detector) equipped with a furnace. Data were analyzed by sequential Pawley refinement using GSAS-II.[80]
### Infrared Reflectivity Measurements
IR-reflection spectra in the THz range were measured as a combination of time-domain THz spectroscopy (TDS) for the low-frequency end and bolometer detection for the higher frequencies. Bolometer spectra were measured using a Bruker 80v Fourier-transform IR spectrometer with a globar source and a bolometer detector cooled to liquid He temperatures. The crystals were mounted for reflection measurements and the instrument was sealed in vacuum. A gold mirror was used as reflection reference. TDS was performed using a Spectra Physics Mai Tai-Empower-Spitfire Pro Ti:Sapphire regenerative amplifier. The amplifier generates 35 fs pulses centered at 800 nm at a repetition rate of 5 kHz. THz pulses were generated by a spintronic emitter, which was composed of 1.8 nm of Co\({}_{40}\)Fe\({}_{40}\)B\({}_{20}\) sandwiched between 2 nm of Tungsten and 2 nm of Platinum, all supported by a quartz substrate. The THz pulses were detected using electro-optic sampling in a (100)-ZnTe crystal. A gold mirror was used as reflection reference. The sample crystals, THz emitter and THz detector were held under vacuum during the measurements.
TDS offers better signal at low frequencies, while bolometer measurements have an advantage over TDS at higher frequencies. Therefore, the spectra were combined and merged at 100 cm\({}^{-1}\). Owing to scattering losses, the absolute intensity of reflected light cannot be interpreted quantitatively. Therefore, the spectra were scaled to the signal level at 100 cm\({}^{-1}\) before merging the data. The final reflectivity spectra are given in arbitrary units. The phonon frequencies and overall spectral shape allow for fitting to the dielectric function.
### Raman Spectroscopy
All the measurements were taken in a home-built back-scattering Raman system. For all measurements, the laser was focused with a 50x objective (Zeiss, USA), and the Rayleigh scattering was then filtered with a notch filter (Ondax Inc., USA). The beam was focused into a 1 m long spectrometer (FHR 1000, Horiba) and then onto a CCD detector. To get the unpolarized Raman spectrum for the single crystals (CsSrBr\({}_{3}\) at low temperatures and CsPbBr\({}_{3}\)), two orthogonal angles were measured in parallel and cross configurations (four measurements overall). The unpolarized spectrum is a summation of all four spectra. The samples were cooled below room temperature by a Janis ST-500 cryostat controlled by a Lakeshore model 335 and were heated above room temperature by a closed heating system (Linkam Scientific). Due to the extreme sensitivity of CsSrBr\({}_{3}\) to ambient moisture, CsSrBr\({}_{3}\) powder was flame-sealed in a small quartz capillary for the high-temperature measurements, and a single crystal was loaded into a closed cell under an Ar environment for the low-temperature measurements. The CsSrBr\({}_{3}\) low-temperature measurements were taken with a 2.5 eV CW diode laser (Coherent Inc.). The CsSrBr\({}_{3}\) high-temperature measurements and all the CsPbBr\({}_{3}\) measurements were taken with a 1.57 eV CW diode laser (Coherent Inc.).
|
2308.01055 | Towards optimal sensor placement for inverse problems in spaces of
measures | The objective of this work is to quantify the reconstruction error in sparse
inverse problems with measures and stochastic noise, motivated by optimal
sensor placement. To be useful in this context, the error quantities must be
explicit in the sensor configuration and robust with respect to the source, yet
relatively easy to compute in practice, compared to a direct evaluation of the
error by a large number of samples. In particular, we consider the
identification of a measure consisting of an unknown linear combination of
point sources from a finite number of measurements contaminated by Gaussian
noise. The statistical framework for recovery relies on two main ingredients:
first, a convex but non-smooth variational Tikhonov point estimator over the
space of Radon measures and, second, a suitable mean-squared error based on its
Hellinger-Kantorovich distance to the ground truth. To quantify the error, we
employ a non-degenerate source condition as well as careful linearization
arguments to derive a computable upper bound. This leads to asymptotically
sharp error estimates in expectation that are explicit in the sensor
configuration. Thus they can be used to estimate the expected reconstruction
error for a given sensor configuration and guide the placement of sensors in
sparse inverse problems. | Phuoc-Truong Huynh, Konstantin Pieper, Daniel Walter | 2023-08-02T10:05:46Z | http://arxiv.org/abs/2308.01055v2 | # Towards optimal sensor placement for inverse problems in spaces of measures
###### Abstract.
This paper studies the identification of a linear combination of point sources from a finite number of measurements. Since the data are typically contaminated by Gaussian noise, a statistical framework for its recovery is considered. It relies on two main ingredients: first, a convex but non-smooth Tikhonov point estimator over the space of Radon measures and, second, a suitable mean-squared error based on its Hellinger-Kantorovich distance to the ground truth. Assuming standard non-degenerate source conditions and applying careful linearization arguments, a computable upper bound on the latter is derived. On the one hand, this allows us to derive asymptotic convergence results for the mean-squared error of the estimator in the small variance case. On the other hand, it paves the way for applying optimal sensor placement approaches to sparse inverse problems.
Keywords. inverse problems, optimal sensor placement, Radon measures, off-the-grid sparse recovery, frequentist inference.
2020 Mathematics Subject Classification. 35Q62, 35R30, 62K05, 65J22
## 1. Introduction
The identification of an unknown signal \(\mu^{\dagger}\) comprising finitely many point sources lies at the heart of challenging applications such as acoustic or seismic inversion [25, 17], microscopy [21], astronomy [30], as well as initial value identification [6]. A popular mathematical model for the recovery of the locations \(y_{n}^{\dagger}\in\varOmega_{s}\) and amplitudes \(q_{n}^{\dagger}\) of its \(N_{s}^{\dagger}\) individual point sources is given by integral equations
\[z_{j}^{d}(\varepsilon)=\int_{\varOmega_{s}}k(x_{j},y)\ \mathrm{d}\mu^{ \dagger}(y)+\varepsilon_{j}=\sum_{n=1}^{N_{s}^{\dagger}}q_{n}^{\dagger}k(x_{j},y_{n}^{\dagger})+\varepsilon_{j}\quad\text{for }j=1,\ldots,N_{o}. \tag{1.1}\]
Here, \(k\in\mathcal{C}(\varOmega_{o}\times\varOmega_{s})\) and \(x_{j}\in\varOmega_{o}\) denote a given, sufficiently smooth integral kernel and the measurement locations, respectively. This type of _ill-posed inverse problem_ is challenging for a variety of reasons. First and foremost, we assume knowledge neither of the amplitudes and positions of the sources nor of their number. This adds an additional combinatorial component to the generally nonlinear, nonconvex problem. Second, inference on \(\mu^{\dagger}\) is only possible through a finite number of indirect measurements \(z^{d}\). Additional challenges are given by the appearance of unobservable, deterministic or random, measurement noise \(\varepsilon\) in the problem. A recently popularized approach to alleviating many of these difficulties is to identify \(\mu^{\dagger}\) with a finite linear combination of Dirac measures
\[\mu^{\dagger}=\sum_{n=1}^{N_{s}}q_{n}^{\dagger}\delta_{y_{n}^{ \dagger}}\quad\text{where}\quad\int_{\varOmega}k(x_{j},y)\ \mathrm{d}\delta_{y_{n}^{\dagger}}(y)=k(x_{j},y_{n}^{ \dagger}). \tag{1.2}\]
Subsequently, we try to recover \(\mu^{\dagger}\) by the stable solution of the linear, ill-posed, operator equation
\[\text{find }\mu\in\mathcal{M}(\varOmega_{s})\colon\quad z^{d}( \varepsilon)=K\mu\text{ where }K\mu=\left(\int_{\varOmega}k(x_{1},y)\ \mathrm{d}\mu(y);\ldots;\int_{\varOmega}k(x_{N_{o}},y)\ \mathrm{d}\mu(y)\right)\]
over the space of Radon measures \(\mathcal{M}(\Omega_{s})\) defined on \(\Omega_{s}\). At first glance, this might seem counterintuitive: The space \(\mathcal{M}(\Omega_{s})\) is far larger than the set of "sparse" signals of the form (1.2). Thus, this lifting should contribute to the ill-posedness of the problem. However, it also bypasses the nonlinear dependence of \(k(x_{j},\cdot)\) on the location of the sources and enables the use of powerful tools from variational regularization theory for the reconstruction of \(\mu^{\dagger}\). Central objects in this context are the (noiseless) _minimum norm problem_
\[\min_{\mu\in\mathcal{M}(\Omega_{s})}\|\mu\|_{\mathcal{M}(\Omega_{s})}\quad s.t. \quad K\mu=K\mu^{\dagger}\] ( \[\mathcal{P}_{0}\] )
as well as the question whether \(\mu^{\dagger}\) is _identifiable_, i.e., its unique solution. A sufficient condition for the latter is, e.g., the injectivity of the restricted operator \(K_{|\operatorname{supp}\mu^{\dagger}}\) together with the existence of a, in some sense minimal, dual certificate \(\eta^{\dagger}\in\mathcal{C}^{2}(\Omega_{s})\) satisfying a _strengthened source condition_
\[|\eta^{\dagger}(y)|\leq 1\quad\text{for all }y\in\Omega_{s},\quad\eta^{\dagger}(y_{n}^{\dagger})=\operatorname{sign}(q_{n}^{\dagger}),\quad|\eta^{\dagger}(y)|<1\quad\text{for all }y\in\Omega_{s}\setminus\{y_{n}^{\dagger}\}_{n=1}^{N_{s}^{\dagger}}.\]
For example, in particular settings, the groundbreaking paper [5] shows that \(\mu^{\dagger}\) is identifiable if the source locations \(y_{n}^{\dagger}\) are sufficiently well separated.
However, measurements stemming from experiments are always affected by errors, be it due to external influences, imperfections of the measurement devices, or human error. These have to be taken into account in order to guarantee a stable recovery of \(\mu^{\dagger}\).
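To make the measurement model (1.1)-(1.2) concrete before turning to noise models, the following minimal numerical sketch simulates noisy point-source data for a Gaussian convolution kernel; the kernel, the source configuration, and all numerical values are illustrative choices of ours, not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y, sigma=0.1):
    """Illustrative Gaussian kernel; the paper only requires a sufficiently
    smooth k in C(Omega_o x Omega_s)."""
    return np.exp(-0.5 * ((x - y) / sigma) ** 2)

# illustrative ground truth: point sources with amplitudes q and positions y
q_true = np.array([1.0, -0.7, 0.5])
y_true = np.array([0.2, 0.55, 0.8])
x_obs = np.linspace(0.0, 1.0, 20)        # N_o = 20 sensor locations

# Eq. (1.1): z_j = sum_n q_n k(x_j, y_n) + eps_j with Gaussian noise eps
z_d = k(x_obs[:, None], y_true[None, :]) @ q_true \
      + rng.normal(scale=0.05, size=x_obs.size)
```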
### Sparse inverse problems with deterministic noise
Despite the popularity of sparse inverse problems, most of the existing work, to the best of our knowledge, focuses on deterministic noise \(\varepsilon\). In this context, several manuscripts, see e.g. [8, 1, 29, 10] for a non-exhaustive list, study the approximation of an identifiable \(\mu^{\dagger}\) by solutions to the Tikhonov-regularized problem
\[\bar{\mu}(\varepsilon)\in\mathfrak{M}(\varepsilon)\coloneqq\operatorname*{ arg\,min}_{\mu\in\mathcal{M}(\Omega_{s})}\left[\frac{1}{2}\|K\mu-z^{d}( \varepsilon)\|_{\Sigma_{0}^{-1}}^{2}+\beta\|\mu\|_{\mathcal{M}(\Omega_{s})} \right],\] ( \[\mathcal{P}_{\beta,\varepsilon}\] )
where \(\Sigma_{0}\) is a positive definite diagonal matrix and the regularization parameter \(\beta=\beta(\|\varepsilon\|)>0\) is adapted to the strength of the noise. This represents a challenging _nonsmooth_ minimization problem over the infinite-dimensional and non-reflexive space of Radon measures. Moreover, due to its lack of strict convexity, its solutions are typically not unique. Under mild conditions on the choice of \(\beta\), arbitrary solutions \(\bar{\mu}(\varepsilon)\) approximate \(\mu^{\dagger}\) in the weak*-sense, i.e.
\[\int_{\Omega_{s}}\varphi(y)\ \mathrm{d}\bar{\mu}(\varepsilon)(y)\to\int_{\Omega_{s}}\varphi(y)\ \mathrm{d}\mu^{\dagger}(y)\quad\text{for all }\varphi\in\mathcal{C}(\Omega_{s}),\]
as \(\varepsilon\) goes to zero. Moreover, it was shown in [8] that if the minimal dual certificate \(\eta^{\dagger}\) associated to problem (\(\mathcal{P}_{0}\)) satisfies the strengthened source condition and its curvature does not degenerate around \(y_{n}^{\dagger}\), \(\bar{\mu}(\varepsilon)\) is unique and of the form
\[\bar{\mu}(\varepsilon)=\sum_{n=1}^{N_{s}^{\dagger}}\bar{q}_{n}(\varepsilon) \delta_{\bar{y}_{n}(\varepsilon)}\quad\text{with}\quad|\bar{q}_{n}(\varepsilon) -q_{n}^{\dagger}|+\|\bar{y}_{n}(\varepsilon)-y_{n}^{\dagger}\|=\mathcal{O}(\| \varepsilon\|)\]
provided that \(\|\varepsilon\|\) and \(\beta\) are small enough.
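Although the theory is formulated off-the-grid, an approximate minimizer of (\(\mathcal{P}_{\beta,\varepsilon}\)) can be computed by restricting the measure to a fine grid of candidate positions and solving the resulting finite-dimensional LASSO problem, e.g., by the iterative soft-thresholding algorithm (ISTA). The sketch below (ours; it takes \(\Sigma_{0}=\mathrm{Id}\) for simplicity and reuses `k`, `x_obs` and `z_d` from the previous snippet) is only an illustration, not the solver used by the authors.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_blasso(K, z, beta, n_iter=5000):
    """min_q 0.5*||K q - z||_2^2 + beta*||q||_1 on a fixed grid, via ISTA."""
    L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
    q = np.zeros(K.shape[1])
    for _ in range(n_iter):
        q = soft(q - K.T @ (K @ q - z) / L, beta / L)
    return q

# grid discretization of (P_{beta,eps}), reusing k, x_obs, z_d from above
y_grid = np.linspace(0.0, 1.0, 400)
K_grid = k(x_obs[:, None], y_grid[None, :])
q_hat = ista_blasso(K_grid, z_d, beta=0.05)
# for suitable beta, the nonzero entries of q_hat cluster near y_true
```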
### Sparse inverse problems with random noise
From a practical perspective, assuming knowledge of the norm of the error is very restrictive or even unrealistic, and a statistical model for the measurement error is more appropriate. While the literature on deterministic sparse inversion is very rich, there are only a few works dealing with randomness in the problem. We point out, e.g., [4] in which the authors consider additive i.i.d. noise \(\varepsilon\) on measurements stemming from a low-pass filtering of the signal. A reconstruction \(\bar{\mu}(\varepsilon)\) is obtained by solving a constrained version of (\(\mathcal{P}_{\beta,\varepsilon}\)) and the authors show that, with high probability, there holds \(Q_{\mathrm{hi}}\ast\bar{\mu}(\varepsilon)\approx Q_{\mathrm{hi}}\ast\mu^{\dagger}\) where \(Q_{\mathrm{hi}}\) is a high-resolution kernel. Moreover, in [29] the authors consider deterministic noise but allow for randomness in the forward operator \(K\). Their main result provides an estimate on an optimal transport energy between the total variation measure \(|\bar{\mu}(\varepsilon)|\) and a lumped measure \(\tilde{\mu}\ll\mu^{\dagger}\) with \(\tilde{\mu}(\{y_{n}^{\dagger}\})=|\bar{\mu}(\varepsilon)|(B_{R}(y_{n}^{\dagger}))\) where \(R>0\) is a small radius. Finally, we also mention [9] in which the authors propose a first step towards _Bayesian
_inversion_ for sparse problems, i.e. both measurement noise as well as the unknown \(\mu^{\dagger}\) are considered to be random variables. A suitable prior is constructed and well-posedness of the associated Bayesian inverse problem is shown.
In this paper, similar to [4], we adopt a frequentist viewpoint on sparse inverse problems and assume that the measurement errors follow a known probability distribution. In contrast, the unknown signal \(\mu^{\dagger}\) is treated as a deterministic object. In more detail, we assume that \(\varepsilon\sim\gamma_{p}\coloneqq\mathcal{N}(0,\varSigma)\) where \(\varSigma=p^{-1}\varSigma_{0}\) and \(\varSigma_{0}\) is a positive definite diagonal matrix with \(\operatorname{tr}(\varSigma_{0}^{-1})=1\). The scalar \(p>0\) denotes the overall precision of the measurement error and represents, loosely speaking, the counterpart of \(1/\|\varepsilon\|\) in the random setting. In the present paper, we rely on a Tikhonov-type estimator \(\bar{\mu}(\varepsilon)\in\mathfrak{M}(\varepsilon)\) for the reconstruction of \(\mu^{\dagger}\) and investigate its closeness to the ground truth. However, the randomness of the noise poses various new challenges. First and foremost, the uncertainty of the noise propagates to the estimator. Thus \(\bar{\mu}\) has to be interpreted as a random variable. Second, unlike the deterministic setting of [8], our asymptotic analysis cannot exclusively rely on smallness assumptions on the Euclidean norm of the noise: some realizations of \(\varepsilon\) might be very large, albeit with small probability. Consequently, solutions to \((\mathcal{P}_{\beta,\varepsilon})\) can exhibit undesirable features such as clustering phenomena around \(y_{n}^{\dagger}\) or spurious sources far away from the true support. In particular, the reconstructed signal may comprise more or fewer than \(N_{s}^{\dagger}\) Dirac deltas. Thus, we require a suitable distance which is compatible with weak* convergence on bounded subsets of \(\mathcal{M}(\varOmega_{s})\) and allows us to assess the closeness of two arbitrary measures. We find a suitable candidate in generalizations of optimal transport energies; cf. also [29].
Despite its various difficulties, stochastic noise also provides new opportunities. For example, unlike the deterministic case, we are given a whole distribution of the measurement data and not only one particular realization. The setting of the sparse inverse problem further suggests that the uncertainty in the estimate \(\bar{\mu}\) critically depends on the number of measurements and their position. Formalizing this connection leads to the mathematical program of _optimal sensor placement_ or optimal design, i.e. an optimization of the measurement setup to mitigate the influence of the noise before any data is collected in a real experiment. This requires a cheap-to-evaluate _design criterion_ which allows to compare the quality of different sensor setups. For linear inverse problems in Hilbert spaces, a popular performance indicator is the mean-squared error of the associated least-squares estimator since it admits a closed form representation through its decomposition into variance and bias; see, e.g., [15]. For nonlinear problems, _locally optimal_ sensor placement approaches rely on a linearization of the forward model around a best guess for the unknown parameters; see, e.g., [31]. To the best of our knowledge, optimal sensor placement for nonsmooth estimation problems and for infinite dimensional parameter spaces beyond the Hilbert space setting is uncharted territory.
### Contribution
Taking these observations into consideration, we are led to the analysis of the _worst-case mean-squared-error_ of the estimator
\[\operatorname{MSE}[\bar{\mu}]\coloneqq\mathbb{E}_{\gamma_{p}}\left[\sup_{ \mu\in\mathfrak{M}(\cdot)}d_{\operatorname{HK}}(\mu,\mu^{\dagger})^{2}\right] =\int_{\mathbb{R}^{N_{o}}}\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{ \operatorname{HK}}(\mu,\mu^{\dagger})^{2}\ \mathrm{d}\gamma_{p}(\varepsilon), \tag{1.3}\]
where \(d_{\operatorname{HK}}\) denotes an extension of the Hellinger-Kantorovich distance introduced in [20] to signed measures. Taking the supremum over the solution set further avoids the requirement of a _measurable selection_ from \(\mathfrak{M}(\varepsilon)\). The existence of the latter poses a non-trivial problem in itself and goes beyond the scope of the current work. We also point out that, in comparison to linear inverse problems in Hilbert space, \(\operatorname{MSE}[\bar{\mu}]\) does not admit a closed form expression and its computation requires both a sampling of the expected value and an efficient way to calculate the Hellinger-Kantorovich distance. This prevents its direct use in the context of optimal sensor placement for sparse inverse problems.
In order to explain our main result, let us denote by \(\boldsymbol{q}^{\dagger}=(q_{1}^{\dagger};\ldots;q_{N_{s}^{\dagger}}^{\dagger})\) and \(\boldsymbol{y}^{\dagger}=(y_{1}^{\dagger};\ldots;y_{N_{s}^{\dagger}}^{\dagger})\) the vectors of coefficients and positions of sources, respectively. By abbreviating \(\boldsymbol{m}^{\dagger}=(\boldsymbol{q}^{\dagger};\boldsymbol{y}^{\dagger})\),
we consider the parametrized problem
\[\min_{\mathbf{m}=(\mathbf{q};\mathbf{y})\in(\mathbb{R}\times\Omega_{s})^{N_{s}^{\dagger}}} \left[\frac{1}{2}\|G(\mathbf{m})-z^{d}(\varepsilon)\|^{2}_{\Sigma_{0}^{-1}}+\beta\| \mathbf{q}\|_{1}\right]\quad\text{where}\quad G(\mathbf{m})=\sum_{n=1}^{N_{s}^{\dagger}}q_{n}K\delta_{y_{n}} \tag{1.4}\]
as well as its linearization
\[\min_{\delta\mathbf{m}=(\delta\mathbf{q};\delta\mathbf{y})\in\mathbb{R}^{(1+d)N_{s}^{\dagger}}}\left[\frac{1}{2}\|G^{\prime}(\mathbf{m}^{\dagger})\delta\mathbf{m}-\varepsilon\|^{2}_{\Sigma_{0}^{-1}}+\beta\operatorname{sign}(\mathbf{q}^{\dagger})^{\top}\delta\mathbf{q}\right]. \tag{1.5}\]
If \(\mu^{\dagger}\) satisfies the strengthened source condition and if the associated minimal norm dual certificate \(\eta^{\dagger}\) is non-degenerate, i.e. there holds
\[|\eta^{\dagger}(y)|\leq 1-\theta\min\left\{\theta,\;\min_{n=1,\dots,N_{s}^{\dagger}}\|\sqrt{|q^{\dagger}_{n}|}(y-y^{\dagger}_{n})\|^{2}_{2}\right\}\quad\text{for all $y\in\Omega_{s}$} \tag{1.6}\]
for some \(\theta>0\), see [28], we provide the following upper estimate on the mean-squared error:
\[\mathbb{E}_{\gamma_{p}}\left[\sup_{\mathbf{\mu}\in\mathfrak{M}(\cdot)}d_{\text{HK }}(\mu,\mu^{\dagger})^{2}\right]\leq 8\mathbb{E}_{\gamma_{p}}[\|\delta \widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}]+C_{3}\exp\left[-\left(\frac{\theta^{2} \beta_{0}}{64C_{4}}\right)^{2}/(2N_{o})\right], \tag{1.7}\]
for a parameter choice of \(\beta(p)=\beta_{0}/\sqrt{p}\), \(\beta_{0},p>0\) large enough and the unique solution \(\delta\widehat{\mathbf{m}}=\delta\widehat{\mathbf{m}}(\varepsilon)\) of (1.5), see Theorem 6.1. The weighted Euclidean norm \(\|\cdot\|_{W_{\dagger}}\) is induced by a positive definite matrix \(W_{\dagger}\) which is related to the ground truth \(\mathbf{m}^{\dagger}\). Similar estimates also hold pointwise with high probability. From a qualitative perspective, these results can be interpreted as a refined analogue of [8] in the stochastic noise setting: For sufficiently large \(\beta_{0}>0\), the second term in (1.7) is negligible and the worst-case behavior of the mean-squared error is dominated by \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}]\). Since \(\delta\widehat{\mathbf{m}}\) is linear in \(\varepsilon\) and the noise is Gaussian, the latter admits a closed form expression
\[\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}]=\frac{1 }{p}\left(\operatorname{tr}(W_{\dagger}\mathcal{I}_{0}^{-1})+\beta_{0}^{2}\| \mathcal{I}_{0}^{-1}(\mathbf{\rho};\mathbf{0})\|^{2}_{W_{\dagger}}\right)=\mathcal{O} (1/p)\]
for a matrix \(\mathcal{I}_{0}\) and a vector \(\mathbf{\rho}\) given by
\[\mathcal{I}_{0}:=G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1}G^{\prime }(\mathbf{m}^{\dagger})\quad\text{and}\quad\mathbf{\rho}=\operatorname{sign}\mathbf{q}^{ \dagger}.\]
Moreover, the multiplicative constant in the expectation
\[\psi_{\beta_{0}}(\mathbf{x},\Sigma_{0})=\operatorname{tr}(W_{\dagger}\mathcal{I} _{0}^{-1})+\beta_{0}^{2}\|\mathcal{I}_{0}^{-1}(\mathbf{\rho};\mathbf{0})\|^{2}_{W_{ \dagger}}\]
explicitly depends on the measurement setup and resembles the "classical" A-optimal design criterion; cf. [15]. These observations, together with the intractability of evaluating the mean-squared error directly, suggest its use as an optimal design criterion in the context of optimal sensor placement. We further solidify this idea by clarifying the dependence of the constants \(C_{3}\) and \(C_{4}\) on the measurement setup.
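To make the role of \(\psi_{\beta_{0}}\) concrete, the following minimal Python sketch (not part of the analysis) verifies the closed form of \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\boldsymbol{m}}\|^{2}_{W_{\dagger}}]\) by Monte Carlo sampling; the random matrix stand-in for \(G^{\prime}(\boldsymbol{m}^{\dagger})\), the dimensions, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N_o observations, N_s sources in R^d (hypothetical values).
N_o, N_s, d = 12, 2, 2
p, beta0 = 1e4, 2.0                              # precision p and beta = beta0 / sqrt(p)

Gp = rng.standard_normal((N_o, (1 + d) * N_s))   # stand-in for G'(m†), full column rank a.s.
Sigma0_inv = np.diag(rng.uniform(0.5, 1.5, N_o))
Sigma0_inv /= np.trace(Sigma0_inv)               # normalization tr(Σ0^{-1}) = 1
q_dag = np.array([1.0, -0.5])
w2 = np.abs(q_dag)                               # (w_n†)² = |q_n†|
W = np.diag(np.concatenate([1.0 / (4 * w2), np.repeat(w2, d)]))  # weight matrix W†

I0 = Gp.T @ Sigma0_inv @ Gp                      # Fisher information matrix
rho = np.concatenate([np.sign(q_dag), np.zeros(d * N_s)])        # (ρ; 0)
I0_inv_rho = np.linalg.solve(I0, rho)

# Closed-form criterion psi and the predicted mean-squared error psi / p.
psi = np.trace(W @ np.linalg.inv(I0)) + beta0**2 * I0_inv_rho @ W @ I0_inv_rho
print("closed form :", psi / p)

# Monte Carlo estimate of E[||δm̂||²_{W†}] with ε ~ N(0, Σ0 / p).
L = np.linalg.cholesky(np.linalg.inv(Sigma0_inv) / p)
beta = beta0 / np.sqrt(p)
vals = []
for _ in range(20000):
    eps = L @ rng.standard_normal(N_o)
    dm = np.linalg.solve(I0, Gp.T @ Sigma0_inv @ eps - beta * rho)
    vals.append(dm @ W @ dm)
print("Monte Carlo :", np.mean(vals))
```

Up to sampling error, the two printed numbers coincide, which is exactly the \(\mathcal{O}(1/p)\) behavior stated above.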
The proof of (1.7) relies on a splitting of the set of possible measurement errors \(\mathbb{R}^{N_{o}}\) into a set of "nice" events \(\mathcal{A}_{\text{nice}}\) as well as a precise estimation of the measure of its complement \(\mathbb{R}^{N_{o}}\setminus\mathcal{A}_{\text{nice}}\). On \(\mathcal{A}_{\text{nice}}\), we show that problems \((\mathcal{P}_{\beta,\varepsilon})\) and (1.4) admit unique minimizers \(\bar{\mu}\) and \(\widehat{\mathbf{m}}=(\widehat{\mathbf{q}},\widehat{\mathbf{y}})\), respectively, which are related by
\[\bar{\mu}=\sum_{n=1}^{N_{s}^{\dagger}}\widehat{q}_{n}\delta_{\widehat{y}_{n}}.\]
Moreover, applying a fully quantitative implicit function theorem yields
\[d_{\text{HK}}(\bar{\mu},\mu^{\dagger})^{2}\leq R(\widehat{\mathbf{q}},\mathbf{q}^{ \dagger})\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}\|^{2}_{W_{\dagger}}\leq 8\| \delta\widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}. \tag{1.8}\]
for some \(R(\widehat{\mathbf{q}},\mathbf{q}^{\dagger})\approx 1\). This estimate critically depends on the choice of \(d_{\text{HK}}\) in (1.3) as well as its interpretation as an unbalanced Wasserstein-2 distance. While similar estimates can be derived for other popular metrics such as the Kantorovich-Rubinstein distance, see Appendix B, this would introduce additional constants in (1.8) stemming from an inverse inequality of discrete \(\ell_{1}\) and weighted \(\ell_{2}\) norms. Thus, the resulting right hand side in (1.7) would overestimate the true error by a potentially substantial factor. In contrast, (1.8) is sharp in the sense that the factor of \(8\) can, mutatis mutandis, be replaced by any \(1+\delta\), \(\delta>0\) (at the cost of decreasing the range of \(p>0\) for which it is valid).
### Further related work
**Sparse minimization problems beyond inverse problems.** Minimization problems over spaces of measures represent a sensible extension of \(\ell_{1}\)-regularization towards decision variables on continuous domains. Consequently, problems of the form \((\mathcal{P}_{\beta,\varepsilon})\) naturally appear in a variety of different applications, detached from inverse problems. We point out, e.g., optimal actuator placement, optimal sensor placement [22], as well as the training of shallow neural networks [2]. Non-degeneracy conditions similar to (1.6) play a crucial role in this context and form the basis for an in-depth (numerical) analysis of the problem, e.g., concerning the derivation of fast converging solution methods, [7, 11, 26], or finite element error estimates [19].
**Inverse problems with random noise.** Frequentist approaches to inverse problems have been studied previously in, e.g., [13, 32]. These works focus on the "lifting" of deterministic regularization methods as well as of their consistency properties and convergence rates to the random noise setting. This only relies on minimal assumptions on the inverse problem, e.g., classical source conditions, and thus covers a wide class of settings. Similar to the present work, an important role is played by a splitting of the possible events into a set on which the deterministic theory holds and its small complement. However, we want to stress that the proof of the main estimate in (1.7) is problem-tailored and relies on exploiting specific structural properties of inverse problems in spaces of measures. Moreover, our predominant goal is _not_ the consistency analysis of an estimator but the derivation of a useful and mathematically sound design criterion for sparse inverse problems.
### Organization of the paper
The paper is organized as follows: In Section 3, we recall some properties of the minimum norm problem \((\mathcal{P}_{0})\) and the Tikhonov regularized problem \((\mathcal{P}_{\beta,\varepsilon})\) as well as of its solutions. In Section 4, we define the Hellinger-Kantorovich distance and investigate its properties. Section 5 is devoted to the analysis of the linearized estimate \(\delta\widehat{\boldsymbol{m}}\). Using these results, we then study sparse inverse problems with random noise in Section 6 and provide a sharp upper bound for \(\mathrm{MSE}[\bar{\mu}]\) in Section 6.2. Finally, in Section 7 we present some numerical examples to verify our theory.
## 2. Preliminaries and notation
Before going into the main part of the paper, we introduce some notation which is used throughout. Letters \(c_{i},C_{i}\), \(i=1,2,\ldots\) denote constants that may vary from line to line. The notation \(C=C(a,b,\ldots)\) indicates that \(C\) depends on \(a,b,\ldots\). We use "\(:=\)" to denote a definition, which is read as "is defined to be". We denote by \(\varOmega_{s}\subset\mathbb{R}^{d}\) and \(\varOmega_{o}\subset\mathbb{R}^{d}\) the compact source and observation sets, where \(d\geq 1\) and \(\varOmega_{s}\) has a nonempty interior.
A vector in \(X^{m}\), where \(X\) is a set and \(m>1\), will be written in bold face; for instance, \(\boldsymbol{y}=(y_{1};\ldots;y_{N_{s}})\in\varOmega_{s}^{N_{s}}\), \(\boldsymbol{q}=(q_{1};\ldots;q_{N_{s}})\in\mathbb{R}^{N_{s}}\) and \(\boldsymbol{x}=(x_{1};\ldots;x_{N_{o}})\in\varOmega_{o}^{N_{o}}\) are the vectors of source positions, coefficients, and observation positions, respectively, whose formal definitions are introduced in the sequel. We write \((\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n})\) and \((\boldsymbol{a}_{1};\ldots;\boldsymbol{a}_{n})\) to stack vectors \(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n}\) horizontally and vertically, respectively. We write \(\|\cdot\|_{p}\) for the usual \(\ell^{p}\)-norm on \(\mathbb{R}^{m}\). For a vector \(x\in\mathbb{R}^{m}\) and a positive definite matrix \(W\in\mathbb{R}^{m\times m}\), we define the weighted \(W\)-norm of \(x\) as \(\|x\|_{W}:=\|W^{1/2}x\|_{2}\). The closed ball in this weighted norm is denoted by \(B_{W}(x,r):=\{\,x^{\prime}\in\mathbb{R}^{m}\,:\,\|x^{\prime}-x\|_{W}\leq r\,\}\). Next, for a linear map \(A:X\to Y\), the operator norm of \(A\) is given by \(\|A\|_{X\to Y}=\sup_{\|x\|_{X}\leq 1}\|Ax\|_{Y}\). Similarly, any bilinear map \(A:X_{1}\times X_{2}\to Y\) has a natural operator norm \(\|A\|_{X_{1}\times X_{2}\to Y}:=\sup_{\|x_{1}\|_{X_{1}}\leq 1,\|x_{2}\|_{X_{2}}\leq 1}\|A(x_{1},x_{2})\|_{Y}\).
Throughout the paper, by a slight abuse of notation, we denote by \(\varepsilon\) a variable deterministic noise, a random variable, or its realization, which will be clear from the context.
### Integral kernels
Let \(k:\Omega_{o}\times\Omega_{s}\to\mathbb{R}\) be a real-valued kernel. We introduce the following notations which turn \(k\) into vector-valued kernels: \(k[\mathbf{x}](y)=k[\mathbf{x},y]\) is a column vector with
\[k[\mathbf{x},y]:=\left(k(x_{1},y);\ldots;k(x_{N_{o}},y)\right),\quad\mathbf{x}=(x_{1}; \ldots;x_{N_{o}})\in\Omega_{o}^{N_{o}},\quad y\in\Omega_{s},\]
while \(k[x,\mathbf{y}]\) is a row vector with
\[k[x,\mathbf{y}]:=\left(k(x,y_{1}),\ldots,k(x,y_{N_{s}})\right),\quad x\in\Omega_{ o},\quad\mathbf{y}=(y_{1};\ldots;y_{N_{s}})\in\Omega_{s}^{N_{s}}.\]
Similarly, we also have the matrix \(k[\mathbf{x},\mathbf{y}]\) defined as
\[k[\mathbf{x},\mathbf{y}]:=\left(k(x_{1},\mathbf{y});\ldots;k(x_{N_{o}},\mathbf{y})\right).\]
When \(k=k(x,\cdot)\) is a smooth function of the variable \(y\), we denote the \(r\)-th derivative of \(k\), i.e., the tensor of partial derivatives with respect to \(y\), by \(\nabla^{r}_{y\cdots y}k(x,y)\). In particular, \(\nabla_{y}k(x,y)\) and \(\nabla_{yy}^{2}k(x,y)\) are the gradient and Hessian of \(k\) (with respect to the variable \(y\)), respectively. We note that \(\nabla_{y}k\colon\Omega_{o}\times\Omega_{s}\to\mathbb{R}^{d}\) is a vector-valued kernel and thus we define \(\nabla_{y}^{\top}k[x,\mathbf{y}]\) as the matrix
\[\nabla_{y}^{\top}k[x,\mathbf{y}]=(\nabla_{y}k(x,y_{1})^{\top},\nabla_{y}k(x,y_{2})^{\top},\ldots,\nabla_{y}k(x,y_{N_{s}})^{\top}).\]
Similarly, \(\nabla_{y}^{\top}k[\mathbf{x},\mathbf{y}]\) is the block matrix
\[\nabla_{y}^{\top}k[\mathbf{x},\mathbf{y}]=(\nabla_{y}^{\top}k[x_{1},\mathbf{y}];\ldots;\nabla_{y}^{\top}k[x_{N_{o}},\mathbf{y}]).\]
Throughout the paper, we assume that the kernel is sufficiently regular:
* **(A1)** The kernel \(k\in\mathcal{C}(\Omega_{o}\times\Omega_{s})\) is three times differentiable in the variable \(y\). For abbreviation, we further set \[C_{k} :=\sup_{x\in\Omega_{o},y\in\Omega_{s}}|k(x,y)|, C_{k}^{\prime} :=\sup_{x\in\Omega_{o},y\in\Omega_{s}}\|\nabla_{y}k(x,y)\|_{2},\] \[C_{k}^{\prime\prime} :=\sup_{x\in\Omega_{o},y\in\Omega_{s}}\|\nabla_{yy}^{2}k(x,y)\|_{2\to 2}, C_{k}^{\prime\prime\prime} :=\sup_{x\in\Omega_{o},y\in\Omega_{s}}\|\nabla_{yyy}^{3}k(x,y)\|_{2\times 2\to 2}.\]

By means of the kernel \(k\), we define the function \[[\mathcal{K}\mu](x)=\int_{\Omega_{s}}k(x,y)\,\mathrm{d}\mu(y),\quad x\in\Omega_{o},\quad\mu\in\mathcal{M}(\Omega_{s}),\] associated to \(\mu\in\mathcal{M}(\Omega_{s})\), as well as the weak* continuous forward operator \(K\colon\mathcal{M}(\Omega_{s})\to\mathbb{R}^{N_{o}}\) given by \[K\mu=([\mathcal{K}\mu](x_{1});[\mathcal{K}\mu](x_{2});\ldots;[\mathcal{K}\mu](x_{N_{o}})).\] Moreover, consider the operator \(K^{*}\colon\mathbb{R}^{N_{o}}\to\mathcal{C}(\Omega_{s})\) given by \[[K^{*}z](y)=\sum_{j=1}^{N_{o}}z_{j}k(x_{j},y)\quad\text{for all}\quad z\in\mathbb{R}^{N_{o}}.\] Then \(K^{*}\) is linear and continuous and there holds \[\int_{\Omega_{s}}[K^{*}z](y)\ \mathrm{d}\mu(y)=z^{\top}[K\mu]\quad\text{for all}\quad\mu\in\mathcal{M}(\Omega_{s}),\ z\in\mathbb{R}^{N_{o}}.\]
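For concreteness, the following small Python sketch (an illustration, not part of the original text) assembles \(K\) and \(K^{*}\) for a Gaussian kernel in \(d=2\) and checks the duality relation above on a sparse measure; the kernel choice, the sensor positions, and all numerical values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3                                  # hypothetical Gaussian kernel width

def k(x, y):
    """Kernel k(x, y) = exp(-|x - y|² / (2 σ²)); an assumed smooth choice."""
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * sigma**2))

x_obs = rng.uniform(0, 1, size=(8, 2))       # observation points x ∈ Ω_o ⊂ R²
y_src = np.array([[0.3, 0.4], [0.7, 0.6]])   # support points of a sparse μ
q_src = np.array([1.5, -0.8])                # coefficients

def K(q, y):
    """Forward operator: (Kμ)_j = Σ_n q_n k(x_j, y_n) for μ = Σ_n q_n δ_{y_n}."""
    return k(x_obs[:, None, :], y[None, :, :]) @ q

def K_star(z):
    """Adjoint K*: y ↦ Σ_j z_j k(x_j, y), a continuous function on Ω_s."""
    return lambda y: z @ k(x_obs, y)

# Duality check: ∫ (K*z) dμ = zᵀ (Kμ) for sparse μ.
z = rng.standard_normal(8)
lhs = sum(qn * K_star(z)(yn) for qn, yn in zip(q_src, y_src))
print(np.isclose(lhs, z @ K(q_src, y_src)))  # True
```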
### Space of Radon measures
We recall some properties of Radon measures. Let \(\Omega\subset\mathbb{R}^{d}\), \(d\geq 1\) be a compact set. We define the space of Radon measures \(\mathcal{M}(\Omega)\) as the topological dual of the space \(\mathcal{C}(\Omega)\) of continuous functions on \(\Omega\) endowed with the supremum norm. It is then a Banach space equipped with the dual norm
\[\|\mu\|_{\mathcal{M}(\Omega)}:=\sup\left\{\int_{\Omega}f\,\mathrm{d}\mu:f\in \mathcal{C}(\Omega),\|f\|_{\mathcal{C}(\Omega)}\leq 1\right\}.\]
Weak* convergence of a sequence in \(\mathcal{M}(\Omega)\) will be denoted by "\(\rightharpoonup^{*}\)". Next, the subdifferential of the total variation norm is given by
\[\partial\|\mu\|_{\mathcal{M}(\Omega_{s})}:=\left\{\eta\in\mathcal{C}(\Omega_{s }):|\eta(y)|\leq 1,\forall y\in\Omega_{s}\text{ and }\int_{\Omega_{s}}\eta\,\mathrm{d}\mu=\|\mu\|_{\mathcal{M}(\Omega_{s})} \right\},\]
see for instance [8]. In particular, for a discrete measure \(\mu=\sum_{n=1}^{N}q_{n}\delta_{y_{n}}\) one has
\[\partial\|\mu\|_{\mathcal{M}(\Omega_{s})}=\left\{\eta\in\mathcal{C}(\Omega_{s}): |\eta(y)|\leq 1,\forall y\in\Omega_{s}\text{ and }\eta(y_{n})=\operatorname{sign}(q_{n}),\forall n=1, \ldots,N\right\}.\]
Finally, by \(\mathcal{M}^{+}(\Omega)\) we refer to the set of positive Radon measures on \(\Omega\).
## 3. Sparse inverse problems with deterministic noise
As already outlined in the introduction, our interest lies in the stable recovery of a sparse ground truth measure
\[\mu^{\dagger}=\sum_{n=1}^{N_{s}^{\dagger}}q_{n}^{\dagger}\delta_{y_{n}^{\dagger}}\quad\text{for some}\quad q_{n}^{\dagger}\in\mathbb{R},\]
by solving the Tikhonov regularization (\(\mathcal{P}_{\beta,\varepsilon}\)) associated to the inverse problem \(z^{d}=K\mu\) given noisy data \(z^{d}\). In this preliminary section, we give some meaningful examples of this abstract setting and briefly recap the key concepts and results in the case of additive deterministic noise
\[z^{d}(\varepsilon)=K\mu^{\dagger}+\varepsilon\quad\text{for some}\quad \varepsilon\in\mathbb{R}^{N_{o}}.\]
In particular, we clarify the connection between (\(\mathcal{P}_{\beta,\varepsilon}\)) and (\(\mathcal{P}_{0}\)) and recall a first qualitative statement on the asymptotic behavior of solutions to (\(\mathcal{P}_{\beta,\varepsilon}\)) for a suitable a priori regularization parameter choice \(\beta=\beta(\varepsilon)\).
### Examples
Sparse inverse problems appear in a variety of interesting applications. In the following, we give several examples which fit into the abstract setting considered in the present work.
**Example 3.1**.: Consider the advection-diffusion equation
\[\partial_{t}u-\nabla(\boldsymbol{D}\cdot\nabla u)+\nabla\cdot(\kappa u)=0\text { in }(0,T)\times\mathbb{R}^{d}, \tag{3.1}\]
together with the initial value \(u(0,\cdot)=\mu\) and the decay condition \(u\to 0\) as \(|x|\to\infty\). This equation describes the evolution of the concentration \(u(t,x)\) of a contaminant. For simplicity, we consider a two-dimensional medium, and both \(\kappa=(\kappa_{1},\kappa_{2})\) and \(\boldsymbol{D}=\operatorname{diag}(D_{1},D_{2})\) are independent of \(x\). Here the solution to (3.1) is given by
\[u(t,x)=\int_{\mathbb{R}^{2}}G(x-y,t)\,\mathrm{d}\mu(y)\]
where \(G(x,t)\) is the Green's function of the advection-diffusion equation, which is given by
\[G(x,t)=\frac{1}{4\pi t\sqrt{D_{1}D_{2}}}\exp(-\|x-\kappa t\|_{\boldsymbol{D}^{-1}}^{2}/(4t)).\]
Here, if one seeks to identify the initial value \(\mu\) from a finite number of measurements at time \(T_{o}>0\) in the observation set \(\Omega_{o}\subset\mathbb{R}^{2}\), the kernel is given by \(k(x,y)=G(x-y,T_{o})\).
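Assuming the toy parameter values below (they are not taken from the text), the kernel of Example 3.1 can be implemented directly; the following Python sketch evaluates \(k(x,y)=G(x-y,T_{o})\) for the 2D Green's function:

```python
import numpy as np

# Hypothetical physical parameters for the 2D advection-diffusion example.
D1, D2 = 0.02, 0.05                 # diagonal diffusion coefficients
kappa = np.array([0.4, -0.1])       # advection velocity
T_o = 1.0                           # observation time

def G(x, t):
    """Free-space Green's function of the 2D advection-diffusion equation."""
    z = x - kappa * t
    quad = z[..., 0] ** 2 / D1 + z[..., 1] ** 2 / D2   # |x - κt|²_{D^{-1}}
    return np.exp(-quad / (4 * t)) / (4 * np.pi * t * np.sqrt(D1 * D2))

def k(x, y):
    """Measurement kernel k(x, y) = G(x - y, T_o) from Example 3.1."""
    return G(x - y, T_o)

print(k(np.array([0.5, 0.5]), np.array([0.1, 0.6])))
```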
**Example 3.2**.: Consider the advection-diffusion equation on a bounded smooth domain \(\Omega\), together with the Dirichlet boundary conditions \(u|_{(0,T)\times\partial\Omega}=0\), then there exists a kernel \(G(x,y,t)\) such that
\[u(t,x)=\int_{\Omega}G(x,y,t)\,\mathrm{d}\mu(y),\]
see, e.g., [12]. In this case, for observations at time \(T_{o}\) we choose \(k=G(\cdot,\cdot,T_{o})\). For \(\Omega_{o}\subset\Omega\) (i.e., no observation near the boundary), the regularity requirements on \(\partial\Omega\) are not necessary since one can employ interior regularity arguments; see, e.g., [14].
### Tikhonov regularization of sparse inverse problems
In this section, we briefly summarize some preliminary results concerning the regularized problem \((\mathcal{P}_{\beta,\varepsilon})\) as well as its solution set. We start by discussing its well-posedness.
**Proposition 3.3**.: Problem \((\mathcal{P}_{\beta,\varepsilon})\) admits a solution \(\bar{\mu}\). Furthermore, any solution \(\bar{\mu}\) to \((\mathcal{P}_{\beta,\varepsilon})\) satisfies \(\|\bar{\mu}\|_{\mathcal{M}(\Omega_{s})}\leq\|\varepsilon\|_{\Sigma_{0}^{-1}}^{2}/(2\beta)+\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}\) and the solution set
\[\mathfrak{M}(\varepsilon)=\arg\min\left(\mathcal{P}_{\beta,\varepsilon}\right)\]
is weak* compact.
Proof.: Existence of a minimizer of \((\mathcal{P}_{\beta,\varepsilon})\) is guaranteed by [3, Proposition 3.1] noticing that the forward operator \(K:\mathcal{M}(\Omega_{s})\to\mathbb{R}^{N_{o}}\) of \((\mathcal{P}_{\beta,\varepsilon})\) is weak*-to-strong continuous. For the upper bound we use the optimality of \(\bar{\mu}\) compared to \(\mu^{\dagger}\) as well as the definition of \(z^{d}(\varepsilon)\) to get
\[\beta\|\bar{\mu}\|_{\mathcal{M}(\Omega_{s})}\leq\frac{1}{2}\|K\bar{\mu}-z^{d} \|_{\Sigma_{0}^{-1}}^{2}+\beta\|\bar{\mu}\|_{\mathcal{M}(\Omega_{s})}\leq \frac{1}{2}\|\varepsilon\|_{\Sigma_{0}^{-1}}^{2}+\beta\|\mu^{\dagger}\|_{ \mathcal{M}(\Omega_{s})}.\]
Moreover, \(\mathfrak{M}(\varepsilon)\) is weak* closed since the objective functional in \((\mathcal{P}_{\beta,\varepsilon})\) is weak* lower semicontinuous. Combining both observations, we conclude the weak* compactness of \(\mathfrak{M}(\varepsilon)\).
In particular, note that \(\mathfrak{M}(\varepsilon)\) is, in general, not a singleton due to the lack of strict convexity in \((\mathcal{P}_{\beta,\varepsilon})\). Moreover, we recall that the inverse problem was introduced as a lifting of the nonconvex and combinatorial integral equation (1.1). From the same perspective, \((\mathcal{P}_{\beta,\varepsilon})\) can be interpreted as a convex relaxation of the parametrized problem
\[\inf_{\begin{subarray}{c}\mathbf{y}\in\Omega_{s}^{N},\ \mathbf{q}\in\mathbb{R}^{N},\\ N\in\mathbb{N}\end{subarray}}\left[\frac{1}{2}\|k[\mathbf{x},\mathbf{y}]\mathbf{q}-z^{d}\|_{\Sigma_{0}^{-1}}^{2}+\beta\|\mathbf{q}\|_{1}\right]. \tag{3.2}\]
In the following proposition, we show that this relaxation is exact, i.e. there exists at least one solution to (3.2) and its minimizers parametrize sparse solutions to \((\mathcal{P}_{\beta,\varepsilon})\).
**Proposition 3.4**.: There holds \(\min\left(\mathcal{P}_{\beta,\varepsilon}\right)=\inf\,(3.2)\). For a triple \((\bar{N},\bar{\mathbf{y}},\bar{\mathbf{q}})\) with \(\bar{y}_{i}\neq\bar{y}_{j}\), \(i\neq j\), the following statements are equivalent:
* The triple \((\bar{N},\bar{\mathbf{y}},\bar{\mathbf{q}})\) is a solution of (3.2).
* The parametrized measure \(\bar{\mu}=\sum_{n=1}^{\bar{N}}\bar{q}_{n}\delta_{\bar{y}_{n}}\) is a solution of \((\mathcal{P}_{\beta,\varepsilon})\).
Moreover, \((\mathcal{P}_{\beta,\varepsilon})\) admits at least one solution of this form with \(\bar{N}\leq N_{o}\).
Proof.: Given \((N,\mathbf{y},\mathbf{q})\) with \(y_{i}\neq y_{j}\), \(i\neq j\), note that the sparse measure
\[\mu(\mathbf{y},\mathbf{q})=\sum_{n=1}^{N}q_{n}\delta_{y_{n}}\quad\text{satisfies} \quad K\mu(\mathbf{y},\mathbf{q})=k[\mathbf{x},\mathbf{y}]\mathbf{q},\ \|\mu(\mathbf{y},\mathbf{q})\|_{\mathcal{M}(\Omega_{s})}=\|\mathbf{q}\|_{1}.\]
Hence, one readily verifies \(\min\left(\mathcal{P}_{\beta,\varepsilon}\right)=\inf\,(3.2)\) as well as the claimed equivalence due to the weak* density of the set of sparse measures in \(\mathcal{M}(\Omega_{s})\) and since the objective functional in \((\mathcal{P}_{\beta,\varepsilon})\) is weakly* lower semicontinuous. The existence of a sparse solution to \((\mathcal{P}_{\beta,\varepsilon})\) follows similarly to [25, Theorem 3.7].
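To illustrate the parametrized viewpoint, the following Python sketch minimizes (3.2) for a fixed number \(N\) of Diracs with a derivative-free local search. This is only a toy illustration under assumed data (Gaussian kernel, synthetic two-Dirac ground truth); it is not the solution algorithm considered in this paper, and a local method offers no global optimality guarantee for the nonconvex position variables.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
sigma, beta, N = 0.25, 0.05, 2                # assumed kernel width, weight, # of Diracs

x_obs = rng.uniform(0, 1, size=(10, 1))       # sensors in Ω_o = [0, 1]
Sigma0_inv = np.eye(10) / 10                  # tr(Σ0^{-1}) = 1

def k_mat(y):                                 # k[x, y], shape (N_o, N)
    return np.exp(-(x_obs - y[None, :]) ** 2 / (2 * sigma**2))

# Synthetic data from a two-Dirac ground truth plus small noise.
q_dag, y_dag = np.array([1.0, -0.7]), np.array([0.3, 0.75])
z_d = k_mat(y_dag) @ q_dag + 0.01 * rng.standard_normal(10)

def objective(m):                             # m = (q; y) ∈ R^{2N}
    q, y = m[:N], m[N:]
    r = k_mat(y) @ q - z_d
    return 0.5 * r @ Sigma0_inv @ r + beta * np.abs(q).sum()

# Derivative-free local search; the ℓ¹ term is nonsmooth, so we avoid gradients.
m0 = np.concatenate([np.array([0.8, -0.5]), np.array([0.35, 0.7])])
res = minimize(objective, m0, method="Powell")
print("recovered q:", res.x[:N], " y:", res.x[N:])
```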
The equivalence between both of these problems will play a significant role in our subsequent analysis. Additional insight on the structure of solutions to \((\mathcal{P}_{\beta,\varepsilon})\) can be gained through the study of its first order necessary and sufficient optimality conditions. Since our interest lies in sparse solutions, we restrict the following proposition to this particular case.
**Proposition 3.5**.: A measure \(\bar{\mu}=\sum_{n=1}^{\bar{N}}\bar{q}_{n}\delta_{\bar{y}_{n}}\) is a solution of \((\mathcal{P}_{\beta,\varepsilon})\) if and only if
\[|\bar{\eta}(y)|\leq 1\text{ for all }y\in\Omega_{s},\quad\bar{\eta}(\bar{y}_{n})= \operatorname{sign}(\bar{q}_{n}),\quad\forall n=1,\ldots,\bar{N},\]
where
\[\bar{\eta}=-K^{*}\Sigma_{0}^{-1}(K\bar{\mu}-z^{d})/\beta=K^{*}\Sigma_{0}^{-1} \left(z^{d}-k[\mathbf{x},\bar{\mathbf{y}}]\bar{\mathbf{q}}\right)/\beta.\]
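As a numerical aside (an illustration under assumed data, not part of the original text), the optimality check of Proposition 3.5 is straightforward to evaluate for a candidate pair \((\bar{\mathbf{q}},\bar{\mathbf{y}})\): one forms the residual, applies \(K^{*}\Sigma_{0}^{-1}\), and inspects the two conditions on a grid. For a generic candidate both conditions fail; they hold exactly at minimizers of \((\mathcal{P}_{\beta,\varepsilon})\).

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, beta = 0.25, 0.05                      # assumed kernel width and weight
x_obs = np.linspace(0, 1, 10)
Sigma0_inv = np.eye(10) / 10                  # tr(Σ0^{-1}) = 1

def k(x, y):                                  # 1D Gaussian kernel (an assumed choice)
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * sigma**2))

q_bar, y_bar = np.array([1.0, -0.7]), np.array([0.3, 0.75])   # candidate (q̄, ȳ)
z_d = k(x_obs, y_bar) @ q_bar + 0.01 * rng.standard_normal(10)

# Dual certificate η̄ = K* Σ0^{-1} (z^d - k[x, ȳ] q̄) / β from Proposition 3.5.
residual = z_d - k(x_obs, y_bar) @ q_bar
eta_bar = lambda y: k(y, x_obs) @ (Sigma0_inv @ residual) / beta

grid = np.linspace(0, 1, 401)
print("max |eta_bar| on grid:", np.abs(eta_bar(grid)).max())   # ≤ 1 at a solution
print("eta_bar(y_n), sign(q_n):", eta_bar(y_bar), np.sign(q_bar))  # equal at a solution
```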
Note that \(\bar{\eta}\) is independent of the particular choice of the solution to \((\mathcal{P}_{\beta,\varepsilon})\). We will refer to it as the dual certificate associated to \((\mathcal{P}_{\beta,\varepsilon})\) in the following. Finally, we give a connection between \((\mathcal{P}_{\beta,\varepsilon})\) and the minimum norm problem \((\mathcal{P}_{0})\) in the vanishing noise limit. The following general convergence property follows directly from [16].
**Proposition 3.6**.: Assume that \(\beta=\beta(\varepsilon)\) is chosen such that
\[\beta\to 0\quad\text{and}\quad\frac{\|\varepsilon\|_{\Sigma_{0}^{-1}}^{2}}{\beta}\to 0\quad\text{as}\quad\|\varepsilon\|_{\Sigma_{0}^{-1}}\to 0.\]
Then solutions to \((\mathcal{P}_{\beta,\varepsilon})\) subsequentially converge weakly* towards solutions of \((\mathcal{P}_{0})\).
### Radon minimum norm problems
Following Proposition 3.6, guaranteed recovery of the ground truth measure requires that \(\mu^{\dagger}\) is identifiable, i.e., the unique solution of \((\mathcal{P}_{0})\). In this section, we briefly summarize some key concepts regarding \((\mathcal{P}_{0})\) and state sufficient assumptions for the latter. For this purpose, introduce the associated Fenchel dual problem
\[\min_{\zeta\in\mathbb{R}^{N_{o}}}\left[-\langle\mu^{\dagger},K^{*}\Sigma_{0}^ {-1}\zeta\rangle+\mathbb{I}_{\|K^{*}\Sigma_{0}^{-1}\zeta\|_{C(\Omega_{s})} \leq 1}\right]. \tag{3.3}\]
as well as the minimal-norm dual certificate
\[K^{*}\Sigma_{0}^{-1}\zeta^{\dagger}\in\mathcal{C}^{2}(\varOmega_{s})\quad\text{where}\quad\zeta^{\dagger}=\underset{\zeta\in\mathbb{R}^{N_{o}}}{\arg\min}\{\,\|\zeta\|_{2}\ :\ \zeta\in\arg\min\,(3.3)\,\}.\]
Note that both are well-defined following [25, Proposition A.2]. Moreover, by standard results from convex analysis, a given \(\mu\in\mathcal{M}(\Omega_{s})\) is a solution to \((\mathcal{P}_{0})\) if and only if \(\eta^{\dagger}\in\partial\|\mu\|_{\mathcal{M}(\Omega_{s})}\). The following assumptions on \(\mu^{\dagger}\) and \(\eta^{\dagger}\) are made throughout the paper:
**(A2)**: _Structure of \(\mu^{\dagger}\)_: There holds
\[\mu^{\dagger}=\sum_{n=1}^{N_{s}^{\dagger}}q_{n}^{\dagger}\delta_{y_{n}^{ \dagger}}\quad\text{where}\quad q_{n}^{\dagger}\neq 0,\ y_{n}^{\dagger}\in \operatorname{int}(\Omega_{s})\quad\text{for all}\quad n=1,\dots,N_{s}^{ \dagger}.\]
**(A3)**: _Source condition_: We have
\[|\eta^{\dagger}(y)|\leq 1\quad\text{for all}\quad y\in\Omega_{s}\quad\text{and}\quad\eta^{\dagger}(y_{n}^{\dagger})=\operatorname{sign}(q_{n}^{\dagger})\quad\text{for all}\quad n=1,\dots,N_{s}^{\dagger}.\]
**(A4)**: _Strengthened source condition_: There holds
\[|\eta^{\dagger}(y)|<1\quad\text{for all}\quad y\in\Omega_{s}\setminus\{y_{n}^{ \dagger}\}_{n=1}^{N_{s}^{\dagger}}\]
and the operator \(K_{|\operatorname{supp}\mu^{\dagger}}\coloneqq k[\mathbf{x},\mathbf{y}^{\dagger}]\) is injective.
Here, Assumption A3 is equivalent to \(\eta^{\dagger}\in\partial\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}\), i.e., \(\mu^{\dagger}\) is indeed a solution to \((\mathcal{P}_{0})\), whereas Assumptions A2 and A4 imply its uniqueness. While Assumption A4 seems very strong at first glance, it can be explicitly verified in some settings (see, e.g., [5]) and is often numerically observed in practice. According to [8, Proposition 5] we have the following:
**Proposition 3.7**.: Let Assumptions A2-A4 hold. Then \(\mu^{\dagger}\) is the unique solution of \((\mathcal{P}_{0})\).
As a consequence, Proposition 3.6 implies \(\bar{\mu}\rightharpoonup^{*}\mu^{\dagger}\). Moreover, according to [8, Proposition 1], the dual certificates \(\bar{\eta}\) associated to \((\mathcal{P}_{\beta,\varepsilon})\) approximate the minimal norm dual certificate \(\eta^{\dagger}\) in a suitable sense. Taking into account Assumption A3 as well as Proposition 3.5, we thus conclude that the reconstruction of \(\mu^{\dagger}\) from (3.2) is governed by the convergence of the global extrema of \(\bar{\eta}\) towards those of \(\eta^{\dagger}\). However, in order to capitalize on this observation in our analysis, we need to compute a closed form expression for \(\eta^{\dagger}\). In general, this is intractable due to the global constraint \(|\eta^{\dagger}(y)|\leq 1\), \(y\in\Omega_{s}\). As a remedy, the authors of [8] introduce a simpler proxy, replacing this constraint by finitely many linear ones and noting that
\[\nabla\eta^{\dagger}(y_{n}^{\dagger})=0,\quad\eta^{\dagger}(y_{n}^{\dagger})= \operatorname{sign}(q_{n}^{\dagger})\quad\text{for all}\quad\ n=1,\dots,N_{s}^ {\dagger}.\]
The computation of the associated vanishing derivative pre-certificate \(\eta_{\mathrm{PC}}:=K^{*}\Sigma_{0}^{-1}\zeta_{\mathrm{PC}}\in\mathcal{C}^{2}(\varOmega_{s})\), where
\[\zeta_{\mathrm{PC}}=\operatorname*{arg\,min}_{\zeta\in\mathbb{R}^{N_{o}}}\{\|\zeta\|_{2}:\nabla\eta_{\mathrm{PC}}(y_{n}^{\dagger})=0,\quad\eta_{\mathrm{PC}}(y_{n}^{\dagger})=\operatorname{sign}(q_{n}^{\dagger})\quad\text{ for all }\quad n=1,\ldots,N_{s}^{\dagger}\},\]
only requires the solution of a linear system of equations and coincides with \(\eta^{\dagger}\) under appropriate conditions, see [8, Proposition 7]; a small numerical sketch of this computation is given below.
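Concretely, \(\zeta_{\mathrm{PC}}\) is the minimal-norm solution of the linear interpolation constraints, which the following Python sketch computes via a pseudo-inverse (again a toy illustration: the Gaussian kernel, the 1D setting, and all values are assumptions):

```python
import numpy as np

sigma = 0.25
x_obs = np.linspace(0, 1, 12)
N_o = x_obs.size
Sigma0_inv = np.eye(N_o) / N_o                 # tr(Σ0^{-1}) = 1
y_dag = np.array([0.3, 0.75])                  # ground-truth positions (1D toy)
q_dag = np.array([1.0, -0.7])

def k(x, y):
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * sigma**2))

def dk_dy(x, y):                               # ∂k/∂y for the Gaussian kernel
    return np.subtract.outer(x, y) / sigma**2 * k(x, y)

# Linear constraints: η_PC(y_n†) = sign(q_n†) and η_PC'(y_n†) = 0, where
# eta_PC(y) = Σ_j (Σ0^{-1} ζ)_j k(x_j, y). Rows of Γ act on ζ.
Gamma = np.vstack([k(x_obs, y_dag).T @ Sigma0_inv,
                   dk_dy(x_obs, y_dag).T @ Sigma0_inv])
b = np.concatenate([np.sign(q_dag), np.zeros_like(q_dag)])

zeta_pc = np.linalg.pinv(Gamma) @ b            # minimal-norm solution of Γζ = b

eta_pc = lambda y: k(y, x_obs) @ (Sigma0_inv @ zeta_pc)
grid = np.linspace(0, 1, 401)
print("eta_PC(y†) =", eta_pc(y_dag), " max|eta_PC| =", np.abs(eta_pc(grid)).max())
```

Finally, in order to derive quantitative statements on the reconstruction error between \(\bar{\mu}\) and \(\mu^{\dagger}\), we require the non-degeneracy of the minimal norm dual certificate of \(\mu^{\dagger}\) in the sense of [8]. Since we aim to use (1.7) in the context of optimal sensor placement, that is, we need to track the dependence of the involved constants on the measurement setting, we utilize the following quantitative definition; cf. [28].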
**Definition 3.8**.: We say that \(\eta\in\mathcal{C}^{2}(\varOmega_{s})\) is \(\theta\)-non-degenerate or \(\theta\)-admissible for the sparse measure \(\mu=\sum_{n=1}^{N_{s}}q_{n}\delta_{y_{n}}\) and \(\theta>0\) if there holds
\[|\eta(y)|\leq 1-\theta\min\left\{\theta,\min_{n=1,\ldots,N_{s}}\|w_{n}^{ \dagger}(y-y_{n})\|_{2}^{2}\right\},\quad\eta(y_{n})=\operatorname{sign}(q_{n })\qquad\text{for all }\quad y\in\varOmega_{s} \tag{3.4}\]
and weights \(w_{n}^{\dagger}=\sqrt{|q_{n}^{\dagger}|}\).
Due to the regularity of \(\eta\) one readily verifies that (3.4) is equivalent to
\[-\operatorname{sign}\eta(y_{n})\nabla^{2}\eta(y_{n})\geq 2\theta|w_{n}^{ \dagger}|^{2}\operatorname{Id}\quad\text{ for every }\quad n=1,2,\ldots,N_{s},\]
as well as
\[|\eta(y)|\leq 1-\theta^{2},\quad\text{for all}\quad y\in\varOmega_{s} \setminus\bigcup_{n=1,\ldots,N_{s}}B_{w_{n}^{\dagger}}(y_{n},\sqrt{\theta}).\]
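In practice, condition (3.4) can be checked on a grid. The following Python sketch (illustrative only; the certificate \(\eta(y)=1-2y^{2}\) and the domain are assumed toy choices, and the interpolation condition \(\eta(0)=\operatorname{sign}(q_{1})\) holds by construction) implements such a check in one space dimension:

```python
import numpy as np

def theta_admissible(eta_vals, grid, y_dag, q_dag, theta):
    """Grid check of (3.4): |η(y)| ≤ 1 - θ min{θ, min_n |w_n†(y - y_n†)|²} in 1D."""
    w2 = np.abs(q_dag)                                   # (w_n†)² = |q_n†|
    dist2 = np.min(w2[None, :] * (grid[:, None] - y_dag[None, :]) ** 2, axis=1)
    bound = 1 - theta * np.minimum(theta, dist2)
    return bool(np.all(np.abs(eta_vals) <= bound + 1e-12))

# Toy certificate η(y) = 1 - 2y² for μ = δ_0 on Ω_s = [-0.6, 0.6].
grid = np.linspace(-0.6, 0.6, 1001)
print(theta_admissible(1 - 2 * grid**2, grid, np.array([0.0]), np.array([1.0]), 0.5))  # True
```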
## 4. Distances on spaces of measures
In order to quantitatively study the reconstruction error of estimators of the source \(\mu^{\dagger}\), we introduce a distance function on \(\mathcal{M}(\varOmega_{s})\) which measures the error between the estimated source measure \(\widehat{\mu}\) and the reference measure \(\mu^{\dagger}\). An obvious choice of distance would be the total variation norm on \(\mathcal{M}(\varOmega_{s})\); however, it is not suitable for quantifying the reconstruction error. In fact, evaluating \(d_{\mathrm{TV}}(\mu_{1},\mu_{2})=\|\mu_{1}-\mu_{2}\|_{\mathcal{M}(\varOmega_{s})}\) for sparse measures \(\mu_{1},\mu_{2}\in\mathcal{M}(\varOmega_{s})\) is simple by noting that
\[d_{\mathrm{TV}}(q_{1}\delta_{y_{1}},q_{2}\delta_{y_{1}})=|q_{1}-q_{2}|,\]
but for \(y_{1}\neq y_{2}\), one has
\[d_{\mathrm{TV}}(q_{1}\delta_{y_{1}},q_{2}\delta_{y_{2}})=|q_{1}|+|q_{2}|,\]
that is, \(d_{\mathrm{TV}}\) does not quantify the reconstruction error of the source positions, and small perturbations of the source points lead to a constant error in the metric. Hence, in general one cannot rely on the TV distance to evaluate the quality of the reconstruction. In the following, we consider an extension of the Hellinger-Kantorovich (H-K) metric [20] to signed measures, which possesses certain properties that will be discussed below. The construction of the H-K distance is more involved than that of another often-used candidate, namely the Kantorovich-Rubinstein (K-R) distance (see, e.g., [24, 18]) or flat metric, which is directly obtained as a dual norm on a space of Lipschitz functions (see Appendix B). The latter induces the same topology of weak* convergence and is bounded by the H-K metric [20]. Since our estimates are going to be asymptotically sharp in H-K, but only an upper bound in K-R, we focus on H-K in the following.
The Hellinger-Kantorovich metric [20] is a generalization of the Wasserstein-2 distance (see, e.g., [23]) to measures which are not necessarily of the same norm. We first consider the case of positive measures \(\mu_{1},\mu_{2}\geq 0\) and define the H-K metric in terms of the Wasserstein-2 metric as
\[d_{\mathrm{HK}}(\mu_{1},\mu_{2})^{2}:=\inf\left\{W_{2}(\widetilde{\mu}_{1},\widetilde{\mu}_{2})^{2}\;\big{|}\;\widetilde{\mu}_{1},\widetilde{\mu}_{2}\in\mathcal{P}_{2}(\mathbb{R}^{+}\times\varOmega_{s})\colon h_{2}(\widetilde{\mu}_{1})=\mu_{1},h_{2}(\widetilde{\mu}_{2})=\mu_{2}\right\}.\]
Here, \(\mathcal{P}_{2}(\mathbb{R}^{+}\times\Omega_{s})\) denotes the set of probability measures on \(\mathbb{R}^{+}\times\Omega_{s}\) with finite second moment, the two-homogeneous marginal is
\[h_{2}(\widetilde{\mu})=\int_{\mathbb{R}^{+}}r^{2}\,\mathrm{d}\widetilde{\mu}(r, \cdot)\in\mathcal{M}(\Omega_{s}),\]
and \(\mathbb{R}^{+}\times\Omega_{s}\) is endowed with a conic metric
\[d_{\mathrm{cone}}((r_{1},y_{1}),(r_{2},y_{2}))^{2}:=(\sqrt{r_{1}}-\sqrt{r_{2}} )^{2}+4\sqrt{r_{1}r_{2}}\sin_{+}^{2}(\|y_{1}-y_{2}\|_{2}/2), \tag{4.1}\]
where \(\sin_{+}(z):=\sin(\min\{\,z,\pi/2\,\})\). For a detailed study of this metric, its properties, and different equivalent formulations in terms of entropy-transport problems, we refer to [20].
For signed measures, we note that for any distance based on a norm (such as the TV or K-R distance) one observes that
\[d(\mu_{1},\mu_{2})=\left\|(\mu_{1}^{+}+\mu_{2}^{-})-(\mu_{2}^{+}+\mu_{1}^{-}) \right\|=d(\mu_{1}^{+}+\mu_{2}^{-},\mu_{2}^{+}+\mu_{1}^{-}), \tag{4.2}\]
by using the Jordan decomposition \(\mu_{i}=\mu_{i}^{+}-\mu_{i}^{-}\). Motivated by (4.2), we define
\[d_{\mathrm{HK}}(\mu_{1},\mu_{2}):=d_{\mathrm{HK}}(\mu_{1}^{+}+\mu_{2}^{-},\mu_ {2}^{+}+\mu_{1}^{-}), \tag{4.3}\]
which is indeed a metric on \(\mathcal{M}(\Omega_{s})\) and fulfills \(d_{\mathrm{HK}}(\mu_{1},\mu_{2})\leq d_{\mathrm{HK}}(\mu_{1}^{+},\mu_{2}^{+}) +d_{\mathrm{HK}}(\mu_{1}^{-},\mu_{2}^{-})\).
In contrast to the total variation distance, the Hellinger-Kantorovich distance between two Dirac measures \(q_{1}\delta_{y_{1}}\) and \(q_{2}\delta_{y_{2}}\) with \(\operatorname{sign}q_{1}=\operatorname{sign}q_{2}\) can be computed by
\[d_{\mathrm{HK}}(q_{1}\delta_{y_{1}},q_{2}\delta_{y_{2}})^{2}=(\sqrt{|q_{1}|}- \sqrt{|q_{2}|})^{2}+4\sqrt{|q_{1}||q_{2}|}\sin_{+}^{2}(\|y_{1}-y_{2}\|_{2}/2),\]
which is exactly the conic metric given in (4.1). This makes it evident that, for small perturbations of both the source positions and the coefficients, the resulting change of the H-K distance remains small. Hence, it is reasonable to employ this type of distance to measure the reconstruction error.
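The contrast between the two metrics is easy to reproduce numerically. A minimal Python sketch (illustrative; all numbers are assumptions) computes the conic formula above and compares it with the TV distance under a small position perturbation:

```python
import numpy as np

def d_hk_dirac(q1, y1, q2, y2):
    """H-K distance between q1·δ_{y1} and q2·δ_{y2} with sign(q1) = sign(q2),
    via the conic metric (4.1)."""
    gap = np.minimum(np.linalg.norm(np.atleast_1d(y1) - np.atleast_1d(y2)) / 2,
                     np.pi / 2)
    d2 = (np.sqrt(abs(q1)) - np.sqrt(abs(q2))) ** 2 \
         + 4 * np.sqrt(abs(q1) * abs(q2)) * np.sin(gap) ** 2
    return np.sqrt(d2)

# Small position perturbation: H-K stays small, TV jumps to |q1| + |q2|.
q, y, dy = 1.0, np.array([0.3, 0.4]), 1e-3
print("H-K:", d_hk_dirac(q, y, q, y + dy))     # ≈ sqrt(q) · |dy| (small)
print("TV :", 2 * q)                           # constant regardless of dy
```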
Another advantage of the H-K distance is that it is compatible with the weak* topology on \(\mathcal{M}(\Omega_{s})\); namely, it induces weak* convergence on bounded sets in \(\mathcal{M}(\Omega_{s})\).
**Proposition 4.1**.: The Hellinger-Kantorovich distance of signed measures defined in (4.3) metrizes weak* convergence of signed measures on bounded sets in \(\mathcal{M}(\Omega_{s})\). More precisely, a bounded sequence \(\{\mu_{n}\}_{n\in\mathbb{N}}\subset\mathcal{M}(\Omega_{s})\) converges weakly* to a measure \(\mu\) if and only if \(d_{\mathrm{HK}}(\mu_{n},\mu)\to 0\) as \(n\to\infty\).
Proof.: Assume that \(d_{\mathrm{HK}}(\mu_{n},\mu)\to 0\) as \(n\to\infty\). One can write
\[\mu_{n}-\mu=(\mu_{n}^{+}+\mu^{-})-(\mu^{+}+\mu_{n}^{-})=:\mu_{n}^{1}-\mu_{n}^{ 2}, \tag{4.4}\]
which implies \(d_{\mathrm{HK}}(\mu_{n}^{1},\mu_{n}^{2})=d_{\mathrm{HK}}(\mu_{n},\mu)\to 0.\) Since \(\|\mu_{n}^{i}\|_{\mathcal{M}}\leq\|\mu_{n}^{\pm}\|_{\mathcal{M}}+\|\mu^{\mp}\|_{\mathcal{M}}\leq 2M\), where \(M:=\max\{\sup_{n}\|\mu_{n}\|_{\mathcal{M}},\|\mu\|_{\mathcal{M}}\}\), and the HK-distance metrizes weak* convergence on bounded sequences of non-negative measures (see [20, Theorem 7.15]), we have \(\mu_{n}^{1}-\mu_{n}^{2}\rightharpoonup^{*}0\), which means that \(\mu_{n}\rightharpoonup^{*}\mu\).
Conversely, assume that \(\mu_{n}\rightharpoonup^{*}\mu\). Consider the decomposition (4.4) and suppose that the distance \(d_{\mathrm{HK}}(\mu_{n},\mu)\) does not converge to zero. Then there exists a subsequence, denoted by the same symbol, such that
\[d_{\mathrm{HK}}(\mu_{n}^{1},\mu_{n}^{2})=d_{\mathrm{HK}}(\mu_{n},\mu)\geq\delta >0. \tag{4.5}\]
We now use the fact that \(\|\mu_{n}^{i}\|_{\mathcal{M}}\leq 2M\) to extract a further subsequence (again with the same symbol) such that \(\mu_{n}^{i}\rightharpoonup^{*}\widehat{\mu}^{i}\), which implies \(\mu_{n}-\mu=\mu_{n}^{1}-\mu_{n}^{2}\rightharpoonup^{*}\widehat{\mu}^{1}-\widehat{\mu}^{2}\). Due to (4.5) and the fact that the HK-distance metrizes weak* convergence on bounded sequences of non-negative measures, we have \(\widehat{\mu}^{1}\neq\widehat{\mu}^{2}\) and thus \(\mu_{n}-\mu\rightharpoonup^{*}\widehat{\mu}^{1}-\widehat{\mu}^{2}\neq 0\) along this subsequence. This contradicts the assumption \(\mu_{n}\rightharpoonup^{*}\mu\), and hence \(d_{\mathrm{HK}}(\mu_{n},\mu)\to 0\) for the original sequence.
To evaluate the reconstruction error, the distance between finitely supported measures is needed, since the reference measure as well as the reconstructed measure are known to be sparse. In fact, we only need a (sharp) upper bound for the H-K distance, which will be provided for the finitely supported case below in terms of a (weighted) \(\ell^{2}\)-type distance. This is yet another advantage of the H-K distance in comparison to other distances.
**Proposition 4.2**.: Let \(\mu\) and \(\mu^{\dagger}\) be finitely supported with the same number \(N\) of support points and \(\operatorname{sign}q_{n}=\operatorname{sign}q_{n}^{\dagger}\), \(\forall n=1,\ldots,N\). Then we have
\[d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\leq R(\boldsymbol{q},\boldsymbol{q}^{ \dagger})\sum_{n=1}^{N}\left(\frac{|q_{n}-q_{n}^{\dagger}|^{2}}{4|q_{n}^{ \dagger}|}+|q_{n}^{\dagger}|\,\|y_{n}-y_{n}^{\dagger}\|_{2}^{2}\right),\]
where \(R(\boldsymbol{q},\boldsymbol{q}^{\dagger}):=\max\bigg{\{}\,\sqrt{|q_{n}|/|q_{ n}^{\dagger}|},\,\sqrt{|q_{n}^{\dagger}|/|q_{n}|}\;:\;n=1,\ldots,N\,\bigg{\}}\).
Loosely speaking, the H-K distance between two discrete measures \(\mu\) and \(\mu^{\dagger}\) with the same number of support points can be bounded from above by a weighted \(\ell^{2}\)-type distance of their corresponding coefficients and positions.
Proof.: We use that any finitely supported positive measure \(\mu\) with \(N\) support points can be lifted to a measure \(\widetilde{\mu}\) with \(h_{2}(\widetilde{\mu})=\mu\) according to
\[\widetilde{\mu}=\frac{1}{N}\sum_{n=1}^{N}\delta_{(r_{n},y_{n})},\quad\text{ where }r_{n}=\sqrt{N|q_{n}|}.\]
In addition, notice that \(d_{\mathrm{HK}}(\mu,\mu^{\dagger})=d_{\mathrm{HK}}(\mu^{1},\mu^{2})\) where \(\mu^{1}:=\mu^{+}+\mu^{\dagger,-}\) and \(\mu^{2}:=\mu^{\dagger,+}+\mu^{-}\) are positive measures with \(N\) support points. Thus, combining this with the fact that \((1/N)\,d_{\mathrm{cone}}((r_{1},y_{1}),(r_{2},y_{2}))=d_{\mathrm{cone}}((r_{1}/\sqrt{N},y_{1}),(r_{2}/\sqrt{N},y_{2}))\) it follows:
\[d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2} \leq\sum_{n=1}^{N}\left[\left(\sqrt{|q_{n}|}-\sqrt{|q_{n}^{\dagger}|}\right)^{2}+4\sqrt{|q_{n}||q_{n}^{\dagger}|}\cdot\sin_{+}^{2}\left(\|y_{n}-y_{n}^{\dagger}\|_{2}/2\right)\right]\] \[\leq\sum_{n=1}^{N}\left(\frac{\left(q_{n}-q_{n}^{\dagger}\right)^{2}}{4\sqrt{|q_{n}||q_{n}^{\dagger}|}}+\sqrt{|q_{n}||q_{n}^{\dagger}|}\cdot\|y_{n}-y_{n}^{\dagger}\|_{2}^{2}\right).\]
Here, we have used \(\sin_{+}^{2}(\cdot)\leq(\cdot)^{2}\) and \((\sqrt{a}-\sqrt{b})^{2}=(a-b)^{2}/(\sqrt{a}+\sqrt{b})^{2}\leq(a-b)^{2}/(4\sqrt {ab})\). This immediately implies the estimate.
The previous result motivates the definition of a weighted \(\ell^{2}\)-norm for the given parameters \((\boldsymbol{q};\boldsymbol{y})\in(\mathbb{R}\setminus\{0\})^{N}\times \Omega_{s}^{N}\). More precisely, we define the weight \(w=\sqrt{|\boldsymbol{q}|}:=(\sqrt{|q_{1}|},\cdots,\sqrt{|q_{N}|})\in(\mathbb{R}\setminus\{0\})^{N}\) and the associated weighted norm for a perturbation \((\delta\boldsymbol{q};\delta\boldsymbol{y})\in\mathbb{R}^{N}\times\mathbb{R}^{dN}\) as
\[\|(\delta\boldsymbol{q};\delta\boldsymbol{y})\|_{W}^{2}:=\frac{1}{4}\|w^{-1} \delta\boldsymbol{q}\|_{2}^{2}+\|w\,\delta\boldsymbol{y}\|_{2}^{2}=\sum_{n=1 }^{N}\left(\frac{|\delta q_{n}|^{2}}{4|q_{n}|}+|q_{n}|\|\delta y_{n}\|_{2}^{2} \right), \tag{4.6}\]
where \((w\delta\boldsymbol{y})_{n}=w_{n}\delta y_{n}\) denotes the entry-wise (Hadamard) product. Here, the diagonal matrix \(W=\operatorname{diag}((w^{-2}/4;w^{2};\ldots;w^{2}))\) induces the norm in (4.6). Then by Proposition 4.2, we have
\[d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\leq R(\boldsymbol{q},\boldsymbol{q}^{ \dagger})\|(\boldsymbol{q}-\boldsymbol{q}^{\dagger};\boldsymbol{y}-\boldsymbol {y}^{\dagger})\|_{W_{\dagger}}^{2},\]
where \(W_{\dagger}\) is the diagonal weight matrix defined above for the weight \(w^{\dagger}=\sqrt{|\boldsymbol{q}^{\dagger}|}\). Moreover, two different weighted norms are equivalent up to the same factor
\[R(\boldsymbol{q},\boldsymbol{q}^{\dagger})^{-1}\|(\delta\boldsymbol{q}; \delta\boldsymbol{y})\|_{W_{\dagger}}^{2}\leq\|(\delta\boldsymbol{q};\delta \boldsymbol{y})\|_{W}^{2}\leq R(\boldsymbol{q},\boldsymbol{q}^{\dagger})\|( \delta\boldsymbol{q};\delta\boldsymbol{y})\|_{W_{\dagger}}^{2} \tag{4.7}\]
because \(R(\boldsymbol{q},\boldsymbol{q}^{\dagger})=\max\{\,\|w/w^{\dagger}\|_{\infty},\, \|w^{\dagger}/w\|_{\infty}\}\). For \(\mu\approx\mu^{\dagger}\) the factor \(R(\boldsymbol{q},\boldsymbol{q}^{\dagger})\) is arbitrarily close to one. In other words, asymptotically for \(\mu\approx\mu^{\dagger}\) the upper bound from Proposition 4.2 is sharp:
\[d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\approx\|(\boldsymbol{q}-\boldsymbol{q}^{ \dagger};\boldsymbol{y}-\boldsymbol{y}^{\dagger})\|_{W_{\dagger}}^{2}.\]
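The following Python sketch (again a toy illustration with assumed values) verifies the estimate of Proposition 4.2 and its asymptotic sharpness for random small perturbations of a two-Dirac measure in one space dimension:

```python
import numpy as np

rng = np.random.default_rng(4)

def d_hk2_matched(q1, y1, q2, y2):
    """Squared conic distance (4.1) summed over one-to-one paired Diracs
    (same signs); the upper bound for d_HK(μ1, μ2)² used in Proposition 4.2."""
    gap = np.minimum(np.abs(y1 - y2) / 2, np.pi / 2)
    return np.sum((np.sqrt(np.abs(q1)) - np.sqrt(np.abs(q2))) ** 2
                  + 4 * np.sqrt(np.abs(q1 * q2)) * np.sin(gap) ** 2)

def w_norm2(dq, dy, q_ref):
    """Weighted squared norm (4.6) with weights w = sqrt(|q_ref|)."""
    return np.sum(dq**2 / (4 * np.abs(q_ref)) + np.abs(q_ref) * dy**2)

# Random small perturbations of a two-Dirac measure.
q_dag, y_dag = np.array([1.0, -0.7]), np.array([0.3, 0.75])
for _ in range(5):
    q = q_dag * (1 + 0.05 * rng.standard_normal(2))   # keeps the signs
    y = y_dag + 0.05 * rng.standard_normal(2)
    R = max(np.sqrt(np.abs(q / q_dag)).max(), np.sqrt(np.abs(q_dag / q)).max())
    lhs = d_hk2_matched(q, y, q_dag, y_dag)
    rhs = R * w_norm2(q - q_dag, y - y_dag, q_dag)
    print(lhs <= rhs, round(lhs / rhs, 4))            # bound holds; ratio close to 1
```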
## 5. Fully explicit estimates for the deterministic reconstruction error
The Hellinger-Kantorovich distance allows us to quantify the reconstruction error between the unknown source \(\mu^{\dagger}\) and measures obtained by solving \((\mathcal{P}_{\beta,\varepsilon})\). This will be done in two steps. First, we study the approximation of \(\mathbf{m}^{\dagger}=(\mathbf{q}^{\dagger};\mathbf{y}^{\dagger})\), i.e., the support points and coefficients of the ground truth, by stationary points \(\mathbf{\widetilde{m}}=\mathbf{\widetilde{m}}(\varepsilon)\) of the nonconvex parametrized problem
\[\min_{\mathbf{m}=(\mathbf{q};\mathbf{y})\in(\mathbb{R}\times\Omega_{s})^{N_{s}}}\left[ \frac{1}{2}\|G(\mathbf{m})-G(\mathbf{m}^{\dagger})-\varepsilon\|_{\Sigma_{0}^{-1}}^{2} +\beta\|\mathbf{q}\|_{1}\right], \tag{5.1}\]
where the source-to-observable map \(G\) satisfies
\[G(\mathbf{m})=G(\mathbf{q};\mathbf{y})=k[\mathbf{x},\mathbf{y}]\mathbf{q}=\sum_{n=1}^{N}q_{n}k[\mathbf{x}, y_{n}].\]
By Assumption A1, the latter is three times differentiable. Notice that (5.1) is obtained from (3.2) by fixing the number \(N_{s}=N_{s}^{\dagger}\) of source points in the formulation. Hence, solutions, let alone stationary points, of problem (5.1) do not parametrize minimizers of \((\mathcal{P}_{\beta,\varepsilon})\) in general. Moreover, it is clear that problem (5.1) is primarily of theoretical interest since its practical realization requires knowledge of \(N_{s}^{\dagger}\). Thus, in a second step, we investigate for which noises \(\varepsilon\) the stationary point \(\mathbf{\widetilde{m}}\) parametrizes the unique solution of \((\mathcal{P}_{\beta,\varepsilon})\). While these results build upon techniques similar to [8], we give a precise, quantitative characterization of this asymptotic regime and clarify the dependence of the involved constants on the problem parameters, e.g., the measurement points \(\mathbf{x}\). This is necessary both for lifting these deterministic results to the stochastic setting in Section 6 and for utilizing the derived error estimates in the context of optimal sensor placement. A central role in this regard will be played by the linearized problem
\[\min_{\delta\mathbf{m}=(\delta\mathbf{q};\delta\mathbf{y})\in\mathbb{R}^{(1+d)N_{s}}}\left[ \frac{1}{2}\|G^{\prime}(\mathbf{m}^{\dagger})\delta\mathbf{m}-\varepsilon\|_{\Sigma_{ 0}^{-1}}^{2}+\beta\operatorname{sign}(\mathbf{q}^{\dagger})^{\top}\delta\mathbf{q} \right]. \tag{5.2}\]
Note that here we have linearized both the mapping \(G\), via
\[G(\mathbf{q}^{\dagger}+\delta\mathbf{q},\mathbf{y}^{\dagger}+\delta\mathbf{y}) \approx G(\mathbf{q}^{\dagger},\mathbf{y}^{\dagger})+G^{\prime}(\mathbf{q}^{ \dagger},\mathbf{y}^{\dagger})(\delta\mathbf{q},\delta\mathbf{y})\] \[=G(\mathbf{q}^{\dagger},\mathbf{y}^{\dagger})+G(\delta\mathbf{q},\mathbf{y}^{ \dagger})+G^{\prime}_{\mathbf{y}}(\mathbf{q}^{\dagger},\mathbf{y}^{\dagger};\delta\mathbf{y})\] \[=k[\mathbf{x},\mathbf{y}^{\dagger}]\mathbf{q}^{\dagger}+k[\mathbf{x},\mathbf{y}^{ \dagger}]\delta\mathbf{q}+(\nabla_{y}^{\top}k[\mathbf{x},\mathbf{y}^{\dagger}]\circ\mathbf{q}^ {\dagger})\delta\mathbf{y},\]
using that
\[G^{\prime}(\mathbf{m})=\begin{pmatrix}k[\mathbf{x},\mathbf{y}]&\nabla_{y}^{\top}k[\mathbf{x}, \mathbf{y}]\circ\mathbf{q}\end{pmatrix}\quad\text{where}\quad(\nabla_{y}^{\top}k[\mathbf{x},\mathbf{y}^{\dagger}]\circ\mathbf{q}^{\dagger})_{i,j}:=\nabla_{y}k(x_{i},y_{j}^{\dagger })^{\top}q_{j}^{\dagger},\]
and the \(\|\cdot\|_{1}\)-norm, via
\[\|\mathbf{q}^{\dagger}+\delta\mathbf{q}\|_{1}\approx\|\mathbf{q}^{\dagger}\|_{1}+ \operatorname{sign}(\mathbf{q}^{\dagger})^{\top}\delta\mathbf{q}.\]
The following proposition characterizes the solutions of (5.1) and (5.2). Since its proof relies on standard computations, we omit it for the sake of brevity.
**Proposition 5.1**.: The solutions \(\mathbf{\widetilde{m}}\) to (5.1) fulfill the stationarity condition
\[S(\mathbf{\widetilde{m}}):=G^{\prime}(\mathbf{\widetilde{m}})^{\top}\Sigma_{0}^{-1}(G( \mathbf{\widetilde{m}})-G(\mathbf{m}^{\dagger})-\varepsilon)+\beta(\mathbf{\widetilde{\rho} };\mathbf{0})=0, \tag{5.3}\]
for some \(\mathbf{\widetilde{\rho}}\in\partial\|\mathbf{\widetilde{q}}\|_{1}\). The solutions of (5.2) satisfy
\[G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1}(G^{\prime}(\mathbf{m}^{\dagger })\delta\mathbf{\widetilde{m}}-\varepsilon)+\beta(\mathbf{\rho};\mathbf{0})=0,\]
where \(\mathbf{\rho}=\operatorname{sign}\mathbf{q}^{\dagger}\). If \(G^{\prime}(\mathbf{m}^{\dagger})\) has full column rank then the Fisher information matrix
\[\mathcal{I}_{0}:=G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1}G^{\prime}( \mathbf{m}^{\dagger}) \tag{5.4}\]
is invertible and the unique solution of (5.2) is given by
\[\begin{split}\delta\widehat{\mathbf{m}}(\varepsilon)&:=\mathcal{I}_{0}^{-1}\left(G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1}\varepsilon-\beta(\mathbf{\rho};\mathbf{0})\right)\\ &=(\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger}))^{+}\Sigma_{0}^{-1/2}\varepsilon-\beta\mathcal{I}_{0}^{-1}(\mathbf{\rho};\mathbf{0})\end{split} \tag{5.5}\]
where \((\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger}))^{+}=\mathcal{I}_{0}^{-1}G^{ \prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1/2}\) is the pseudo-inverse of \(\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger})\).
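For a concrete kernel, all quantities in Proposition 5.1 are directly computable. The Python sketch below (illustrative; the 1D Gaussian kernel and all numerical values are assumptions) assembles \(G^{\prime}(\boldsymbol{m}^{\dagger})\), the Fisher information matrix \(\mathcal{I}_{0}\), and evaluates \(\delta\widehat{\boldsymbol{m}}\) from (5.5) for one noise realization:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, beta = 0.25, 0.05
x_obs = np.linspace(0, 1, 12)
N_o = x_obs.size
Sigma0_inv = np.eye(N_o) / N_o                 # tr(Σ0^{-1}) = 1
q_dag, y_dag = np.array([1.0, -0.7]), np.array([0.3, 0.75])

k = lambda x, y: np.exp(-np.subtract.outer(x, y) ** 2 / (2 * sigma**2))
dk = lambda x, y: np.subtract.outer(x, y) / sigma**2 * k(x, y)

# G'(m†) = ( k[x, y†] , ∇_y k[x, y†] ∘ q† ): columns for δq, then δy (1D sources).
Gp = np.hstack([k(x_obs, y_dag), dk(x_obs, y_dag) * q_dag[None, :]])
I0 = Gp.T @ Sigma0_inv @ Gp                    # Fisher information matrix (5.4)
rho = np.concatenate([np.sign(q_dag), np.zeros_like(q_dag)])

eps = 0.01 * rng.standard_normal(N_o)          # one noise realization
dm_hat = np.linalg.solve(I0, Gp.T @ Sigma0_inv @ eps - beta * rho)
print("delta q:", dm_hat[:2], " delta y:", dm_hat[2:])
```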
Since (5.1) is nonconvex, the stationarity condition (5.3) is only necessary but not sufficient for optimality. In the following, we call any solution to (5.3) a stationary point.
### Error estimates for stationary points
In this section, we prove that for sufficiently small noise \(\varepsilon\), problem (5.1) admits a unique stationary point \(\widehat{\mathbf{m}}(\varepsilon)\) in the vicinity of \(\mathbf{m}^{\dagger}\). Moreover, loosely speaking, \(\mathbf{m}^{\dagger}\) and \(\mathbf{m}^{\dagger}+\delta\widehat{\mathbf{m}}(\varepsilon)\) provide Taylor expansions of zeroth and first order, respectively, for \(\widehat{\mathbf{m}}(\varepsilon)\). For the sake of brevity, we suppress the dependence of \(\widehat{\mathbf{m}}=(\widehat{\mathbf{q}};\widehat{\mathbf{y}})\) and \(\delta\widehat{\mathbf{m}}\) on \(\varepsilon\) in the following. To begin, we require the following auxiliary results.
**Proposition 5.2**.: The following estimates hold:
\[\|\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m})\delta\mathbf{m}\|_{2} \leq C_{k}\|\delta\mathbf{q}/w\|_{2}+C_{k}^{\prime}\sqrt{\|\mathbf{q}\|_{ 1}}\|w\delta\mathbf{y}\|_{2},\] \[\|\Sigma_{0}^{-1/2}G^{\prime\prime}(\mathbf{m})(\delta\mathbf{m},\tau\bm {m})\|_{2} \leq C_{k}^{\prime}(\|\delta\mathbf{q}/w\|_{2}\|w\tau\mathbf{y}\|_{2}+\| \tau\mathbf{q}/w\|_{2}\|w\delta\mathbf{y}\|_{2})+C_{k}^{\prime\prime}\|w\delta\mathbf{y}\| _{2}\|w\tau\mathbf{y}\|_{2},\]
where \(w_{n}=\sqrt{|q_{n}|}\). In particular, with the \(W\)-norm \(\|\delta\mathbf{m}\|_{W}^{2}:=\|\delta\mathbf{q}/w\|_{2}^{2}/4+\|w\delta\mathbf{y}\|_{2}^{2}\) with \(W=W(\mathbf{m})\), we have
\[\|\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m})\|_{W\to 2} \leq(2C_{k}+C_{k}^{\prime})\sqrt{\|\mathbf{q}\|_{1}}, \tag{5.6}\] \[\|\Sigma_{0}^{-1/2}G^{\prime\prime}(\mathbf{m})\|_{W\times W\to 2} \leq 4C_{k}^{\prime}+C_{k}^{\prime\prime}. \tag{5.7}\]
Proof.: We first notice that \(\|v\|_{\Sigma_{0}^{-1}}\leq\|v\|_{\infty}\) due to \(\operatorname{tr}\Sigma_{0}^{-1}=1\). In addition, one can write
\[[G^{\prime}(\mathbf{m})\delta\mathbf{m}]_{k}=\sum_{n=1}^{N_{s}}k(x_{k},y_{n})\,\delta q_{n}+(\nabla_{y}k(x_{k},y_{n}))^{\top}\delta y_{n}\,q_{n}=\sum_{n=1}^{N_{s}}k(x_{k},y_{n})w_{n}\,\frac{\delta q_{n}}{w_{n}}+(\nabla_{y}k(x_{k},y_{n}))^{\top}w_{n}\delta y_{n}\,\frac{q_{n}}{w_{n}},\]
as well as
\[[G^{\prime\prime}(\mathbf{m})(\delta\mathbf{m},\tau\mathbf{m})]_{k}=\sum_{n=1}^{N_{s}}(\nabla_{y}k(x_{k},y_{n}))^{\top}\delta y_{n}\,\tau q_{n}+(\nabla_{y}k(x_{k},y_{n}))^{\top}\tau y_{n}\,\delta q_{n}+\delta y_{n}^{\top}\nabla_{yy}^{2}k(x_{k},y_{n})\tau y_{n}\,q_{n}.\]
Here, we choose \(w_{n}=\sqrt{|q_{n}|}\). Hence, by estimating term by term, we have
\[\|G^{\prime}(\mathbf{m})\delta\mathbf{m}\|_{\Sigma_{0}^{-1}}\leq C_{k}\|w\|_{2}\| \delta\mathbf{q}/w\|_{2}+C_{k}^{\prime}\|\mathbf{q}/w\|_{2}\|w\delta\mathbf{y}\|_{2}\leq(2 C_{k}+C_{k}^{\prime})\,\sqrt{\|\mathbf{q}\|_{1}}\|\delta\mathbf{m}\|_{W},\]
which implies (5.6). A similar argument gives (5.7).
**Proposition 5.3**.: Let \(r=r(\mu^{\dagger}):=\min\{w_{n}^{\dagger}/8,d_{w_{n}^{\dagger}}(y_{n}^{\dagger},\partial\Omega_{s})/2\,:\,n=1,\ldots,N_{s}\}\). Then for every \(\mathbf{m}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},r)\), there holds \(\operatorname{sign}\mathbf{q}=\operatorname{sign}\mathbf{q}^{\dagger}\) and
\[R(\mathbf{q},\mathbf{q}^{\dagger})\leq 2\quad\text{and}\ 1/2\|\mathbf{q}^{\dagger}\|_{1}\leq \|\mathbf{q}\|_{1}\leq 2\|\mathbf{q}^{\dagger}\|_{1}, \tag{5.8}\]
where \(R(\mathbf{q},\mathbf{q}^{\dagger})\) is the maximal ratio of the weights \(w_{n}\) and \(w_{n}^{\dagger}\) from Proposition 4.2.
In addition, for all \(\mathbf{m},\mathbf{m}^{\prime}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},r)\) and \(\delta\mathbf{m}\), there holds
\[\left\|\Sigma_{0}^{-1/2}(G(\mathbf{m})-G(\mathbf{m}^{\prime}))\right\|_{2} \leq L_{G}\|\mathbf{m}-\mathbf{m}^{\prime}\|_{W_{\dagger}}, \tag{5.9}\] \[\left\|\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m})\delta\mathbf{m}\right\|_{2} \leq L_{G}\|\delta\mathbf{m}\|_{W_{\dagger}},\] (5.10) \[\left\|\Sigma_{0}^{-1/2}(G^{\prime}(\mathbf{m})-G^{\prime}(\mathbf{m}^{ \prime}))\delta\mathbf{m}\right\|_{2} \leq L_{G^{\prime}}\|\mathbf{m}-\mathbf{m}^{\prime}\|_{W_{\dagger}}\|\delta\mathbf{m} \|_{W_{\dagger}}, \tag{5.11}\]
where \(L_{G}:=4(2C_{k}+C_{k}^{\prime})\sqrt{\|\mathbf{q}^{\dagger}\|_{1}}\) and \(L_{G^{\prime}}:=2(4C_{k}^{\prime}+C_{k}^{\prime\prime})\).
Proof.: For \(\mathbf{m}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},r)\), one has
\[|q_{n}-q_{n}^{\dagger}|/w_{n}^{\dagger}\leq\|(\mathbf{q}-\mathbf{q}^{\dagger})/w^{\dagger}\|_{2}\leq 4\|\mathbf{m}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}\leq w_{n}^{\dagger}/2,\quad\forall n=1,\ldots,N_{s}.\]
This implies \(1/2\leq q_{n}/q_{n}^{\dagger}\leq 3/2\) for all \(n=1,2,\ldots,N_{s}\). Hence, \(\operatorname{sign}\boldsymbol{q}=\operatorname{sign}\boldsymbol{q}^{\dagger}\) and (5.8) follows. Also, the condition \(r\leq d_{w_{n}^{\dagger}}(y_{n}^{\dagger},\partial\Omega_{s})/2\) guarantees that \(y_{n}\in\Omega_{s}\) for all \(n=1,\ldots,N_{s}\). By Proposition 5.2 and (5.8), it can now be seen that
\[\begin{split}\left\|\Sigma_{0}^{-1/2}(G(\boldsymbol{m})-G( \boldsymbol{m}^{\prime}))\right\|_{2}&=\left\|\Sigma_{0}^{-1/2} \int_{0}^{1}G^{\prime}\left(\boldsymbol{m}^{\prime}+t(\boldsymbol{m}- \boldsymbol{m}^{\prime})\right)\mathrm{d}t(\boldsymbol{m}-\boldsymbol{m}^{ \prime})\right\|_{2}\\ &\leq\int_{0}^{1}\left\|\Sigma_{0}^{-1/2}G^{\prime}\left( \boldsymbol{m}^{\prime}+t(\boldsymbol{m}-\boldsymbol{m}^{\prime})\right) \right\|_{W_{\dagger}\to 2}\mathrm{d}t\|\boldsymbol{m}-\boldsymbol{m}^{ \prime}\|_{W_{\dagger}},\end{split} \tag{5.12}\]
for every \(\boldsymbol{m},\boldsymbol{m}^{\prime}\in B_{W_{\dagger}}(\boldsymbol{m}^{ \dagger},r)\). Next, since \(\boldsymbol{m}^{\prime}+t(\boldsymbol{m}-\boldsymbol{m}^{\prime})\in B_{W_{ \dagger}}(\boldsymbol{m}^{\dagger},r)\), for \(W=W(\boldsymbol{m}^{\prime}+t(\boldsymbol{m}-\boldsymbol{m}^{\prime}))\) and \(W_{\dagger}=W_{\dagger}(\boldsymbol{m}^{\dagger})\), we use (4.7), (5.8) and (5.6) to deduce that
\[\begin{split}\left\|\Sigma_{0}^{-1/2}G^{\prime}\left(\boldsymbol{ m}^{\prime}+t(\boldsymbol{m}-\boldsymbol{m}^{\prime})\right)\right\|_{W_{ \dagger}\to 2}&\leq 2\left\|\Sigma_{0}^{-1/2}G^{\prime}\left( \boldsymbol{m}^{\prime}+t(\boldsymbol{m}-\boldsymbol{m}^{\prime})\right) \right\|_{W\to 2}\\ &\leq 2\left(2C_{k}+C_{k}^{\prime}\right)\sqrt{\|\boldsymbol{q}^{ \prime}+t(\boldsymbol{q}-\boldsymbol{q}^{\prime})\|_{1}}\\ &\leq 4\left(2C_{k}+C_{k}^{\prime}\right)\sqrt{\|\boldsymbol{q}^{ \dagger}\|_{1}}.\end{split} \tag{5.13}\]
Combining (5.12) and (5.13), we deduce now (5.9). Similarly, (5.10) follows from (5.6) and (5.8) as well. Moreover, (5.11) can be proved using the estimate (5.7) with a similar argument.
We now state the main result of this section.
**Proposition 5.4**.: Suppose that \(G^{\prime}(\boldsymbol{m}^{\dagger})\) has full column rank. Then, for some constant \(C_{1}=C_{1}(k,\mu^{\dagger},\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}})\) and radius \(0<\hat{r}\leq r/2\) and all \(\varepsilon\) with \(C_{1}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta)\leq 1\), the stationarity condition (5.3) admits a unique solution \(\widehat{\boldsymbol{m}}=\widehat{\boldsymbol{m}}(\varepsilon)\) on \(B_{W_{\dagger}}(\boldsymbol{m}^{\dagger},(3/2)\hat{r})\). Moreover, the stationary point satisfies \(\widehat{\boldsymbol{m}}\in B_{W_{\dagger}}(\boldsymbol{m}^{\dagger},\hat{r})\) as well as
\[\begin{split}\|\widehat{\boldsymbol{m}}-\boldsymbol{m}^{ \dagger}\|_{W_{\dagger}}&\leq 2\|\delta\widehat{\boldsymbol{m}}\|_{W_{ \dagger}}\leq C_{1}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta),\\ \|\widehat{\boldsymbol{m}}-\boldsymbol{m}^{\dagger}-\delta \widehat{\boldsymbol{m}}\|_{W_{\dagger}}&\leq C_{1}^{2}(\| \varepsilon\|_{\Sigma_{0}^{-1}}+\beta)^{2}.\end{split}\]
Proof.: Since \(G^{\prime}(\boldsymbol{m}^{\dagger})\) has full column rank, the Fisher information matrix \(\mathcal{I}_{0}\) defined in (5.4) is invertible. Hence, the map \(T(\boldsymbol{m}):=\boldsymbol{m}-\mathcal{I}_{0}^{-1}S(\boldsymbol{m})\) is well-defined, where \(S(\boldsymbol{m})\) is the residual of the stationarity equation given in (5.3) with \(\widetilde{\boldsymbol{\rho}}=\boldsymbol{\rho}=\operatorname{sign}\boldsymbol{q}^{\dagger}\). In order to obtain the claimed results, we aim to show that \(T\) is a contraction and argue similarly to the proof of the Banach fixed point theorem. However, since the correct domain of definition for the map \(T\) is difficult to determine beforehand, we provide a direct proof.
We start by showing that \(T\) is Lipschitz continuous on the ball \(B_{W^{\dagger}}(\boldsymbol{m}^{\dagger},\hat{r})\) for some as of yet undetermined \(0<\hat{r}\leq r\) with Lipschitz constant \(\kappa(\hat{r})\leq 1/2\) if \(\varepsilon\) is chosen suitably. For this purpose, consider two points \(\boldsymbol{m}\) and \(\boldsymbol{m}^{\prime}\) in \(B_{W^{\dagger}}(\boldsymbol{m}^{\dagger},\hat{r})\), their difference \(\delta\boldsymbol{m}=\boldsymbol{m}-\boldsymbol{m}^{\prime}\) and the difference of their images \(\delta\boldsymbol{m}_{T}=T(\boldsymbol{m})-T(\boldsymbol{m}^{\prime})\). Note that
\[\begin{split}\mathcal{I}_{0}\delta\boldsymbol{m}_{T}& =\mathcal{I}_{0}\delta\boldsymbol{m}-(S(\boldsymbol{m})-S(\boldsymbol{m}^ {\prime}))\\ &=(G^{\prime}(\boldsymbol{m}^{\dagger})-G^{\prime}(\boldsymbol{m} ))^{\top}\Sigma_{0}^{-1}G^{\prime}(\boldsymbol{m}^{\dagger})\delta \boldsymbol{m}\\ &\quad+G^{\prime}(\boldsymbol{m})^{\top}\Sigma_{0}^{-1}(G^{ \prime}(\boldsymbol{m}^{\dagger})\delta\boldsymbol{m}-(G(\boldsymbol{m})-G( \boldsymbol{m}^{\prime})))\\ &\quad-(G^{\prime}(\boldsymbol{m})-G^{\prime}(\boldsymbol{m}^{ \prime}))^{\top}\Sigma_{0}^{-1}(G(\boldsymbol{m}^{\prime})-G(\boldsymbol{m}^ {\dagger})-\varepsilon).\end{split}\]
We multiply this equation from the left with \((\delta\boldsymbol{m}_{T})^{\top}\) and consider each term on the right hand side separately. Using Proposition 5.3, we have for the first term
\[\begin{split}&(\delta\boldsymbol{m}_{T})^{\top}(G^{\prime}(\boldsymbol{m}^{\dagger})-G^{\prime}(\boldsymbol{m}))^{\top}\Sigma_{0}^{-1}G^{\prime}(\boldsymbol{m}^{\dagger})\delta\boldsymbol{m}\\ &=\left(\Sigma_{0}^{-1/2}(G^{\prime}(\boldsymbol{m}^{\dagger})-G^{\prime}(\boldsymbol{m}))\delta\boldsymbol{m}_{T}\right)^{\top}\Sigma_{0}^{-1/2}G^{\prime}(\boldsymbol{m}^{\dagger})\delta\boldsymbol{m}\\ &\leq\|\Sigma_{0}^{-1/2}(G^{\prime}(\boldsymbol{m}^{\dagger})-G^{\prime}(\boldsymbol{m}))\delta\boldsymbol{m}_{T}\|_{2}\|\Sigma_{0}^{-1/2}G^{\prime}(\boldsymbol{m}^{\dagger})\delta\boldsymbol{m}\|_{2}\\ &\leq L_{G}L_{G^{\prime}}\|\boldsymbol{m}^{\dagger}-\boldsymbol{m}\|_{W_{\dagger}}\|\delta\boldsymbol{m}\|_{W_{\dagger}}\|\delta\boldsymbol{m}_{T}\|_{W_{\dagger}}.\end{split}\]
For the second term we estimate
\[(\delta\mathbf{m}_{T})^{\top}G^{\prime}(\mathbf{m})^{\top}\Sigma_{0}^{-1}(G^ {\prime}(\mathbf{m}^{\dagger})\delta\mathbf{m}-(G(\mathbf{m})-G(\mathbf{m}^{\prime})))\] \[=\left(\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m})\delta\mathbf{m}_{T}\right)^ {\top}\Sigma_{0}^{-1/2}\int_{0}^{1}(G^{\prime}(\mathbf{m}^{\dagger})-G^{\prime}( \tau\mathbf{m}+(1-\tau)\mathbf{m}^{\prime}))\delta\mathbf{m}\,\mathrm{d}\tau\] \[\leq L_{G}L_{G^{\prime}}\int_{0}^{1}\lVert\mathbf{m}^{\dagger}-(\tau \mathbf{m}+(1-\tau)\mathbf{m}^{\prime})\rVert_{W_{\dagger}}\,\mathrm{d}\tau\lVert \delta\mathbf{m}\rVert_{W_{\dagger}}\lVert\delta\mathbf{m}_{T}\rVert_{W_{\dagger}}\]
and for the third term we have
\[(\delta\mathbf{m}_{T})^{\top}(G^{\prime}(\mathbf{m})-G^{\prime}(\mathbf{m}^{ \prime}))^{\top}\Sigma_{0}^{-1}(G(\mathbf{m}^{\prime})-G(\mathbf{m}^{\dagger})-\varepsilon)\] \[=\left(\Sigma_{0}^{-1/2}(G^{\prime}(\mathbf{m})-G^{\prime}(\mathbf{m}^{ \prime}))\delta\mathbf{m}_{T}\right)^{\top}\Sigma_{0}^{-1/2}(G(\mathbf{m}^{\prime})-G (\mathbf{m}^{\dagger})-\varepsilon)\] \[\leq L_{G^{\prime}}(L_{G}\lVert\mathbf{m}^{\dagger}-\mathbf{m}^{\prime} \rVert_{W_{\dagger}}+\lVert\varepsilon\rVert_{\Sigma_{0}^{-1}})\lVert\delta \mathbf{m}\rVert_{W_{\dagger}}\lVert\delta\mathbf{m}_{T}\rVert_{W_{\dagger}}.\]
Since \(\mathbf{m},\mathbf{m}^{\prime}\) are contained in the ball \(B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\), it follows that
\[\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1}\to W_{\dagger}}^{-1}\lVert\delta\mathbf{m}_{T}\rVert_{W_{\dagger}}^{2}\leq(\delta\mathbf{m}_{T})^{\top}\mathcal{I}_{0}\delta\mathbf{m}_{T}\leq L_{G^{\prime}}\left(3L_{G}\hat{r}+\lVert\varepsilon\rVert_{\Sigma_{0}^{-1}}\right)\lVert\delta\mathbf{m}_{T}\rVert_{W_{\dagger}}\lVert\delta\mathbf{m}\rVert_{W_{\dagger}},\]
using the fact that one has
\[\mathbf{m}^{\top}\mathcal{I}_{0}\mathbf{m} =(W_{\dagger}^{1/2}\mathbf{m})^{\top}\left[W_{\dagger}^{-1/2} \mathcal{I}_{0}W_{\dagger}^{-1/2}\right](W_{\dagger}^{1/2}\mathbf{m})\] \[\geq\lVert\mathbf{m}\rVert_{W_{\dagger}}^{2}\lVert W_{\dagger}^{1/2} \mathcal{I}_{0}^{-1}W_{\dagger}^{1/2}\rVert_{2\to 2}^{-1}=\lVert\mathbf{m} \rVert_{W_{\dagger}}^{2}\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1} \to W_{\dagger}}^{-1}.\]
Dividing by \(\lVert\delta\mathbf{m}_{T}\rVert_{W_{\dagger}}\), the estimate
\[\lVert T(\mathbf{m})-T(\mathbf{m}^{\prime})\rVert_{W_{\dagger}}=\lVert\delta\mathbf{m}_{ T}\rVert_{W_{\dagger}}\leq\kappa(\hat{r})\lVert\delta\mathbf{m}\rVert_{W_{\dagger}}= \kappa(\hat{r})\lVert\mathbf{m}-\mathbf{m}^{\prime}\rVert_{W_{\dagger}}\]
follows with
\[\kappa(\hat{r}):=L_{G^{\prime}}\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^ {-1}\to W_{\dagger}}\left(3L_{G}\hat{r}+\lVert\varepsilon\rVert_{\Sigma_{0}^{ -1}}\right).\]
The contraction estimate above holds for any \(\hat{r}\leq r\) under the assumption that the points under consideration lie in the appropriate ball. In order to ensure contraction, we need to choose \(\hat{r}\) appropriately and impose suitable assumptions on the data. For this, we consider the linearized estimate
\[\delta\widehat{\mathbf{m}}=-\mathcal{I}_{0}^{-1}S(\mathbf{m}^{\dagger})=\mathcal{I}_ {0}^{-1}\left[G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1}\varepsilon- \beta(\mathbf{\rho};\mathbf{0})\right],\]
from (5.5). Using the weighted \(W_{\dagger}\)-norm defined in Proposition 5.2, one has
\[\lVert\delta\widehat{\mathbf{m}}\rVert_{W_{\dagger}} \leq\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1}\to W_{ \dagger}}(\lVert(\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger}))^{\top}\Sigma_ {0}^{-1/2}\varepsilon\rVert_{W_{\dagger}^{-1}}+\beta\lVert(\mathbf{\rho};\mathbf{0}) \rVert_{W_{\dagger}^{-1}})\] \[\leq\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1}\to W_{ \dagger}}(\lVert(\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger}))^{\top}\rVert_{2 \to W_{\dagger}^{-1}}\lVert\Sigma_{0}^{-1/2}\varepsilon\rVert_{2}+\beta\sqrt{ \lVert\mathbf{q}^{\dagger}\rVert_{1}})\] \[\leq\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1}\to W_{ \dagger}}(L_{G}\lVert\varepsilon\rVert_{\Sigma_{0}^{-1}}+\beta\sqrt{\lVert\bm {q}^{\dagger}\rVert_{1}}),\]
where we have used (5.6) together with \(\lVert A^{\top}\rVert_{2\to W_{\dagger}^{-1}}=\lVert A\rVert_{W_{\dagger}\to 2}\) and \(\lVert(\mathbf{\rho};\mathbf{0})\rVert_{W_{\dagger}^{-1}}=\sqrt{\lVert\mathbf{q}^{\dagger} \rVert_{1}}\). In the following, we denote
\[c_{1}:=\lVert\mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1}\to W_{\dagger}}(L_{G} +\sqrt{\lVert\mathbf{q}^{\dagger}\rVert_{1}}),\quad c_{2}:=L_{G^{\prime}}\lVert \mathcal{I}_{0}^{-1}\rVert_{W_{\dagger}^{-1}\to W_{\dagger}}(6L_{G}c_{1}+1).\]
If we now choose \(\hat{r}=\min\left\{c_{1}/c_{2},\,r/2\right\}\) and assume that
\[\lVert\varepsilon\rVert_{\Sigma_{0}^{-1}}+\beta\leq\frac{\hat{r}}{2c_{1}}=\min \left\{\,\frac{1}{2c_{2}},\,\,\frac{r}{4c_{1}}\,\right\}, \tag{5.14}\]
then it follows immediately with the previous estimates that
\[\lVert\delta\widehat{\mathbf{m}}\rVert_{W_{\dagger}}\leq c_{1}(\lVert\varepsilon \rVert_{\Sigma_{0}^{-1}}+\beta)\leq\frac{\hat{r}}{2}\quad\text{and}\,\,\kappa( \hat{r})\leq 1/2.\]
We are now ready to show the existence of a fixed point in \(B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\) as well as the claimed estimates. For this purpose, consider the simplified Gauss-Newton iterative sequence
\[\mathbf{m}^{0}=\mathbf{m}^{\dagger},\quad\mathbf{m}^{k+1}=T(\mathbf{m}^{k})=\mathbf{m}^{k}-\mathcal{I}_{0}^{-1}S(\mathbf{m}^{k}),\quad k\geq 0. \tag{5.15}\]
Put \(\delta\mathbf{m}^{k}:=\mathbf{m}^{k}-\mathbf{m}^{k-1}\), \(k\geq 1\). It can be seen that the first Gauss-Newton step is given by \(\delta\mathbf{m}^{1}=\delta\widehat{\mathbf{m}}\). We use induction to prove that \(\mathbf{m}^{k}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\) for all \(k\geq 0\). Indeed, if \(\varepsilon\) satisfies (5.14), we have \(\|\mathbf{m}^{1}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}=\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}\leq\hat{r}/2\), which implies \(\mathbf{m}^{1}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\). Assume that \(\mathbf{m}^{k}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\). Notice that it holds \(\|\delta\mathbf{m}^{k+1}\|_{W_{\dagger}}=\|T(\mathbf{m}^{k})-T(\mathbf{m}^{k-1})\|_{W_{\dagger}}\leq\kappa\|\delta\mathbf{m}^{k}\|_{W_{\dagger}}\). Then, with \(d^{k}:=\|\delta\mathbf{m}^{k}\|_{W_{\dagger}}\) and \(e^{k}:=\sum_{i=1}^{k}d^{i}\) we have
\[d^{k+1}\leq\kappa d^{k}\quad\text{and }e^{k}\leq\frac{1-\kappa^{k}}{1-\kappa}d^{ 1}\leq\frac{1}{1-\kappa}d^{1}.\]
Hence,
\[\|\mathbf{m}^{k+1}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}\leq\sum_{i=1}^{k+1}\|\mathbf{m}^{i}-\mathbf{m}^{i-1}\|_{W_{\dagger}}=e^{k+1}\leq\frac{1}{1-\kappa}d^{1}\leq 2\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}\leq\hat{r}, \tag{5.16}\]
and thus \(\mathbf{m}^{k+1}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\). Going to the limit, by standard arguments, we obtain that \(\mathbf{m}^{k}\to\widehat{\mathbf{m}}\in B_{W_{\dagger}}(\mathbf{m}^{\dagger},\hat{r})\) with \(T(\widehat{\mathbf{m}})=\widehat{\mathbf{m}}\) and thus \(S(\widehat{\mathbf{m}})=0\). Furthermore, by letting \(k\to\infty\) in (5.16), we obtain \(\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}\leq 2\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}\).
For the second estimate, we rewrite the difference between the error and the perturbation in terms of all updates
\[\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}-\delta\widehat{\mathbf{m}}=\widehat{\mathbf{m}}-\bm {m}^{0}-\delta\mathbf{m}^{1}=\sum_{k=2}^{\infty}\delta\mathbf{m}^{k}.\]
Now, choosing \(\mathbf{m}:=\mathbf{m}^{k}\) and \(\mathbf{m}^{\prime}:=\mathbf{m}^{k-1}\) we have the contraction estimate

\[\|\delta\mathbf{m}^{k+1}\|_{W_{\dagger}}\leq\kappa(\tilde{r})\|\delta\mathbf{m}^{k}\|_{W_{\dagger}},\]

where \(\hat{r}\) is now replaced by \(\tilde{r}=\max\{\,\|\mathbf{m}^{k}-\mathbf{m}^{\dagger}\|_{W_{\dagger}},\,\|\mathbf{m}^{k-1}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}\,\}\leq 2d^{1}\leq 2c_{1}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta)\) and thus \(\kappa(\tilde{r})\leq c_{2}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta)\). Hence, bounding the updates by \(\|\delta\mathbf{m}^{k}\|_{W_{\dagger}}=d^{k}\leq\kappa(\tilde{r})^{k-1}d^{1}\leq(1/2)^{k-2}\kappa(\tilde{r})d^{1}\), we conclude
\[\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}-\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}\leq\sum_{k=2}^{\infty}(1/2)^{k-2}\kappa(\tilde{r})d^{1}\leq 2c_{1}c_{2}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta)^{2}. \tag{5.17}\]
It remains to argue that \(\widehat{\mathbf{m}}\) is the _unique_ stationary point of (5.1) on \(B_{W_{\dagger}}(\mathbf{m}^{\dagger},(3/2)\hat{r})\). Replacing \(\hat{r}\) with \(\tilde{r}=(3/2)\hat{r}\), we still obtain the Lipschitz constant \(\kappa((3/2)\hat{r})\leq 3/4\) on the slightly larger ball. Now, assume that \(\widetilde{\mathbf{m}}\) is any stationary point in the larger ball, thus also a fixed point of \(T\), and
\[\|\widetilde{\mathbf{m}}-\widehat{\mathbf{m}}\|_{W_{\dagger}}=\|T(\widetilde{\mathbf{m}})-T(\widehat{\mathbf{m}})\|_{W_{\dagger}}\leq(3/4)\|\widetilde{\mathbf{m}}-\widehat{\mathbf{m}}\|_{W_{\dagger}},\]
yielding \(\widetilde{\mathbf{m}}=\widehat{\mathbf{m}}\).
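Although not needed for the analysis, the fixed-point construction above is directly implementable. The following minimal Python sketch of the simplified Gauss-Newton iteration (5.15) is purely illustrative; the residual map `S` from (5.3) and the Fisher information matrix `I0` from (5.4) are assumed to be supplied by the user.

```python
import numpy as np

def gauss_newton_fixed_point(m0, S, I0, tol=1e-12, max_iter=200):
    """Simplified Gauss-Newton iteration m^{k+1} = m^k - I0^{-1} S(m^k), cf. (5.15).

    m0 : starting point (in the analysis above, the ground truth m^dagger)
    S  : callable m -> residual of the stationarity equation, cf. (5.3)
    I0 : Fisher information matrix (5.4), frozen at m^dagger and assumed invertible
    """
    m = np.asarray(m0, dtype=float)
    I0_inv = np.linalg.inv(I0)  # fixed preconditioner; never re-linearized
    for _ in range(max_iter):
        step = I0_inv @ S(m)
        m = m - step
        # the contraction property of T yields linear convergence with rate <= 1/2
        if np.linalg.norm(step) < tol:
            break
    return m
```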
**Remark 5.5**.: Following (5.14) and (5.17), the constant \(C_{1}\) in the statement of Proposition 5.4 can be chosen explicitly as
\[C_{1}=\max\left\{c_{1},\frac{4c_{1}}{r},2c_{2}\right\}.\]
In particular, \(C_{1}\) depends monotonically on \(\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}\).
### Error estimates for reconstructions of the ground truth
As mentioned in the preceding section, solving the stationarity equation (5.3) for \(\widehat{\mathbf{m}}=(\widehat{\mathbf{y}},\widehat{\mathbf{q}})\) is not feasible in practice since it presupposes knowledge of \(N_{s}^{\dagger}\). Moreover, recalling that \(\widehat{\mathbf{m}}\) is merely a stationary point, the parametrized measure
\[\widehat{\mu}=\sum_{n=1}^{N_{s}^{\dagger}}\widehat{q}_{n}\delta_{\widehat{y}_{n}} \tag{5.18}\]
is not necessarily a minimizer of \((\mathcal{P}_{\beta,\varepsilon})\). In this section, our primary goal is to show that \(\widehat{\boldsymbol{m}}\) indeed parametrizes the unique solution of problem \((\mathcal{P}_{\beta,\varepsilon})\) if the minimum norm dual certificate \(\eta^{\dagger}\) associated to \((\mathcal{P}_{0})\) is \(\theta\)-admissible and if the set of admissible noises \(\varepsilon\) is further restricted. A fully-explicit estimate for the reconstruction error between \(\widehat{\mu}\) and the ground truth \(\mu^{\dagger}\) in the Hellinger-Kantorovich distance then follows immediately. For this purpose, recall from [8, Proposition 7] that the non-degeneracy of \(\eta^{\dagger}\) implies
\[\eta^{\dagger}=\eta_{\mathrm{PC}}=K^{*}\varSigma_{0}^{-1/2}(G^{\prime}(\boldsymbol{m}^{\dagger})^{\top}\varSigma_{0}^{-1/2})^{+}(\boldsymbol{\rho};\boldsymbol{0})=K^{*}\varSigma_{0}^{-1}G^{\prime}(\boldsymbol{m}^{\dagger})\mathcal{I}_{0}^{-1}(\boldsymbol{\rho};\boldsymbol{0}), \tag{5.19}\]
where \(\eta_{\mathrm{PC}}\) denotes the vanishing derivative pre-certificate from Section 3.3.
We first prove that
\[\widehat{\eta}=\beta^{-1}K^{*}\varSigma_{0}^{-1}(z^{d}(\varepsilon)-K \widehat{\mu})=\beta^{-1}K^{*}\varSigma_{0}^{-1}(G(\boldsymbol{m}^{\dagger}) +\varepsilon-G(\widetilde{\boldsymbol{m}}))\]
is \(\theta/2\)-admissible for certain \(\varepsilon\). To begin, we need the following estimates on \(K^{*}\).
**Lemma 5.6**.: Suppose that \(\eta(y)=[K^{*}\varSigma_{0}^{-1}\zeta](y)\) for \(y\in\varOmega_{s}\), \(\zeta\in\mathbb{R}^{N_{o}}\). Then
\[\sup_{y\in\varOmega_{s}}|D\eta(y)|\leq C_{D}\|\varSigma_{0}^{-1/2}\zeta\|_{2}\]
for \(D\in\{\,\mathrm{Id},\nabla,\nabla^{2},\nabla^{3}\,\}\) and \(C_{D}\in\{\,C_{k},C_{k}^{\prime},C_{k}^{\prime\prime},C_{k}^{\prime\prime\prime }\,\}\), respectively.
**Proof.** One can see that \(\eta\) can be written as
\[\eta(y)=[K^{*}\varSigma_{0}^{-1}\zeta](y)=(\varSigma_{0}^{-1}\zeta,k[ \boldsymbol{x},y])_{2}=\sum_{n=1}^{N_{o}}(\varSigma_{0}^{-1}\zeta)_{n}k(x_{n}, y). \tag{5.20}\]
Hence for every \(y\in\varOmega_{s}\),
\[|\eta(y)|\leq\sum_{n=1}^{N_{o}}|(\varSigma_{0}^{-1}\zeta)_{n}||k(x_{n},y)| \leq\sup_{x\in\varOmega_{o},y\in\varOmega_{s}}|k(x,y)|\sum_{n=1}^{N_{o}}|( \varSigma_{0}^{-1}\zeta)_{n}|=C_{k}\|\varSigma_{0}^{-1}\zeta\|_{1}.\]
Since \(\sum_{n=1}^{N_{o}}\sigma_{0,n}^{-2}=1\), we have
\[\|\varSigma_{0}^{-1/2}v\|_{1}=\sum_{n=1}^{N_{o}}\sigma_{0,n}^{-1}|v_{n}|\leq\sqrt{\sum_{n=1}^{N_{o}}\sigma_{0,n}^{-2}}\sqrt{\sum_{n=1}^{N_{o}}|v_{n}|^{2}}=\|v\|_{2},\quad\forall v\in\mathbb{R}^{N_{o}}.\]
Hence, \(\|\varSigma_{0}^{-1}\zeta\|_{1}\leq\|\varSigma_{0}^{-1/2}\zeta\|_{2}.\) From (5.20) the other estimates follow similarly by taking derivatives. \(\square\)
**Proposition 5.7**.: Let the assumptions in Proposition 5.4 be satisfied and \(\eta^{\dagger}\) be \(\theta-\)admissible for \(\mu^{\dagger}\), \(\theta\in(0,1)\). Then there exists a constant \(C_{2}=C_{2}(k,\mu^{\dagger},\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{ \dagger}})\) such that if
\[C_{1}(\|\varepsilon\|_{\varSigma_{0}^{-1}}+\beta)\leq\sqrt{ \theta/32}, \tag{5.21}\] \[C_{2}\beta^{-1}\left((\|\varepsilon\|_{\varSigma_{0}^{-1}}+\beta )^{2}+\|\varepsilon\|_{\varSigma_{0}^{-1}}\right)\leq\theta^{2}/32, \tag{5.22}\]
then the function \(\widehat{\eta}\) is \(\theta/2-\)admissible for \(\widehat{\mu}\).
**Proof.** By the definition of \(\widehat{\eta}\) and the stationarity condition (5.1) for \(\widehat{\mathbf{m}}\), one has
\[\widehat{\eta}(\widehat{y}_{n})=\operatorname{sign}(\widehat{q}_{n})= \operatorname{sign}(q_{n}^{\dagger})\text{ and }\nabla\widehat{\eta}(\widehat{y}_{n})=0,\quad n=1,\ldots,N_{s}.\]
We now prove the admissibility of \(\widehat{\eta}\), namely
\[-\operatorname{sign}\widehat{\eta}(\widehat{y}_{n})\nabla^{2} \widehat{\eta}(\widehat{y}_{n})\geq\theta|w_{n}^{\dagger}|^{2}\,\mathrm{Id}, \quad\forall n=1,\ldots,N_{s}, \tag{5.23}\] \[|\widehat{\eta}(y)|\leq 1-(\theta/2)^{2},\quad\forall y\in\varOmega_{s} \setminus\bigcup_{n=1,\ldots,N_{s}}B_{w_{n}^{\dagger}}(\widehat{y}_{n},\sqrt{ \theta/2}) \tag{5.24}\]
if (5.21)-(5.22) hold. To this end, consider the noisy pre-certificate
\[\begin{split}\eta_{\mathrm{PC},\varepsilon}&:=-\beta^{-1}K^{*}\Sigma_{0}^{-1}(G^{\prime}(\mathbf{m}^{\dagger})\delta\widehat{\mathbf{m}}-\varepsilon) \tag{5.25}\\ &=\beta^{-1}K^{*}\Sigma_{0}^{-1}[G^{\prime}(\mathbf{m}^{\dagger})\mathcal{I}_{0}^{-1}(\beta(\mathbf{\rho};\mathbf{0})-G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1}\varepsilon)+\varepsilon]\\ &=\eta_{\mathrm{PC}}-\beta^{-1}K^{*}\Sigma_{0}^{-1/2}[\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger})\mathcal{I}_{0}^{-1}G^{\prime}(\mathbf{m}^{\dagger})^{\top}\Sigma_{0}^{-1/2}-\mathrm{Id}](\Sigma_{0}^{-1/2}\varepsilon)\\ &=\eta_{\mathrm{PC}}-\beta^{-1}K^{*}\Sigma_{0}^{-1/2}[P-\mathrm{Id}](\Sigma_{0}^{-1/2}\varepsilon),\end{split}\]
where \(\eta_{\mathrm{PC}}\) is given in (5.19) and \(P\) is the orthogonal projection onto the \(N_{s}(1+d)\)-dimensional range of \(\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger})\) in \(\mathbb{R}^{N_{o}}\). This implies
\[\begin{split}\widehat{\eta}&=-\beta^{-1}K^{*}\Sigma_{0}^{-1}(G(\widehat{\mathbf{m}})-G(\mathbf{m}^{\dagger})-\varepsilon)\\ &=\eta_{\mathrm{PC},\varepsilon}-\beta^{-1}K^{*}\Sigma_{0}^{-1}\left(G(\widehat{\mathbf{m}})-G(\mathbf{m}^{\dagger})-G^{\prime}(\mathbf{m}^{\dagger})\delta\widehat{\mathbf{m}}\right)\\ &=\eta_{\mathrm{PC}}-\beta^{-1}\Big[\underbrace{K^{*}\Sigma_{0}^{-1/2}[P-\mathrm{Id}](\Sigma_{0}^{-1/2}\varepsilon)}_{e_{1}}+\underbrace{K^{*}\Sigma_{0}^{-1}\left(G(\widehat{\mathbf{m}})-G(\mathbf{m}^{\dagger})-G^{\prime}(\mathbf{m}^{\dagger})\delta\widehat{\mathbf{m}}\right)}_{e_{2}}\Big].\end{split}\]
Applying Lemma 5.6, we have
\[\|e_{1}\|_{\mathcal{C}(\Omega_{s})} \leq C_{k}\|\Sigma_{0}^{-1/2}[P-\mathrm{Id}]\Sigma_{0}^{-1/2} \varepsilon\|_{1}\] \[\leq C_{k}\|[P-\mathrm{Id}]\Sigma_{0}^{-1/2}\varepsilon\|_{2}\leq C _{k}\|\varepsilon\|_{\Sigma_{0}^{-1}}.\]
In order to estimate \(e_{2}\), we apply Lemma 5.6 and Proposition 5.2 to have
\[\|e_{2}\|_{\mathcal{C}(\Omega_{s})} \leq C_{k}\|\Sigma_{0}^{-1}\left(G(\widehat{\mathbf{m}})-G(\mathbf{m}^{ \dagger})-G^{\prime}(\mathbf{m}^{\dagger})\delta\widehat{\mathbf{m}}\right)\|_{1} \tag{5.26}\] \[\leq C_{k}\|\Sigma_{0}^{-1/2}\left(G(\widehat{\mathbf{m}})-G(\mathbf{m}^ {\dagger})-G^{\prime}(\mathbf{m}^{\dagger})\delta\widehat{\mathbf{m}}\right)\|_{2}.\]
Notice that
\[G(\widehat{\mathbf{m}})-G(\mathbf{m}^{\dagger})-G^{\prime}(\mathbf{m}^{ \dagger})\delta\widehat{\mathbf{m}} =G^{\prime}(\mathbf{m}^{\dagger})(\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}- \delta\widehat{\mathbf{m}})\] \[\quad+\int_{0}^{1}(G^{\prime}(\mathbf{m}_{\tau})-G^{\prime}(\mathbf{m}^{ \dagger}))(\widehat{\mathbf{m}}-\mathbf{m}^{\dagger})\,\mathrm{d}\tau\]
where \(\mathbf{m}_{\tau}=\mathbf{m}^{\dagger}+\tau(\widehat{\mathbf{m}}-\mathbf{m}^{\dagger})\). Using this together with Propositions 5.3 and 5.4, we have
\[\|e_{2}\|_{\mathcal{C}(\Omega_{s})} \leq C_{k}\big{(}\|\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger})( \widehat{\mathbf{m}}-\mathbf{m}^{\dagger}-\delta\widehat{\mathbf{m}})\|_{2} \tag{5.27}\] \[\quad+\int_{0}^{1}\|\Sigma_{0}^{-1/2}(G^{\prime}(\mathbf{m}_{\tau})-G ^{\prime}(\mathbf{m}^{\dagger}))(\widehat{\mathbf{m}}-\mathbf{m}^{\dagger})\|_{2}\, \mathrm{d}\tau\big{)}\] \[\leq c_{3}(k,\mu^{\dagger})\left(\|\widehat{\mathbf{m}}-\mathbf{m}^{ \dagger}-\delta\widehat{\mathbf{m}}\|_{W_{\uparrow}}+\|\widehat{\mathbf{m}}-\mathbf{m}^{ \dagger}\|_{W_{\uparrow}}^{2}\right)\] \[\leq c_{3}C_{1}^{2}\left(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta \right)^{2}.\]
Combining (5.25)-(5.27), we have
\[\|\widehat{\eta}-\eta_{\mathrm{PC}}\|_{\mathcal{C}(\Omega_{s})}\leq C_{2}\beta ^{-1}\left[(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta)^{2}+\|\varepsilon\|_{ \Sigma_{0}^{-1}}\right],\]
where \(C_{2}=c_{3}C_{1}^{2}\). This yields
\[|\widehat{\eta}(y)| \leq|\widehat{\eta}(y)-\eta_{\mathrm{PC}}(y)|+|\eta_{\mathrm{PC}}( y)| \tag{5.28}\] \[\leq\|\widehat{\eta}-\eta_{\mathrm{PC}}\|_{\mathcal{C}(\Omega_{s} )}+|\eta_{\mathrm{PC}}(y)|\] \[\leq C_{2}\beta^{-1}\left[(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta )^{2}+\|\varepsilon\|_{\Sigma_{0}^{-1}}\right]+|\eta_{\mathrm{PC}}(y)|.\]
We prove (5.24). Assume that (5.21)-(5.22) hold. Using Proposition 5.4, we know that for \(y\in\Omega_{s}\setminus\bigcup_{n=1,\dots,N_{s}}B_{w_{n}^{\dagger}}(\widehat{y} _{n},\sqrt{\theta/2})\), there holds
\[\|w_{n}^{\dagger}(y-y_{n}^{\dagger})\|_{2}\geq\|w_{n}^{\dagger}(y-\widehat{y}_{n})\|_{2}-\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}\geq\sqrt{\theta/2}-\sqrt{\theta/32}=\sqrt{9\theta/32}.\]
Hence, since \(\eta_{\mathrm{PC}}\) is nondegenerate, we have by (3.4) that
\[|\eta_{\mathrm{PC}}(y)|\leq 1-\theta\min\{\theta,\|w_{n}^{\dagger}(y-y_{n}^{ \dagger})\|_{2}^{2}\}\leq 1-\theta\min\{\theta,9\theta/32\}=1-9\theta^{2}/32.\]
This and (5.28) imply that
\[|\widehat{\eta}(y)|\leq\theta^{2}/32+(1-9\theta^{2}/32)=1-(\theta/2)^{2},\quad \text{for every }y\in\varOmega_{s}\setminus\bigcup_{n=1,\ldots,N_{s}}B_{w_{n}^{\dagger}}( \widehat{y}_{n},\sqrt{\theta/2}),\]
which is indeed (5.24).
We next prove (5.23). Following the same argument as in (5.26)-(5.27) and applying Lemma 5.6, we have without loss of generality that
\[\sup_{y\in\varOmega_{s}}\|\nabla^{2}\widehat{\eta}-\nabla^{2}\eta_{\mathrm{PC} }\|_{2\to 2}\leq C_{2}\beta^{-1}\left[(\|\varepsilon\|_{\varSigma_{0}^{-1}}+ \beta)^{2}+\|\varepsilon\|_{\varSigma_{0}^{-1}}\right]. \tag{5.29}\]
In addition, by invoking Assumption A1 on the boundedness of the third derivative of \(k\), we obtain
\[\begin{split}\|\nabla^{2}\eta_{\mathrm{PC}}(\widehat{y}_{n})-\nabla^{2}\eta_{\mathrm{PC}}(y_{n}^{\dagger})\|_{2\to 2}&\leq\|\widehat{y}_{n}-y_{n}^{\dagger}\|_{2}\sup_{y\in\varOmega_{s}}\|\nabla^{3}\eta_{\mathrm{PC}}(y)\|_{2\times 2\to 2} \tag{5.30}\\ &\leq c_{4}(\mu^{\dagger},k)\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}\|\Sigma_{0}^{-1/2}G^{\prime}(\mathbf{m}^{\dagger})\mathcal{I}_{0}^{-1}(\mathbf{\rho};\mathbf{0})\|_{2}\\ &\leq c_{4}C_{1}\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}(\|\varepsilon\|_{\varSigma_{0}^{-1}}+\beta)\\ &\leq c_{4}C_{1}\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}\beta^{-1}\left[(\|\varepsilon\|_{\varSigma_{0}^{-1}}+\beta)^{2}+\|\varepsilon\|_{\varSigma_{0}^{-1}}\right].\end{split}\]
Without loss of generality, we use \(C_{2}\) for the constant \(c_{4}C_{1}\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}\) in (5.30). From (5.29)-(5.30), we have
\[\begin{split}\|\nabla^{2}\widehat{\eta}(\widehat{y}_{n})-\nabla^{2}\eta_{\mathrm{PC}}(y_{n}^{\dagger})\|_{2\to 2}&\leq\|\nabla^{2}\widehat{\eta}(\widehat{y}_{n})-\nabla^{2}\eta_{\mathrm{PC}}(\widehat{y}_{n})\|_{2\to 2}+\|\nabla^{2}\eta_{\mathrm{PC}}(\widehat{y}_{n})-\nabla^{2}\eta_{\mathrm{PC}}(y_{n}^{\dagger})\|_{2\to 2}\\ &\leq 2C_{2}\beta^{-1}\left[(\|\varepsilon\|_{\varSigma_{0}^{-1}}+\beta)^{2}+\|\varepsilon\|_{\varSigma_{0}^{-1}}\right]\end{split}\]
for every \(n=1,\ldots,N_{s}\). Since
\[2C_{2}\beta^{-1}\left[(\|\varepsilon\|_{\varSigma_{0}^{-1}}+\beta)^{2}+\|\varepsilon\|_{\varSigma_{0}^{-1}}\right]\leq\theta^{2}/16\leq\theta/2\]
and \(\eta_{\mathrm{PC}}\) is \(\theta\)-admissible, \(\widehat{\eta}\) satisfies (5.23). Hence, we conclude that \(\widehat{\eta}\) is \(\theta/2\)-admissible for \(\widehat{\mu}\) and the proof is complete.
**Remark 5.8**.: In fact, the constant \(C_{2}\) in the proof of Proposition 5.7 could be chosen as
\[C_{2}=\max\left\{c_{3}(k,\mu^{\dagger})C_{1}^{2},c_{4}(k,\mu^{\dagger})\| \mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}C_{1}\right\}.\]
We note that \(C_{1}\) depends monotonically on \(\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}\). Hence, we also have the monotone dependence of \(C_{2}\) on \(\|\mathcal{I}_{0}^{-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}}\).
As a consequence, we conclude that the solution to \((\mathcal{P}_{\beta,\varepsilon})\) is unique and parametrized by \(\widehat{\mathbf{m}}\). Moreover, its H-K distance to \(\mu^{\dagger}\) can be bounded in terms of the linearization \(\delta\widehat{\mathbf{m}}\).
**Theorem 5.9**.: Let the assumptions of Proposition 5.7 hold. Then the solution of \((\mathcal{P}_{\beta,\varepsilon})\) is unique and given by \(\widehat{\mu}\) from (5.18). There holds
\[d_{\mathrm{HK}}(\widehat{\mu},\mu^{\dagger})^{2}\leq 8\|\delta\widehat{\mathbf{m}}\|_{W _{\dagger}}^{2}. \tag{5.31}\]
Proof.: From Proposition 5.7, we conclude that \(\widehat{\eta}\) is \(\theta/2\)-admissible for \(\widehat{\mu}\). Consequently, we have \(\widehat{\eta}\in\partial\|\widehat{\mu}\|_{\mathcal{M}(\varOmega_{s})}\), i.e., \(\widehat{\mu}\) is a solution of \((\mathcal{P}_{\beta,\varepsilon})\). It remains to show its uniqueness. For this purpose, it suffices to argue that
\[K_{|\operatorname{supp}\widehat{\mu}}=k[\mathbf{x},\widehat{\mathbf{y}}]\in\mathbb{R}^{N_{o}\times N_{s}^{\dagger}}\]
is injective, see, e.g., the proof of [26, Proposition 3.6]. Assume that this is not the case. Then, following [25, Theorem B.4], there is \(\mathbf{v}\neq 0\) with \(k[\mathbf{x},\widehat{\mathbf{y}}]\mathbf{v}=0\) and \(\tau\neq 0\) such that the measure \(\widetilde{\mu}\) parametrized by \(\widetilde{\mathbf{m}}=(\widetilde{\mathbf{q}};\widehat{\mathbf{y}})\) with \(\widetilde{\mathbf{q}}=\widehat{\mathbf{q}}+\tau\mathbf{v}\) is also a solution of \((\mathcal{P}_{\beta,\varepsilon})\) (choose the sign of \(\tau\) so as not to increase the \(\ell_{1}\)-regularization, and the magnitude small enough not to change the sign of \(\widehat{\mathbf{q}}\)) and \(\widetilde{\mathbf{q}}\neq\widehat{\mathbf{q}}\). For \(s\in(0,1)\), set \(\mathbf{q}_{s}=(1-s)\widehat{\mathbf{q}}+s\widetilde{\mathbf{q}}\). By convexity of \((\mathcal{P}_{\beta,\varepsilon})\), the measure parametrized by \(\mathbf{m}_{s}=(\mathbf{q}_{s};\widehat{\mathbf{y}})\) is also a minimizer of \((\mathcal{P}_{\beta,\varepsilon})\). Consequently, \(\mathbf{m}_{s}\) is a solution of (5.1) and thus
also a stationary point. Finally, noting that \(\mathbf{m}_{s}\neq\widehat{\mathbf{m}}\), \(s\in(0,1)\), and \(\lim_{s\to 0}\mathbf{m}_{s}=\widehat{\mathbf{m}}\), we arrive at a contradiction to the uniqueness of stationary points in the vicinity of \(\mathbf{m}^{\dagger}\). The estimate in (5.31) immediately follows from
\[d_{\mathrm{HK}}(\widehat{\mu},\mu^{\dagger})^{2}\leq R(\widehat{\mathbf{q}},\mathbf{q}^{\dagger})\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}^{2}\leq 2\|\widehat{\mathbf{m}}-\mathbf{m}^{\dagger}\|_{W_{\dagger}}^{2}\leq 8\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}^{2}.\qed\]
## 6. Inverse problems with random noise
Finally, let \((\mathcal{D},\mathcal{F},\mathbb{P})\) denote a probability space and consider the stochastic measurement model
\[z^{d}(\varepsilon)=K\mu^{\dagger}+\varepsilon,\]
where the noise is distributed according to \(\varepsilon\sim\gamma_{p}=\mathcal{N}(0,p^{-1}\Sigma_{0})\) for some \(p>0\) representing the overall precision of the measurements. Mimicking the deterministic setting, we are interested in the reconstruction of the ground truth \(\mu^{\dagger}\) by solutions obtained from \((\mathcal{P}_{\beta,\varepsilon})\) for realizations of the random variable \(\varepsilon\). By utilizing the quantitative analysis presented in the preceding section, we provide an upper bound on the worst-case mean-squared error
\[\mathbb{E}_{\gamma_{p}}\left[\sup_{\mu\in\mathfrak{M}(\cdot)}d_{\mathrm{HK}}( \mu,\mu^{\dagger})^{2}\right]=\int_{\mathbb{R}^{N_{o}}}\sup_{\mu\in\mathfrak{M }(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\ \mathrm{d}\gamma_{p}(\varepsilon)\]
for a suitable a priori parameter choice rule \(\beta=\beta(p)\). Note that the expectation is well-defined according to Appendix A.2.
### A priori parameter choice rule
Before stating the main result of the manuscript, let us briefly motivate the particular choice of the misfit term in \((\mathcal{P}_{\beta,\varepsilon})\) as well as the employed parameter choice rule from the perspective of the stochastic noise model. Since we consider independent measurements, their covariance matrix \(\Sigma=p^{-1}\Sigma_{0}\) is diagonal with \(\Sigma_{jj}=\sigma_{j}^{2}\) for variances \(\sigma_{j}^{2}>0\), \(j=1,\ldots,N_{o}\). This corresponds to performing the individual measurements with independent sensors of variable precision \(p_{j}=1/\sigma_{j}^{2}\). We call
\[p=\sum_{j=1}^{N_{o}}p_{j}=\sum_{j=1}^{N_{o}}\sigma_{j}^{-2}=\mathrm{tr}(\Sigma^{-1})\]
the total precision of the sensor array. It can be seen that its reciprocal \(\sigma_{\mathrm{tot}}^{2}=1/p\) corresponds to the harmonic average of the variances divided by the number of sensors \(N_{o}\). Therefore, the misfit in \((\mathcal{P}_{\beta,\varepsilon})\) satisfies
\[\|K\mu-z^{d}(\varepsilon)\|_{\Sigma_{0}^{-1}}^{2}=\frac{1}{p}\sum_{j=1}^{N_{o}}\sigma_{j}^{-2}|(K\mu)_{j}-z^{d}(\varepsilon)_{j}|^{2}.\]
For identical sensors and measurements \(\varepsilon\sim\mathcal{N}(0,\mathrm{Id}_{N_{o}})\) this simply leads to the scaled Euclidean norm \((1/N_{o})\|K\mu-z^{d}(\varepsilon)\|_{2}^{2}\). In general, by increasing the total precision \(p\) of the sensor setup, we improve the measurements by proportionally decreasing the variances by \(\sigma_{\mathrm{tot}}^{2}\). While this will decrease the expected level of noise through its distribution, it will not affect the misfit functional, which is influenced only by \(\Sigma_{0}\), i.e., by the normalized variances \(\sigma_{0,j}^{2}=\sigma_{j}^{2}/\sigma_{\mathrm{tot}}^{2}\).
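For completeness, the harmonic-average identity behind this interpretation reads

\[\sigma_{\mathrm{tot}}^{2}=\frac{1}{p}=\frac{1}{\sum_{j=1}^{N_{o}}\sigma_{j}^{-2}}=\frac{1}{N_{o}}\cdot\underbrace{\frac{N_{o}}{\sum_{j=1}^{N_{o}}\sigma_{j}^{-2}}}_{\text{harmonic average of }\sigma_{1}^{2},\ldots,\sigma_{N_{o}}^{2}}.\]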
Moreover, since \(\varepsilon\sim\mathcal{N}(0,\Sigma)\), we have \(\Sigma^{-1/2}\varepsilon\sim\mathcal{N}(0,\mathrm{Id}_{N_{o}})\) and by direct calculations, the following estimate holds
\[\frac{N_{o}}{\sqrt{N_{o}+1}}\leq\mathbb{E}_{\gamma_{p}}\left[\|\varepsilon\|_{\Sigma^{-1}}\right]\leq\sqrt{N_{o}}.\]
Hence, with high probability, realizations of the error fulfill the estimate
\[\sqrt{\sum_{j=1}^{N_{o}}\varepsilon_{j}^{2}/\sigma_{j}^{2}}=\|\varepsilon\|_ {\Sigma^{-1}}=\sqrt{p}\|\varepsilon\|_{\Sigma_{0}^{-1}}\leq C\sqrt{N_{o}}\]
and thus \(\|\varepsilon\|_{\Sigma_{0}^{-1}}\lesssim 1/\sqrt{p}\). Hence, we regard \(\sigma_{\mathrm{tot}}=1/\sqrt{p}\) as an (expected) upper bound for the noise level. This motivates the parameter choice rule
\[\beta(p)=\beta_{0}/\sqrt{p}=\beta_{0}\,\mathrm{tr}(\Sigma^{-1})^{-1/2}\]
for some \(\beta_{0}>0\) large enough.
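In code, this rule is a one-liner. The following Python sketch (with an assumed diagonal covariance matrix `Sigma`) computes the total precision and the resulting regularization parameter:

```python
import numpy as np

def a_priori_beta(Sigma, beta0=2.0):
    """A priori parameter choice beta(p) = beta0 / sqrt(p) with p = tr(Sigma^{-1})."""
    p = np.trace(np.linalg.inv(Sigma))  # total precision of the sensor array
    return beta0 / np.sqrt(p), p

# example: N_o identical sensors normalized such that Sigma_0^{-1} = Id / N_o
N_o, p0 = 9, 1e4
Sigma = (N_o / p0) * np.eye(N_o)  # Sigma = p^{-1} Sigma_0 with p = p0
beta, p = a_priori_beta(Sigma)    # p == p0 and beta == beta0 / sqrt(p0)
```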
### Quantitative error estimates in the stochastic setting
We are now prepared to prove a quantitative estimate on the worst-case mean-squared error by lifting the deterministic result of Theorem 5.9 to the stochastic setting.
**Theorem 6.1**.: Assume that \(\eta^{\dagger}\) is \(\theta\)-admissible for \(\theta\in(0,1)\) and set \(\beta(p)=\beta_{0}/\sqrt{p}\). Then there exists
\[\overline{p}=\overline{p}(\theta,\beta_{0},k,\mu^{\dagger},\|\mathcal{I}_{0}^ {-1}\|_{W_{\dagger}^{-1}\to W_{\dagger}})\]
such that for \(p\geq\overline{p}\), there holds
\[\mathbb{E}_{\gamma_{p}}\left[\sup_{\mu\in\mathfrak{M}(\cdot)}d_{\mathrm{HK}}( \mu,\mu^{\dagger})^{2}\right]\leq 8\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{ \boldsymbol{m}}\|_{W_{\dagger}}^{2}]+C_{3}\exp\left[-\left(\frac{\theta^{2} \beta_{0}}{64C_{4}}\right)^{2}/(2N_{o})\right], \tag{6.1}\]
where \(C_{3}=2\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}+\sqrt{2N_{o}}/(2\beta_{0} \sqrt{p})\) and \(C_{4}=\max\{C_{1},C_{2}\}\). In addition, the expectation \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\boldsymbol{m}}\|_{W_{\dagger}}^{2}]\) has the closed form
\[\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\boldsymbol{m}}\|_{W_{\dagger}}^{2}] =\frac{1}{p}\left(\mathrm{tr}(W_{\dagger}\mathcal{I}_{0}^{-1})+\beta_{0}^{2} \|\mathcal{I}_{0}^{-1}(\boldsymbol{\rho};\boldsymbol{0})\|_{W_{\dagger}}^{2} \right). \tag{6.2}\]
**Proof.** Define the sets
\[A_{1}=\left\{\varepsilon:C_{4}\beta(p)^{-1}\|\varepsilon\|_{\Sigma_{0}^{-1}} \leq\frac{\theta^{2}}{64}\right\},\quad A_{2}=\left\{\varepsilon:C_{4}\beta(p) ^{-1}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta(p))^{2}\leq\frac{\theta^{2}}{64 }\right\}.\]
By a case distinction, we readily verify
\[\mathbb{R}^{N_{o}}\setminus(A_{1}\cap A_{2})\subset(\mathbb{R}^{N_{o}} \setminus A_{1})\cup((\mathbb{R}^{N_{o}}\setminus A_{2})\cap A_{1})\]
and thus
\[\mathbb{E}_{\gamma_{p}}\left[\sup_{\mu\in\mathfrak{M}(\cdot)}d_{ \mathrm{HK}}(\mu,\mu^{\dagger})^{2}\right]\leq \int_{A_{1}\cap A_{2}}\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{ \mathrm{HK}}(\mu,\mu^{\dagger})^{2}\,\mathrm{d}\gamma_{p}(\varepsilon) \tag{6.3}\] \[+\underbrace{\int_{\mathbb{R}^{N_{o}}\setminus A_{1}}\sup_{\mu\in \mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\,\mathrm{d} \gamma_{p}(\varepsilon)}_{I_{1}}\] \[+\underbrace{\int_{(\mathbb{R}^{N_{o}}\setminus A_{2})\cap A_{1}} \sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2} \,\mathrm{d}\gamma_{p}(\varepsilon)}_{I_{2}}.\]
For \(\varepsilon\in A_{1}\cap A_{2}\), we have
\[C_{2}\beta(p)^{-1}\left((\|\varepsilon\|_{\Sigma_{0}^{-1}}+ \beta(p))^{2}+\|\varepsilon\|_{\Sigma_{0}^{-1}}\right) \leq C_{4}\beta^{-1}(p)(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta(p ))^{2}+C_{4}\beta^{-1}(p)\|\varepsilon\|_{\Sigma_{0}^{-1}}\] \[\leq\frac{\theta^{2}}{64}+\frac{\theta^{2}}{64}=\frac{\theta^{2} }{32},\]
i.e., \(\varepsilon\) satisfies (5.22). Moreover, expanding the square in the definition of \(A_{2}\), we conclude that (5.21) also holds due to
\[C_{4}(\|\varepsilon\|_{\Sigma_{0}^{-1}}+\beta(p))\leq\frac{\theta^{2}}{64}\leq\frac{\sqrt{\theta}}{2\sqrt{32}}.\]
Hence, for \(\varepsilon\in A_{1}\cap A_{2}\), there holds \(\mathfrak{M}(\varepsilon)=\{\widehat{\mu}\}\) and
\[\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}=d _{\mathrm{HK}}(\widehat{\mu},\mu^{\dagger})^{2}\leq 8\|\delta\widehat{ \boldsymbol{m}}\|_{W_{\dagger}}^{2} \tag{6.4}\]
by Theorem 5.9. Next, we estimate \(I_{1}\) by
\[\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\leq\sup_{\mu\in\mathfrak{M}(\varepsilon)}\left(\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}+\|\mu\|_{\mathcal{M}(\Omega_{s})}\right)\leq 2\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}+\|\varepsilon\|_{\Sigma_{0}^{-1}}^{2}/(2\beta_{0}/\sqrt{p})\]
applying Proposition 3.3 and [20, Proposition 7.8]. Together with Lemma A.1 this yields
\[\begin{split}I_{1}=\int_{\mathbb{R}^{N_{o}}\backslash A_{1}}\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\,\mathrm{d}\gamma_{p}(\varepsilon)&\leq\int_{\mathbb{R}^{N_{o}}\backslash A_{1}}\left(2\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}+\|\varepsilon\|_{\Sigma^{-1}}^{2}/(2\beta_{0}\sqrt{p})\right)\mathrm{d}\gamma_{p}(\varepsilon)\\ &\leq\left(2\|\mu^{\dagger}\|_{\mathcal{M}(\Omega_{s})}+\frac{\sqrt{2N_{o}}}{2\beta_{0}\sqrt{p}}\right)\exp\left[-\left(\frac{\theta^{2}\beta_{0}}{64C_{4}}\right)^{2}/(2N_{o})\right].\end{split} \tag{6.5}\]
Finally, for \(\varepsilon\in(\mathbb{R}^{N_{o}}\backslash A_{2})\cap A_{1}\), one has
\[p^{1/4}\left(\frac{\theta^{2}\beta_{0}}{64C_{4}}\right)^{1/2}-\beta_{0}<\| \varepsilon\|_{\Sigma^{-1}}\leq\frac{\theta^{2}\beta_{0}}{64C_{4}}.\]
Hence, if we choose
\[p\geq\left(\frac{\theta^{2}\beta_{0}}{64C_{4}}+\beta_{0}\right)^{4}/\left(\frac{\theta^{2}\beta_{0}}{64C_{4}}\right)^{2}=:\overline{p},\]
then \((\mathbb{R}^{N_{o}}\backslash A_{2})\cap A_{1}\) is empty and \(I_{2}=0\). Together with (6.3)-(6.5), we obtain (6.1) for every \(p\geq\overline{p}\). The equality in (6.2) follows immediately from the closed form expression (5.5) for \(\delta\widehat{\mathbf{m}}\) and \(\varepsilon\sim\mathcal{N}(0,p^{-1}\Sigma_{0})\).
Let us interpret this result: By choosing \(\beta_{0}\) large enough, the second term on the right hand side of (6.1) becomes negligible, i.e.,
\[\mathbb{E}_{\gamma_{p}}\left[\sup_{\mu\in\mathfrak{M}(\cdot)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}\right]\leq 8\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}^{2}]+\delta \tag{6.6}\]
for some \(0<\delta\ll 1\). As a consequence, due to its closed form representation (6.2), \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}^{2}]\) provides a computationally inexpensive, approximate upper surrogate for the worst-case mean-squared error which vanishes as \(p\to\infty\). Moreover, due to its explicit dependence on the measurement setup, it represents a suitable candidate for an optimal design criterion in the context of optimal sensor placement for the class of sparse inverse problems under consideration. This potential will be further investigated in a follow-up paper.
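Since (6.2) only involves finite-dimensional quantities, the surrogate can be evaluated directly. A minimal Python sketch, assuming the weight matrix `W_dag`, the Fisher information `I0` and the extended sign vector `rho_ext` \(=(\boldsymbol{\rho};\boldsymbol{0})\) are given:

```python
import numpy as np

def linearized_mse(W_dag, I0, rho_ext, p, beta0):
    """Closed-form surrogate (6.2):
    (1/p) * ( tr(W_dag I0^{-1}) + beta0^2 * ||I0^{-1} rho_ext||_{W_dag}^2 ).
    """
    I0_inv = np.linalg.inv(I0)
    v = I0_inv @ rho_ext
    return (np.trace(W_dag @ I0_inv) + beta0 ** 2 * (v @ W_dag @ v)) / p
```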
**Remark 6.2**.: It is worth mentioning that the constant \(8\) appearing on the right hand side of (6.6) is not optimal and is primarily a result of the proof technique. In fact, by appropriately selecting constants in Propositions 5.3 and 5.4, it is possible to replace \(8\) by \(1+\delta\), where \(0<\delta\ll 1\), at the cost of increasing \(\overline{p}\). We will illustrate the sharpness of the estimate of the worst-case mean-squared error by \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}^{2}]\) in the subsequent numerical results.
**Remark 6.3**.: Relying on similar arguments as in the proof of Theorem 6.1, we are also able to derive pointwise estimates on the Hellinger-Kantorovich distance which hold with high probability. Indeed, noticing that (6.4) holds in the set \(A_{1}\cap A_{2}\), we derive a lower probability bound for \(\mathbb{P}(\varepsilon\in A_{1}\cap A_{2})\) by noticing that
\[\mathbb{P}(\varepsilon\in A_{1}\cap A_{2}) \geq\mathbb{P}(\varepsilon\in A_{1})+\mathbb{P}(\varepsilon\in A _{2})-1\] \[\geq 1-\mathbb{P}(\varepsilon\in\mathbb{R}^{N_{o}}\backslash A_{1}) -\mathbb{P}(\varepsilon\in\mathbb{R}^{N_{o}}\backslash A_{2}).\]
By invoking Lemma A.1, one has
\[\begin{split}\mathbb{P}(\varepsilon\in\mathbb{R}^{N_{o}}\backslash A_{1})&=\mathbb{P}\left(\|\varepsilon\|_{\Sigma^{-1}}>\frac{\theta^{2}\beta_{0}}{64C_{4}}\right)\leq 2\exp\left[-\left(\frac{\theta^{2}\beta_{0}}{64C_{4}}\right)^{2}/(2N_{o})\right],\\ \mathbb{P}(\varepsilon\in\mathbb{R}^{N_{o}}\backslash A_{2})&=\mathbb{P}\left(\|\varepsilon\|_{\Sigma^{-1}}>\frac{p^{1/4}\theta\beta_{0}^{1/2}}{8C_{4}^{1/2}}-\beta_{0}\right)\leq 2\exp\left[-\left(\frac{p^{1/4}\theta\beta_{0}^{1/2}}{8C_{4}^{1/2}}-\beta_{0}\right)^{2}/(2N_{o})\right].\end{split}\]
Hence, since \(\exp(-x^{2})\to 0\) as \(x\to\infty\), we can see that for every \(\delta\in(0,1)\), one can choose \(\beta_{0}\) and \(p\) large enough such that
\[\exp\left[-\left(\frac{\theta^{2}\beta_{0}}{64C_{4}}\right)^{2}/(2N_{o})\right]<\delta/4,\quad\exp\left[-\left(\frac{p^{1/4}\theta\beta_{0}^{1/2}}{8C_{4}^{1/2}}-\beta_{0}\right)^{2}/(2N_{o})\right]<\delta/4,\]
which implies \(\mathbb{P}(\varepsilon\in A_{1}\cap A_{2})\geq 1-\delta\). Therefore, with probability at least \(1-\delta\), we have
\[\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2} \leq 8\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}^{2}\]
for a realization \(\varepsilon\) of the noise. Furthermore, employing Lemma A.1 again, we know that with probability at least \(1-\delta\), and independently of \(p\), one has \(\|\varepsilon\|_{\Sigma^{-1}}\leq\sqrt{-2N_{o}\ln(\delta/2)}\). Hence, by Proposition 5.4 together with \(\varepsilon\in A_{1}\cap A_{2}\), we have
\[\begin{split}\sup_{\mu\in\mathfrak{M}(\varepsilon)}d_{\mathrm{HK}}(\mu,\mu^{\dagger})^{2}&\leq 2C_{1}^{2}p^{-1}(\|\varepsilon\|_{\Sigma^{-1}}+\beta_{0})^{2}\\ &\leq 2C_{1}^{2}p^{-1}\left(\sqrt{-2N_{o}\ln(\delta/2)}+\beta_{0}\right)^{2}\end{split}\]
with probability at least \(1-2\delta\).
## 7. Numerical results
We end this paper with the study of some numerical examples to illustrate our theory. We consider a simplified version of Example 3.1:
* The source domain \(\Omega_{s}\) and observation domain \(\Omega_{o}\) are the interval \([-1,1]\).
* The reference measure is given by \(\mu^{\dagger}=0.4\delta_{-0.7}+0.3\delta_{-0.3}-0.2\delta_{0.3}\in\mathcal{M} (\Omega_{s})\).
* The kernel \(k:[-1,1]\times[-1,1]\to\mathbb{R}\) is defined as \[k(x,y)=\exp\left(-\frac{(x-y)^{2}}{2\sigma^{2}}\right),\sigma=0.2,\quad x,y \in[-1,1].\]
* The measurement points \(\{x_{1},\ldots,x_{N_{o}}\}\subset\Omega_{o}\) vary between the individual examples and are marked by grey points in the respective plots. The associated noise model is given by \(\varepsilon\sim\mathcal{N}(0,\Sigma)\) with \(\Sigma^{-1}=p\Sigma_{0}^{-1}\), where \(\Sigma_{0}^{-1}=(1/N_{o})\operatorname{Id}_{N_{o}}\).
Following our theory, we attempt to recover \(\mu^{\dagger}\) by solving \((\mathcal{P}_{\beta,\varepsilon})\) using the a priori parameter choice rule \(\beta(p)=\beta_{0}/\sqrt{p}\). The regularized problems are solved by the Primal-Dual Active Point (PDAP) method [22, 26], yielding a solution \(\bar{\mu}\). Since the action of the forward operator \(K\) on sparse measures can be computed analytically, the algorithm is implemented in a grid-free manner. In addition, we compute a stationary point \(\widehat{\mathbf{m}}\) of the nonconvex problem (5.1), inducing the measure \(\widehat{\mu}\) from (5.18). This is done by an iteration similar to the Gauss-Newton sequence (5.15), with a nonsmooth adaptation to handle the \(\ell_{1}\)-norm and an added globalization procedure to make it converge without restrictions on the data. We note that this solution depends on the initialization of the algorithm at \(\mathbf{m}^{\dagger}\), which is usually unavailable in practice. To evaluate the reconstruction results qualitatively, we follow [8] by considering the dual certificates and pre-certificates; see Section 3. Our Matlab implementation is available at [https://github.com/hphuoctruong/OED_SparseInverseProblems](https://github.com/hphuoctruong/OED_SparseInverseProblems).
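For orientation, the following Python snippet (an illustrative re-implementation of the setup, not the Matlab code linked above) assembles the kernel matrix, the exact data \(K\mu^{\dagger}\) and one synthetic noisy measurement:

```python
import numpy as np

def kernel(x, y, sigma=0.2):
    """Gaussian kernel k(x, y) from the setup above."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# ground truth mu^dagger = 0.4*delta_{-0.7} + 0.3*delta_{-0.3} - 0.2*delta_{0.3}
y_dag = np.array([-0.7, -0.3, 0.3])
q_dag = np.array([0.4, 0.3, -0.2])

N_o = 9
x_obs = np.linspace(-1.0, 1.0, N_o)         # uniformly distributed sensors
K = kernel(x_obs[:, None], y_dag[None, :])  # (N_o, N_s) matrix k[x, y^dagger]
z_exact = K @ q_dag                         # exact data K mu^dagger

p = 1e4
Sigma = (N_o / p) * np.eye(N_o)             # Sigma^{-1} = p Sigma_0^{-1}, Sigma_0^{-1} = Id/N_o
rng = np.random.default_rng(0)
eps = rng.multivariate_normal(np.zeros(N_o), Sigma)
z_noisy = z_exact + eps                     # synthetic measurement z^d(eps)
```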
**Example 1**.: In the first example, we illustrate the reconstruction capabilities of the proposed ansatz for different measurement setups and with and without noise in the observations. To this end, we attempt to recover the reference measure \(\mu^{\dagger}\) using a variable number \(N_{o}\) of uniformly distributed sensors. For noisy data, the regularization parameter is selected as \(\beta=\beta_{0}/\sqrt{p}\) where \(\beta_{0}=2\) and \(p=10^{4}\). We first consider the exact measurement data with \(N_{o}\in\{6,9,11\}\) and try to obtain \(\mu^{\dagger}\) by solving \((\mathcal{P}_{0})\). The results are shown in Figure 1. We observe that with \(6\) sensors, the pre-certificate \(\eta_{\mathrm{PC}}\) is not admissible. Recalling [8, Proposition 7], this implies that \(\mu^{\dagger}\) is not a minimum norm solution. In contrast, the experiments with \(9\) and \(11\) uniform sensors provide
admissible pre-certificates. In these situations, the pre-certificates coincide with the minimum norm dual certificates and the ground truth \(\mu^{\dagger}\) is indeed an identifiable minimum norm solution.
Figure 1. Reconstruction results with exact data using 6 sensors (left), 9 sensors (middle) and 11 sensors (right)

Next, we consider noisy data and solve \((\mathcal{P}_{\beta,\varepsilon})\) for the aforementioned choice of \(\beta(p)\). Following the observation in the first example, we only evaluate the reconstruction results obtained by 9 and 11 uniform sensors. In the absence of measurement data obtained from experiments, we generate synthetic noisy measurements where the noise vector \(\varepsilon\) is a realization of the Gaussian random noise \(\varepsilon\sim\mathcal{N}(0,\Sigma)\). The results are shown in Figure 2. Since \(\mu^{\dagger}\) is identifiable in these cases, \(\widehat{\mu}\) and \(\bar{\mu}\) coincide and closely approximate \(\mu^{\dagger}\) with high probability for an appropriate choice of \(\beta_{0}\) and \(p\) large enough. Both properties can be clearly observed in the plots, where \(\beta_{0}=2\).

Figure 2. Reconstruction results with noisy data using 9 sensors (left) and 11 sensors (right)
**Example 2**.: In the second example we study the influence of the parameter choice rule on the reconstruction result. To this end, we fix the measurement setup to \(9\) uniformly distributed sensors. We recall that the a priori parameter choice rule is given by \(\beta(p)=\beta_{0}/\sqrt{p}\). According to Section 6.2, selecting a sufficiently large value for \(\beta_{0}\) is recommended to achieve a high quality reconstruction. To determine a useful range of regularization parameters, we solve problem \((\mathcal{P}_{\beta,\varepsilon})\) for a sequence of regularization parameters using PDAP. Here, we choose \(\beta_{0}\in\{0.5,1,2\}\) and \(p\in\{10^{4},10^{5},10^{6}\}\).
In Figure 3, different reconstruction results are shown for the same realization of noise, \(\beta_{0}\in\{0.5,1,2\}\) and \(p=10^{4}\). As one can see, for this particular realization of the noise, the number of spikes is recovered exactly in the case \(\beta_{0}=2\), and we again observe that \(\widehat{\mu}=\bar{\mu}\). In contrast, for smaller \(\beta_{0}\), the noisy pre-certificate is not admissible. Hence, while \(\widehat{\mu}\) still provides a good approximation of \(\mu^{\dagger}\), \(\bar{\mu}\) admits two additional spikes away from the support of \(\mu^{\dagger}\). These observations can be explained by looking at Theorem 6.1: the second term on the right hand side of the inequality becomes negligible for increasing \(\beta_{0}\) and large enough \(p\). Thus, roughly speaking, the parameter \(\beta_{0}\) controls the probability of the "good events" in which \(\widehat{\mu}\) is the unique solution of \((\mathcal{P}_{\beta,\varepsilon})\).
Finally, we address the reconstruction error from a quantitative perspective. For this purpose, we simplify the evaluation of the maximum mean-squared error (MSE) by inserting the solution \(\bar{\mu}\) computed algorithmically. We note that this could only lead to an under-estimation of the maximum error in the case of non-unique solutions of \((\mathcal{P}_{\beta,\varepsilon})\); a degenerate case that is unlikely to occur in practice. Moreover, the expectation is approximated using \(10^{3}\) Monte-Carlo samples. Additionally, we use the closed form expression (6.2) for evaluating the linearized estimate \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|_{W_{\dagger}}^{2}]\) exactly. Here, the expectations are computed for \(\beta_{0}\in\{2,0.5\}\). The results are collected in Table 1. We make several observations: Clearly, the MSE decreases for increasing \(p\), i.e., lower noise level. For increased \(\beta_{0}\), the behavior differs: for the theoretical quantities \(\widehat{\mathbf{m}}\) and \(\delta\widehat{\mathbf{m}}\), increased \(\beta_{0}\) only introduces additional bias and thus increases the error. For the estimator \(\bar{\mu}\), however, the increased regularization leads to generally improved results, since the probability of \(\widehat{\mu}\neq\bar{\mu}\) is decreased. We
highlight in bold the estimator which performed best for each \(\beta_{0}\). Here, the results conform to Theorem 6.1: For larger \(\beta_{0}\), the second term on the right-hand side of (6.1) is negligible and the linearized estimate provides an excellent bound on the MSE for both \(\widehat{\mu}\) and \(\bar{\mu}\). We also note that the estimate is closer to the MSE in the limit of larger \(p\). In contrast, for \(\beta_{0}=0.5\), the linearized estimate and the MSE of \(\widehat{\mu}\) are much smaller than the MSE of the estimator \(\bar{\mu}\). This underlines the observation that Theorem 5.9 requires further restrictions on the admissible noises in comparison to Proposition 5.4.
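The Monte-Carlo approximation of the expectations in Table 1 can be organized as in the following sketch; the callables `solve` (e.g. a PDAP solve of \((\mathcal{P}_{\beta,\varepsilon})\)) and `d2_hk` (squared Hellinger-Kantorovich distance to \(\mu^{\dagger}\)) are placeholders for routines not shown here:

```python
import numpy as np

def monte_carlo_mse(solve, d2_hk, Sigma, n_samples=1000, seed=0):
    """Approximate E[ d_HK(mu^dagger, mu_bar)^2 ] over noise realizations."""
    rng = np.random.default_rng(seed)
    N_o = Sigma.shape[0]
    values = np.empty(n_samples)
    for i in range(n_samples):
        eps = rng.multivariate_normal(np.zeros(N_o), Sigma)
        values[i] = d2_hk(solve(eps))  # reconstruct, then measure the squared error
    return float(values.mean())
```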
**Example 3**.: The final example is devoted to comparing the reconstruction results obtained by uniform designs and an improved design chosen by heuristics. To this end, we consider three measurement setups: uniformly distributed setups with \(6\) and \(11\) sensors, respectively, and one with \(6\) sensors selected on purpose. More precisely, in the latter case, we place the sensors at \(\{-0.8,-0.6,-0.4,-0.1,0.1,0.4\}\subset\varOmega_{o}\). The different error measures are computed as in the previous example and the results are gathered in Table 2.
We observe that the measurement setup with \(6\) selected sensors performs better than the uniform ones. Moreover, the linearized estimate again provides a sharp upper bound on the error for both eleven uniform and six selected sensors, but yields numerically singular Fisher information matrices for six uniform sensors (denoted as Inf in the table), i.e. \(\mu^{\dagger}\) is not stably identifiable in this case. Note that the estimator \(\bar{\mu}\) still yields somewhat useful results, which are however affected by a constant error due to the difference between the minimum norm solution and the exact source, as depicted in Figure 1, and do not improve with lower noise level. These results suggest that the reconstruction quality does not only rely on the amount of measurements taken but also on their specific setup. In this case, we point out that the selected sensors are chosen to be adapted to the sources, as two sensors are placed on the two sides of every source. Thus the obtained results imply that if we have some reasonable prior information on the source positions and amplitudes, one may obtain a better sensor placement setup by incorporating it in the design of the measurement setup. This leads to the concept of optimal sensor placement problems for sparse inversion which we will consider in a future work.
\begin{table}
\begin{tabular}{c c c c c c}
\hline\hline
 & & \(p=10^{4}\) & \(p=10^{5}\) & \(p=10^{6}\) & \(p=10^{7}\) \\
\cline{3-6}
 & \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}]\) & \(7.97\cdot 10^{-3}\) & \(7.97\cdot 10^{-4}\) & \(7.97\cdot 10^{-5}\) & \(7.97\cdot 10^{-6}\) \\
\(\beta_{0}=2\) & \(\mathbb{E}_{\gamma_{p}}[d_{\mathrm{HK}}(\mu^{\dagger},\widehat{\mu})^{2}]\) & \(5.43\cdot 10^{-3}\) & \(6.49\cdot 10^{-4}\) & \(7.35\cdot 10^{-5}\) & \(7.76\cdot 10^{-6}\) \\
 & \(\mathbb{E}_{\gamma_{p}}[d_{\mathrm{HK}}(\mu^{\dagger},\bar{\mu})^{2}]\) & \(5.44\cdot 10^{-3}\) & \(6.51\cdot 10^{-4}\) & \(7.42\cdot 10^{-5}\) & \(7.99\cdot 10^{-6}\) \\
\hline
 & \(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}]\) & \(2.07\cdot 10^{-3}\) & \(2.07\cdot 10^{-4}\) & \(2.07\cdot 10^{-5}\) & \(2.07\cdot 10^{-6}\) \\
\(\beta_{0}=0.5\) & \(\mathbb{E}_{\gamma_{p}}[d_{\mathrm{HK}}(\mu^{\dagger},\widehat{\mu})^{2}]\) & \(1.71\cdot 10^{-3}\) & \(1.94\cdot 10^{-4}\) & \(2.03\cdot 10^{-5}\) & \(2.06\cdot 10^{-6}\) \\
 & \(\mathbb{E}_{\gamma_{p}}[d_{\mathrm{HK}}(\mu^{\dagger},\bar{\mu})^{2}]\) & \(\mathbf{4.12\cdot 10^{-3}}\) & \(9.83\cdot 10^{-4}\) & \(2.65\cdot 10^{-4}\) & \(7.94\cdot 10^{-5}\) \\
\hline\hline
\end{tabular}
\end{table}
Table 1. Reconstruction results with \(\beta_{0}=2\) and \(\beta_{0}=0.5\).
\begin{table}
\begin{tabular}{c c c c c}
\hline\hline
 & & \(11\) sensors & “selected” \(6\) sensors & \(6\) sensors \\
\cline{3-5}
 & \(p=10^{4}\) & \(5.03\cdot 10^{-3}\) & \(\mathbf{4.25\cdot 10^{-3}}\) & \(1.54\cdot 10^{-2}\) \\
\(\mathbb{E}_{\gamma_{p}}[d_{\mathrm{HK}}(\mu^{\dagger},\bar{\mu})^{2}]\) & \(p=10^{5}\) & \(5.61\cdot 10^{-4}\) & \(\mathbf{4.58\cdot 10^{-4}}\) & \(1.19\cdot 10^{-2}\) \\
 & \(p=10^{6}\) & \(6.31\cdot 10^{-5}\) & \(\mathbf{4.65\cdot 10^{-5}}\) & \(1.18\cdot 10^{-2}\) \\
\hline
 & \(p=10^{4}\) & \(6.09\cdot 10^{-3}\) & \(4.77\cdot 10^{-3}\) & \\
\(\mathbb{E}_{\gamma_{p}}[\|\delta\widehat{\mathbf{m}}\|^{2}_{W_{\dagger}}]\) & \(p=10^{5}\) & \(6.09\cdot 10^{-4}\) & \(4.77\cdot 10^{-4}\) & Inf \\
 & \(p=10^{6}\) & \(6.09\cdot 10^{-5}\) & \(4.77\cdot 10^{-5}\) & \\
\hline\hline
\end{tabular}
\end{table}
Table 2. Reconstruction results with different sensor setups.
**Acknowledgments.** The work of P.-T. Huynh was supported by the Austrian Science Fund FWF under the grants DOC 78. The material in this manuscript is based on work supported by the Laboratory Directed Research and Development Program at Oak Ridge National Laboratory (ORNL), managed by UT-Battelle, LLC, under Contract No. DE-AC05-00OR22725. The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)).
|
2302.04176 | Parameter estimates and a uniqueness result for double phase problem with a singular nonlinearity | We consider the boundary value problem $-\Delta_p u_\lambda -\Delta_q u_\lambda =\lambda g(x) u_\lambda^{-\beta}$ in $\Omega$, $u_\lambda=0$ on $\partial \Omega$ with $u_\lambda>0$ in $\Omega.$ We assume $\Omega$ is a bounded open set in $\mathbb{R}^N$ with smooth boundary, $1<p<q<\infty$, $\beta\in [0,1),$ $g$ is a positive weight function and $\lambda$ is a positive parameter. We derive an estimate for $u_\lambda$ which describes its exact behavior when the parameter $\lambda$ is large. In general, by invoking appropriate comparison principles, this estimate can be used as a powerful tool in deducing the existence, non-existence and multiplicity of positive solutions of nonlinear elliptic boundary value problems. Here, as an application of this estimate, we obtain a uniqueness result for a nonlinear elliptic boundary value problem with a singular nonlinearity. | R. Dhanya, M. S. Indulekha | 2023-02-08T16:30:33Z | http://arxiv.org/abs/2302.04176v1 |
# Parameter estimates and a uniqueness result for double phase problem with a singular nonlinearity
###### Abstract
We consider the boundary value problem \(-\Delta_{p}u_{\lambda}-\Delta_{q}u_{\lambda}=\lambda g(x)u_{\lambda}^{-\beta}\) in \(\Omega\), \(u_{\lambda}=0\) on \(\partial\Omega\) with \(u_{\lambda}>0\) in \(\Omega.\) We assume \(\Omega\) is a bounded open set in \(\mathbb{R}^{N}\) with smooth boundary, \(1<p<q<\infty,\)\(\beta\in[0,1),\)\(g\) is a positive weight function and \(\lambda\) is a positive parameter. We derive an estimate for \(u_{\lambda}\) which describes its exact behavior when the parameter \(\lambda\) is large. In general, by invoking appropriate comparison principles, this estimate can be used as a powerful tool in deducing the existence, non-existence and multiplicity of positive solutions of nonlinear elliptic boundary value problems. Here, as an application of this estimate, we obtain a uniqueness result for a nonlinear elliptic boundary value problem with a singular nonlinearity.
keywords: p-q Laplacian, \(L^{\infty}\) estimates, uniqueness. MSC: 35A15, 35B33, 35R11, 35J20
## 1 Introduction
We are interested in the positive solution of the non-homogeneous quasi-linear boundary value problem
\[(P_{\lambda})\left\{\begin{array}{rcl}-\Delta_{p}u-\Delta_{q}u&=&\lambda h(x,u)\text{ in }\Omega\\ u&=&0\text{ on }\partial\Omega\end{array}\right.\]
where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^{N},\)\(1<p<q<\infty\) and \(\lambda>0.\) We say that \(u\) is a solution of \((P_{\lambda})\) if \(u\in W_{0}^{1,q}(\Omega)\) and satisfies the PDE in the weak sense:
\[\int_{\Omega}|\nabla u|^{p-2}\nabla u\nabla\varphi+\int_{\Omega}|\nabla u|^{q- 2}\nabla u\nabla\varphi=\lambda\int_{\Omega}h(x,u)\varphi\text{ for all }\varphi\in C_{c}^{\infty}(\Omega)\]
The main focus of this article is to obtain the asymptotic estimate for the positive solution \(u_{\lambda}\) of \((P_{\lambda})\) when the function \(h(x,u)\) takes the form \(g(x)u^{-\beta}\) for \(\beta\in[0,1)\), along with certain conditions on \(g(x)\). Moreover, we consider the uniqueness of the positive solution of \((P_{\lambda})\) for large parameters when \(h(x,u)=f(u)u^{-\beta}\) and \(1<p<q\leq 2\).
Problems of the type \((P_{\lambda})\) are closely associated to the minimization of certain energy functionals of the type
\[u\mapsto\int_{\Omega}|\nabla u|^{p}+a(x)|\nabla u|^{q}dx \tag{1}\]
where \(1<p<q<\infty\), \(\Omega\subset\mathbb{R}^{N}\) is a bounded domain and \(a:\mathbb{R}\rightarrow[0,\infty)\) is a measurable function. These functionals are called double phase functionals and were introduced by Zhykov ([18], [19]) to model strongly anisotropic materials. The differential operator counterpart of (1), \(u\mapsto-\Delta_{p}u-a(x)\Delta_{q}u\), known as the double phase operator, is used in modelling a variety of physical problems in plasma physics [17], biophysics [6], reaction-diffusion systems [3], etc., and is also well analyzed theoretically.
The quasilinear elliptic operator \(\mathcal{L}_{p,q}u:=\Delta_{p}u+\Delta_{q}u\) is called the p-q Laplace operator and reduces to the standard p Laplace operator when \(p=q\). It is observed that several existence and multiplicity results for the p Laplacian can be extended to the p-q Laplacian as well, since the associated energy functional preserves the required variational structure (see [9] and the references therein). But the non-homogeneous nature of the operator poses a greater challenge in understanding certain problems, like the qualitative properties of the solutions of the nonlinear p-q Laplace equation, the eigenvalue problem of the p-q Laplacian, etc. In this work, we mainly address one such difficulty associated with non-homogeneity, namely the scaling property of the solutions of the p-q Laplace equation. For instance, if \(u_{\lambda}\) is the unique solution of \(-\Delta_{p}u_{\lambda}=\lambda\) in \(\Omega\) and \(u_{\lambda}=0\) on \(\partial\Omega,\) then it can easily be verified that \(u_{\lambda}=\lambda^{\frac{1}{p-1}}u_{1}\), as the computation below shows. A result of this type is impossible for the p-q Laplace operator due to its non-homogeneous nature. The novelty of this work lies in obtaining the exact asymptotic estimate of the solutions of certain nonlinear elliptic problems involving the p-q Laplace operator, which is given in Theorem 1.1.
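To make the obstruction explicit, the homogeneity computation behind this scaling reads

\[-\Delta_{p}\big(\lambda^{\frac{1}{p-1}}u_{1}\big)=-\operatorname{div}\Big(\big|\lambda^{\frac{1}{p-1}}\nabla u_{1}\big|^{p-2}\lambda^{\frac{1}{p-1}}\nabla u_{1}\Big)=\lambda\,(-\Delta_{p}u_{1})=\lambda\quad\text{in }\Omega,\]

since \(\big(\lambda^{\frac{1}{p-1}}\big)^{p-1}=\lambda\). For the p-q Laplace operator, one instead has \(-\Delta_{p}(tu)-\Delta_{q}(tu)=t^{p-1}(-\Delta_{p}u)+t^{q-1}(-\Delta_{q}u)\) for \(t>0\), and when \(p<q\) no single power of \(\lambda\) balances both terms, so only asymptotic two-sided bounds such as (3) can be expected.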
For the rest of the paper we denote \(d(x):=d(x,\partial\Omega),\) the distance function in \(\Omega.\)
**Theorem 1.1**.: _Let \(u_{\lambda}\) be the unique positive solution of the boundary value problem_
\[\begin{split}-\Delta_{p}u-\Delta_{q}u&=\lambda\frac {g(x)}{u^{\beta}}\text{ in }\Omega\\ u&=0\text{ on }\partial\Omega\end{split} \tag{2}\]
_where \(1<p<q<\infty\), \(0<\beta<1\), \(\lambda>0\) and \(g\) satisfies the hypothesis \((G_{\beta})\) given below:_
_(\(G_{\beta}\)) For a given \(\beta\in[0,1)\), there exist \(\delta\in(0,1-\beta)\) and \(C>0\) such that \(0<g(x)\leq Cd(x)^{-\delta}\) in \(\Omega\)._
_Then for a given \(\lambda_{0}>0,\) there exist constants \(c_{1},c_{2}>0\) such that_
\[c_{1}\lambda^{\frac{1}{q-1+\beta}}d(x)\leq u_{\lambda}(x)\leq c_{2}\lambda^{ \frac{1}{q-1+\beta}}d(x) \tag{3}\]
_for all \(\lambda\geq\lambda_{0}\) and for every \(x\in\Omega.\)_
The existence, uniqueness and Hölder regularity for (2) are proved in [7]. In this work, we are interested in the asymptotic behavior of the solution \(u_{\lambda}\) as \(\lambda\) tends to infinity. This result is proved in Theorem 1.1, where the main idea of the proof is inspired by the discussion on the scaling property of the solutions of the p-q Laplace operator given in [9]. The proof inevitably depends on the \(C^{1,\alpha}\) regularity result proved by Giacomoni et al. in [7]. As mentioned in the abstract, estimates of this type are known to be of great importance in establishing several qualitative properties of the solutions of \((P_{\lambda})\). In addition, the ideas presented here can also be extended to a certain class of sign-changing weight functions. In an ongoing work, we derive similar estimates to prove the existence of a solution to a non-homogeneous boundary value problem with an indefinite singular reaction term.
In this article, next we focus our attention to the quasilinear nonlinear elliptic problems of the type
\[\begin{split}-\Delta_{p}u-\Delta_{q}u=\lambda\frac{f(u)}{u^{\beta}} \text{ in }\Omega\\ u>0\;\text{ in }\Omega\\ u=0\text{ on }\partial\Omega\end{split} \tag{4}\]
Recently, the existence and multiplicity of solutions for the p-q Laplace equation with singular nonlinearity have gained considerable attention. Arora [2] and Acharya et al. [1] proved the existence of two solutions of (4) using the technique of sub-super solutions. In a series of papers ([11], [12], [13]), Papageorgiou and Winkert have explored bifurcation-type results describing the changes in the set of positive solutions of the problem as the parameter \(\lambda\) varies. The reaction term considered in their papers has the combined effects of a singular term as well as a super-diffusive growth term. In [10], Papageorgiou et al. have also provided a similar bifurcation-type result for more general non-homogeneous differential operators. These works deal with existence and multiplicity results for (4), whereas we are interested in the uniqueness aspect of the solution for large \(\lambda\). In this context, the authors in [4] and [5] have proved the uniqueness of solutions of (4) when \(p=q\) and with certain additional assumptions on \(f\). In section 3 of this paper, we provide a concrete application of Theorem 1.1 by proving a uniqueness result for the elliptic boundary value problem (4); the result is stated below.
**Theorem 1.2**.: _Let \(1<p<q\leq 2\) and consider the elliptic boundary value problem (4) where \(f\) satisfies the conditions_
1. \(f:[0,\infty)\to(0,\infty)\) _is of class_ \(C^{1}\) _with_ \(\inf_{[0,\infty)}f>0\)_, and_
2. _There exists a constant_ \(a>0\) _such that_ \(\frac{f(z)}{z^{\beta}}\) _is decreasing on_ \([a,\infty)\)_._
_Also, we assume either \(0<\beta<\frac{(q-1)(1+p-q)}{1+q-p}\), or that \(\sup_{[0,\infty)}f<\infty\) and \(0<\beta<1-q+p\). Then there exists \(\lambda_{0}>0\) such that (4) has a unique solution for every \(\lambda>\lambda_{0}.\)_
The above theorem extends the result proved for the p-Laplace equation in [5]. Our paper is organized as follows. The proofs of the main results, Theorems 1.1 and 1.2, are detailed in sections 2 and 3, respectively. In the appendix, we prove the existence of a solution for the problem (4) satisfying the conditions \((A1)\) and \((A2)\), as well as an \(L^{\infty}\) regularity result.
## 2 Estimates
Before proving the main results, we state a version of the regularity theorem (Theorem 1.7, [7]), which is used several times in our paper to obtain uniform \(C^{1,\alpha}\) bounds for weak solutions. We say \(g\) satisfies the condition \((G_{0})\) if \(0<g(x)\leq Cd(x)^{-\delta}\) for some \(C>0\) and \(\delta\in(0,1).\)
**Theorem 2.1**.: _Let \(u\in W^{1,q}_{0}(\Omega)\) be the weak solution of the BVP_
\[\begin{split}-\mu\Delta_{p}u-\Delta_{q}u=g(x)\text{ in }\Omega\\ u=0\text{ on }\partial\Omega\end{split} \tag{5}\]
_where \(1<p<q<\infty\), \(0\leq\mu\leq\mu_{0}\) for some \(\mu_{0}>0\) and \(g\) satisfies the condition \((G_{0}).\) Suppose \(0\leq u\leq M\) and \(0\leq u\leq Kd(x)\) in \(\Omega\) for some constants \(M\) and \(K\). Then there exists a constant \(\alpha\in(0,1)\) depending only on \(N,q,\delta\) such that \(u\in C^{1,\alpha}(\bar{\Omega})\) and_
\[\left\|u\right\|_{C^{1,\alpha}(\bar{\Omega})}\leq C(N,q,\delta,M,\Omega). \tag{6}\]
Now, we state and prove a proposition which provides the case \(\beta=0\) of Theorem 1.1.
**Proposition 2.1**.: _Let \(v_{\lambda}\) be the unique solution of the following quasilinear BVP:_
\[\begin{split}-\Delta_{p}v-\Delta_{q}v=\lambda g(x)\text{ in } \Omega\\ v=0\text{ on }\partial\Omega\end{split} \tag{7}\]
_where \(1<p<q<\infty\), \(\lambda>0,\) and \(g\) satisfies the hypothesis \((G_{0}).\) Then for a given \(\lambda_{0}>0,\) there exist positive constants \(c_{1},c_{2}\) such that_
\[c_{1}\lambda^{\frac{1}{q-1}}d(x)\leq v_{\lambda}(x)\leq c_{2}\lambda^{\frac{1} {q-1}}d(x)\]
_for all \(\lambda\geq\lambda_{0}\) and for every \(x\in\Omega.\)_
**Proof:** The existence and uniqueness of the weak solution \(v_{\lambda}\in W_{0}^{1,q}(\Omega)\) of (7) can be obtained by the standard minimization technique. Next, we can show that the map \(\lambda\mapsto v_{\lambda}\) is monotone, namely:
\[v_{\lambda_{1}}(x)\leq v_{\lambda_{2}}(x)\text{ \ if \ }\lambda_{1}\leq \lambda_{2}\]
by taking \((v_{\lambda_{1}}-v_{\lambda_{2}})^{+}\) as the test function in the weak formulation. Motivated by the discussions in the work of Marano and Mosconi [9], we define:
\[\tilde{v}_{\lambda}:=\lambda^{\frac{-1}{q-1}}v_{\lambda} \tag{8}\]
and prove the asymptotic behavior of the solution \(\tilde{v}_{\lambda}\) for large \(\lambda.\) Using this substitution, the boundary value problem (7) reduces to
\[\begin{split}-\mu\Delta_{p}\tilde{v}_{\lambda}-\Delta_{q}\tilde{v }_{\lambda}=g(x)\text{ in }\Omega\\ \tilde{v}_{\lambda}=0\text{ on }\partial\Omega\end{split} \tag{9}\]
where \(\mu=\lambda^{\frac{p-q}{q-1}}\). It is important to note that \(\mu\to 0\) as \(\lambda\) tends to infinity. We note that, by Remark 4.1 of the Appendix, \(\|\tilde{v}_{\lambda}\|_{\infty}\) is uniformly bounded as \(\lambda\rightarrow\infty.\) Thanks to Proposition 2.7 of [7], we can now apply Theorem 2.1 and obtain that \(\{\tilde{v}_{\lambda}\}\) is uniformly bounded in \(C^{1,\alpha}(\bar{\Omega})\) for some \(\alpha\in(0,1)\). Thus, by the Ascoli-Arzelà theorem, up to a subsequence
\[\tilde{v}_{\lambda}\to v_{0}\text{ in }C^{1}_{0}(\overline{\Omega})\text{ as }\lambda\rightarrow\infty. \tag{10}\]
Now, for any \(\phi\in C^{\infty}_{c}(\Omega),\)
\[\mu\int_{\Omega}|\nabla\tilde{v}_{\lambda}|^{p-2}\nabla\tilde{v}_{\lambda} \boldsymbol{\cdot}\nabla\phi dx+\int_{\Omega}|\nabla\tilde{v}_{\lambda}|^{q-2} \nabla\tilde{v}_{\lambda}\boldsymbol{\cdot}\nabla\phi dx=\int_{\Omega}\phi gdx\]
Since \(\nabla\tilde{v}_{\lambda}\rightarrow\nabla v_{0}\) uniformly in \(\Omega,\) applying the limit as \(\lambda\rightarrow\infty\) on either side, we get
\[\int_{\Omega}|\nabla v_{0}|^{q-2}\nabla v_{0}\boldsymbol{\cdot}\nabla\phi dx= \int_{\Omega}\phi gdx\]
This implies that \(v_{0}\) is the weak solution of
\[\begin{split}-\Delta_{q}v_{0}=g(x)\text{ in }\Omega\\ v_{0}=0\text{ on }\partial\Omega\end{split} \tag{11}\]
By the uniqueness of the weak solution \(v_{0}\), the original sequence \(\tilde{v}_{\lambda}\) itself converges to \(v_{0}\) in \(C^{1}_{0}(\overline{\Omega})\). Let \(\nu\) denote the unit outward normal on \(\partial\Omega\) and \(m=\max_{x\in\partial\Omega}\frac{\partial v_{0}}{\partial\nu}(x).\) We know that \(m<0\), thanks to Theorem 5 of Vázquez [16]. By the uniform convergence \(\frac{\partial\tilde{v}_{\lambda}}{\partial\nu}\rightarrow\frac{\partial v_{0}}{\partial\nu}\), we can find a \(\lambda^{\prime}>0\) for which \(\frac{\partial\tilde{v}_{\lambda}}{\partial\nu}(x)<\frac{m}{2}\ \ \forall\ x\in\partial\Omega\) and for all \(\lambda\geq\lambda^{\prime}.\) Since \(\Omega\) is assumed to be a smooth bounded domain in \(\mathbb{R}^{N}\), there exists a \(c>0\) independent of \(\lambda\geq\lambda^{\prime}\) such that
\[\tilde{v}_{\lambda}(x)\geq cd(x)\ \text{for}\ \lambda\geq\lambda^{\prime}\ \text{and}\ \forall x\in\Omega. \tag{12}\]
Once again using Theorem 2.1 for some constant \(C>0\)
\[\tilde{v}_{\lambda}(x)\leq C\ d(x)\ \text{for}\ \lambda\geq\lambda^{\prime}\ \text{and}\ \forall x\in\Omega. \tag{13}\]
Finally, using the monotonicity of \(v_{\lambda}\), in any compact subinterval \([a,b]\) of \((0,\infty)\) there exist positive constants \(m_{1},m_{2}\) such that \(m_{1}d(x)\leq v_{\lambda}(x)\leq m_{2}d(x)\) for all \(\lambda\in[a,b].\) Combining this property of \(v_{\lambda}\) with the lower and upper estimates (12), (13) for \(\tilde{v}_{\lambda}\), we infer that for any given \(\lambda_{0}>0\) there exist constants \(c_{1},c_{2}\) (which depend only on \(\lambda_{0}\)) such that
\[c_{1}\lambda^{\frac{1}{q-1}}d(x)\leq v_{\lambda}(x)\leq c_{2}\lambda^{\frac{1 }{q-1}}d(x).\]
This completes the proof.
We use the above proposition to prove our main result of this paper.
Proof of Theorem 1.1.: We define \(\tilde{u}_{\lambda}:=\lambda^{\frac{-1}{q-1+\beta}}u_{\lambda}\). Then, clearly \(\tilde{u}_{\lambda}\in W^{1,q}_{0}(\Omega)\) is the unique weak solution of the boundary value problem
\[\begin{split}-\gamma\Delta_{p}\tilde{u}_{\lambda}-\Delta_{q} \tilde{u}_{\lambda}&=\frac{g(x)}{(\tilde{u}_{\lambda})^{\beta}} \ \text{in}\ \Omega\\ \tilde{u}_{\lambda}&=0\ \text{on}\ \partial\Omega \end{split} \tag{14}\]
where \(\gamma=\lambda^{\frac{p-q}{q-1+\beta}}\). As before, we note that \(\gamma\to 0\) as \(\lambda\rightarrow\infty\) and vice versa. Now, using the Moser iteration technique, we can prove that \(\|\tilde{u}_{\lambda}\|_{L^{\infty}}\leq M_{0}\) for some \(M_{0}>0\) independent of \(\gamma\). The details of the uniform \(L^{\infty}\) estimate are given in the appendix (Theorem 4.1). Therefore, we have \(\frac{g(x)}{(\tilde{u}_{\lambda})^{\beta}}\geq\frac{g(x)}{M_{0}^{\beta}}\). By the weak comparison principle, \(\tilde{u}_{\lambda}\geq w_{\gamma}\), where \(w_{\gamma}\) is the unique weak solution of
\[\begin{split}-\gamma\Delta_{p}w_{\gamma}-\Delta_{q}w_{\gamma}& =\frac{g(x)}{M_{0}^{\beta}}\ \ \text{in}\ \Omega\\ w_{\gamma}&=0\ \text{on}\ \partial\Omega\end{split}\]
Following the ideas in the proof of Proposition 2.1, we find a \(c_{1}>0\) such that \(w_{\gamma}\geq c_{1}d(x)\) for all \(\gamma\in(0,\gamma_{0}).\) This implies, for large \(\lambda\)
\[u_{\lambda}(x)\geq c_{1}\lambda^{\frac{1}{q-1+\beta}}d(x)\ \text{for all}\ \lambda\ \text{large and for every}\ x\in\Omega. \tag{15}\]
Next, to obtain an upper bound for \(u_{\lambda}\) we note that
\[-\Delta_{p}u_{\lambda}-\Delta_{q}u_{\lambda}=\lambda\frac{g(x)}{u_{\lambda}^{ \beta}}\leq\lambda^{\frac{q-1}{q-1+\beta}}\frac{g(x)}{c_{1}^{\beta}d(x)^{\beta}}\]
From the assumption \((G_{\beta})\), we have \(g(x)d(x)^{-\beta}\leq Cd(x)^{-(\beta+\delta)}\) and \(\beta+\delta<1.\) Now, let \(z_{\lambda}\) denote the unique solution of
\[\begin{split}-\Delta_{p}z_{\lambda}-\Delta_{q}z_{\lambda}=\lambda ^{\frac{q-1}{q-1+\beta}}c_{1}^{-\beta}& g(x)d(x)^{-\beta}\text{ in }\Omega\\ z_{\lambda}&=0\text{ \ on }\partial\Omega.\end{split}\]
Again, from Proposition 2.1 there exists \(c_{2}>0\) such that \(z_{\lambda}(x)\leq c_{2}\lambda^{\frac{1}{q-1+\beta}}\,d(x)\) for large \(\lambda\). Clearly, by weak comparison principle,
\[u_{\lambda}(x)\leq z_{\lambda}(x)\leq c_{2}\lambda^{\frac{1}{q-1+\beta}}d(x) \text{ for large }\lambda. \tag{16}\]
If \(\lambda_{1}\leq\lambda_{2}\), then taking \((u_{\lambda_{1}}-u_{\lambda_{2}})^{+}\) as the test function in the weak formulation of (2), we can prove that \(u_{\lambda_{1}}\leq u_{\lambda_{2}}\) a.e. in \(\Omega.\) Now, using the monotonicity of \(u_{\lambda}\) along with the estimates (15) and (16), we have the required result.
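For the reader's convenience, the exponent bookkeeping behind the substitution \(\tilde{u}_{\lambda}=\lambda^{-1/(q-1+\beta)}u_{\lambda}\) can be recorded as follows (an added verification sketch, not part of the original proof):

```latex
% Added sketch: write u_\lambda = \lambda^{1/(q-1+\beta)}\tilde{u}_\lambda in (2):
\[
-\lambda^{\frac{p-1}{q-1+\beta}}\Delta_p\tilde{u}_\lambda
 -\lambda^{\frac{q-1}{q-1+\beta}}\Delta_q\tilde{u}_\lambda
 =\lambda^{1-\frac{\beta}{q-1+\beta}}\frac{g(x)}{\tilde{u}_\lambda^{\,\beta}}
 =\lambda^{\frac{q-1}{q-1+\beta}}\frac{g(x)}{\tilde{u}_\lambda^{\,\beta}}.
\]
% Dividing by \lambda^{(q-1)/(q-1+\beta)} yields (14) with
% \gamma = \lambda^{(p-q)/(q-1+\beta)} \to 0 as \lambda \to \infty.
```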
## 3 Uniqueness result
In this section we consider the singular elliptic boundary value problem
\[\begin{split}-\Delta_{p}u-\Delta_{q}u=\lambda\frac{f(u)}{u^{ \beta}}\text{ in }\Omega\\ u>0\text{ in }\Omega\\ u=0\text{ on }\partial\Omega\end{split} \tag{17}\]
and \(f\) satisfies the conditions (A1) and (A2). The existence of at least one positive solution of (17) can be proved using the technique of monotone iterations (see the Appendix for a proof). In this section we only prove the uniqueness of its weak solution when \(\lambda\) is large, i.e., Theorem 1.2. The proof crucially depends on the estimates for (2) when \(\lambda\gg 1\).
**Proof of Theorem 1.2:** We prove the theorem in three steps. It may be noted that Steps 1 and 2 hold true for any \(1<p<q<\infty.\) The restriction to \(1<p<q\leq 2\) is essentially due to Step 3.
**Step \(\mathbf{1}\)**: _There exists \(k_{1}>0\) such that_
\[u(x)\geq k_{1}\lambda^{\frac{1}{q-1+\beta}}d(x)\text{ for large }\lambda. \tag{18}\]
Let \(c=\inf_{[0,\infty)}f;\) then \(c>0\) by \((A1)\). This implies, by the weak comparison principle, that \(u\geq v\), where \(v\in W_{0}^{1,q}(\Omega)\) is the unique weak solution of
\[\begin{split}-\Delta_{p}v-\Delta_{q}v=\frac{\lambda c}{v^{\beta }}\text{ in }\Omega\\ v=0\text{ on }\partial\Omega.\end{split}\]
By Theorem 1.1 above, we know that there exists \(k_{1}\) such that
\[v(x)\geq k_{1}\lambda^{\frac{1}{q-1+\beta}}d(x)\]
for all \(\lambda\geq\lambda_{0}\). Hence, \(u\geq k_{1}\lambda^{\frac{1}{q-1+\beta}}d(x)\) for large \(\lambda\).
**Step 2**: _There exists \(k_{2}>0\) such that \(\|u\|_{C^{1}(\bar{\Omega})}\leq k_{2}\lambda^{\frac{1}{q-1}}.\) In addition if we assume that \(\sup_{[0,\infty)}f<\infty,\) then \(\|u\|_{C^{1}(\bar{\Omega})}\leq k_{2}\lambda^{\frac{1}{q-1+\beta}}.\)_
Let us define \(w=\lambda^{-\frac{1}{(q-1)}}u,\) then for \(\delta=\lambda^{\frac{p-q}{q-1}}\)
\[-\delta\Delta_{p}w-\Delta_{q}w=\frac{f(\lambda^{\frac{1}{q-1}}w)}{(\lambda^{ \frac{1}{q-1}}w)^{\beta}} \tag{19}\]
By (A1)-(A2) and Step 1, we can estimate the RHS of the above equation as shown below.
\[\frac{f(\lambda^{\frac{1}{q-1}}w)}{(\lambda^{\frac{1}{q-1}}w)^{\beta}}\leq\;K (1+\frac{1}{(\lambda^{\frac{1}{q-1}}w)^{\beta}})\leq\;\frac{K_{1}}{d^{\beta}}.\]
Now we can apply Theorem 2.1 to (19) and obtain a constant \(k_{2}\) independent of \(\delta\) such that \(\|w\|_{C^{1}(\bar{\Omega})}\leq k_{2}.\) That is, \(\|u\|_{C^{1}(\bar{\Omega})}\leq k_{2}\lambda^{\frac{1}{q-1}}.\)
In addition if we suppose that \(\sup_{[0,\infty)}f<\infty,\) then we obtain a sharper estimate for the \(C^{1}\) norm of \(u.\) Define \(\tilde{w}:=\lambda^{\frac{-1}{q-1+\beta}}u\). Then for \(\gamma=\lambda^{\frac{p-q}{q-1+\beta}},\)
\[-\gamma\Delta_{p}\tilde{w}-\Delta_{q}\tilde{w}=\frac{f(\lambda^{\frac{1}{q-1+ \beta}}\tilde{w})}{\tilde{w}^{\beta}}\leq\frac{\sup_{[0,\infty)}f}{\tilde{w}^{ \beta}}\]
Using the estimate proven in Step 1 above, one can verify that \(\tilde{w}\) satisfies the hypothesis of Theorem 2.1 and hence \(\tilde{w}\) is uniformly bounded in \(C^{1,\alpha}(\bar{\Omega})\) independent of \(\gamma\). Hence, there exists \(k_{2}>0\) such that \(\|\tilde{w}\|_{C^{1}(\bar{\Omega})}\leq k_{2}\). In other words, \(\|u\|_{C^{1}(\bar{\Omega})}\leq k_{2}\lambda^{\frac{1}{q-1+\beta}}.\)
**Step 3**: _Uniqueness of solution for (17) when \(\lambda\) is large is proved using Step 1 and Step 2._
Suppose, on the contrary, that the solution of (17) is not unique, and let \(u,v\in C^{1,\alpha}(\bar{\Omega})\) be two distinct weak solutions of (17). Now,
\[-\Delta_{p}u-\Delta_{q}u-(-\Delta_{p}v-\Delta_{q}v)=\lambda(h(u)-h(v)).\]
where \(h(z)=\frac{f(z)}{z^{\beta}}\). Multiplying both sides by \(u-v\) and integrating,
\[\int_{\Omega}((|\nabla u|^{p-2}\nabla u-|\nabla v|^{p-2}\nabla v)+ (|\nabla u|^{q-2}\nabla u-|\nabla v|^{q-2}\nabla v))\boldsymbol{\cdot}(\nabla u -\nabla v)dx\] \[= \lambda\int_{\Omega}(h(u)-h(v))(u-v)dx\]
By the inequality
\[(|a|^{r-2}a-|b|^{r-2}b)\boldsymbol{\cdot}(a-b)\geq(r-1)\frac{|a-b|^{2}}{(|a|+ |b|)^{2-r}}\text{ for }1<r\leq 2\]
where \(a,b\in\mathbb{R}^{n}\) and using the fact that \(|\nabla u(x)|+|\nabla v(x)|\leq\|u\|_{C^{1}}+\|v\|_{C^{1}}\) we obtain,
\[(p-1)\int_{\Omega}\frac{|\nabla u-\nabla v|^{2}}{(\|u\|_{C^{1}}+\|v\|_{C^{1}} )^{2-p}}dx+(q-1)\int_{\Omega}\frac{|\nabla u-\nabla v|^{2}}{(\|u\|_{C^{1}}+ \|v\|_{C^{1}})^{2-q}}dx\leq\lambda\int_{\Omega}h^{\prime}(\xi)(u-v)^{2}dx.\]
We have evaluated the RHS using the mean value theorem on \(h\), for some function \(\xi\) lying between \(u\) and \(v\). By the estimate in Step 2, we have \(\|u\|_{C^{1}}+\|v\|_{C^{1}}\leq 2k_{2}\lambda^{\frac{1}{q-1}}\). Also, since \(p<q\) and \(\lambda\) is large,
\[(2k_{2}\lambda^{\frac{1}{q-1}})^{2-p}\geq(2k_{2}\lambda^{\frac{1}{q-1}})^{2-q}.\]
Hence,
\[(p-1)\int_{\Omega}|\nabla(u-v)|^{2}dx+(q-1)\int_{\Omega}|\nabla(u-v)|^{2}dx \leq k\lambda^{1+\frac{2-p}{q-1}}\int_{\Omega}h^{\prime}(\xi)(u-v)^{2}dx\]
for \(k>0\). That is,
\[(p+q-2)\int_{\Omega}|\nabla(u-v)|^{2}dx\leq k\lambda^{1+\frac{2-p}{q-1}}\int_{\Omega}h^{\prime}(\xi)(u-v)^{2}dx \tag{20}\]
Now, closely following the proof of Theorem 1.1 of [5] and using the estimates in Step 1 and Step 2, for any \(f\) satisfying \((A1)\) and \((A2)\) we obtain that
1. \[(p+q-2)\int_{\Omega}|\nabla(u-v)|^{2}dx\leq m\lambda^{1+\frac{2-p}{q-1}-\frac{ 2}{q-1+\beta}}\int_{\Omega}|\nabla(u-v)|^{2}dx.\] In addition, if we assume \(\sup_{[0,\infty)}f<\infty\),
2. \[(p+q-2)\int_{\Omega}|\nabla(u-v)|^{2}dx\leq m\lambda^{1-\frac{p}{q-1+\beta}} \int_{\Omega}|\nabla(u-v)|^{2}dx.\]
It is clear that \(\lambda\) in the RHS has a negative exponent when \(\beta<\beta^{\prime}=\frac{(q-1)(1+p-q)}{1+q-p}\) in (i) and when \(\beta<\beta^{\prime\prime}=1-q+p\) in (ii). Clearly, \(\beta^{\prime},\beta^{\prime\prime}\in(0,1)\).
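Both thresholds can be verified by elementary algebra; we record the computation as an added sketch (recall \(1<p<q\leq 2\)):

```latex
% Added sketch: sign of the \lambda-exponents in (i) and (ii).
% (i): 1+\frac{2-p}{q-1}-\frac{2}{q-1+\beta}<0 is equivalent to
\[
\frac{q+1-p}{q-1}<\frac{2}{q-1+\beta}
\;\Longleftrightarrow\;
\beta<\frac{2(q-1)}{q+1-p}-(q-1)=\frac{(q-1)(1+p-q)}{1+q-p}=\beta'.
\]
% (ii): 1-\frac{p}{q-1+\beta}<0 \iff q-1+\beta<p \iff \beta<1-q+p=\beta''.
```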
Thus, for \(\lambda\) large, the inequalities \((i)\) and \((ii)\) hold only when the integrand \(\nabla(u-v)\) is identically zero. Since \(u=v=0\) on \(\partial\Omega\), this implies that \(u\equiv v\) in \(\Omega\) for large \(\lambda.\) Hence the uniqueness result holds for large \(\lambda.\)
## 4 Appendix
In this section we shall prove the existence result for (4) and an \(L^{\infty}\) regularity result for (14).
### Existence of weak solution for (4)
**Proposition 4.1**.: _Suppose all the conditions of Theorem 1.2 are satisfied. Then, there exists \(u_{\lambda}\in W^{1,q}_{0}(\Omega)\) such that \(u_{\lambda}\) is a weak solution of (4)._
**Proof:** The differential equation
\[-\Delta_{p}u-\Delta_{q}u=\lambda\frac{f(u)}{u^{\beta}}\]
can be written as
\[-\Delta_{p}u-\Delta_{q}u-\lambda\frac{f(0)}{u^{\beta}}=\lambda h(u) \tag{21}\]
where \(h(u)=\frac{f(u)-f(0)}{u^{\beta}}\). We assume that the function \(h(u)\) is monotonically increasing. If not, we can choose an appropriate \(K>0\) such that \(h(u)+Ku\) is monotone increasing on \([0,\infty),\) and consider the PDE \(-\Delta_{p}u-\Delta_{q}u-\lambda\frac{f(0)}{u^{\beta}}+Ku=\lambda(h(u)+Ku)\) instead of (21). The proof does not vary much between the two cases, so, without loss of generality, the monotonicity of \(h\) can be assumed.
Let \(c=\inf_{[0,\infty)}f(t),\) then \(c>0\) by the condition \((A1)\) and
\[h(t)\geq\ (\frac{c-f(0)}{t^{\beta}})\]
for all \(t>0\). Assume that \(\underline{u}\in W^{1,q}_{0}(\Omega)\) is the unique weak solution of
\[\begin{split}-\Delta_{p}u-\Delta_{q}u&=\lambda \frac{c}{u^{\beta}}\text{ in }\Omega\\ u&=0\text{ on }\partial\Omega.\end{split} \tag{22}\]
The existence and uniqueness of \(\underline{u}\) is known from [7]. Clearly, \(\underline{u}\) is a subsolution of (21). By (A2), there exists \(C>0\) such that
\[\frac{f(t)}{t^{\beta}}\leq C(1+\frac{1}{t^{\beta}})\]
for all \(t>0\). Let \(\bar{u}\in W^{1,q}_{0}(\Omega)\) be the unique weak solution of
\[\begin{split}-\Delta_{p}u-\Delta_{q}u&=\lambda C( 1+\frac{1}{u^{\beta}})\text{ in }\Omega\\ u&=0\text{ on }\partial\Omega.\end{split} \tag{23}\]
\(\bar{u}\) is a super-solution of (21). The existence, uniqueness and \(L^{\infty}\) regularity of \(\bar{u}\) can be derived from [7]. Choosing \(C\) large enough if required we can show that \(\underline{u}\leq\overline{u}.\) Let \(v_{0}=\underline{u}\). Define the sequence \(\{v_{n}\}_{n\in\mathbb{N}}\subset W^{1,q}_{0}(\Omega)\) iteratively as follows: Let \(v_{n+1}\) be the unique weak solution of the BVP
\[\begin{split}-\Delta_{p}v_{n+1}-\Delta_{q}v_{n+1}-\lambda\frac{f( 0)}{v_{n+1}^{\beta}}&=&\lambda h(v_{n})\text{ in }\Omega\\ v_{n+1}&=&\quad 0\text{ on }\partial\Omega. \end{split} \tag{24}\]
for every \(n\in\mathbb{N}.\) Using the monotonicity of \(h(t)\) we can show that
\[cd(x)\leq\underline{u}\leq\cdots\leq v_{n}\leq v_{n+1}\leq\cdots\leq\bar{u}\leq M.\]
Now, using standard ideas, we can pass to the limit in (24) and prove the existence of a weak solution of (4).
### Uniform \(L^{\infty}\) regularity
Now, we prove that \(\{\tilde{u}_{\lambda}\}\) is uniformly bounded in \(L^{\infty}(\Omega)\), where \(\tilde{u}_{\lambda}\) is the unique weak solution of
\[\begin{split}-\gamma\Delta_{p}\tilde{u}_{\lambda}-\Delta_{q}\tilde {u}_{\lambda}&=\frac{g(x)}{(\tilde{u}_{\lambda})^{\beta}}\text{ in }\Omega\\ \tilde{u}_{\lambda}&=0\text{ on }\partial\Omega \end{split} \tag{25}\]
where \(1<p<q<\infty\), \(0<\beta<1\), and \(0<g(x)\leq\frac{C}{d(x)^{\delta}}\) for some \(C>0\), \(0<\beta+\delta<1\) and \(\gamma=\lambda^{\frac{p-q}{q-1+\beta}}\). We now state the uniform \(L^{\infty}\) regularity theorem:
**Theorem 4.1**.: _Let \(\tilde{u}_{\lambda}\) be the unique weak solution of (25). Then there exists \(M_{0}>0\) independent of \(\lambda\) such that \(\|\tilde{u}_{\lambda}\|_{L^{\infty}(\Omega)}\leq M_{0}\) for every \(\lambda>0.\)_
Proof.: The proof follows the ideas of Lemma 3.2 from [8] and Theorem E.0.19 of [14]. Let \(u\in W_{0}^{1,q}(\Omega)\) be the weak solution of (25) and \(\phi\) be a \(C^{1}\) cut-off function such that \(\phi(t)=0\) for \(t\leq 0,\)\(\phi^{\prime}(t)\geq 0\) for \(0\leq t\leq 1\) and \(\phi(t)=1\) for \(t\geq 1\). By the weak comparison principle we know that \(u\geq 0\) in \(\Omega.\) Let us define \(\phi_{\epsilon}(t):=\phi(\frac{t-1}{\epsilon})\) for \(t\in\mathbb{R}\). Then, \(\phi_{\epsilon}(u)\in W_{0}^{1,q}(\Omega)\) and \(\nabla\phi_{\epsilon}(u)=\phi^{\prime}_{\epsilon}(u)\nabla u.\) For a non-negative function \(w\in C_{c}^{\infty}(\Omega),\) using \(\phi_{\epsilon}(u)w\) as a test function in the weak formulation of (25) we obtain
\[\int_{\Omega}(\gamma|\nabla u|^{p}+|\nabla u|^{q})\phi^{\prime}_ {\epsilon}(u)wdx+\int_{\Omega}(\gamma|\nabla u|^{p-2}+|\nabla u|^{q-2})\phi_{ \epsilon}(u)\nabla u\boldsymbol{\cdot}\nabla wdx\] \[=\int_{\Omega}\frac{g(x)}{u^{\beta}}\phi_{\epsilon}(u)wdx\]
Taking \(\epsilon\to 0,\) we have
\[\int_{\Omega\cap\{u\geq 1\}}(\gamma|\nabla u|^{p-2}+|\nabla u|^{q-2})\nabla u \boldsymbol{\cdot}\nabla wdx\leq\int_{\Omega\cap\{u\geq 1\}}\frac{g(x)}{u^{ \beta}}wdx\leq\int_{\Omega}g(x)wdx\]
as \(\phi^{\prime}(t)\geq 0\) for all \(t\) and \(\frac{1}{u^{\beta}}\leq 1\) when \(u\geq 1\). That is,
\[\gamma\int_{\Omega}|\nabla\bar{u}|^{p-2}\nabla\bar{u}\boldsymbol{\cdot} \nabla wdx+\int_{\Omega}|\nabla\bar{u}|^{q-2}\nabla\bar{u}\boldsymbol{\cdot} \nabla wdx\leq C\int_{\Omega}\frac{1}{d(x)^{\delta}}wdx\]
as \(\gamma>0,\) where \(\bar{u}:=(u-1)_{+}\) is the positive part of the function \((u-1).\) It is clear that \(\frac{C}{d(x)^{\delta}}\in W^{-1,r}(\Omega)\) for \(1<r<\infty\). So, there exists a vector field \(\mathbf{F}=(F_{1},F_{2},\ldots,F_{N})\) such that \(\frac{C}{d(x)^{\delta}}=\text{div}(\mathbf{F})\). Here, \(F_{i}\in L^{r}(\Omega)\) for \(1\leq i\leq N\).
\[\gamma\int_{\Omega}|\nabla\bar{u}|^{p-2}\nabla\bar{u}\boldsymbol{\cdot} \nabla wdx+\int_{\Omega}|\nabla\bar{u}|^{q-2}\nabla\bar{u}\boldsymbol{\cdot} \nabla wdx\leq\int_{\Omega}\mathbf{F}\boldsymbol{\cdot}\nabla wdx \tag{26}\]
For \(k>0\), define the truncation function \(T_{k}(s):=(s-k)\chi_{[k,\infty)}(s)\), which was introduced in [15], and let \(U_{k}:=\{x\in\Omega:\bar{u}(x)\geq k\}\). Choosing \(T_{k}(\bar{u})\) for some \(k>0\) as the test function in (26) and using the fact that \(\gamma>0\), we have
\[\int_{U_{k}}|\nabla\bar{u}|^{q}dx\leq\int_{U_{k}}\mathbf{F}\boldsymbol{\cdot} \nabla\bar{u}dx.\]
So,
\[\Big{(}\int_{U_{k}}|\nabla\bar{u}|^{q}dx\Big{)}^{1-\frac{1}{q}}\leq\Big{(} \int_{U_{k}}|\mathbf{F}|^{r}dx\Big{)}^{\frac{1}{r}}|U_{k}|^{1-(\frac{1}{r}+ \frac{1}{q})} \tag{27}\]
by Hölder's inequality, where \(|U_{k}|\) is the Lebesgue measure of the set \(U_{k}\). We can now proceed exactly as in [14] and obtain the required result.
**Remark 4.1**.: _The above theorem is true even when \(\beta=0.\)_ |
2306.16085 | Mass Spectra Prediction with Structural Motif-based Graph Neural
Networks | Mass spectra, which are agglomerations of ionized fragments from targeted
molecules, play a crucial role across various fields for the identification of
molecular structures. A prevalent analysis method involves spectral library
searches, where unknown spectra are cross-referenced with a database. The
effectiveness of such search-based approaches, however, is restricted by the
scope of the existing mass spectra database, underscoring the need to expand
the database via mass spectra prediction. In this research, we propose the
Motif-based Mass Spectrum Prediction Network (MoMS-Net), a system that predicts
mass spectra using the information derived from structural motifs and the
implementation of Graph Neural Networks (GNNs). We have tested our model across
diverse mass spectra and have observed its superiority over other existing
models. MoMS-Net considers substructure at the graph level, which facilitates
the incorporation of long-range dependencies while using less memory compared
to the graph transformer model. | Jiwon Park, Jeonghee Jo, Sungroh Yoon | 2023-06-28T10:33:57Z | http://arxiv.org/abs/2306.16085v1 | # Mass Spectra Prediction with Structural Motif-based Graph Neural Networks
###### Abstract
Mass spectra, which are agglomerations of ionized fragments from targeted molecules, play a crucial role across various fields for the identification of molecular structures. A prevalent analysis method involves spectral library searches, where unknown spectra are cross-referenced with a database. The effectiveness of such search-based approaches, however, is restricted by the scope of the existing mass spectra database, underscoring the need to expand the database via mass spectra prediction. In this research, we propose the Motif-based Mass Spectrum Prediction Network (MoMS-Net), a system that predicts mass spectra using the information derived from structural motifs and the implementation of Graph Neural Networks (GNNs). We have tested our model across diverse mass spectra and have observed its superiority over other existing models. MoMS-Net considers substructure at the graph level, which facilitates the incorporation of long-range dependencies while using less memory compared to the graph transformer model.
**Keywords:** Mass Spectra, GNNs, motif, deep learning, molecule
## 1 Introduction
Mass spectrometry (MS) [1, 2] is an indispensable analytical method for the identification of molecular structures in unknown samples [3, 4, 5]. In this technique, a molecule undergoes ionization, and its fragment ions are measured by a mass analyzer, which captures information regarding the mass-to-charge ratio (m/z). By analyzing the mass spectrum, which provides the m/z values and their relative intensities, it is possible to infer the molecular structure of the original chemical.
Modeling the fragmentation patterns for ionized molecules in order to analyze the mass spectrum is challenging. While some domain knowledge-based rules can be useful for certain types of molecules, it becomes difficult to apply them to smaller fragments with diverse functional groups.
The interpretation of mass spectra typically relies on library search, which compares the spectra with a large database of known molecules [6, 7]. While there are various extensive mass spectra libraries available, such as the National Institute of Standards and Technology (NIST) [8], Wiley [9], and Mass Bank of North America (MoNA) [10], the search-based method is limited to known materials and does not provide information on the mass spectra of new molecules. An alternative way is to use _de novo_ techniques, which aim to directly predict the molecular structure based on the input spectrum [11, 12, 13]. However, these methods often have low accuracy and are challenging to use effectively [14].
An approach to address the coverage issue in library search is to enhance existing libraries by incorporating predicted mass spectra generated by a model. Mass spectrum prediction models utilize either quantum mechanical calculations [15, 16, 17, 18], or machine learning techniques [19]. These methods aim to predict the fragmentation patterns that occur after ionization. Quantum mechanical calculations require extensive computation of electronic state, but they are computationally inefficient. On the other hand, machine learning approaches can provide faster predictions, but they may lack the ability to simulate diverse and detailed fragmentation processes.
Recently, deep learning has been significantly developed in the areas of image recognition and natural language processing. Moreover, there has been a significant surge in interest in applying deep learning to the fields of material science and drug development. Graph Neural Networks (GNNs), in particular, are widely used to predict chemical properties and generate new molecules because molecules, which are comprised of atoms and bonds, can be represented using graph structures, where nodes represent atoms and edges represent bonds.
Several studies have focused on predicting mass spectra using MLPs, GNNs, and graph transformers [20, 21, 22, 23, 24]. J. Wei et al. [20] proposed the NEIMS model, which utilizes fingerprints to map molecules. They employ MLP layers and a bidirectional prediction mode to model fragments and neutral losses in a mass spectrum. B. Zhang et al. [22] employ a graph convolutional network (GCN) for predicting mass spectra. They initialize the nodes' features using concatenated one-hot vectors representing various atom properties such as atom symbol, degree, valence, formal charge, radical charge, etc. The initial features of edges are also represented using one-hot vectors based on bond type, ring presence, conjugation, and chirality. Multiple GCN layers are applied, and the nodes' representations are pooled to form a graph representation. An MLP layer is then used to predict the mass spectra. A. Young et al. [23] proposed the MassFormer framework, based on the graph transformer, for predicting tandem mass spectra. They employ a graph transformer that calculates pairwise attention between nodes, considers the shortest path distance between two nodes, and incorporates averaged edge information along the shortest path. M. Murphy et al. [24] proposed a prediction model that maps a molecular graph to a distribution of probabilities across various molecular formulas using high-resolution mass spectra. This model differs from our task in the sense that high-resolution mass spectra contain additional information about specific peaks that can be used to infer the molecular formulas associated with those peaks.
Motifs, which are important and frequently occurring subgraphs, correspond to functional groups and important fragments in molecules [25]. Mining motifs can be beneficial in many tasks. There are different approaches to motif mining. One method involves rule-based techniques, where fragmentation rules are defined based on domain knowledge. However, this approach may not cover all possible types of fragments. Alternatively, a motif mining algorithm based on the counting of subgraph structures can be employed. This approach is inspired by Byte Pair Encoding (BPE), a widely used technique in natural language processing (NLP) for tokenizing words into subwords. Motifs can be used to improve the capability for property prediction, drug-gene interaction prediction and molecule generation, due to the strong dependence of a molecule's properties on its molecular structure, particularly the functional groups [26; 27; 28; 29; 30; 31].
In this work, we propose the Motif-based Mass Spectrum Prediction Network (MoMS-Net) for predicting mass spectra. We utilize motifs in applying GNNs because they are related to the stability of fragment ions and the fragmentation patterns in the mass spectra. The motif vocabulary is constructed following the merge-and-update method described in Z. Geng et al. [32]. The MoMS-Net model consists of two GNNs, as shown in Fig. 1. One is for the molecule graph, which is defined based on the molecule itself. The other is for the heterogeneous motif graph, which consists of all molecules in the dataset and the motifs in the motif vocabulary. Before applying the MLP (Multi-Layer Perceptron) layer to predict a mass spectrum, we incorporate the knowledge and characteristics of the motifs' mass spectra into our model. GNNs struggle to consider long-range dependencies, as node information is updated by pooling neighboring nodes [33; 34; 35]. While deep layers are typically required to incorporate long-range dependencies in GNNs, this can lead to oversmoothing problems, where all node representations become similar, resulting in decreased performance [36; 37; 38]. However, our model can consider the relationship with subgraphs at the graph level, allowing it to effectively incorporate long-range dependency effects. The graph transformer [23; 39] has demonstrated good performance in predicting mass spectra but requires a significant amount of memory during training. In contrast, our model requires less memory than the graph transformer. Ultimately, our model achieves state-of-the-art performance in predicting mass spectra. The main contributions of MoMS-Net are as follows.
* MoMS-Net incorporates relations with substructures at the graph level, so it can account for long-range dependency effects. In contrast, plain GNNs cannot consider long-range dependencies, as they update node information by pooling neighboring nodes.
* MoMS-Net achieves state-of-the-art performance in predicting mass spectra, showing the highest spectrum similarity compared to other models.
* MoMS-Net requires less memory compared to graph transformers, making it applicable to larger molecules.
Figure 1: Overall architecture of MoMS-Net. The model consists of two GNNs for molecule graph and heterogeneous motif graph. Two graph embeddings from two GNNs are concatenated, and an MLP layer is applied to predict mass spectra.
## 2 Results
### Overview of the framework
The MoMS-Net model consists of two Graph Neural Networks (GNNs) designed to handle both the molecule graph and the heterogeneous motif graph. The molecule graph utilizes a 3-layer Graph Convolutional Network (GCN) [40] with RDKit fingerprints [41] as input embeddings. The heterogeneous motif graph employs a 3-layer Graph Isomorphism Network (GIN) [42] with input embeddings based on the relationships between molecules and motifs, motifs and motifs, and molecular weights. The hidden representations obtained from the molecule graph GNN and the heterogeneous motif graph GNN are concatenated to capture the combined information from both graphs. Additionally, the molecular weight distributions of motifs and selected fragments are utilized to further fine-tune the hidden representations. Finally, an MLP layer is applied to the hidden representation in order to predict the mass spectrum.
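To make this dataflow concrete, the following minimal PyTorch-style sketch mirrors the description above. It is our illustrative reconstruction rather than the authors' released code; the module names (`mol_gnn`, `motif_gnn`, `spec_head`) are hypothetical, and the two GNN encoders are stubbed with linear layers for brevity.

```python
import torch
import torch.nn as nn

class MoMSNetSketch(nn.Module):
    """Illustrative sketch of the MoMS-Net dataflow (not the official code)."""

    def __init__(self, mol_dim, motif_dim, hidden, mz_bins):
        super().__init__()
        # Placeholders for the 3-layer GCN (molecule graph) and the
        # 3-layer GIN (heterogeneous motif graph) described in the text.
        self.mol_gnn = nn.Linear(mol_dim, hidden)
        self.motif_gnn = nn.Linear(motif_dim, hidden)
        # MLP head mapping the fused embedding to an m/z intensity vector.
        self.spec_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, mz_bins)
        )

    def forward(self, mol_feats, motif_feats):
        h_mol = self.mol_gnn(mol_feats)        # graph-level molecule embedding
        h_motif = self.motif_gnn(motif_feats)  # graph-level motif embedding
        h = torch.cat([h_mol, h_motif], dim=-1)
        return torch.relu(self.spec_head(h))   # non-negative predicted spectrum
```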
### Spectrum Similarity
The NIST dataset was divided into three subsets: training (70%), validation (20%), and test (10%), using the Murcko scaffold splitting method [43]. To assess the performance of our model, we made predictions for the mass spectra of the molecules in the test set. We then calculated the spectrum similarity between the target (actual) mass spectra and the predicted mass spectra using the cosine similarity score. To ensure a fair comparison, we initially normalized both the target and predicted mass spectra. Then, we computed the cosine similarity score between the normalized vectors. Each result has been obtained by conducting the experiments five times, with distinct random seeds for each run.
The results for the NIST dataset are presented in Table 1. MoMS-Net demonstrates the best performance compared to other models. Specifically, MassFormer outperforms the CNN, WLN and GCN models. Furthermore, we observe that the performance on the FT-HCD dataset is higher compared to the FT-CID dataset. This can be attributed to the larger amount of data available in the FT-HCD dataset. It is commonly known that transformer-based models can achieve better performance when trained on larger datasets [44]. However, it is noteworthy that MoMS-Net surpasses the performance of MassFormer even on the larger FT-HCD dataset.
Table 1: Cosine similarity (each result is averaged over five runs with different random seeds).

| Model | FT-CID | FT-HCD |
| :-- | :-- | :-- |
| CNN | \(0.356\pm 0.002\) | \(0.535\pm 0.002\) |
| MassFormer | \(0.385\pm 0.005\) | \(0.573\pm 0.003\) |
| WLN | \(0.357\pm 0.001\) | \(0.569\pm 0.001\) |
| GCN | \(0.356\pm 0.001\) | \(0.565\pm 0.001\) |
| MoMS-Net | \(\mathbf{0.388\pm 0.002}\) | \(\mathbf{0.578\pm 0.001}\) |
### Molecule Identification
To address the coverage issue in spectral library searches, predicting mass spectra is an essential step to augment the existing database. By predicting mass spectra, we can expand the range of compounds and their corresponding spectra available in the spectral library. However, assessing the accuracy of a model in matching predicted spectra with unknown queries is challenging, because confirming the identification of the compound requires experimental analysis. To simplify the evaluation process, we employ a candidate ranking experiment inspired by [20, 23]. In this experiment, the objective is to accurately associate a query spectrum with the corresponding molecule from a set of candidate spectra. The query set comprises authentic spectra from the test set, which is a held-out partition. The reference set consists of spectra collected from distinct origins: predicted spectra in the held-out partition, and real spectra from the training and validation partitions. By evaluating the similarity between spectra in the query and reference sets, we calculate a ranking of spectra in the reference set for each query. This ranking, based on the degree of similarity, effectively induces a ranking of candidate structures, since each spectrum corresponds to a specific molecule. Table 2 provides a summary of the results obtained from this experiment on the Top-5% metric. This metric evaluates whether the true matched candidate is ranked within the top 5% of all candidates. As the number of candidates per query may vary, the Top-5% metric is normalized to ensure fair comparison. This metric provides insight into the model's ability to accurately identify the correct candidate among a larger set of options. The results indicate that our model demonstrates performance comparable to MassFormer and higher than the other models. This consistent strong performance suggests that our model is among the best-performing models in terms of accurately matching query spectra with the correct molecule. Our model can be utilized for augmenting spectral libraries and holds promise to address the coverage issue.
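The Top-5% metric itself is straightforward to compute once the candidate similarities are available; the sketch below is our reconstruction of the protocol described above (the function name and arguments are ours, not from the paper).

```python
import numpy as np

def top5_percent_hit(query_spec, ref_specs, true_idx):
    """Rank candidates by cosine similarity to the query spectrum and test
    whether the true match lands within the top 5% of all candidates."""
    q = query_spec / (np.linalg.norm(query_spec) + 1e-12)
    r = ref_specs / (np.linalg.norm(ref_specs, axis=1, keepdims=True) + 1e-12)
    sims = r @ q                                   # similarity to every candidate
    rank = int((sims > sims[true_idx]).sum()) + 1  # 1-based rank of the true match
    cutoff = max(1, int(np.ceil(0.05 * len(ref_specs))))
    return rank <= cutoff
```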
### The Effect of Motif Vocabulary Size
We utilized the merge-and-update method to generate motifs from the dataset [32]. The top \(K\) most frequent subgraphs were chosen as motifs. We examined the resulting motif vocabulary and observed an exponentially decreasing trend in the frequency count as the number of motifs increased, as shown in Fig. 2. The most frequent subgraphs are small and stable fragments such as "CC", "CCCC", "CO" and the benzene ring (C\({}_{6}\)H\({}_{6}\)).
Table 2: Top-5% scores on the ranking task (each result is averaged over five runs with different random seeds).

| Model | FT-CID | FT-HCD |
| :-- | :-- | :-- |
| CNN | \(0.802\pm 0.008\) | \(0.778\pm 0.004\) |
| MassFormer | \(\mathbf{0.850\pm 0.016}\) | \(0.830\pm 0.007\) |
| WLN | \(0.736\pm 0.011\) | \(0.812\pm 0.008\) |
| GCN | \(0.728\pm 0.016\) | \(0.802\pm 0.008\) |
| MoMS-Net | \(0.824\pm 0.002\) | \(\mathbf{0.840\pm 0.010}\) |
Our approach allowed for the generation of motifs of various types and sizes, with a higher occurrence of motifs containing 5 to 15 atoms. Fig. 2 displays several examples of large motifs with distinct structures. We performed tests with different sizes of motif vocabularies and observed that when the vocabulary size exceeded 1,000, the cosine similarity began to decrease. This decline can be attributed to the inclusion of trivial motifs in the heterogeneous motif graph as the motif vocabulary size increased. Hence, in this study, we set the motif vocabulary size to 300.
### Ablation Study
We compare GNN architectures for the prediction of mass spectra. As shown in Table B2, GCN performs better than GIN. In the MoMS-Net model, GIN is utilized for the heterogeneous motif graph, while both GIN and GCN are compared for the molecular graph. Interestingly, when the MoMS-Net model employs GIN instead of GCN for the molecular graph, it exhibits similar performance.
## 3 Discussion
The analysis of mass spectra plays a crucial role in identifying molecular structures in material chemistry and drug discovery. Search-based methods are widely employed for mass spectra analysis. However, they often suffer from a coverage issue. To address this problem, it is necessary to generate mass spectra using a model to augment the database.
MoMS-Net demonstrates the capability to accurately predict mass spectra for complex molecules, as shown in Fig. 3. Molecules containing conjugated aromatic rings are known to be highly stable, resulting in a smaller number of peaks in their mass spectra. On the other hand, molecules without aromatic rings tend to exhibit a greater number of peaks. Our model is effective in predicting both aromatic and nonaromatic compounds accurately. However, it should be noted that there is a limitation in terms of the abundances of the main peaks in the predicted mass spectra. Our model tends to generate more smaller peaks, which can result in a decrease in the intensity of the main peak after normalization.
Table 3 presents the number of model parameters and the memory allocation. All models have similar numbers of parameters, indicating comparable complexity in terms of architecture and parameter count. However, a notable difference is observed in memory consumption: despite MassFormer having a much smaller batch size, it requires an amount of memory similar to MoMS-Net, indicating that MassFormer consumes a significant amount of memory during execution. On the other hand, MoMS-Net demonstrates better performance while utilizing less memory than MassFormer. This efficiency in memory usage allows MoMS-Net to handle larger molecules and proteins effectively.
In this study, we proposed the MoMS-Net model, which incorporates motifs to predict mass spectra from molecular structures. Motifs play an important role in the task of predicting molecular properties, as they are directly associated with the functional groups present in the molecule and provide valuable information on the relationships between molecules. We applied the merge-and-update method to generate a motif vocabulary from the dataset to represent various motif sizes and functional groups. We conducted tests with different sizes of motif vocabularies and varying model architectures. MoMS-Net outperforms other deep learning models in predicting mass spectra from molecular structures. It effectively considers long-range dependencies by incorporating motifs at the graph level, even though GNNs have limitations in considering long-range dependencies. Additionally, our model requires less memory compared to the graph transformer. We found that real mass spectra of motifs are useful in predicting the mass spectra of molecules, although the predicted mass spectra may contain more small and false peaks. In future work, we will strive to enhance the initialization method of mass spectra for motifs and incorporate regularization techniques to prevent false peaks. Furthermore, we plan to apply MoMS-Net to larger molecules and proteins.
Table 3: Number of parameters and memory allocation.

| Model | # of Parameters | Memory Allocation (MB) | Batch Size |
| :-- | :-- | :-- | :-- |
| CNN | 1.46E+07 | 717 | 512 |
| MassFormer | 1.36E+07 | 1340 | 50 |
| WLN | 1.23E+07 | 1519 | 1024 |
| GCN | 1.31E+07 | 973 | 1024 |
| MoMS-Net | 1.82E+07 | 1519 | 1024 |
Figure 2: (a) Frequency of generated motifs. (b) The distribution of motif sizes. (c) Some examples of large motifs. (d) Cosine similarity according to motif vocabulary size. The frequency of a motif decreases exponentially with its rank, and most motifs have a size of 5 to 20 atoms. The data-driven motif generation method can generate large motifs containing various functional groups. The model achieves its best performance when the motif vocabulary size is set to 300; as the size surpasses 1000, the performance starts to decline.
Figure 3: True and predicted spectra for four molecules. The predicted spectra show similar patterns for complex molecular structures but have lower intensity because of many smaller peaks.
## 4 Methods
### Dataset
We use the NIST 2020 MS/MS dataset for training and evaluation. The NIST dataset is widely used due to its extensive coverage and convenience in the mass spectrum analysis process. Mass spectra depend on the acquisition conditions. We only use spectra from Fourier Transform (FT) instruments, because of the large amount of data available, and we consider their collision cell type (CID or HCD). The information regarding the dataset is summarized in Table A1.
### Generation of Motif Vocabulary
A motif refers to a frequently occurring substructure, and some motifs correspond to functional groups of molecules. To construct a motif vocabulary, we apply the merge-and-update method introduced by Z. Geng et al. [32] to identify common patterns from a given dataset \(D\). The goal is to learn the top \(K\) most frequent subgraphs from dataset \(D\), where \(K\) is a hyperparameter. Each molecule in \(D\) is represented as a graph, \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where atoms and bonds correspond to nodes (\(\mathcal{V}\)) and edges (\(\mathcal{E}\)). Initially, we consider each atom of the molecules as a single fragment.
We merge two adjacent fragments, \(\mathcal{F}_{i}\) and \(\mathcal{F}_{j}\), to create a new fragment, \(\mathcal{F}_{ij}=\mathcal{F}_{i}\oplus\mathcal{F}_{j}\), using a defined operation "\(\oplus\)". The merging process involves iteratively updating the merging graphs, \(\mathcal{G}_{M}^{(k)}(\mathcal{V}_{M}^{(k)},\mathcal{E}_{M}^{(k)})\), at the \(k^{th}\) iteration (\(k=0,\cdots,K-1\)). If the most frequent merged fragment, \(\mathcal{F}_{ij}\), is valid, it is added to the motif vocabulary \(\{\mathcal{M}\}\). This process is repeated for \(K\) iterations to construct the motif vocabulary.
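A single iteration of this counting-and-merging loop can be sketched as follows. This is our simplified, string-based illustration of the merge-and-update idea; in the actual method [32] fragments are molecular subgraphs and the merge must produce a chemically valid fragment.

```python
from collections import Counter

def most_frequent_merge(fragment_graphs):
    """One merge step: each item in `fragment_graphs` is a list of
    (frag_a, frag_b) edges between the current fragments of one molecule."""
    counts = Counter()
    for edges in fragment_graphs:
        for a, b in edges:
            counts[(a, b)] += 1
    if not counts:
        return None, 0
    (a, b), freq = counts.most_common(1)[0]
    merged = a + b       # placeholder for a chemically valid subgraph merge
    return merged, freq  # add `merged` to the vocabulary if it is valid
```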
### Construction of the Heterogeneous Motif Graph
The heterogeneous motif graph is generated by combining molecule nodes from the molecular dataset and motif nodes from the motif vocabulary. This graph contains two types of edges connecting the nodes. One type is the molecule-motif edge, which is created when a molecule contains that motif. The other type is the motif-motif edge, which is established when two motifs share at least one atom. To differentiate the importance of these edges, different weights are assigned based on their types, following Z. Yu et al. [26]. For a molecule-motif edge, the weight is computed using the term frequency-inverse document frequency (TF-IDF) value. For motif-motif edges, the weight is calculated as the point-wise mutual information (PMI) of the co-occurrence. So the edge weight \(A_{ij}\) between two nodes \((i,j)\) is represented as
\[A_{ij}=\left\{\begin{array}{cc}\text{PMI}_{ij},&\text{ if i, j are motifs}\\ \text{TF-IDF}_{ij},&\text{if i or j is a motif}\\ 0,&\text{Otherwise}\end{array}\right. \tag{1}\]
The PMI value is calculated as
\[\begin{split}\text{PMI}_{ij}=\text{log}\frac{p(i,j)}{p(i)p(j)}\\ p(i,j)=\frac{N(i,j)}{M},p(i)=\frac{N(i)}{M},p(j)=\frac{N(j)}{M}, \end{split} \tag{2}\]
where \(N(i,j)\) is the number of molecules that have motif \(i\) and motif \(j\). \(M\) is the total number of molecules, and \(N(i)\) is the number of molecules with motif \(i\).
\[\text{TF-IDF}_{ij}=C(i)_{j}\left(\text{log}\frac{1+M}{1+N(i)}+1\right), \tag{3}\]
where \(C(i)_{j}\) is the number of times that motif \(i\) occurs in molecule \(j\).
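Eqs. (2)-(3) translate directly into code; the following minimal sketch (with argument names of our own choosing) computes the two edge weights from the motif occurrence counts.

```python
import math

def pmi_weight(n_ij, n_i, n_j, m):
    """PMI weight for a motif-motif edge (Eq. 2): n_ij molecules containing
    both motifs, n_i and n_j marginal counts, m total molecules."""
    if n_ij == 0:
        return 0.0
    return math.log((n_ij / m) / ((n_i / m) * (n_j / m)))

def tf_idf_weight(c_ij, n_i, m):
    """TF-IDF weight for a molecule-motif edge (Eq. 3): c_ij occurrences of
    motif i in molecule j, n_i molecules containing motif i, m molecules."""
    return c_ij * (math.log((1 + m) / (1 + n_i)) + 1)
```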
### Heterogeneous Motif Graph Neural Networks
We apply two different GNNs to the molecule graph and the heterogeneous motif graph. The molecule graph represents each atom and bond as a node and an edge, respectively. We utilize a 3-layer Graph Convolutional Network (GCN) to update the atom-level representations. To encode the atom and bond features, we employ the Deep Graph Library (DGL) package [45], which supports embedding them as either one-hot encodings or numerical values. For the heterogeneous motif graph, we employ another 3-layer Graph Isomorphism Network (GIN). The total number of nodes in the heterogeneous graph is the sum of the number of molecules (\(|N|\)) and the size of the motif vocabulary (\(|V|\)). The node feature in the heterogeneous motif graph is represented by the occurrence of motifs and the molecular weight of the node. To represent the occurrence of motifs in molecules and other motifs, we create a vector of size \(|V|\), where the values indicate motif occurrences. We apply a linear layer and concatenate the result with the molecular weight.
A heterogeneous motif graph consists of all molecule nodes and motif nodes. Since the number of molecules can be large (e.g., 27K for CID and 232K for HCD), computational resource limitations may arise. To tackle this challenge, we employ an edge sampler to decrease the size of the heterogeneous motif graph. We employ a breadth-first algorithm for hop-by-hop sampling from a starting node [26]. We use a 3-hop sampler, denoted as \([s_{1},s_{2},s_{3}]\), where \(s_{i}\) represents the number of nodes to be sampled at hop \(i\). The first-hop neighbors of molecule nodes are motif nodes only. Before applying the GINs, we first utilize a 2-layer MLP for input embedding.
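The hop-by-hop sampler can be illustrated with a small breadth-first sketch. This is our own simplified reading of the \([s_{1},s_{2},s_{3}]\) sampler, assuming \(s_{i}\) caps the number of neighbors drawn per node at hop \(i\); the original implementation may differ in these details.

```python
import random

def bfs_hop_sample(adj, start, fanouts=(8, 8, 8)):
    """Sample a 3-hop neighborhood of `start`; `adj` maps node -> neighbors."""
    frontier, sampled = [start], {start}
    for s in fanouts:                      # one BFS layer per hop
        nxt = set()
        for node in frontier:
            nbrs = [n for n in adj.get(node, []) if n not in sampled and n not in nxt]
            nxt.update(random.sample(nbrs, min(s, len(nbrs))))
        sampled |= nxt
        frontier = list(nxt)
    return sampled
```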
### Mass Spectra of Motif
After obtaining the graph embeddings for the heterogeneous motif graph, we incorporate additional information from the mass spectra of motifs, because the fragmentation patterns in mass spectra are associated with the motif structure. We construct the mass spectra of motifs, taking into account the isotope effect of the molecular ion. Additionally, we incorporate a few fragments generated by the RDKit software [41] into the motif mass spectra.
### Objective Function
Cosine similarity is commonly used in mass spectrum library search to compare and quantify the similarity between mass spectra [7]. Therefore, we choose the cosine distance as the loss function, as in Eq. 4.
\[\text{CD}(\mathbf{I},\mathbf{\hat{I}})=1-\frac{\sum_{k=1}^{M_{max}}I_{k}\cdot\hat{I}_{k}}{\sqrt{\sum_{k=1}^{M_{max}}I_{k}^{2}}\cdot\sqrt{\sum_{k=1}^{M_{max}}\hat{I}_{k}^{2}}} \tag{4}\]
where \(\mathbf{I}\) and \(\mathbf{\hat{I}}\) are vectors of intensities versus m/z for reference and predicted spectrum.
### Evaluation Metrics
The mass spectrum is represented as a vector with a length corresponding to the m/z range, along with intensity values. To measure spectrum similarity, we compute the cosine similarity score between the target and predicted spectra after normalization.
\[\text{Similarity}(\mathbf{I},\mathbf{\hat{I}})=\frac{\sum_{k=1}^{M_{max}}I_{k}\cdot\hat{I}_{k}}{\sqrt{\sum_{k=1}^{M_{max}}I_{k}^{2}}\cdot\sqrt{\sum_{k=1}^{M_{max}}\hat{I}_{k}^{2}}}. \tag{5}\]
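In code, the metric of Eq. (5) (and the loss of Eq. (4), which is simply one minus this value) reduces to a few lines; the following NumPy sketch is ours, added for illustration.

```python
import numpy as np

def cosine_spectrum_similarity(i_true, i_pred, eps=1e-12):
    """Cosine similarity between two intensity vectors over the m/z range."""
    a = np.asarray(i_true, dtype=float)
    b = np.asarray(i_pred, dtype=float)
    num = float(np.dot(a, b))
    den = float(np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(b ** 2))) + eps
    return num / den

# Loss of Eq. (4): 1 - cosine_spectrum_similarity(target, predicted)
```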
## Acknowledgments
This study was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No. 2021-0-01343, Artificial Intelligence Graduate School Program in Seoul National University), and also supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2022R1A6A3A01087603, 2022R1A3B1077720, 2022M3C1A3081366). All support was funded by the Korean government (MSIT).
## Appendix A The Dataset
We use the NIST 2020 MS/MS dataset. Table A1 shows the number of spectra and compounds according to their collision cell type (CID or HCD).
## Appendix B GNN architectures
In Table B2, MoMS-Net(GIN) uses a GIN for the molecule graph and another GIN for the heterogeneous motif graph. MoMS-Net(GCN) uses a GCN for the molecule graph and a GIN for the heterogeneous motif graph.
|
2308.09409 | A Single-Input State-Switching Building Block Harnessing Internal
Instabilities | Bistable mechanisms are prevalent across a broad spectrum of applications due
to their ability to maintain two distinct stable states. Their energy
consumption is predominantly confined to the process of state transitions,
thereby enhancing their efficiency. However, the transition often requires two
distinct inputs, implicating the requirement of multiple actuators. Here, we
propose an elastic and contactless design strategy for inducing state
transitions in bistable mechanisms, requiring only a single cyclic input. The
strategy leverages internal information, interpreted as system state, as an
extra input to make a weighted decision for transitioning to the subsequent
state. We characterize the behavior using a spring-based rigid-body model,
consisting of a column near bifurcation, combined with a non-linear spring
connected to a bistable element that represents the information state. The
results show that a nonlinear spring with a quadratic stiffness function, i.e.,
representing internal instability, is crucial for regulating state-switching
behavior. We then demonstrate this design strategy by developing a monolithic
and compliant design embodiment and experimentally evaluate its behavior. | Malte A. ten Wolde, Davood Farhadi | 2023-08-18T09:22:15Z | http://arxiv.org/abs/2308.09409v1 | # A single-input state-switching building block harnessing internal instabilities
###### Abstract
Bistable mechanisms are prevalent across a broad spectrum of applications due to their ability to maintain two distinct stable states. Their energy consumption is predominantly confined to the process of state transitions, thereby enhancing their efficiency. However, the transition often requires two distinct inputs, implicating the requirement of multiple actuators. Here, we propose an elastic and contactless design strategy for inducing state transitions in bistable mechanisms, requiring only a single cyclic input. The strategy leverages internal information, interpreted as system state, as an extra input to make a weighted decision for transitioning to the subsequent state. We characterize the behavior using a spring-based rigid-body model, consisting of a column near bifurcation, combined with a non-linear spring connected to a bistable element that represents the information state. The results show that a nonlinear spring with a quadratic stiffness function, i.e., representing internal instability, is crucial for regulating state-switching behavior. We then demonstrate this design strategy by developing a monolithic and compliant design embodiment and experimentally evaluate its behavior.
## 1 Introduction
Bistable mechanisms have two stable equilibrium states, separated by an energy barrier. The transition from one stable state to the other is often a snap-through action and results in a large elastic deformation of the mechanism. Power is consumed only when switching between the two states. These properties make bistable mechanisms suitable for various applications across numerous fields [1], including but not limited to (soft) robotics [2, 3], (reprogrammable) mechanical metamaterials [4, 5], mechanical logic structures [6, 7], energy absorbers [8], microscale electromechanical systems (MEMS) like micro-positioners [9], actuators [10], grippers [11], optical switches [12], mechanical relays [13], and microfluidic valves [14], and, at the nanoscale, DNA-based structures [15, 16, 17].
Conventionally, the transition between the states can be initiated through an external input, such as a change in temperature, pressure, voltage, or mechanical forces, but often also requires a counter-input to reset the system. For example, stacked bistable inflatable origami modules require positive pressure (input) and negative pressure (counter-input) for multimodal deformation [18], another bistable origami module uses compression and tension to actuate the two states and thereby create peristaltic locomotion [19], and in soft media the stored elastic energy in bistable elements is used to propagate mechanical signals [20], but needs to be reset manually (counter-input). These examples can be considered to be "responsive", i.e., the state of the system is a direct response to the external input, which can be considered a form of (low-level) intelligence. Intelligent systems can interact with the environment, adapt their structure to store and process information, and make autonomous decisions by tightly integrating sensing, actuation, and computation into the structure itself [21, 22]. Systems can be classified into different levels of intelligence. The next frontier of intelligent systems after responsive systems, termed "adaptive systems", will incorporate not only external inputs but also internal information, such as mechanical memory or state information [23, 24]. Adaptive systems require less complex actuation to control a set of states, compared to responsive systems, by leaving some of the decision-making ability to the mechanism itself [25]. For example, recently a pneumatically actuated soft robot was shown to walk and switch gaits to control the direction of locomotion using only a single constant source of pressurized air to actuate the robot [26].
Developing adaptive systems requires a building block that combines an external input with internal information when making a decision. Some research has been conducted to achieve this functionality. A well-known example is the mechanism in retractable ballpoint pens [27], which is an angled-tooth cam-follower mechanism consisting of discrete parts. When the input is pressed repeatedly, the current state of the system (ballpoint
in or out) determines the next state. Generally, this functionality can be found in single-input switches, for example in MEMS devices [28, 29, 30, 31, 32]. Furthermore, this functionality has been realized through different methodologies within the field of mechanical metamaterials [33, 34]; one such approach involves a unit cell consisting of two inward buckling beams with a carefully designed cutout, and a central 'state' beam whose buckling direction changes when interacting with the inward buckling beams upon a cyclic input displacement [35]. Separate studies have demonstrated how coupled interaction between unit cells, i.e., repeated building blocks in a structure like waves in corrugated sheets [36] or domes in a dome-patterned sheet [37], can change the response depending on their current global state. In yet another example, researchers leverage geometric frustration [38] to exhibit a history-dependent response, i.e., indicating that a system's past states influence its present and future behavior [39].
Although there have been significant advancements in the design of these types of systems, current solutions have several limitations that hinder their performance and usability. For instance, many existing designs rely on contact-based solutions. This introduces hysteresis and displacement errors, both resulting from friction and manufacturing tolerances. In addition, they are prone to wear due to friction, which results in a loss of input information and energy. Furthermore, the predominant contact-based solutions face scalability issues, particularly when miniaturized, due to challenges like micro-stiction [40]. Additionally, unit cells with coupled interactions are often not rationally designed, making it hard to predict how these coupled interactions work when increasing the number of unit cells for more complex computations [36, 38]. Furthermore, some designs, such as those that utilize mechanically frustrated unit cells, can only compute once before having to reset manually, reducing their flexibility and versatility in real applications [39].
In this paper, we propose a fully elastic and contactless state-switching building block, that harnesses internal instabilities to switch between two distinct states in response to a single input. In the following sections, we will discuss the details of our proposed building block. In Section 2, we will describe the design principle behind the mechanism, including an analytical spring-based rigid-body model. Section 3 covers an analytical case study used to evaluate its performance, where the two design parameters of the nonlinear spring are studied. In Section 4 we propose a planar design embodiment and cover the numerical simulations, fabrication, and experimental validation of our prototype. Then, in Section 5, the results of the simulations and measurements are presented. Furthermore, we provide a discussion and interpretation of our findings. Section 6 will provide a discussion on opportunities and potential future research directions. Finally, in Section 7 we present our conclusion.
## 2 Design principle
To regulate state switching in bistable mechanisms, we propose a building block consisting of three elements. This includes a state element, a buckling column initially configured around bifurcation, and a connecting spring that connects the two, see Fig. 1A. The building block represents two distinct states, e.g. '0-state' and '1-state', that alternate with a cyclic input displacement. A bistable mechanism, with two stable equilibrium positions, is used to represent the two states. The force-displacement characteristic of such an element is shown in Fig. 1B, with state displacement \(d_{s}\) between the two different stable states, and critical loads \(F_{cr,1}\) and \(F_{cr,2}\). The bistable element requires pull and push input to switch between the two states. The buckling column is configured such that it can buckle in two directions, and can convert a compression input, \(u_{in}\), into a pull and push motion along the x-axis, see Fig. 1A (2) and (4), respectively. Lastly, the connecting spring, shown with \(k_{n}\), has two functions to achieve the alternating behavior. Firstly, it controls the response by reading the current state of the state element, see Fig. 1A (1) and (3); secondly, it switches the state (writing) by transmitting the input force towards the state element, see Fig. 1A (2) and (4). To achieve this, the spring characteristics should be highly nonlinear and meet specific design requirements. First, we address the force-displacement requirements related to the four configurations displayed in Fig. 1A, and then we discuss the continuous force-displacement characteristics and stiffness of the nonlinear spring.
The connecting spring has force-displacement criteria related to each of the configurations (1) to (4), see circles labeled (1) to (4) in Fig. 1C, that depend on the state element and the buckling column. In the initial configuration, the system is stable, thus the displacement \(u_{n}=0\) and force \(F_{n}=0\); this is represented as configuration (1). In addition, the buckling column is designed with an imperfection, i.e., a small initial angle \(\phi_{0}\), such that the left buckling bifurcation path is preferred when an input \(u_{in}\) is given. In configuration (2), when the input displacement is maximal, \(u_{in}=U_{max}\), the tension in the connecting spring should exceed the critical load of \(F_{cr,1}\), so that the state element snaps through to the 1-state. In configuration (3), the connecting spring should deliver a tensile force denoted as \(F_{p}\). This force should be equal to or greater than the force generated by the hinges of the buckling column, with stiffness \(k_{c}\), when rotating the buckling column to \(\phi=-\phi_{0}\). This ensures that the buckling direction of the column is in the positive x-direction. By designing the connecting spring such that the spring is still in tension when it is shortened due to state displacement \(d_{s}\), i.e., \(F_{n}(d_{s})>0\), it can 'read' the current state and move the column through the bifurcation
point towards the positive x-direction, with \(\delta_{0}=L_{c}\sin\phi_{0}\). Thus, in configuration (3), the following should be satisfied,
\[F_{n}(d_{s}+2\delta_{0})=F_{p}\geq 2k_{c}\phi_{0}, \tag{1}\]
which is denoted as criterion \(C_{1}\). Lastly, in configuration (4), an input displacement creates a buckling deformation along the positive x-direction, thereby generating compression in the spring. Then, when the input is maximal, \(u_{in}=U_{max}\), the compression force should exceed the critical load of \(F_{cr,2}\), and the state element switches back to the initial 0-state.
Through the four points (1)-(4) in Fig. 1C, a continuous function, which represents the force-displacement characteristics of the connecting spring, can be plotted. One function that fits through the four points is a cubic function (e.g., the dashed line). This reveals that the connecting spring stiffness should take a quadratic form, and can be described by
\[k_{n}(u_{n})=\alpha(u_{n}-r_{1})(u_{n}-r_{2}), \tag{2}\]
with \(u_{n}\) the spring displacement, and unknown variables \(\alpha\), \(r_{1}\), and \(r_{2}\). To get the nonlinear force response (\(F_{n}\)), Eq. 2 can be integrated. In the initial configuration (\(u_{n}=0\)) the force \(F_{n}(0)=0\), thus integrating Eq. 2 yields
\[F_{n}(u_{n})=\frac{\alpha u_{n}}{6}(2u_{n}^{2}-3r_{1}u_{n}-3r_{2}u_{n}+6r_{1}r _{2}). \tag{3}\]
However, not every value for \(\alpha\), \(r_{1}\), and \(r_{2}\) provides a viable solution. Namely, when the input is removed between configurations (2) and (3), \(u_{in}:U_{max}\to 0\), the 1-state should remain in its stable position. This can be achieved under the condition that
\[F_{n}(u_{n})>F_{cr,2}\quad\forall\,u_{n}\in[(2),(3)], \tag{4}\]
which is denoted as criterion \(C_{2}\). Furthermore, there are no stresses in the initial configuration, and thus configuration (1) should be the lowest energy state. The energy of the connecting spring can be determined by
Figure 1: Design methodology for the single-input state-switching building block. The building block consists of a state element, a buckling column near bifurcation, and a nonlinear spring connecting the two. (A) Four configurations of the mechanism representing the states and state-transitions: initial stable 0-state (1), state-transition from 0 to 1-state (2), stable 1-state (3), resetting state-transition (4). (B) The force-displacement characteristics of a state element. (C) One possible solution for the force-displacement characteristics of the nonlinear spring (dashed line), crossing the four key points related to the four configurations: initial conditions as fabricated (1), required tensile forces in configurations (2) and (3) to pull the buckling column to the other bifurcation path, and a compressive force in configuration (4).
integrating Eq. 3. In the initial configuration (\(u_{n}=0\)) the potential energy \(E_{n}(0)=0\), thus integrating Eq. 3 yields
\[E_{n}=\frac{\alpha u_{n}^{2}}{12}(u_{n}^{2}-2r_{1}u_{n}-2r_{2}u_{n}+6r_{1}r_{2}) \geq 0\quad\forall\,u_{n}, \tag{5}\]
which is denoted as criterion \(C_{3}\).
To find viable nonlinear spring characteristics, an analysis is performed to find adequate combinations of design parameters \(\alpha\), \(r_{1}\), and \(r_{2}\) that satisfy Eqs. 1, 4, and 5, denoted as criteria \(C_{1}\), \(C_{2}\), and \(C_{3}\), respectively. This analysis together with an analytical case study is presented in Section 3.
## 3 Analytical case study
To perform a case study, first, a suitable set of connecting spring design parameters, \(\alpha\), \(r_{1}\), and \(r_{2}\), needs to be identified using the previously determined criteria \(C_{1}\), \(C_{2}\), and \(C_{3}\), i.e., Eqs. 1, 4, and 5, respectively. For the analysis, we have chosen a state element with \(L_{s}=21\,\mathrm{mm}\), \(\theta_{0}=7^{\circ}\), and \(k_{s}=1.735\times 10^{5}\,\mathrm{N}\,\mathrm{m}^{-1}\); and a buckling column with \(L_{c}=2.1L_{s}\), for sufficient geometrical advantage, and \(\phi_{0}=0.57^{\circ}\), as an imperfection to enforce buckling in the negative x-direction. Furthermore, the rotational stiffness \(k_{c}\), with the hinges replaced by short-length flexures, is described by
\[k_{c}=4\frac{Ewt^{3}}{12L}, \tag{6}\]
with Young's modulus \(E=1.7\,\mathrm{GPa}\), width \(w=7.5\,\mathrm{mm}\), thickness \(t=0.5\,\mathrm{mm}\), and length \(L=4\,\mathrm{mm}\) of the flexures in the buckling column. To satisfy Eq. 1, it is reasonable to position the local maximum of Eq. 3 at point (3) in Fig. 1C, i.e., \(r_{1}=-(d_{s}+2\delta_{0})\), such that the greatest force can be delivered; however, we note that it is not a requirement that point (3) is located at the local maximum, only that criterion \(C_{1}\) is satisfied.
The analysis results for \(\alpha\) and \(r_{2}\) are presented in Fig. 2A. Several distinct regions can be identified, labeled with the corresponding criteria that are not satisfied. Firstly, there are three vertical regions that depend on the energy landscape of the nonlinear spring. When \(r_{2}<-1.87\,\mathrm{mm}\), the energy landscape of the nonlinear spring shows only one stable point, indicating it cannot provide enough pull-in force; thus criterion \(C_{1}\) is not satisfied. When \(r_{2}>-1.57\,\mathrm{mm}\), the energy landscape becomes non-feasible because the energy at the second stable point drops below zero; thus criterion \(C_{3}\) is not satisfied. Furthermore, within the range of \(-1.87\,\mathrm{mm}<r_{2}<-1.57\,\mathrm{mm}\), three distinct horizontal regions can be identified. Firstly, when \(\alpha\gtrsim 6\times 10^{8}\,\mathrm{N}\,\mathrm{m}^{-3}\), criterion \(C_{2}\), i.e., Eq. 4, is not satisfied, resulting in the second state remaining unstable. Secondly, when \(\alpha\lesssim 7\times 10^{5}\,\mathrm{N}\,\mathrm{m}^{-3}\), criterion \(C_{1}\), i.e., Eq. 1, is not satisfied; this implies that the connecting spring does not generate enough tension to pull the buckling column to the alternate bifurcation path. A combination of design parameters in the center region fulfills all constraints and can be selected for an analytical case study.
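As a concreteness check, the feasibility test of Eqs. 1-5 can be sketched in a few lines. The sketch below assumes the parameter values of this case study; the state displacement \(d_{s}\) is not stated in closed form here, so it is taken as roughly \(5\,\mathrm{mm}\) (the state displacement later reported in Section 5), and criterion \(C_{2}\) is omitted because it additionally requires the critical load \(F_{cr,2}\) of the bistable model.

```python
import numpy as np

# Parameter values from the case study; d_s is an assumed ~5 mm, matching
# the state displacement reported in Section 5.
L_s = 21e-3                                       # state element length
L_c, phi0 = 2.1 * L_s, np.deg2rad(0.57)           # buckling column
E_mod, w, t, L_f = 1.7e9, 7.5e-3, 0.5e-3, 4e-3    # flexure properties

k_c = 4 * E_mod * w * t**3 / (12 * L_f)           # Eq. 6, ~0.13 N m/rad
d_s = 5.0e-3                                      # assumed state displacement
delta0 = L_c * np.sin(phi0)

alpha, r2 = 4e8, -1.6e-3                          # chosen point in Fig. 2A
r1 = -(d_s + 2 * delta0)                          # local extremum at point (3)

def F_n(u):  # Eq. 3: cubic force response of the connecting spring
    return alpha * u / 6 * (2 * u**2 - 3 * (r1 + r2) * u + 6 * r1 * r2)

def E_n(u):  # Eq. 5: quartic energy of the connecting spring
    return alpha * u**2 / 12 * (u**2 - 2 * (r1 + r2) * u + 6 * r1 * r2)

u = np.linspace(1.5 * r1, -0.5 * r1, 4001)
C1 = abs(F_n(r1)) >= 2 * k_c * phi0               # Eq. 1: enough pull-in force
C3 = np.all(E_n(u) >= -1e-9)                      # Eq. 5: config. (1) is lowest
print(f"k_c = {k_c:.3f} N m/rad, F_p = {F_n(r1):.2f} N, C1: {C1}, C3: {C3}")
```

With these values the sketch reports \(F_{p}\approx 2.5\,\mathrm{N}\), comfortably above the \(2k_{c}\phi_{0}\approx 2.6\,\mathrm{mN}\) threshold, and a non-negative energy landscape; note that \(C_{3}\) is sensitive to the assumed \(d_{s}\), consistent with the narrow feasible band in Fig. 2A.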
The performance of our building block can be evaluated using the criteria on the design parameters of the connecting spring. The spring stiffness is defined as \(k_{n}=\alpha(u_{n}-r_{1})(u_{n}-r_{2})\,\mathrm{N}\,\mathrm{m}^{-1}\), where \(\alpha=4\times 10^{8}\,\mathrm{N}\,\mathrm{m}^{-3}\),
Figure 2: Analytical case study. (A) Analysis of design parameters \(\alpha\) and \(r_{2}\) for the nonlinear spring. The center dark green region represents feasible combinations, while the surrounding regions fail to meet one or more of design criteria \(C_{1}\), \(C_{2}\), and \(C_{3}\). The red cross indicates the values chosen for the case study. The energy landscape of the two bifurcation paths combined with the path of minimal energy from (B) the 0-state and from (C) the 1-state. An offset of \(0.1\,\mathrm{mm}\) in \(u_{s}\)-direction is given for visibility of both paths. (D) The zoom-ins at (1) and (3) show the switching between bifurcation energy landscapes.
\(r_{1}=-(d_{s}+2\delta_{0})\,\mathrm{m}\), and \(r_{2}=-1.6\times 10^{-3}\,\mathrm{m}\). These values, indicated with a red cross in Fig. 2A, represent the physical prototype discussed in Section 4.
The total energy of the system (\(E_{t}\)) can be calculated by summing the energy of the connecting spring (\(E_{n}\), as described in Eq. 5), the state element (\(E_{s}\)), and the buckling column (\(E_{c}\)). These energy components can be derived from the spring-based rigid-body model. The energy landscapes of the left and right buckling bifurcation paths are illustrated in Figs. 2B and 2C, respectively. Additionally, the path of minimal energy for a full input cycle of \(u_{in}=0\to U\to 0\to U\), where \(U\) represents the maximal input displacement, is overlaid. The blue line represents the transition from 1 to 2, the red line from 2 to 3, the yellow line from 3 to 4, and the green line from 4 back to 1. This path is calculated using:
\[\nabla\mathcal{L}=0,\quad\mathcal{L}(u_{in},u_{s},\lambda)=E_{t}+\lambda g \tag{7}\]
where \(E_{t}\) represents the total potential energy, \(g\) denotes a constraint for the input displacement \(u_{in}\), and \(\lambda\) is the input force required to maintain this constraint. The analysis suggests that our mechanism is capable of switching between the two states using a single input. When an input displacement is given, there is a sudden snap-through to a lower energy path, see (1) to (2) in Fig. 2B and (3) to (4) in Fig. 2C. This path remains stable even when the input is removed, see (2) to (3) and (4) to (1) in Figs. 2B and 2C, respectively. Upon reaching the new stable position, the opposite bifurcation path becomes the lowest energy path, leading the system to follow the alternate path upon the application of a new input displacement. This switching behavior, in (3) and (1), is evident in the zoom-ins, see Fig. 2D. After completing the full cycle, the 0-state is reattained.
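Numerically, such a minimal-energy path can be traced by continuation: for each prescribed \(u_{in}\) the constraint \(g\) is imposed directly, the energy is minimised over the remaining degree of freedom from a warm start, and \(\lambda\) is recovered as the sensitivity of \(E_{t}\) to \(u_{in}\). The sketch below illustrates only this machinery: its \(E_{t}\) is a hypothetical double-well stand-in (the real model sums the spring, state-element, and column energies), so it reproduces the forward snap-through and the latching of the 1-state, but not the reset stroke, which requires the column's two bifurcation paths.

```python
import numpy as np
from scipy.optimize import minimize

A, D, K, C = 1e9, 5e-3, 4e3, 7.0   # stand-in constants, illustrative only

def E_total(u_in, u_s):
    # Hypothetical stand-in for E_t: a double well in u_s (minima at 0 and
    # ~5 mm) plus a quadratic coupling through which u_in tilts the well.
    return A * (u_s * (u_s - D)) ** 2 + 0.5 * K * (u_s - C * u_in) ** 2

u_in_cycle = np.concatenate([np.linspace(0, 1.2e-3, 60),
                             np.linspace(1.2e-3, 0, 60)])
u_s, path = 0.0, []                # start in the 0-state
for u in u_in_cycle:
    # Warm-starting from the previous solution follows one energy branch;
    # the snap-through appears where that local minimum ceases to exist.
    u_s = minimize(lambda x: E_total(u, x[0]), x0=[u_s],
                   method="Nelder-Mead").x[0]
    eps = 1e-7                     # lambda of Eq. 7 = dE_t/du_in on the path
    lam = (E_total(u + eps, u_s) - E_total(u - eps, u_s)) / (2 * eps)
    path.append((u, u_s, lam))
print(f"state after one press-and-release: u_s = {path[-1][1] * 1e3:.2f} mm")
```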
Using the spring-based rigid-body model, the influence of design parameters \(\alpha\) and \(r_{2}\) on the mechanism's performance can be readily evaluated, as depicted in Fig. 3. We analyzed four decreasing values for \(\alpha\) (4, 3, 2, and \(1\times 10^{8}\,\mathrm{N}\,\mathrm{m}^{-3}\)) and \(r_{2}\) (\(-1.60\), \(-1.65\), \(-1.70\), and \(-1.75\times 10^{-3}\,\mathrm{m}\)). The analysis indicates that a lower value of \(\alpha\) results in a decreased input force, while a larger input displacement is necessary for state switching. This can be explained by the fact that \(\alpha\) is a scalar of the stiffness function, see Eq. 2. Therefore, a larger input displacement is required to overcome the critical load of the state element, but with a greater transmission ratio, i.e., a decreased input force. Additionally, according to Figs. 3C and 3D, the influence of \(r_{2}\) on the response appears to be minimal; instead, \(r_{2}\) determines whether the mechanism functions as desired at all.
Figure 3: Performance analysis with different values of \(\alpha\) and \(r_{2}\). The blue lines show behavior from config. 1 to 2, the red lines from config. 2 to 3, the yellow lines from config. 3 to 4, and the green lines from config. 4 back to 1. The force-displacement characteristics for different values of (A) \(\alpha\) and (C) \(r_{2}\), respectively. The input-output displacement for different values of (B) \(\alpha\) and (D) \(r_{2}\), respectively.
## 4 Compliant design, numerical modeling, and fabrication
To experimentally validate our design method, we designed an elastic, planar embodiment where all rotational hinges are replaced with small-length flexures. For the state element, we used two parallel bistable elements to provide rectilinear translation. The buckling column remains a beam with three flexures to enforce buckling. This column is suspended on a compliant shuttle to allow for rectilinear input displacement. A bistable element is a natural embodiment of the connecting spring, due to its similar cubic force-displacement behavior. This design fulfills the three criteria discussed in Section 2, and has similar values for \(\alpha\), \(r_{1}\), and \(r_{2}\) as mentioned in Section 3. Fig. 4A shows the proposed compliant embodiment annotated with design parameters \(L_{s}=17.13\,\mathrm{mm}\), \(t_{s}=2\,\mathrm{mm}\), \(\theta_{s}=7^{\circ}\), \(L_{n}=20.15\,\mathrm{mm}\), \(\theta_{n}=7^{\circ}\), \(L_{c}=40\,\mathrm{mm}\), \(\delta_{0}=0.4\,\mathrm{mm}\), \(L_{i}=20\,\mathrm{mm}\), and \(\theta_{i}=2^{\circ}\). Besides the annotated design parameters, the mechanism has an out-of-plane thickness of \(h=7.5\,\mathrm{mm}\). All short-length flexures have length \(L_{f}=4\,\mathrm{mm}\) and thickness \(t_{f}=0.5\,\mathrm{mm}\), the compression springs, \(k_{s}\), have length \(L_{k}=3.5\,\mathrm{mm}\) and thickness \(t_{k}=1.5\,\mathrm{mm}\), and all other beams have a thickness \(t_{b}=5.5\,\mathrm{mm}\).
The prototype is fabricated by Multi Jet Fusion (MJF) 3D printing using polyamide-12 (Nylon-12). A picture of the fabricated prototype is presented in Fig. 4B, with annotated regions that represent the state element, connecting spring, buckling column, and a region indicating one of the compression springs \(k_{s}\).
A finite element analysis (FEA) using Ansys Parametric Design Language (APDL) was conducted to verify the mechanism's design and performance. Two-node beam elements (beam188), based on Timoshenko beam theory, with rectangular beam cross-section, were used. The mechanism's material parameters are: Young's modulus \(E=1.7\,\mathrm{GPa}\), density \(\rho=1010\,\mathrm{kg}\,\mathrm{m}^{-3}\), and Poisson's ratio \(\nu=0.33\)[41]. The mechanism is anchored with fixed boundary conditions at the points where it interfaces with the frame. The design parameters were carefully selected to ensure the maximum Von Mises stress remained below the 48 MPa limit, thereby maintaining the mechanism's structural integrity.
Furthermore, experiments were carried out to assess the force-displacement characteristics and input-output kinematics of the mechanism. For the force-displacement measurement, a 45 N force sensor (Futek LSB200 FSH03878), mounted to a precision linear stage (PI M505), was used. An input displacement of \(1.2\,\mathrm{mm}\) was applied to the input shuttle of the mechanism. Simultaneously, for the input-output displacement measurement, the displacement of the input shuttle and state element was captured using a video camera and then analyzed using image processing.
## 5 Results and Discussion
Four distinct configurations of the developed prototype throughout a full input cycle are shown in Fig. 5A, see also video S1 (supplementary material). This sequence corresponds to an input displacement pattern of
Figure 4: (A) Proposed compliant embodiment of the building block, labeled with design parameters. (B) Fabricated prototype, labeled with the three main elements. The proposed design has an out-of-plane thickness of \(7.5\,\mathrm{mm}\), all flexures have a length of \(4\,\mathrm{mm}\) and thickness of \(0.5\,\mathrm{mm}\), all compression springs (\(k_{s}\)) have a length of \(3.5\,\mathrm{mm}\) and thickness of \(1.5\,\mathrm{mm}\), and all beams have a thickness of \(5.5\,\mathrm{mm}\).
Figure 5: Results from both simulations and experimental measurements. (A) Snapshots illustrating the four configurations of the mechanism upon applying a cyclic input displacement. Force-displacement characteristics from (B) simulations, and from (C) measurements. Input-output kinematics from (D) simulations, and from (E) measurements.
\(0\,\mathrm{mm}\to 1.2\,\mathrm{mm}\to 0\,\mathrm{mm}\to 1.2\,\mathrm{mm}\), and the configurations at each stage are denoted as (1), (2), (3), and (4), respectively. The transition from (1) to (2) illustrates the buckling of the column along with the state switching to the 1-state. In the transition from (2) to (3) the state's stability is evident, and the nonlinear spring delivers a tensile force on the buckling column. Due to the tension in the connecting spring, the buckling column follows the second bifurcation path, i.e., along the positive x-direction, upon applying an input displacement. This action prompts the transition from configuration (3) to (4), causing the state to reset to its original 0-state position. Lastly, in the transition from (4) to (1), upon removing the input displacement, the state remains stable and the mechanism arrives in its original configuration.
Furthermore, the results of the force-displacement characteristics obtained by simulations and experiments are shown with dashed lines and arrows indicating the direction, see Figs. 5B and 5C, respectively. Transparent lines and areas represent fabrication inaccuracies, and will be discussed later in this section. The blue line represents the transition from 1 to 2, the red line from 2 to 3, the yellow line from 3 to 4, and the green line from 4 back to 1. Similar behavior to the analytical model is observed, see Fig. 3. The required input force for switching to the 1-state, transition 1 to 2, is \(-36\,\mathrm{N}\) in simulation compared to \(-22\,\mathrm{N}\) in our experimental measurements. Next, when removing the input, transition 2 to 3, a maximal force of \(9\,\mathrm{N}\) in the simulation and \(7.5\,\mathrm{N}\) in the experiment is reached. Then, for switching back to the 0-state, transition 3 to 4, a force of \(-25\,\mathrm{N}\) in simulation and \(-16.5\,\mathrm{N}\) in the experiment is needed. Lastly, returning to the original configuration, transition 4 to 1, the peak force is \(-30\,\mathrm{N}\) in simulation compared to \(-12.5\,\mathrm{N}\) in the experiment.
The input-output displacement results derived from both simulation and experiment are displayed in Figs. 5D and 5E, respectively. The arrows indicate the direction of the lines. The y-axis displays the state displacement, where \(0\,\mathrm{mm}\) displacement represents the 0-state and \(\sim 5\,\mathrm{mm}\) represents the 1-state. A sudden snap-through can be observed from 0- to 1-state and vice versa; after the snap-through, the state displacement remains stable when the input displacement is removed. This behavior is the state-switching. For transition 1 to 2, an input displacement of \(1.2\,\mathrm{mm}\) is applied, and the state displaces \(5\,\mathrm{mm}\) in simulation and \(5.25\,\mathrm{mm}\) in the experiments to the 1-state. The snap-through is triggered at an input displacement of \(0.45\,\mathrm{mm}\) in simulation and \(0.65\,\mathrm{mm}\) in experiments. The state then retains this position when the input is removed, transition 2 to 3, until a subsequent input is applied. The next input displacement of \(1.2\,\mathrm{mm}\) causes the transition from 3 to 4, upon which the state resets back to the 0-state at \(0\,\mathrm{mm}\) for both simulation and experiment. The snap-through occurs at an input displacement of \(0.23\,\mathrm{mm}\) in simulations and \(0.45\,\mathrm{mm}\) in the experiment.
A sensitivity analysis was performed to understand the influence of key parameters of the mechanism. Our analysis indicated that the performance is dominated by the state element, while the correct functionality of the mechanism is determined by the design criteria of the connecting spring with respect to the fabricated dimensions of the state element. The fabricated prototype was measured and the following design parameters were changed in the FEA: \(L_{s}=17.13+0.5\,\mathrm{mm}\), \(L_{k}=3.5-0.25\,\mathrm{mm}\), \(t_{k}=1.5+0.25\,\mathrm{mm}\), \(\theta_{s}=7+0.75^{\circ}\), \(L_{n}=20.15+0.5\,\mathrm{mm}\), and \(\theta_{n}=7+1^{\circ}\). These values represent manufacturing inaccuracies, which are partly attributed to 3D-printer accuracy and partly because \(1\,\mathrm{mm}\) fillets are used in the compression springs, while the FEA used rectangular beams. In addition, the prototypes were fabricated using Multi Jet Fusion (MJF); while it should be possible to fabricate flexures of \(0.5\,\mathrm{mm}\) using this method, a closer examination of the prototypes revealed heterogeneous material filling within the flexures, which does not accurately reflect the material properties used in the simulations. Therefore, additional simulations with updated dimensions were conducted using a lower Young's modulus of \(1\,\mathrm{GPa}\). The results of the simulations are presented in Fig. 5B and 5D as transparent lines and a shaded area between the nominal values and the fabricated prototype with inaccuracies. The results indicate that small variations in the mentioned design parameters have a significant influence on the force-displacement and input-output relation. In Fig. 5B, the force-displacement characteristics, a decrease in peak force of \(12\,\mathrm{N}\) is observed in transition 1 to 2, a decrease of \(5\,\mathrm{N}\) in transition 2 to 3, a decrease of \(4.5\,\mathrm{N}\) in transition 3 to 4, and a decrease of \(12.5\,\mathrm{N}\) in transition 4 to 1. These results are similar to those measured in experiments, namely, \(-25\,\mathrm{N}\) vs \(-22\,\mathrm{N}\), \(4\,\mathrm{N}\) vs \(7.5\,\mathrm{N}\), \(-20.5\,\mathrm{N}\) vs \(-16.5\,\mathrm{N}\), and \(-15\,\mathrm{N}\) vs \(-12.5\,\mathrm{N}\), respectively. Furthermore, in Fig. 5D, the input-output displacement, the state element displaces \(6\,\mathrm{mm}\), and the snap-through is triggered at \(0.8\,\mathrm{mm}\) in simulation vs \(0.65\,\mathrm{mm}\) in experiment from the 0- to the 1-state, and at \(0.45\,\mathrm{mm}\) in both simulation and experiment back from the 1- to the 0-state.
Further observed discrepancies between the measurements and simulations can potentially be attributed to the finite stiffness of the frame, which is not considered in the FEA. Bistable structures are highly sensitive to boundary conditions; when boundary conditions are overly compliant, bistability may be lost entirely. Precautions have been taken to increase the frame's stiffness, such as taping the prototype to a PMMA base plate; however, small changes in the boundary conditions, such as a small outward displacement of the boundary conditions of the state element due to flexion of the frame, can explain some of the discrepancies. From the sensitivity analysis, it was determined that the behavior of the mechanism is dominated by the state element. An estimation of our frame stiffness is \(3\times 10^{5}\,\mathrm{N}\,\mathrm{m}^{-1}\), which is in series connection with the support stiffness \(k_{s}\), see Figs. 1A and 4A. The support stiffness \(k_{s}\) is estimated to be \(1.735\times 10^{5}\,\mathrm{N}\,\mathrm{m}^{-1}\); thus the frame stiffness contributes significantly, further explaining the differences observed.
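A quick series-spring estimate with the two stiffness values quoted above illustrates the size of this effect:

```python
# Series combination of the estimated frame stiffness and the support
# stiffness k_s (both values as quoted in the text).
k_frame, k_s = 3.0e5, 1.735e5                    # N/m
k_eff = 1.0 / (1.0 / k_frame + 1.0 / k_s)
print(f"k_eff = {k_eff:.3g} N/m, "
      f"{100 * (1 - k_eff / k_s):.0f}% softer than k_s alone")
# -> roughly 1.1e5 N/m, i.e. about a 37% reduction in the effective
#    boundary stiffness seen by the state element.
```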
Lastly, to actuate the mechanism, a hole of \(3\,\mathrm{mm}\) in diameter was implemented in the prototype to accommodate a hook attachment to provide input displacement. Due to the difference in diameter of the hole and hook, there was some hysteresis of \(\sim 0.1\,\mathrm{mm}\) in the measurement; this can be seen around \(0\,\mathrm{mm}\) and \(0.6\,\mathrm{mm}\) at \(0\,\mathrm{N}\) in Fig. 5C. This partly explains why a negative input displacement was required.
## 6 Opportunities and Outlook
In this study, our primary focus was on the quasi-static behavior of the mechanism, with the dynamic characteristics remaining unexplored. For instance, material selection is crucial due to inherent visco-elastic behavior. As the mechanism approaches the snap-through point, the visco-elasticity can lead to relaxation behavior, thereby changing the precise snapping moment. This phenomenon becomes particularly significant during the transition from configuration 2 to configuration 3, where the forces involved are relatively low. Due to this phenomenon, besides hysteresis, we applied a small negative displacement of \(u_{in}=-0.2\,\mathrm{mm}\) to our mechanism in the experimental study. Furthermore, to comprehensively understand the dynamic performance of the mechanism, it might be beneficial to further explore its maximum operating frequency through a multi-body dynamic model.
When adapting our single-input state-switching mechanism for real-world applications where an output load is required, careful consideration of load placement becomes crucial. To maintain the mechanism's desired bistable behavior and eliminate unintended state changes, the output load should preferably be placed on the buckling column, e.g., at the left/right buckling point, rather than directly on the state element.
Furthermore, an interesting observation is that the input-output displacement relation of our mechanism exhibits characteristics of a frequency divider, see Figs. 5D and 5E. In MEMS devices, the operation frequencies from actuators are generally high. While mechanical frequency up-conversions have been achieved before [42], down-conversion of motion frequency is rarely reported. By concatenating multiple instances of our building block, and considering that the loads transmitted through such a system should still be carried by the input beam, we could potentially achieve higher frequency division ratios.
Lastly, while this study outlines design guidelines for the proposed mechanism, it's important to highlight the generality of our approach. To illustrate the feasibility of our design principle, we fabricated a prototype that satisfies the design criteria. However, these guidelines are not confined to this specific embodiment. For instance, our prototype embodies a rectilinear input displacement and bistable mechanism, but our design principle can adapt to other design embodiments, including rotational input displacement and other variants of bistable elements. Moreover, while our prototype is realized at the decimeter scale, the framework we present holds potential across a range of length scales, from nano to macro. Thus, our demonstrated prototype serves as a tangible representation, but our design guidelines are applicable more broadly, offering adaptation beyond the embodiment we present.
## 7 Conclusion
We have presented a fully elastic state-switching mechanism that can convert a cyclic input signal into two distinct stable states. This functionality is achieved by harnessing internal instability that guides the bifurcation path of a buckling column. By 'reading' the mechanism's current state, and 'writing' the input into the state element, it facilitates alternating switching behavior. In contrast to previous studies in which state switching has been achieved through complex contact-based interaction, we laid out a guideline for designing nonlinear springs that facilitate state switching through a fully elastic and monolithic embodiment. Although we demonstrated the theory using a centimeter-scale prototype in this study, it is important to note that our approach is compatible with miniaturization, suitable for a broad spectrum of applications, from micro switches to reprogrammable metamaterials. Furthermore, the proposed methodology allows for different variations, such as changing the nature of the input motion, e.g., the type of displacement field, or adjusting the readout mode. Lastly, while this work focused on a system characterized by a single input and state element, the strategy lays the groundwork for the development of flexible mechanisms with sequencing behavior, such as sequencing between parallel sets of state elements through a single input.
|
2310.05079 | Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM
Inference? | The inference of Large language models (LLMs) requires immense computation
and memory resources. To curtail these costs, quantisation has emerged as a
promising solution, but existing LLM quantisation mainly focuses on 8-bit. In
this work, we explore the statistical and learning properties of the LLM layer
and attribute the bottleneck of LLM quantisation to numerical scaling offsets.
To address this, we adapt block quantisations for LLMs, a family of methods
that share scaling factors across packed numbers. Block quantisations
efficiently reduce the numerical scaling offsets solely from an arithmetic
perspective, without additional treatments in the computational path. Our
nearly-lossless quantised 6-bit LLMs achieve a $19\times$ higher arithmetic
density and $5\times$ memory density than the float32 baseline, surpassing the
prior art 8-bit quantisation by $2.5\times$ in arithmetic density and
$1.2\times$ in memory density, without requiring any data calibration or
re-training. We also share our insights into sub-8-bit LLM quantisation,
including the mismatch between activation and weight distributions, optimal
fine-tuning strategies, and a lower quantisation granularity inherent in the
statistical properties of LLMs. The latter two tricks enable nearly-lossless
4-bit LLMs on downstream tasks. Our code is open-sourced. | Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George A. Constantinides, Yiren Zhao | 2023-10-08T09:05:14Z | http://arxiv.org/abs/2310.05079v2 | # Revisiting Block-based Quantisation:
###### Abstract
The inference of Large language models (LLMs) requires immense computation and memory resources. To curtail these costs, quantisation has emerged as a promising solution, but existing LLM quantisation mainly focuses on 8-bit. In this work, we explore the statistical and learning properties of the LLM layer and attribute the bottleneck of LLM quantisation to _numerical scaling offsets_. To address this, we adapt block quantisations for LLMs, a family of methods that share scaling factors across packed numbers. Block quantisations efficiently reduce the numerical scaling offsets solely from an arithmetic perspective, without additional treatments in the computational path. Our nearly-lossless quantised 6-bit LLMs achieve a \(19\times\) higher arithmetic density and \(5\times\) memory density than the float32 baseline, surpassing the prior art 8-bit quantisation by \(2.5\times\) in arithmetic density and \(1.2\times\) in memory density, without requiring any data calibration or re-training. We also share our insights into sub-8-bit LLM quantisation, including the mismatch between activation and weight distributions, optimal fine-tuning strategies, and a lower quantisation granularity inherent in the statistical properties of LLMs. The latter two tricks enable nearly-lossless 4-bit LLMs on downstream tasks. Our code is open-sourced 1.
Footnote 1: [https://github.com/ChengZhang-98/llm-mixed-q](https://github.com/ChengZhang-98/llm-mixed-q)
## 1 Introduction
Pre-trained Large Language Models (LLMs) Brown et al. (2020); Black et al. (2021); Zhang et al. (2022) have demonstrated impressive performance on a range of Natural Language Processing (NLP) tasks. However, their underlying computational and memory costs are a critical bottleneck to their usability. For instance, the larger variants in the GPT family scale up to hundreds of billions of parameters, requiring at least 300GB of memory to store these parameters in a float16 format Brown et al. (2020). Quantisation serves as a natural solution for reducing the cost of running inference on these LLMs Yao et al. (2022); Xiao et al. (2022); Dettmers et al. (2022), as a low-precision format enables cost savings across all relevant efficiency metrics: reduced on-chip memory, increased arithmetic intensity for matrix multiplies, and decreased DRAM bandwidth requirement. On the other hand, the growing popularity of running services such as ChatGPT OpenAI (2022) provides an impetus for exploring the use of custom silicon to support LLM inference. This raises the question: _What would a low-precision number system look like in these near-future LLM hardware accelerators (ASICs)?_
LLM quantisation is challenging because of the activations with large absolute magnitudes, also known as activation outliers Bondarenko et al. (2021); Xiao et al. (2022). Previous approaches have proposed various techniques to address such outliers. However, these either require additional treatments in the integer quantisation domain (LLM.int8() and SmoothQuant) or yield unsatisfactory performance (ZeroQuant); and prior work has primarily focused on arithmetics that can be ported to GPUs. We observe that the presence of outliers necessitates different scaling factors at a finer granularity than per-tensor or per-token level Yao et al. (2022); Xiao et al. (2022). This insight naturally leads us to revisit arithmetic systems with small exponents, such as MiniFloat Sun et al. (2019), Block Minifloat Fox et al. (2021), Block Logarithm Miyashita et al. (2016), and Block Floating Point Kalliojarvi and Astola (1996), as they can effectively represent outliers in Transformer models. To the best of our knowledge, our work is the first to systematically investigate short-exponent arithmetics for LLM quantisation.
Figure 1 illustrates the variance of the tensors joining the GEMMs in an OPT-6.7B (Zhang et al.,
2022). After feeding 128 samples from Wikitext2 to the pretrained float32 model, we make three interesting observations: 1) the variance of most activations in Figure 1 increases with the depth of the layer; 2) certain tensors (_e.g._, \(\mathbf{K}\)) consistently have a greater variance than others; and 3) all weight variances are smaller than those of the activations. Similar trends can be observed in other LLMs. We provide a variance plot of Vicuna-7B (Zheng et al., 2023) in the Appendix (Figure 4).
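These statistics are straightforward to reproduce with standard forward hooks; the sketch below is a minimal version, using a small OPT checkpoint and a single illustrative sentence in place of OPT-6.7B and the 128 Wikitext2 samples behind Figure 1 (both substitutions are assumptions of the sketch).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-125m"         # small stand-in for OPT-6.7B
model = AutoModelForCausalLM.from_pretrained(name).eval()
tok = AutoTokenizer.from_pretrained(name)

variances = {}
def make_hook(layer_name):
    def hook(module, inputs, output):
        # variance of the activation tensor entering this projection GEMM
        variances[layer_name] = inputs[0].detach().float().var().item()
    return hook

handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules()
           if isinstance(m, torch.nn.Linear) and "decoder.layers" in n]

text = "The quick brown fox jumps over the lazy dog."  # stand-in input
with torch.no_grad():
    model(**tok(text, return_tensors="pt"))
for h in handles:
    h.remove()
for layer_name, v in sorted(variances.items()):
    print(f"{layer_name}: {v:.4f}")
```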
The presence of varying numerical ranges across layers and tensors poses a challenge to the efficacy of a single quantisation configuration for the entire network. From an arithmetic perspective, we refer to this phenomenon as _numerical scaling offsets_, as it requires different numerical ranges and granularities for quantisation. To ensure optimal performance, these layers should be subjected to fine-grained non-linear quantisation strategies.
Table 1 provides a comparison between our work and existing LLM quantisation methods. Our quantisation considers all GEMMs (8/8) in transformer layers and both Post-Training-Quantisation (PTQ) and Training-After-Quantisation (TAQ) scenarios. In this work, we also explore suitable places to perform TAQ and quantisation search within the
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & (QW, QAct) & Bitwidth & PTQ or TAQ & \# Quantised GEMMs \\ \hline ZeroQuant (Yao et al., 2022) & (\(\surd\), \(\surd\)) & W4A8 & TAQ & 8/8 \\ LLM.int8() (Dettmers et al., 2022) & (\(\surd\), \(\surd\)) & W8A8\({}^{*}\) & PTQ & 6/8 \\ GPTQ (Frantar et al., 2022) & (\(\surd\), \(\times\)) & W4 & PTQ + DC & 6/8 \\ SmoothQuant (Xiao et al., 2022) & (\(\surd\), \(\surd\)) & W8A8 & PTQ + DC & 6/8 \\ Ours & (\(\surd\), \(\surd\)) & W6A6/W4A4 & PTQ/TAQ & 8/8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: A comparison of different LLM quantisation methods. (QW, QAct) shows whether quantisation is applied to weights or activations, and W\(x\)A\(y\) means \(x\)-bit quantisation for weights and \(y\)-bit quantisation for activations. PTQ and TAQ represent Post Training Quantisation and Training After Quantisation, respectively. DC means data calibration. There are eight general matrix multiplications (GEMMs) per transformer layer ((1)-(8) in Algorithm 2). Only ZeroQuant and ours quantise all of them. Other approaches leave (4) and (5) in float32/float16 format, which take up 20.6% of the floating-point operations in OPT-6.7B’s self-attention. \({}^{*}\) means outliers in LLM.int8() are computed in float16; this improves arithmetic density, but memory density is kept identical to canonical float16.
Figure 1: The algorithm on the left is the forward pass computation of a single Transformer layer (Vaswani et al., 2017) in mainstream LLMs, wherein values in blue (_e.g._, \(X_{n}\)) represent tensors with predetermined min-max values, such as the outputs of a normalisation layer or softmax. Values in red have unbounded min-max, and are plotted on the upper right for different layers of OPT-6.7B (Zhang et al., 2022). We show that for almost all activation tensors, their variances increase at deeper layers, resulting in _scaling offsets_ in their quantisation, while the weight tensors on the lower right have smaller variances. This statistical trend motivates our LLM quantisation study.
entire NLP pipeline. We make the following contributions:
* We address the LLM quantisation problem with activation outliers and examine it as a _scaling offsets_ problem from an arithmetic design perspective. We demonstrate the efficacy of a family of arithmetic systems with short exponents shared across a block of numbers.
* We propose a novel quantisation framework based on block arithmetic, and demonstrate its effectiveness in performing W6A6 inference for various tasks. Our nearly-lossless W6A6 outperforms prior work in terms of arithmetic density and memory density, without requiring data calibration or fine-tuning.
* We present two methods to achieve 4-bit quantisation on downstream tasks: one is fine-tuning-based, and the other is mixed-precision search. The latter further demonstrates the potential advantage of shifting LLM inference to cost-effective ASICs.
## 2 Related Work
While quantisation of earlier machine learning (ML) models has been extensively studied, effective quantisation of LLMs still remains an open problem. In this section, we review the previous works on block-based quantisation and compare them to the existing LLM quantisation techniques.
### Block-based Quantisation
Block-based quantisation is a technique that quantises a block of values into a compact format, where the elements within each block share common digits. This technique offers a significant memory footprint reduction while maintaining a minor round-off error. A number of previous works rely on this method to quantise Convolutional Neural Networks (CNNs). Lin _et al._ utilised a linear combination of multiple binary bases, equivalent to each binary matrix having a scaling factor Lin et al. (2017). Subsequently, Zhang _et al._ introduced LQ-Nets that rely on a form of block quantisation with a shared scaling factor at the vector level Zhang et al. (2018). Further investigations explored grouping numbers at various granularities, including layer-wise Wu et al. (2018), channel-wise Krishnamoorthi (2018), and vector-wise quantisation Dai et al. (2021).
It is worth noting that sharing a scaling factor is similar to, but not necessarily the same as, sharing the exponent Darvish Rouhani et al. (2020). This distinction arises because scaling factors can be arbitrary float32 values, whereas exponent values must be integers represented by the assigned number of bits. Our work focuses on sharing the exponent or exponent bias. When the block size of the shared exponent is 1, we fall back to the minifloat representation such as FP8 Sun et al. (2019). These approaches showed promising results primarily for vision models or relatively small Transformer-based models, while we shift the focus to quantising LLMs with a significantly larger parameter count.
### LLM Quantisation
Efficient quantisation techniques for language models have been explored in previous works. Zafrir _et al._ (2019) proposed an approach for quantising BERT into 8-bit integers, while Shen _et al._ (2019) proposed Hessian-based ultra-low precision quantisation for the same model. Zhang _et al._ (2020) quantised BERT to ternary values leveraging layer-wise knowledge distillation, and Bai _et al._ (2021) further pushed the quantisation of BERT weights to binary values.
The recent surge of interest in quantising LLMs has presented a unique challenge distinct from the prior art summarised above. This challenge stems from the increased model sizes of LLMs. Yao _et al._ proposed ZeroQuant, which quantises both weights and activations of large transformers into small integers with shared scaling factors Yao et al. (2022). However, as mentioned by Xiao et al. (2022), ZeroQuant suffers from a severe accuracy loss. Dettmers _et al._ introduced LLM.int8(), a method that computes outlier GEMMs in float16 and the rest in 8-bit integer Dettmers et al. (2022). Xiao _et al._ extended 8-bit LLM quantisation with their PTQ technique named SmoothQuant, which scales down activations by row and scales up weights by column proportionally before 8-bit fixed-point quantisation Xiao et al. (2022). Frantar _et al._ proposed GPTQ, which quantises the weights of LLMs to 3 or 4-bit integers while keeping the activations in float32 Frantar et al. (2022). Most LLM quantisation methods, directly or indirectly, preserve LLM activation outliers.
## 3 Method
In this section, we outline our quantisation strategy for LLMs. We first define block-based quantisation and then describe the metrics we use for evaluating quantisation methods. Finally, we detail a precision search that lowers the quantisation granularity down to the tensor level, effectively accommodating the statistical distribution inherent in LLMs.
### Block-based Arithmetic
Figure 2 illustrates the data representation we explore to address LLM quantisation as well as the standard float32/float16. We outline the specifications for traditional floating-point numbers and extend them to block-based quantisation. Detailed definitions can be found in Appendix C.
**Standard floating-point.** A standard IEEE floating-point number is defined as a 4-tuple, \((s,e,m,b)\) (Kahan, 1996). \(s\in\{0,1\}\) is the sign bit; \(e\in\mathbb{N}\) is the exponent field; \(b\in\mathbb{N}\) is the exponent bias; and \(m\in\mathbb{N}\) is the mantissa. Let the bit widths of the exponent and the mantissa be \(E\) and \(M\), respectively. The IEEE standard float32 (FP32) number has \(E=8\) and \(M=23\), where the other bit is used as the sign bit. Note that the exponent bias depends on \(E\): \(b=2^{E-1}-1\), separating the exponent field symmetrically. Similarly, float16 (FP16) has \(E=5\) and \(M=10\).
**MiniFloat and Denormalised MiniFloat.** MiniFloat is an efficient floating-point representation that requires fewer bits than traditional floating-point numbers. Traditionally, an 8-bit MiniFloat inherits the definition of FP32 by assigning \(E=4\) and \(M=3\). We saturate MiniFloat when \(e=2^{E}-1\), and thus no \(\pm\inf\) is included.
In this paper, we also introduce a Denormalised MiniFloat (DMF) with zero as the implicit leading bit in the mantissa. Similar to MiniFloat, we saturate the infinity to a maximum finite value. DMF provides a higher precision than MiniFloat for small values at the expense of narrowing down the value range. We investigate this trade-off in the context of quantising LLMs.
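As a concrete sketch (not the paper's implementation), saturating MiniFloat rounding for normal numbers might look as follows; a DMF variant would drop the implicit leading one, trading range for precision at small magnitudes.

```python
import numpy as np

def minifloat_quantise(x, E=4, M=3):
    """Round to a saturating (E, M) MiniFloat grid (sketch: normals and zero
    only, no inf/NaN; e = 2**E - 1 saturates to the largest finite value)."""
    bias = 2 ** (E - 1) - 1
    e_max, e_min = (2 ** E - 1) - bias, -bias
    sign = np.sign(x)
    mag = np.abs(np.asarray(x, dtype=np.float64))
    e = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))  # per-value exponent
    e = np.clip(e, e_min, e_max)
    step = 2.0 ** (e - M)                               # mantissa grid at e
    largest = (2.0 - 2.0 ** -M) * 2.0 ** e_max
    return sign * np.minimum(np.round(mag / step) * step, largest)

print(minifloat_quantise([0.1234, 3.7, 1000.0]))  # 1000 saturates to 480
```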
**Block MiniFloat, Block Floating-Point and Block Logarithm.** As shown in Figure 2, block quantisation packs values in a block in which a common scaling factor is shared across \(N\) values, where \(N\) is the block size, reducing the computation in vector inner products. This work mainly explores three block quantisation arithmetics on LLMs: BM, BFP and BL.
Block Minifloat (BM) shares a \(B\)-bit exponent bias (Fox et al., 2021). This representation achieves high precision and high range at the same time, at the cost of a larger quantisation error at medium values than standard floating point. This is potentially amenable to values in a multimodal distribution, where values close to a peak can be efficiently represented in a block.
Figure 2: An illustration of different quantisation methods considered in this work: MiniFloat (Sun et al., 2019) and Denormalised MiniFloat (DMF), Block MiniFloat (BM) (Fox et al., 2021), Block Floating-Point (BFP) (Darvish Rouhani et al., 2020) and Block Logarithm (BL).
Block Floating-Point (BFP) shares an \(E\)-bit exponent. This shared exponent bounds the range in the block and is amenable to values with small block variances. Block Logarithm (BL) sets the mantissa in BM to 1 and shares a \(B\)-bit exponent bias, resulting in values that are powers of two. This contrasts with BFP and is amenable to values with large dynamic ranges.
All these quantisation methods are non-linear and thus can be useful tools to address the _scaling offsets_ phenomenon depicted in Figure 1. Moreover, the hyper-parameter block size allows for flexible quantisation granularity, ranging from layer-wise, tensor-wise, and channel-wise, to slice-wise (a slice along the token/channel vector).
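A minimal BFP quantiser in the spirit of Figure 2 might be sketched as below; it assumes blocks are contiguous along the last axis, while BM and BL would analogously share an exponent bias (with BL keeping a single mantissa bit).

```python
import numpy as np

def bfp_quantise(x, block_size=16, M=5):
    """Block Floating-Point sketch: an 8-bit exponent shared per block plus a
    sign and M mantissa bits per element (M=5 gives the W6A6 configuration)."""
    x = np.asarray(x, dtype=np.float64)
    blocks = x.reshape(-1, block_size)
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    shared_e = np.floor(np.log2(np.where(max_mag > 0, max_mag, 1.0)))
    step = 2.0 ** (shared_e - M + 1)        # fixed-point grid in each block
    q = np.clip(np.round(blocks / step), -(2 ** M - 1), 2 ** M - 1) * step
    return q.reshape(x.shape)

v = np.r_[0.1 * np.random.randn(15), 50.0]  # one outlier dominates the block
print(bfp_quantise(v))                      # inliers collapse to a coarse grid
```

The usage line makes the _scaling offsets_ problem visible: a single outlier in a block coarsens the shared grid for every other value, which is why the block size, i.e., the quantisation granularity, matters.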
### Arithmetic and Memory Densities
Reducing model size is not the only advantage of quantisation; it also simplifies the computation, thereby accelerating inference. We evaluate quantisation arithmetics using adopted memory and arithmetic densities [1]. We define memory density as the reciprocal of the size of the activation and weight data in a model, and the arithmetic density as the reciprocal of the area/the number of Look-Up-Tables (LUTs) to synthesise a multiply-accumulate (MAC) unit, which serves as the basic cell for matrix multiplication in custom inference circuits. An efficient quantisation method should make a good trade-off among task accuracy, memory density, and arithmetic density. We implemented MAC units with different above-mentioned arithmetics in FPGAs to obtain the number of LUTs. A detailed description of this procedure can be found in Appendix D.
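As a worked example of the memory-density bookkeeping (our reconstruction, though it reproduces the Mem column of Table 3): with block sharing, the average bits per element equal the per-element width plus the shared field amortised over the block size.

```python
def memory_density(element_bits, shared_bits=0, block_size=1, fp32_bits=32):
    # average bits per element = element bits + shared field / block size
    return fp32_bits / (element_bits + shared_bits / block_size)

configs = {
    "fixed-point W8A8": (8,),
    "BM / BL W8A8 (8-bit shared field, block 16)": (8, 8, 16),
    "BFP W6A6": (6, 8, 16),
    "BFP W4A4": (4, 8, 16),
}
for label, cfg in configs.items():
    print(f"{label}: {memory_density(*cfg):.1f}x")
# -> 4.0x, 3.8x, 4.9x, 7.1x, matching the Mem column of Table 3
```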
### Quantisation Search
Previous works (Dong et al., 2019; Wang et al., 2019) observed that the layers in CNNs exhibit varying tolerance, or "sensitivity", to quantisation; we also notice this phenomenon in LLMs. The crucial aspect is identifying the layers that are sensitive and determining tailored quantisation configurations. To achieve this, we apply the Tree-structured Parzen Estimator (TPE) (Bergstra et al., 2011) to conduct a fine-grained search for quantisation precision multiple times and analyse the statistics inherent in the quantised models that recover more accuracy. Our search space is constructed on a per-tensor basis, allowing each input tensor or weight tensor in (1)-(8) (see Algorithm 2) to have its own precision. The search space increases exponentially as the layer count increases. We leverage accuracy and memory density to design the objective function: \(O_{f}=acc+\alpha\cdot mem\). Here \(O_{f}\), \(acc\), and \(mem\) represent the objective function, accuracy, and memory density of the searched quantised models, respectively. The constant \(\alpha\) is used to balance \(acc\) and \(mem\). To determine the \(\alpha\) for a specific search, we initially set \(\alpha\) to 1.0 and perform the search while recording the values of \((acc,mem)\) until convergence. The final value of \(\alpha\) is determined as \(\frac{acc_{c}}{mem_{c}}\), where \((acc_{c},mem_{c})\) represents the converged values. Detailed search parameters are in Appendix B.
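A minimal sketch of this search loop is shown below, assuming Optuna as the TPE implementation; the tensor names and the `evaluate` surrogate are toy placeholders for the real per-tensor search space and quantised-model evaluation.

```python
import optuna

tensor_names = [f"layer{i}.{t}" for i in range(2) for t in ("w", "x")]  # toy
WIDTHS = [4, 6, 8]
ALPHA = 1.0  # re-estimated as acc_c / mem_c after a converged pilot run

def evaluate(config):
    # Placeholder for quantising the model under `config` and measuring
    # accuracy and memory density; this toy surrogate just favours 6-bit.
    acc = sum(1.0 - abs(b - 6) / 8 for b in config.values()) / len(config)
    mem = 32.0 / (sum(config.values()) / len(config))
    return acc, mem

def objective(trial):
    config = {n: trial.suggest_categorical(n, WIDTHS) for n in tensor_names}
    acc, mem = evaluate(config)
    return acc + ALPHA * mem            # the paper's O_f = acc + alpha * mem

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```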
## 4 Evaluation
We conducted a comprehensive set of experiments to identify the key factors influencing the performance of sub-8-bit LLMs. We begin with a language modelling task to eliminate less promising quantisation methods (Section 4.2), and then run the promising ones on downstream tasks. For the tasks that proved challenging even for FP32 models, we resort to fine-tuning. Additionally, we conducted a mixed-precision search on two tasks where the quantised 4-bit models struggle. The results of this search provide insights into how to further refine quantisation at the tensor level.
### Experiment setup
**Baselines.** We compare our approach with four baselines: 8-bit plain fixed-point quantisation, LLM.int8() (Dettmers et al., 2022), GPTQ (Frantar et al., 2022), and SmoothQuant (Xiao et al., 2022). We amend SmoothQuant's source code to ensure its consistency with their paper (see Appendix B)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Config & \(E\) & \(M\) & \(B\) \\ \hline Fixed-point & W8A8 & - & 7 & - \\ MiniFloat & W8A8 & 4 & 3 & - \\ DMF & W8A8 & 4 & 3 & - \\ BFP & W8A8 & 8 & 7 & - \\ BFP & W6A6 & 8 & 5 & - \\ BFP & W4A4 & 8 & 3 & - \\ BM & W8A8 & 4 & 3 & 8 \\ BL & W8A8 & 7 & - & 8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The quantisation configuration used in the following sections, where \(E\), \(M\), and \(B\) are the bit-widths of the exponent (shared exponent), mantissa, and bias (shared bias), respectively.
and add this amended version (referred to as "SmoothQuant-c") to the result table.
**Quantisation configuration.** Table 2 clarifies the quantisation configuration used in the following sections, where \(E\), \(M\), and \(B\) are the bit-widths of the exponent (shared exponent), mantissa, and bias (shared bias), respectively. All these representations include a 1-bit sign bit. The block size of block-based methods is set to \([1,16]\) for both the weight and activation matrix (a slice along the matrix row in Algorithm 2) unless otherwise specified.
**Models and datasets.** We choose the representative OPT (Zhang et al., 2022) family, and evaluate on Wikitext2 (Merity et al., 2016), ARC(easy) (Clark et al., 2018), LAMBADA (Paperno et al., 2016), PIQA (Bisk et al., 2020), COPA (Roemmele et al., 2011), QNLI (Wang et al., 2018), SST2 (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), and COLA (Warstadt et al., 2019). To demonstrate the generalizability of our method, we also report the Wikitext2 perplexity of quantised LLaMA models (Touvron et al., 2023; Chiang et al., 2023; Taori et al., 2023). Following prior work (Zhang et al., 2022; Xiao et al., 2022), we use lm-eval-harness (Gao et al., 2021) to evaluate models on downstream tasks in the context of zero-shot prompting.
### Zero-shot PTQ on Wikitext2 and downstream tasks
In this section we present our results in a setup we call zero-shot Post-Training-Quantisation (PTQ), which was also adopted by prior work on LLM quantisation (Dettmers et al., 2022; Frantar et al., 2022; Xiao et al., 2022). In this approach, we take a pre-trained OPT model from Huggingface, quantise it, and apply it to Wikitext2 to calculate perplexity, and to the eight downstream tasks shortlisted in Section 4.1 to calculate accuracy.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & FP32 & LLM.int8() & W6A6 BFP \\ \hline LLaMA-7B & 5.79 & 5.83 (+0.04) & 5.83 (+0.04) \\ Vicuna-7B & 7.06 & **7.07 (+0.01)** & 7.08 (+0.02) \\ Alpaca-7B & 7.01 & 7.02 (+0.01) & 7.02 (+0.01) \\ LLaMA-13B & 5.17 & 5.22 (+0.05) & **5.20 (+0.03)** \\ Vicuna-v1.5-13B & 6.13 & 6.16 (+0.03) & 6.16 (+0.03) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Perplexity (\(\downarrow\)) values of the LLaMA family quantised by W6A6 BFP. We compare our method with FP32 and LLM.int8() and find that our method achieves nearly lossless perplexity on Wikitext2. We exclude GPTQ and SmoothQuant-c from this table because they have obvious perplexity increases, larger than 0.2 and 5.0 respectively.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Config} & \multicolumn{5}{c}{Perplexity (\(\downarrow\))} & \multicolumn{2}{c}{_Hardware metrics_} \\ \cline{3-9} & & 125M & 350M & 1.3B & 2.7B & 6.7B & Mem \(\uparrow\) & Arith \(\uparrow\) \\ \hline FP32 & - & 27.65 & 22.00 & 14.62 & 12.47 & 10.86 & 1\(\times\) & 1\(\times\) \\ \hline LLM.int8() & W8A8\({}^{\dagger}\) & 27.72 & 22.03 & 14.64 & 12.49 & 10.86 & 2\(\times\) & \(<\) 7.7\(\times\) \\ GPTQ & W4\({}^{*}\) & 31.12 & 24.24 & 15.47 & 12.87 & 11.39 & \(<\) 1.6\(\times\) & - \\ SmoothQuant & W8A8 & \_\(\ddagger\) & \_\(\ddagger\) & 14.62 & 12.50 & 10.85 & \(<\) 4\(\times\) & \(<\) 7.7\(\times\) \\ SmoothQuant-c & W8A8 & \_\(\ddagger\) & \_\(\ddagger\) & 17.97 & 26.88 & 42.90 & 4\(\times\) & 7.7\(\times\) \\ \hline Fixed-point & W8A8 & 275 & 117 & 1.78E4 & 7.81E3 & 3.77E3 & 4\(\times\) & 7.7\(\times\) \\ MiniFloat & W8A8 & 28.16 & 22.24 & 15.03 & 12.73 & 10.99 & 4\(\times\) & 17.4\(\times\) \\ DMF & W8A8 & 30.41 & 23.89 & 18.08 & 14.55 & 11.95 & 4\(\times\) & 17.4\(\times\) \\ BFP & W6A6 & **28.27** & **22.22** & **15.08** & **12.54** & **10.90** & **4.9\(\times\)** & **19.2\(\times\)** \\ BFP & W4A4 & 41.94 & 33.98 & 24.70 & 19.34 & 13.59 & 7.1\(\times\) & 37.3\(\times\) \\ BM & W8A8 & 5.6E3 & 2.7E4 & 1.17E4 & 1.33E4 & 8.61E3 & 3.8\(\times\) & 14.4\(\times\) \\ BL & W8A8 & 780 & 1.26E3 & 323 & 950 & 289 & 3.8\(\times\) & 16.1\(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Perplexity (\(\downarrow\)) values with zero-shot Post-Training-Quantisation (PTQ) on WikiText2; this means we directly quantise the pre-trained model and apply it on WikiText2. Mem and Arith represent Memory and Arithmetic density respectively. DMF, BM, BFP and BL represent Denormalised MiniFloat, Block Minifloat, Block Floating Point and Block Logarithm respectively. SmoothQuant-c is our improved implementation where the two activation matrix multiplications are now also quantised. \({}^{\dagger}\) means the inlier matrix multiplications are calculated in 8-bit fixed-point, and outliers are calculated in FP16. \({}^{*}\) means the weights of GPTQ are kept in FP32. \({}^{\ddagger}\) means the SmoothQuant repository does not include the weight scaling matrices for 125M and 350M. We **highlight** the best block-based quantisation arithmetic, 6-bit BFP, considering perplexity, memory density, and arithmetic density together.
The zero-shot PTQ setup is particularly advantageous in scenarios where LLMs lack prior knowledge, as it eliminates the need for downstream task fine-tuning and Training-After-Quantisation (TAQ).
**Perplexity on Wikitext2.** Table 3 compares our results with the baselines in terms of perplexity, memory density, and arithmetic density. Similar to prior work (Dettmers et al., 2022; Xiao et al., 2022), plain fixed-point quantisation performs poorly. In contrast, non-linear arithmetic, such as MiniFloat, yields a significantly better perplexity at a similar memory density. MiniFloat yields slightly better results than DMF, indicating that the 2\(\times\) higher range is more important than precision in this context.
Block-based quantisation exhibits inconsistent performance on Wikitext2. A noteworthy result is that our 6-bit BFP achieves higher memory density, higher arithmetic density, and lower perplexity than the prior art GPTQ and SmoothQuant-c without requiring data calibration. BM and BL perform poorly compared to BFP. BM was originally proposed in the context of Quantisation-Aware-Training (QAT), whereas our evaluation is based on PTQ. Without retraining, the 3-bit mantissa of BM and the 1-bit mantissa of BL may be the reason for the poor perplexity.
Table 4 shows the perplexity of W6A6 BFP on the LLaMA family, including LLaMA-7B/-13B (Touvron et al., 2023), Vicuna-7B (Zheng et al., 2023), Alpaca-7B (Taori et al., 2023), and Vicuna-v1.5-13B (Chiang et al., 2023), with FP32 and LLM.int8() as baselines. We observe that 6-bit BFP still achieves nearly lossless perplexity on these models, verifying the efficacy of our method across model architectures.
**Accuracy on downstream tasks.** We exclude fixed-point, DMF, BM, and BL from downstream task evaluation due to their poor language modelling performance. Table 5 reports the mean accuracy on ARC (easy), COPA, LAMBADA, PIQA, and SST2. The results of QNLI, MRPC, and COLA are not included in this table as even FP32 LLMs exhibited poor accuracy, close to random guessing. A plot depicting how these methods match FP32 accuracy as the model scales up and a complete result table are in Appendix E.
Besides LLM.int8() and SmoothQuant-c, we also report a 4-bit version of LLM.int8() (referred to as LLM.int4()) from Dettmers (2023) on downstream tasks. We observe that 6-bit BFP achieves nearly lossless accuracy, below FP32 and LLM.int8(), and above SmoothQuant-c and LLM.int4(). Note that 6-bit BFP has the highest memory density and arithmetic density among these methods. The 4-bit BFP suffers severe accuracy degradation because its shared exponent and 3-bit mantissa cause large quantisation errors.
Overall, we make the following observations:
* Fixed-point representation performs inadequately due to the inability of linear quantisation to address the scaling offset issue caused by varying variances.
* LLMs have different tolerances to block-based quantisation.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Config} & \multicolumn{5}{c}{Mean accuracy (\(\uparrow\),\%)} \\ \cline{3-7} & & 125M & 350M & 1.3B & 2.7B & 6.7B \\ \hline Float32 & - & 52.7 & 57.5 & 69.6 & 65.4 & 73.4 \\ \hline LLM.int8() & W8A8 & 52.5 (-0.2) & 58.3 (+0.8) & 69.2 (-0.4) & 65.3 (-0.1) & 73.5 (+0.1) \\ LLM.int4() & W4A4 & 50.8 (-1.9) & 55.8 (-1.7) & 67.0 (-2.6) & 64.5 (-0.9) & 72.5 (-0.9) \\ SmoothQuant-c & W8A8 & - & - & 67.2 (-2.4) & 65.2 (-0.2) & 72.2 (-1.2) \\ \hline MiniFloat & W8A8 & 52.1 (-0.6) & 55.1 (-2.4) & 64.7 (-4.9) & 65.7 (+0.3) & 70.5 (-2.9) \\ BFP & W4A4 & 47.8 (-4.9) & 51.7 (-5.8) & 57.2 (-12.4) & 55.7 (-9.7) & 67.2 (-6.2) \\ BFP & W5A5 & 51.1 (-1.6) & 56.8 (-0.7) & 65.5 (-4.1) & 64.6 (-0.8) & 72.0 (-1.4) \\ BFP & W6A6 & **52.6 (-0.1)** & **57.6 (+0.1)** & **67.8 (-1.8)** & **65.5 (+0.1)** & **72.9 (-0.5)** \\ BFP & W8A8 & 52.8 (+0.1) & 57.6 (+0.2) & 69.1 (-0.5) & 65.2 (-0.2) & 73.1 (-0.3) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mean accuracy (\(\uparrow,\%\)) values with zero-shot prompting PTQ on ARC (easy), COPA, LAMBADA, PIQA, and SST2; this means we directly quantise the pre-trained model and benchmark on these downstream tasks using zero-shot prompting. We **highlight** 6-bit BFP, which also achieves an accuracy close to FP32 on these tasks.
BM and BL exhibit subpar performance compared to BFP, indicating that non-linear quantisation still needs a sufficient mantissa length to capture the learned weight distribution, or that retraining may be required.
* BFP strikes a good balance in the trade-off between range and resolution. Our nearly-lossless 6-bit LLMs, without data calibration/re-training, outperform prior art methods in terms of perplexity (accuracy), memory density, and arithmetic density.
We also observe that sub-6-bit BFP has a severe accuracy drop. To address this problem, we further investigate two approaches for improving the accuracy of 4-bit LLMs.
### 4-bit LLMs via fine-tuning
Previous studies (Brown et al., 2020; Zhang et al., 2022) reported FP32 LLMs' low accuracy on several downstream tasks in the context of zero-shot prompting. In our experiments, OPTs also exhibit poor accuracy on QNLI, MRPC, and COLA. Fine-tuning language models on downstream tasks has proven helpful for improving accuracy (Devlin et al., 2019). We explore the fine-tuning and quantisation of LLMs on downstream tasks.
There are two stages where quantisation can be applied. LLMs are typically pre-trained in FP32. The first option is to continue fine-tuning the FP32 model on downstream tasks and subsequently quantise this fine-tuned FP32 model. We refer to this setup as _PTQ on fine-tuned FP32_. The second option is to quantise the pre-trained FP32 model and retrain this quantised model on downstream tasks, which we refer to as _TAQ on downstream tasks_.
We compare these two cases on four downstream tasks (SST2, QNLI, MRPC, and COLA) that zero-shot prompting struggles to handle. The result table is in Appendix F. We observe that:
* Both options effectively improve accuracy, enabling nearly lossless downstream accuracy even if 4-bit BFP is applied.
* TAQ on downstream tasks reaches a slightly better accuracy (a gain of 0.2% on average) than PTQ on fine-tuned FP32 given the same bit-width. However, the former is harder to optimize through backpropagation because of the forward quantisation error and the Straight-Through Estimator (STE) (Bengio et al., 2013) used in backpropagation, as sketched below.
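A minimal PyTorch sketch of this TAQ path is shown below; it reuses the BFP rounding model sketched earlier and is illustrative rather than our training code. The straight-through estimator treats the non-differentiable rounding as the identity in the backward pass:

```python
import torch

class BFPFakeQuant(torch.autograd.Function):
    """Forward: round onto the block's BFP grid. Backward: straight-through."""
    @staticmethod
    def forward(ctx, x, step, qmax):
        return torch.clamp(torch.round(x / step), -qmax, qmax) * step

    @staticmethod
    def backward(ctx, grad_out):
        # STE: approximate d(quantise)/dx by 1 so gradients pass through.
        return grad_out, None, None

def fake_quantise(x, m_bits=5, block=16):
    xb = x.reshape(-1, block)
    max_abs = xb.abs().amax(dim=1, keepdim=True).clamp_min(1e-38)
    exp = torch.floor(torch.log2(max_abs)) + 1        # shared block exponent
    step = torch.pow(2.0, exp - m_bits)
    return BFPFakeQuant.apply(xb, step, 2 ** m_bits - 1).reshape(x.shape)

w = torch.randn(4, 16, requires_grad=True)
fake_quantise(w).pow(2).sum().backward()   # gradients reach the FP32 weights
print(w.grad.abs().mean())
```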
### 4-bit LLMs via mixed precision
Currently, our block-based quantisation uses a uniform configuration, where the block size and bit-width remain constant across the entire model. What if we push the barrier further? Existing works on CNN compression have explored mixed-precision quantisation Wu et al. (2018); Wang et al. (2019), thereby increasing memory density. This subsection lowers the block size granularity and the bit-width granularity to the tensor level to demonstrate uncharted possibilities of aggressive LLM quantisation.
**Variation-aware block size.** By comparing the activation variance and weight variance in Figure 1, we observe that the weight variance remains stable and much smaller, suggesting that we can increase the weight block size while decreasing the activation block size. This approach enhances accuracy while maintaining memory density.
Figure 3: The bit width distribution of \(\mathbf{Q}\) in Line 6, Algorithm 2 from 2688 searches. We identify the layers less tolerant to aggressive quantisation in OPT-2.7B. For example, layers 18, 25 and 30 often need more bits than other layers. Keeping these layers in relatively high precision recovers the accuracy from 36.2% to 61.3% without decreasing the memory density, equivalent to a 4.3-bit OPT-2.7B on average.
**Mixed-precision.** We repeat the quantisation search described in Section 3.3 on downstream tasks and filter out less promising quantisation configurations using an accuracy threshold and a memory density threshold. Each time we start the TPE search with a different random seed, so the distribution of filtered quantisation configurations exposes the sensitivity of the searched tensors in LLMs. An example of a mixed-precision search result is presented in Figure 3. We find that _certain layers were consistently assigned higher precision, while others tended to have lower bit-widths_. By preserving high precision for these sensitive layers, we recovered the 4-bit LLM accuracy _from 36.2% to 61.3%_ on LAMBADA without compromising memory density. The memory density of the searched OPT-2.7B is 7.42\(\times\), which is slightly better than the uniform 4-bit BFP's 7.11\(\times\). Figure 7 in Appendix G compares uniform 4-bit BFP and mixed-precision 4-bit BFP on LAMBADA and ARC (easy), highlighting the effectiveness of our mixed-precision quantisation. We include more tasks and model sizes in Appendix G. In conclusion, variance-aware block size and mixed precision allow aggressive quantisation beyond 6-bit without fine-tuning.
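The per-layer statistics behind Figure 3 can be aggregated as in the sketch below (ours; the trial log is synthetic and the `acc_min`/`mem_min` thresholds are hypothetical):

```python
import numpy as np

def sensitive_layers(trials, acc_min, mem_min):
    """Average the searched bit-widths of configurations passing both
    thresholds; layers with the highest means are the quantisation-sensitive
    ones (e.g. layers 18, 25 and 30 for OPT-2.7B in Figure 3)."""
    kept = np.array([b for b, acc, mem in trials
                     if acc >= acc_min and mem >= mem_min])
    mean_bits = kept.mean(axis=0)
    return np.argsort(-mean_bits), mean_bits

# Hypothetical search log: (per-layer bits, accuracy, memory density) triples.
rng = np.random.default_rng(0)
trials = [(rng.integers(2, 9, size=32), rng.uniform(0.3, 0.7),
           rng.uniform(6.5, 7.5)) for _ in range(2688)]
order, mean_bits = sensitive_layers(trials, acc_min=0.55, mem_min=7.0)
print("most sensitive layers:", order[:3])
```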
## 5 Conclusion
This study focuses on addressing the scaling offset issue in LLMs and provides valuable insights into the quantisation of LLMs. Through extensive experimentation, we identify key factors that significantly impact LLM quantisation. When aiming for quantisation at or above 6-bit, BFP surpasses previous methods in terms of accuracy, memory density, and arithmetic density, without requiring data calibration or training. Moreover, we demonstrate that fine-tuning or mixed-precision techniques enable 4-bit LLMs on downstream tasks. Fine-tuning is suitable for GPUs, and mixed precision has the potential to shift the inference platform from GPUs to cost-effective ASICs. Our findings contribute to advancing the field of LLM quantisation and provide practical guidance for achieving good quantisation performance.
## Limitations
Different from many prior works in LLM quantisation that focus on integers, our work puts particular emphasis on minifloat variants. However, the potential gains of our work have not yet manifested in GPU systems due to a lack of CUDA kernel implementations. The implementation of some quantisation methods proposed in this paper requires specialised kernels and hardware; however, a major focus of our work is to _explore potential designs for next-generation hardware to run LLM inference_. Another limitation is that our search algorithm does not include arithmetic density due to a lack of hardware models for LLMs. We ran a mixed-precision search with hardware models on a small transformer; the result, included in Appendix G, is promising. We leave a sufficient study of hardware-aware LLM quantisation as future work.
|
2302.12714 | Russel and Rao Coefficient is a Suitable Substitute for Dice Coefficient
in Studying Restriction Mapped Genetic Distances of Escherichia coli | Escherichia coli is one of many bacterial inhabitants found in human
intestines and any adaptation as a result of mutations may affect its host. A
commonly used technique employed to study these mutations is Restriction
Fragment Length Polymorphism (RFLP) and is proceeded with a suitable distance
coefficient to quantify genetic differences between 2 samples. Dice is
considered a suitable distance coefficient in RFLP analyses, while others were
left unstudied in its suitability for use. Hence, this study aims to identify
substitutes for Dice. Experimental data was obtained by subculturing E. coli
for 72 passages in 8 different adaptation media and RFLP profiles analyzed
using 20 distance coefficients. Our results suggest that Dennis, Fossum,
Matching and Russel and Rao to work as well or better than Dice. Dennis,
Matching and Fossum coefficients had highest discriminatory abilities but are
limited by the lack of upper or lower boundaries. Russel and Rao coefficient is
highly correlated with Dice coefficient (r2 = 0.998), with both higher and
lower boundaries, suggesting that Russel and Rao coefficient can be used to
substitute Dice coefficient in studying genetic distances in E. coli. | Zhu En Chay, Chin How Lee, Kun Cheng Lee, Jack SH Oon, Maurice HT Ling | 2023-02-19T02:39:00Z | http://arxiv.org/abs/2302.12714v1 | Russel and Rao Coefficient is a Suitable Substitute for Dice Coefficient in Studying Restriction Mapped Genetic Distances of _Escherichia coli_
###### Abstract
_Escherichia coli_ is one of many bacterial inhabitants found in human intestines, and any adaptation as a result of mutations may affect its host. A commonly used technique employed to study these mutations is Restriction Fragment Length Polymorphism (RFLP), which is followed by a suitable distance coefficient to quantify genetic differences between 2 samples. Dice is considered a suitable distance coefficient in RFLP analyses, while others were left unstudied regarding their suitability for use. Hence, this study aims to identify substitutes for Dice. Experimental data was obtained by subculturing _E. coli_ for 72 passages in 8 different adaptation media, and RFLP profiles were analyzed using 20 distance coefficients. Our results suggest that Dennis, Fossum, Matching, and Russel and Rao work as well as or better than Dice. The Dennis, Matching and Fossum coefficients had the highest discriminatory abilities but are limited by the lack of upper or lower boundaries. The Russel and Rao coefficient is highly correlated with the Dice coefficient (r\({}^{2}\) = 0.998), with both upper and lower boundaries, suggesting that the Russel and Rao coefficient can be used to substitute the Dice coefficient in studying genetic distances in _E. coli_.
1School of Chemical and Life Sciences
Singapore Polytechnic, Singapore
2Department of Zoology
The University of Melbourne, Australia
## 1 Introduction
_Escherichia coli_ is a Gram-negative bacterial species that inhabits the gastrointestinal tract of humans (Foley et al., 2009) and is one of the most thoroughly studied organisms (Welch et al., 2002). It is a diverse species where some _E. coli_ strains live as harmless bacteria (Welch et al., 2002), while other strains like O157:H7 can cause a wide range of intestinal and extraintestinal diseases (MacDonald et al., 1988; Clermont et al., 2007). _E. coli_ has been identified as one of the major causes of bacterial foodborne infections and, due to their significant impact on human health (MacDonald et al., 1988), many molecular methods, including restriction endonuclease analysis, polymerase chain reaction and DNA sequence polymorphism, were derived to study them (Foley et al., 2009). _E. coli_ may be genetically altered by the diets of its human host, as it has been suggested that _E. coli_ has a higher prevalence of antibiotic resistance (Silva et al., 2007). Oral antibiotics consumption caused the emergence of antibiotic-resistant strains of _E. coli_ (Bourque et al., 1980), and infections caused by antimicrobial-resistant bacteria are associated with substantial morbidity and mortality (O'Fallon et al., 2009).
Nucleic acid fingerprinting comprises analysis techniques employed to differentiate between DNA samples based on the DNA band patterns generated after enzymatic amplification of variable regions and analyzed by gel electrophoresis, with or without restriction digestion (Gilbride et al., 2006). This project utilizes the RFLP method to study the DNA bands.
A distance coefficient quantifies comparable features (similarities and differences) of two given vectors between two objects, where the collective differences or dissimilarity can be denoted as a distance measure, seen as a scalar measure (Basilevsky, 1983). There are different ways of computing distance coefficients, each differing from the others as the emphasis given to either the intersecting area or the non-intersecting regions may differ. Most distance coefficients have upper and lower boundaries, distinct to the mathematical equation used. If defined, the lower boundary of a coefficient denotes complete difference, whereas the upper boundary suggests complete similarity. Values within the 2 boundaries are scaled to determine similarities and differences. The values given by each distance coefficient vary (Duarte et al., 1999). Thirty-five of these distance coefficients were compiled (Ling, 2010).
Genetic distance refers to the genetic difference and similarity between and within species, which is often used for classification and evolutionary studies involving humans, mammals, fruit flies and mosquitoes with the involvement of statistical models (Wang et al., 2001). A smaller genetic distance indicates a closer genetic relationship, while a larger genetic distance indicates a weaker genetic relationship in comparison studies, useful for reconstructing phylogenetic relationships (Shriver et al., 1995). The commonly discussed genetic distance measures are Nei's minimum genetic distance and Nei's standard genetic distance. These two genetic distance measures are nonlinear, with time or large mutation rates considered as factors. The linearity of Nei Li genetic distances is factored by the frequency of mutations (Nei & Li, 1979; Shriver et al., 1995). Some other proposed genetic distance measures include the average square distance (ASD) (Goldstein et al., 1995a), the Delta Mu genetic distance (Goldstein et al., 1995b), the stepwise weighted genetic distance measure (_Dsw_) (Shriver et al., 1995), the kinship coefficient (_Dsf_) (Cavalli-Sforza & Bodmer, 1971) and the coancestry coefficient (_Theta_ (_Fst_)) (Reynolds et al., 1983).
Russel and Rao is a distance measure used for dichotomous variables (Hwang et al., 2001; Russel & Rao, 1940). It was previously studied for random amplified polymorphic DNA (RAPD), where it was concluded that Russel and Rao can only be used for specific instances due to its exclusion of negative co-occurrences (Coefficient D) from the numerator and their inclusion in the denominator (Meyer et al., 2004).
## 2 Objectives
This study aims to determine suitable distance coefficient measures from 20 of the 35 compiled measures (Ling, 2010) to study the genetic distance, at the genomic scale, of a sequenced strain of the human intestinal bacterium _Escherichia coli_ ATCC 8739. Since Dice is the only distance coefficient established as ideal for use in RFLP-based _E. coli_ genetic analysis, this study aims to identify other distance coefficients that are capable of substituting Dice. Restriction Fragment Length Polymorphism (RFLP) is employed in this study for being economical, fast and simple compared to other DNA fingerprinting methods (Xiao et al., 2006). The band patterns produced as a result of varying lengths of restriction fragments will be analyzed using statistical methods (Nei & Li, 1979).
Measurement errors tend to occur and hide differences between different RFLP studies, and statistical methods are used to normalize these errors (Evett et al., 1993). Our results demonstrate that the Russel and Rao coefficient can be used as a substitute for the Dice coefficient in studying genetic distances in _E. coli_.
## 3 Methodology
**Bacterial culture and PCR-RFLP DNA Fingerprinting.** _Escherichia coli_ ATCC 8739 was inoculated into 8 different treatment supplementations in Nutrient Broth [0.025% (w/v) as high MSG (H MSG); 0.0025% (w/v) as low MSG (L MSG); 0.025% (w/v) as high benzoic acid (H BA); 0.0025% (w/v) as low benzoic acid (L BA); 1% (w/v) NaCl as high salt (H SALT), Nutrient Broth as low salt (L SALT); H MSG, H BA and H SALT as high combination (H COMB); L MSG and L BA as low combination (L COMB)] and cultured at 37\({}^{\circ}\)C. Subculture was performed every 2 to 3 days from 1% of the previous culture. Genomic DNA was extracted from the treatment cultures at every 12\({}^{\text{th}}\) passage for Polymerase Chain Reaction (PCR) and Restriction Fragment Length Polymorphism (RFLP). A total of 72 subcultures were carried out, resulting in 6 time-points. A total volume of 50\(\upmu\)l for each PCR reaction was prepared according to the supplier's specification (New England Biolabs, Inc.), consisting of 1 of the 3 primers: Primer 5, CgCgCTggC; Primer 6, gCTggCggC; and Primer 7, CAggCggCg. Each of the primers acts as both forward and reverse primer. The PCR reaction was carried out (Hybaid Limited, PCR Express) with the cycling condition of initial denaturation at 95\({}^{\circ}\)C for 10 minutes; 35 cycles of amplification at 95\({}^{\circ}\)C for 1 minute, 27\({}^{\circ}\)C for 1 minute, 72\({}^{\circ}\)C for 3 minutes; followed by a final extension at 72\({}^{\circ}\)C for 10 minutes. The product was digested with a unit of restriction endonuclease (TaqI, HinfI or MspI) for 16 hours. Both PCR and RFLP products were visualized on 2% (w/v) agarose gel with 1X GelRed. A total of 12 agarose gels per time-point (3 PCR gels and 3 RFLP gels for each PCR gel) resulted in a total of 72 agarose gels for the 6 time-points under study.
**Determination of Suitable Distance Coefficients.** From the experimental data obtained from RFLP, the bands of each lane were measured and retention factor (Rf) values obtained. Lanes in a gel were compared with each other using 20 distance coefficients (Ling, 2010). Results obtained from the 20 distance coefficient measures were then compared against the other distance measures using analysis of variance (ANOVA), the coefficient of determination (r\({}^{2}\)), and the arithmetic mean, standard deviation (SD) and Coefficient of Variance (COV) at percentile ranges 0 - 10, 10 - 90 and 90 - 100 for the identification of suitable distance coefficients for use in RFLP analysis of _E. coli_. There are a total of 190 pairwise combinations, arising from choosing 2 of the 20 distance coefficients. As there are 72 gels of data, 190 pairwise combinations were done for every gel, giving 13,680 possible comparisons. Each gel has 8 different treatments, and comparison of any 2 treatments gives 28 different combinations. Each distance coefficient has 2,016 analyses, derived from the 28 combinations over 72 gels. However, some comparisons were excluded if there were no observable bands, such as a DNA smear. A total of 383,040 (28 x 13,680) comparison studies were to be done. A suitable distance coefficient should cover a broad spectrum of data, comprising extremely large and small data values, with the ability to discriminate small data changes and differences.
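As an illustration of this comparison step, the following sketch (ours; the lane encodings are hypothetical) computes two of the studied coefficients from binary band presence/absence vectors over a common set of binned Rf positions, using the A, B, C, D convention of Figure 2:

```python
import numpy as np

def venn_counts(x, y):
    """A: bands in both lanes; B: only in x; C: only in y; D: in neither
    (following the Venn-diagram convention of Figure 2)."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    return (int(np.sum(x & y)), int(np.sum(x & ~y)),
            int(np.sum(~x & y)), int(np.sum(~x & ~y)))

def dice(x, y):
    a, b, c, _ = venn_counts(x, y)
    return 2 * a / ((a + b) + (a + c))

def russel_rao(x, y):
    a, b, c, d = venn_counts(x, y)
    return a / (a + b + c + d)

# Hypothetical lanes: presence/absence of bands over 8 binned Rf positions.
lane1 = [1, 1, 0, 1, 0, 1, 1, 0]
lane2 = [1, 0, 0, 1, 0, 1, 1, 1]
print("Dice:", dice(lane1, lane2), "Russel & Rao:", russel_rao(lane1, lane2))
```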
## 4 Results and Discussion
A one-way ANOVA test was first run to detect differences in the results of the 20 distance coefficients (Table 1). A p-value of less than 0.001 demonstrates statistically significant differences among the 20 distance coefficients, suggesting that the distance coefficients' result outcomes are incomparable. To study this incomparability, an analysis of the COV values in the 3 percentile ranges was done to identify the ideal distance coefficient, and the r\({}^{2}\) value for each test was obtained to study similarities.
The mean, SD and COV of the 20 distance coefficients in Table 2 are arranged in a descending order of the number of percentile-range wins against Dice. A suitable distance coefficient can be deduced by observing the coefficient of variance (COV) in the 3 percentile ranges. A high COV in the 0 - 10 percentile range corresponds to a distance coefficient with a high capability to detect and discriminate low values of genetic distances, and a low COV the opposite. Meanwhile, a high COV in the 90 - 100 percentile range indicates a distance coefficient with good detection and distinction capacity for high values, and a low COV the opposite. A distance coefficient with a high COV in the 10 - 90 percentile range demonstrates a high capability to cover the majority of values with acceptable discriminative ability, and a low COV the opposite.
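The percentile-range statistics behind Table 2 can be computed as in the sketch below (ours; the uniform random values are placeholders for the pairwise results of one distance coefficient):

```python
import numpy as np

def cov_by_percentile(values, ranges=((0, 10), (10, 90), (90, 100))):
    """Mean, SD and COV (SD/mean) of `values` within each percentile range."""
    v = np.asarray(values, dtype=float)
    out = {}
    for lo, hi in ranges:
        lo_v, hi_v = np.percentile(v, lo), np.percentile(v, hi)
        seg = v[(v >= lo_v) & (v <= hi_v)]
        mean, sd = seg.mean(), seg.std(ddof=1)
        out[f"{lo}-{hi}"] = (mean, sd, sd / mean if mean else float("nan"))
    return out

rng = np.random.default_rng(1)
print(cov_by_percentile(rng.uniform(0.0, 1.0, size=2016)))
```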
The Dice distance coefficient \(\mathit{Dice}=\frac{2A}{(A+B)+(A+C)}\) (Dice, 1945) is identical to the Nei Li distance coefficient \(\mathit{Nei}\ \mathit{Li}=\frac{2n_{xy}}{n_{x}+n_{y}}\) (where \(n_{x}\) and \(n_{y}\) are the numbers of fragments in populations X and Y respectively, and \(n_{xy}\) is the number of fragments shared by the two populations) (Nei & Li, 1979), and Nei Li was shown to be a reliable distance coefficient in a simulation study (Li, 1981). Since Nei Li (Nei & Li, 1979) has been identified as a reliable distance coefficient, it will be used to identify suitable distance coefficients.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline
**Source** & **Sum of Squares** & **D. f** & **Mean Square** & **F ratio** & **P - value** \\ \hline Between Groups & 55,205.4 & 19 & 2905.55 & 4405.23 & \textless{}0.001 \\ \hline Within Groups & 23,594.2 & 36318 & 0.659567 & & \\ \hline Total & 79,159.6 & 36337 & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of ANOVA of the 20 Distance Coefficients.
Figure 1: Flowchart of Methodology used in this study
Coefficients B, C and D in Dice are substituted with 0 and yield an upper boundary of 1, while substituting A and D gives a lower boundary of 0. The COV values of Dice (Dice, 1945) will be used as thresholds to identify distance coefficients that are similar to or better than Nei Li; Dice (Dice, 1945) will be used to represent Nei Li (Nei & Li, 1979) in this study. With this, the benchmarks for the percentile ranges \(0-10\), \(10-90\) and \(90-100\) are 0.286, 0.297 and 0.000 respectively.
Forbes (Forbes, 1907), Anderberg (Ling, 2010), Sokal and Sneath (Sokal & Sneath, 1963), Hamann (Hamann, 1961), Roger and Tanimoto (Roger & Tanimoto, 1960), McConnaughey (McConnaughey, 1964), Jaccard (Jaccard, 1908), Sokal and Michener (Sokal & Michener, 1958), Gower and Legendre (Gower & Legendre, 1986), Tulloss (Tulloss, 1997), Buser (Holliday et al., 2002), Sokal and Sneath 2 (Sokal & Sneath, 1963), Ochiai (Ochiai, 1957), Kulczynski 2 (Holliday et al., 2002) and Simpson (Fallaw, 1979) will not be efficient for use as distance coefficients in studying RFLP of _E. coli_ (Table 2), since their COV values do not surpass those of Dice in the 3 percentile ranges. The COV study suggests that the above-mentioned 15 distance coefficients have poorer discriminatory abilities than Dice (Dice, 1945). A suitable distance coefficient should possess higher COV values in all 3 percentile ranges than Dice (Dice, 1945). Dennis (Dennis, 1965), Matching (Dunn & Everitt, 1982), Fossum (Fossum, 1966) and Russel and Rao (Russel & Rao, 1940) were identified to work as well as or better than Dice (Dice, 1945) (Table 2). The upper and lower boundaries of Dennis (Dennis, 1965), Matching (Dunn & Everitt, 1982), Fossum (Fossum, 1966) and Russel and Rao (Russel & Rao, 1940) were studied.
### Dennis Coefficient
\[Denniscoefficient=\frac{(A\times D)-(B\times C)}{\sqrt{(A+B+C+D)(A+B)(A+C)}}\]
The Dennis (Dennis, 1965) equation has coefficients \(B\), \(C\) and \(D\) replaced with the value 0 when 2 data sets share exactly similar numbers, resulting in an upper boundary of 0. However, in the scenario of 2 datasets which share no similar data, a different conclusion of no lower boundary is observed. A relationship was observed: the Dennis coefficient decreases in value with an increasing number of data in the 2 datasets, due to coefficients B and C and the negative sign in the numerator.
Figure 2: Venn diagram illustrating the overlapping regions between two objects. ‘A’ is the region of intersection, ‘D’ signifies elements not present in ‘original’ and ‘test’ while ‘B’ and ‘C’ are elements present in original and test respectively (Ling, 2010).
### Matching Coefficient
\[Matching\ coefficient=\frac{A+D}{(A+B)+\ (A+C)}\]
The Matching (Dunn & Everitt, 1982) equation has coefficients B, C and D that will be replaced with the value 0 when 2 data sets share similar numbers. There is no upper boundary for Matching (Dunn & Everitt, 1982). When the A and D coefficients of Matching (Dunn & Everitt, 1982) are replaced with the value 0, the result is 0, which is the lower boundary. Hence, the Matching (Dunn & Everitt, 1982) coefficient has no upper boundary and a lower boundary of 0.
### Fossum Coefficient
\[Fossum\ coefficient=\frac{(A+B+C+D)(A-0.5)^{2}}{(A+B)(A+C)}\]
The Fossum (Fossum, 1966) equation has coefficients B, C and D that will be replaced with the value 0 when 2 data sets share exactly similar numbers. There is no upper boundary in Fossum (Fossum, 1966). For the scenario of 2 datasets sharing no similar data, a different conclusion of a lower boundary of 0 is observed. The Fossum coefficient (Fossum, 1966) decreases in value with an increasing number of data in the 2 datasets, due to coefficients B and C in the numerator and denominator.
### Russel Rao Coefficient
\[Russel\ Rao\ coefficient=\frac{A}{A+B+C+D}\]
Russel and Rao (Russel & Rao, 1940) has coefficients B, C and D replaced with the value 0 when 2 data sets share exactly similar numbers. This gives an upper boundary of 1. When coefficients A and D are substituted with the value 0 to find the lower boundary, the Russel and Rao distance coefficient (Russel & Rao, 1940) yields 0. Russel and Rao (Russel & Rao, 1940) thus has an upper boundary of 1 and a lower boundary of 0.
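The boundary behaviour described above can be checked directly; the sketch below (ours, with hypothetical counts) evaluates the four shortlisted coefficients at identical datasets (B = C = D = 0) and at growing counts:

```python
import math

def dennis(a, b, c, d):
    return (a * d - b * c) / math.sqrt((a + b + c + d) * (a + b) * (a + c))

def matching(a, b, c, d):
    return (a + d) / ((a + b) + (a + c))

def fossum(a, b, c, d):
    return (a + b + c + d) * (a - 0.5) ** 2 / ((a + b) * (a + c))

def russel_rao(a, b, c, d):
    return a / (a + b + c + d)

# Identical datasets (B = C = D = 0): Dennis gives 0, Russel & Rao gives 1.
print(dennis(5, 0, 0, 0), russel_rao(5, 0, 0, 0))
# Growing D shows Matching and Fossum are unbounded above ...
print(matching(5, 1, 1, 100), fossum(5, 1, 1, 1000))
# ... while growing B and C drive Dennis below any fixed bound.
print(dennis(0, 50, 50, 0), dennis(0, 5000, 5000, 0))
```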
With the absence of a lower boundary in Dennis (Dennis, 1965), and of an upper boundary in Matching (Dunn & Everitt, 1982) and Fossum (Fossum, 1966), the results of these three distance coefficients are hard to interpret. As Dice (Dice, 1945) has both a lower and an upper boundary, values within the 2 boundaries are scaled to determine similarities and differences, whereas Dennis (Dennis, 1965), Matching (Dunn & Everitt, 1982) and Fossum (Fossum, 1966) do not have the boundaries to do so. This suggests that Dennis (Dennis, 1965), Matching (Dunn & Everitt, 1982) and Fossum (Fossum, 1966) are not ideal for use as distance measures. Meanwhile, Russel and Rao (Russel & Rao, 1940) has both an upper boundary and a lower boundary, suggesting that it can replace Dice (Dice, 1945).
Based on the Pearson correlation data, Matching (Dunn & Everitt, 1982), Forbes (Forbes, 1907) and Dennis (Dennis, 1965) are inversely proportional to Dice, with their Pearson correlation values being -0.175, -0.120 and -0.100 respectively. A negative Pearson correlation means that an increasing Dice coefficient result yields decreasing Matching, Forbes and Dennis coefficient results. This makes results difficult to interpret; hence, these coefficients are unable to be used as substitutes for Dice.
The coefficient of determination (r\({}^{2}\)) values of the 20 distance coefficients were obtained from comparing data values. This was to identify correlation relationships between the distance coefficients. Figure 3 shows that there are no distance coefficients that are statistically similar to either Dennis (Dennis, 1965), Matching (Dunn & Everitt, 1982) or Fossum (Fossum, 1966). Hamann (Hamann, 1961) is directly correlated to Sokal and Michener (Sokal & Michener, 1958), Sokal and Sneath (Sokal & Sneath, 1963) to Anderberg (Ling, 2010), and McConnaughey (McConnaughey, 1964) to Kulczynski 2 (Holliday et al., 2002) (Table 3). They are directly correlated and are labeled in dark pink (Table 3).
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c||c|c|} \hline \multicolumn{1}{|c|}{**Distance**} & \multicolumn{2}{c|}{**0-10 Percentile range**} & \multicolumn{2}{c||}{**10-90 Percentile range**} & \multicolumn{2}{c|}{**90-100 Percentile range**} \\ \cline{2-9} \multicolumn{1}{|c|}{**Coefficient**} & \multicolumn{1}{c|}{**Mean**} & \multicolumn{1}{c|}{**(SD)**} & \multicolumn{1}{c|}{**COV**} & \multicolumn{1}{c|}{**Mean**} & \multicolumn{1}{c|}{**(SD)**} & \multicolumn{1}{c||}{**COV**} & \multicolumn{1}{c|}{**Mean**} & \multicolumn{1}{c|}{**(SD)**} & \multicolumn{1}{c|}{**COV**} \\ \hline Dennis & -0.123 & 0.222 & 1.798 & 0.143 & 0.435 & 3.036 & 1.490 & 0.435 & 0.326 \\ \hline Matching & 0.244 & 0.073 & 0.301 & 0.490 & 0.290 & 0.593 & 1.126 & 0.601 & 0.534 \\ \hline Fossum & 0.741 & 0.422 & 0.569 & 6.300 & 3.012 & 0.478 & 12.287 & 1.068 & 0.087 \\ \hline Dice & 0.181 & 0.052 & 0.286 & 0.713 & 0.212 & 0.297 & 1.000 & 0.000 & 0.000 \\ \hline Russel and Rao & 0.182 & 0.052 & 0.285 & 0.714 & 0.212 & 0.296 & 1.000 & 0.000 & 0.000 \\ \hline Forbes & 0.938 & 0.117 & 0.125 & 1.071 & 0.544 & 0.508 & 2.224 & 1.174 & 0.528 \\ \hline Anderberg & 0.144 & 0.058 & 0.405 & 0.697 & 0.245 & 0.352 & 1.000 & 0.000 & 0.000 \\ \hline Sokal and Sneath & 0.144 & 0.058 & 0.405 & 0.697 & 0.245 & 0.352 & 1.000 & 0.000 & 0.000 \\ \hline Hamann & -0.382 & 0.220 & 0.577 & 0.646 & 0.309 & 0.478 & 1.000 & 0.000 & 0.000 \\ \hline Roger and Tanimoto & 0.188 & 0.076 & 0.407 & 0.727 & 0.220 & 0.303 & 1.000 & 0.000 & 0.000 \\ \hline McConnaughey & 0.160 & 0.191 & 1.195 & 0.789 & 0.188 & 0.239 & 1.000 & 0.000 & 0.000 \\ \hline Jaccard & 0.248 & 0.090 & 0.363 & 0.796 & 0.180 & 0.225 & 1.000 & 0.000 & 0.000 \\ \hline Sokal and Michener & 0.309 & 0.110 & 0.356 & 0.823 & 0.154 & 0.188 & 1.000 & 0.000 & 0.000 \\ \hline Gower and Legendre & 0.331 & 0.104 & 0.316 & 0.845 & 0.142 & 0.168 & 1.000 & 0.000 & 0.000 \\ \hline Tulloss & 0.838 & 0.170 & 0.202 & 0.804 & 0.214 & 0.266 & 0.822 & 0.227 & 0.276 \\ \hline Buser & 0.382 & 0.102 & 0.268 & 0.846 & 0.135 & 0.160 & 1.000 & 0.000 & 0.000 \\ \hline Sokal and Sneath 2 & 0.461 & 0.133 & 0.289 & 0.895 & 0.098 & 0.109 & 1.000 & 0.000 & 0.000 \\ \hline Ochiai & 0.476 & 0.098 & 0.206 & 0.884 & 0.107 & 0.121 & 1.000 & 0.000 & 0.000 \\ \hline Kulczynski 2 & 0.580 & 0.096 & 0.165 & 0.894 & 0.094 & 0.105 & 1.000 & 0.000 & 0.000 \\ \hline Simpson & 0.761 & 0.141 & 0.185 & 0.998 & 0.011 & 0.011 & 1.000 & 0.000 & 0.000 \\ \hline \end{tabular}
\end{table}
Table 2: The Mean, Standard Deviation and Coefficient of Variance of the 20 Distance Coefficients. The 20 distance coefficients are arranged in a descending order of the number of percentile-range wins against Dice. E.g., Fossum has COV values in all 3 percentile ranges higher than Dice, hence it is arranged above Dice.
The directly correlated coefficients can be used interchangeably in the study of the genetic distance of _E. coli_.
The cut-off for the Pearson coefficient of the distance coefficients was studied. Given the degree of freedom of 1816 (n = 1818), a Pearson's correlation coefficient of 0.115 will be significant at 99.999% confidence based on a previous implementation (Chay & Ling, 2010). This suggests that a Pearson's correlation coefficient larger than 0.115 will be statistically significant at greater than 99.999% confidence. Hence, the colour-coded cells in Table 3 are correlated and can be used interchangeably.
Unexpectedly, Dice (Dice, 1945) is statistically similar to Russel and Rao (Russel & Rao, 1940), which is among the 4 coefficients that passed the COV analysis. Russel and Rao can only be used for specific instances due to its exclusion of negative co-occurrences (Coefficient D) from the numerator and their inclusion in the denominator, and it acts as a viable substitute for Dice in this study. Although Dice (Dice, 1945) and Russel and Rao (Russel & Rao, 1940) differ in their mathematical formulae, they have an r\({}^{2}\) value of 0.998 (Table 3) and high compactness (Figure 3), suggesting that the two distance coefficients are similar, can be used interchangeably, and that Russel and Rao can be used as a substitute for Dice. The COV values of Russel and Rao (Russel & Rao, 1940) are also similar to those of Dice (Dice, 1945). Russel and Rao (Russel & Rao, 1940) has a correlation value of 0.998, suggesting that the values will differ based on the following formula: \(Dice=[Russel\ and\ Rao\pm(1-r^{2})](Russel\ and\ Rao)\). Russel and Rao (Russel & Rao, 1940) can be used to replace Dice (Dice, 1945) as it is 99.8% reflective of Dice. Hence, our results suggest that Dice (Dice, 1945), which is used for genetic studies of _E. coli_, can be substituted with Russel and Rao (Russel & Rao, 1940).
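The interchangeability claim rests on the r\({}^{2}\) between paired coefficient results; a sketch of that computation (ours, on synthetic placeholder values rather than the actual 2,016 analyses) is:

```python
import numpy as np

# Hypothetical paired results of Dice and Russel & Rao over the same lane pairs.
rng = np.random.default_rng(2)
dice_vals = rng.uniform(0.1, 1.0, size=2016)
rr_vals = dice_vals + rng.normal(0.0, 0.01, size=2016)  # near-identical pairs

r = np.corrcoef(dice_vals, rr_vals)[0, 1]
print("r^2 =", round(r ** 2, 3))  # values near 0.998 support interchangeability
```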
Figure 3: Dendrogram of the 20 distance correlations to study their similarities and uniqueness against each other. 1: Jaccard; 2: Sokal and Michener; 3: Matching; 4: Dice; 5: Ochiai; 6: Anderberg; 7: Kulczynski 2; 8: Forbes; 9: Hamann; 10: Simpson; 11: Russel and Rao; 12: Roger and Tanimoto; 13: Sokal and Sneath; 14: Sokal and Sneath 2; 15: Buser; 16: McConnaughey; 17: Dennis; 18: Gower and Legendre; 19: Tulloss; 20: Fossum.
|
2304.01380 | Leaves of Foliated Projective Structures | The $\text{PSL}(4,\mathbb{R})$ Hitchin component of a closed surface group
$\pi_1(S)$ consists of holonomies of properly convex foliated projective
structures on the unit tangent bundle of $S$. We prove that the leaves of the
codimension-$1$ foliation of any such projective structure are all projectively
equivalent if and only if its holonomy is Fuchsian. This implies constraints on
the symmetries and shapes of these leaves.
We also give an application to the topology of the non-${\rm T}_0$ space
$\mathfrak{C}(\mathbb{RP}^n)$ of projective classes of properly convex domains
in $\mathbb{RP}^n$. Namely, Benz\'ecri asked in 1960 if every closed subset of
$\mathfrak{C}(\mathbb{RP}^n)$ that contains no proper nonempty closed subset is
a point. Our results imply a negative resolution for $n \geq 2$. | Alexander Nolte | 2023-04-03T21:12:01Z | http://arxiv.org/abs/2304.01380v2 | # Leaves of foliated projective structures
###### Abstract.
The \(\mathrm{PSL}(4,\mathbb{R})\) Hitchin component of a closed, oriented surface \(S\) of genus \(g\geq 2\) is parametrized by the holonomies of properly convex foliated projective structures on the unit tangent bundle of \(S\). We study the geometry of the codimension-1 foliation of any such projective structure \((\mathrm{dev},\mathrm{hol})\) through the \(\pi_{1}(S)\)-equivariant map \(\mathfrak{s}\) that associates to a leaf \(x\) of the semi-stable geodesic foliation of the unit tangent bundle of the universal cover of \(S\) the projective equivalence class of \(\mathrm{dev}(x)\). We prove that unless \(\mathrm{hol}\) is Fuchsian, \(\mathfrak{s}\) demonstrates the following pathology: \(\mathfrak{s}\) is continuous, is constant on many dense subsets of its domain, and is not constant. A consequence is that the leaves \(\mathfrak{s}(x)\) always have self-similarity properties, but are never divisible unless \(\mathrm{hol}\) is Fuchsian.
Our proofs draw on a range of tools, including Benoist's limit cone theorem, the classification of Zariski closures of Hitchin representations, and the Baire category theorem.
## 1. Introduction
A theme developed extensively in the last half-century is that much of the structure of locally homogeneous geometric objects associated to a word-hyperbolic group \(\Gamma\) is encoded in \(\Gamma\)-equivariant maps with domain the Gromov boundary \(\partial\Gamma\) of \(\Gamma\). The most prominent examples are those induced by quasi-isometric embeddings, but other maps, such as those introduced by Cannon and Thurston [9], have seen fruitful study.
When \(\Gamma\) is the fundamental group of a closed, oriented surface \(S\) of genus at least \(2\), as it will be throughout this paper, \(\partial\Gamma\) is a circle and the regularity of maps induced on \(\partial\Gamma\) plays a prominent role in the theory (e.g. [4], [7], [8], [16], [30]).
The objects of our study here are a special family of equivariant curves \(\mathfrak{s}_{\rho}\) induced by \(\mathrm{PSL}(4,\mathbb{R})\) Hitchin representations \(\rho\). We call \(\mathfrak{s}_{\rho}\) the _leaf map_ associated to \(\rho\), and discuss its definition below. Leaf maps are continuous maps from \(\partial\Gamma\) to the space \(\mathfrak{C}\) of projective equivalence classes of properly convex domains in \(\mathbb{RP}^{2}\).
Leaf maps encode much of the geometry of the foliations of properly convex foliated projective structures on the unit tangent bundle \(T^{1}(S)\), a refinement of \((\mathrm{PSL}(4,\mathbb{R}),\mathbb{RP}^{3})\) structures on \(T^{1}S\) introduced by Guichard and Wienhard in [19] whose holonomies parametrize the \(\mathrm{PSL}(4,\mathbb{R})\) Hitchin component \(\mathrm{Hit}_{4}(S)\) (see §3). Briefly, the developing map of such a projective structure maps leaves of the semi-stable geodesic foliation \(\overline{\mathcal{F}}\) of \(T^{1}\widetilde{S}\) to properly convex domains in projective planes inside of \(\mathbb{RP}^{3}\). The leaf space of \(\overline{\mathcal{F}}\) is naturally identified with \(\partial\Gamma\), and \(\mathfrak{s}_{\rho}\) is defined by associating to \(x\in\partial\Gamma\) the projective equivalence class of \(\mathrm{dev}_{\rho}x\). If \(\rho\) is in the Fuchsian locus of \(\mathrm{Hit}_{4}(S)\) (henceforth, _is \(4\)-Fuchsian_), then \(\mathfrak{s}_{\rho}\) is constant with value the unique projective class of ellipses in \(\mathbb{RP}^{2}\).
The standard topology on \(\mathfrak{C}\) is complicated and poorly separates points--singleton subsets of \(\mathfrak{C}\) need not be closed, and may be dense. This makes it difficult to get control of the leaf maps \(\mathfrak{s}_{\rho}\). Nevertheless, we shall show:
**Theorem 1.1**.: _Let \(\rho\in\mathrm{Hit}_{4}(S)\). The following are equivalent:_
1. \(\rho\) _is_ \(4\)_-Fuchsian,_
2. _The leaf map_ \(\mathfrak{s}_{\rho}\) _is constant,_
3. _There exists a divisible leaf_ \(\mathfrak{s}_{\rho}(x)\)_,_
4. _There exists a leaf_ \(\mathfrak{s}_{\rho}(x)\) _with non-discrete projective automorphism group._
Recall that a properly convex domain is _divisible_ if it admits a cocompact action by a discrete group of projective automorphisms. Conditions (3) and (4) show that leaves of properly convex foliated projective structures have little symmetry unless \(\rho\) is \(4\)-Fuchsian. Condition (3) is in notable contrast to the fact that if \(\gamma\in\Gamma-\{e\}\), the leaf \(\mathfrak{s}_{\rho}(\gamma^{+})\) associated to the attracting fixed point \(\gamma^{+}\in\partial\Gamma\) contains a \(\mathbb{Z}\) subgroup induced by \(\rho(\gamma)\).
That \(\mathfrak{s}_{\rho}\) is not constant unless \(\rho\) is \(4\)-Fuchsian is the crux of Theorem 1.1. Though this may appear intuitive at first approach, it in fact implies the following rather dramatic pathology, made possible by the poor separation of \(\mathfrak{C}\).
**Theorem 1.2**.: _If \(\rho\in\operatorname{Hit}_{4}(S)\) is not \(4\)-Fuchsian, then the leaf map \(\mathfrak{s}_{\rho}:\partial\pi_{1}(S)\to\mathfrak{C}\) satisfies the following properties:_
1. _Given_ \(x\in\partial\pi_{1}(S)\)_, there is a dense subset_ \(S_{x}\subset\partial\pi_{1}(S)\) _containing_ \(x\) _so that the restriction of_ \(\mathfrak{s}_{\rho}\) _to_ \(S_{x}\) _is constant,_
2. \(\mathfrak{s}_{\rho}\) _is continuous and non-constant._
Though \(\mathfrak{C}\) poorly separates points, it has enough open sets that the continuity of \(\mathfrak{s}_{\rho}\) has nontrivial content that is at times (e.g. in SS4.4 below) useful. Some indirect analogues and similar phenomena from the theory of properly convex projective structures on surfaces are discussed in SS1.2.1. Below is an outline of the ideas that appear in our proofs.
A theme in our proofs of our main results is that considerations of boundary regularity of leaves \(\mathfrak{s}_{\rho}(x)\) leads to constraints on the eigenvalues of \(\rho\) when \(\mathfrak{s}_{\rho}(x)\) is constant.
In our analysis, the size of the projective automorphism groups of leaves \(\mathfrak{s}_{\rho}(x)\) is salient. The most involved case is when all leaves \(\mathfrak{s}_{\rho}(x)\) have discrete automorphism group. This is a place where we must contend seriously with the non-separation of points in \(\mathfrak{C}\), which appears in the form that there are discontinuous paths \(A_{t}:[0,1]\to\operatorname{SL}(3,\mathbb{R})\) and domains \(\Omega\) in \(\mathbb{RP}^{2}\) so that \(A_{t}\overline{\Omega}\) is continuous in the Hausdorff topology.1
Footnote 1: The easy-to-deal-with example of this is to use projective symmetries of \(\Omega\). We must also contend with e.g. the possibility that for a divergent sequence \(A_{t}\in\operatorname{SL}(3,\mathbb{R})\) the domains \(A_{t}\overline{\Omega}\) converge to \(\overline{\Omega}\).
Our argument in this case to obtain constraints on eigenvalues of \(\rho\) if \(\mathfrak{s}_{\rho}\) is constant has two main parts. The first hinges on the Baire category theorem, and shows that the above pathology may be avoided on a nonempty open subset \(U\subset\partial\Gamma\) in the sense that we may arrange for representatives of the equivalence classes \(\mathfrak{s}_{\rho}(x)\) to vary by a continuous family of projective equivalences on \(U\). This facilitates a geometric "sliding" argument that places constraints on the boundary regularity of leaves \(\mathfrak{s}_{\rho}(x)\).
The restrictions we obtain from this are equivalent to the logarithms of the eigenvalues of \(\rho(\gamma)\) (\(\gamma\in\Gamma\)) satisfying an explicit \(\gamma\)-independent homogeneous polynomial. Similar restrictions are obtained in other cases of the automorphism groups of leaves \(\mathfrak{s}_{\rho}(x)\).
The endgame of our proof is to show that the only way this constraint may be satisfied is if \(\rho\) is \(4\)-Fuchsian. We use two substantial results here, namely Guichard's classification of Zariski closures of Hitchin representations (see [33]) and a deep theorem of Benoist [3] on limit cones of Zariski-dense representations.
In the remainder of the introduction we explain some further consequences for the geometry of leaves of properly convex foliated projective structures and situate Theorems 1.1 and 1.2 in the context of broader projects in higher Teichmuller theory.
### Asymptotic Geometry of Leaves
While Theorems 1.1 and 1.2 substantially restrict the symmetries of leaves \(\mathfrak{s}_{\rho}(x)\) of properly convex foliated projective structures, they also yield a positive statement about symmetries of leaves. We explain this here.
We shall say that a properly convex domain \(\Omega\) is _asymptotically self-similar_ if there is a sequence \(A_{n}\in\operatorname{SL}(3,\mathbb{R})\) that leaves all compact subsets and so that \(\overline{A_{n}\Omega}\) converges in the Hausdorff topology to \(\overline{\Omega}\). We shall say that \(\Omega\) is _strictly asymptotically self-similar_
if the \(\mathfrak{C}\)-closure of the singleton set \(\{[\Omega]\}\) satisfies the following: there is a domain \(\Omega^{\prime}\) so that \([\Omega^{\prime}]\in\overline{\{[\Omega]\}}-\{[\Omega]\}\subset\mathfrak{C}\) and \([\Omega]\in\overline{\{[\Omega^{\prime}]\}}\). Strictly asymptotically self-similar domains are asymptotically self-similar.
**Example 1.3**.: _Convex polygons with at least \(5\) vertices are not asymptotically self-similar. If \(\mathbb{Z}\subset\operatorname{Aut}(\mathbb{RP}^{2},\Omega)\), then \(\Omega\) is asymptotically self-similar._
_Divisible domains, such as ellipses or domains of discontinuity for \(\operatorname{SL}(3,\mathbb{R})\) Hitchin representations, are always asymptotically self-similar but never strictly asymptotically self-similar. This is because divisible domains are closed points of \(\mathfrak{C}\) by work of Benzécri ([5] V.3.3)._
Now let \(\rho\in\operatorname{Hit}_{4}(S)\). If \(\rho\) is \(4\)-Fuchsian, every leaf \(\mathfrak{s}_{\rho}\) is an ellipse, and hence asymptotically self-similar. On the other hand, if \(\rho\) is not \(4\)-Fuchsian and \(x\in\partial\Gamma\), continuity of \(\mathfrak{s}_{\rho}\) and Theorem 1.2.(1) imply that \(\mathfrak{s}_{\rho}(x)\) is asymptotically self-similar and Theorem 1.2.(2) shows that \(\mathfrak{s}_{\rho}(x)\) is actually strictly asymptotically self-similar. So:
**Theorem 1.4** (Leaves Asymptotically Self-Similar).: _For any \(\rho\in\operatorname{Hit}_{4}(S)\) and \(x\in\partial\pi_{1}(S)\), the leaf \(\mathfrak{s}_{\rho}(x)\) is asymptotically self-similar. If \(\rho\) is not \(4\)-Fuchsian, \(\mathfrak{s}_{\rho}(x)\) is strictly asymptotically self-similar._
### Context and Related Results
#### 1.2.1. Properly Convex Projective Structures
The place in the literature where the most notable analogues to Theorems 1.1 and 1.2 occur is in the study of properly convex projective structures on surfaces. These structures parameterize \(\operatorname{SL}(3,\mathbb{R})\) Hitchin components by work of Choi and Goldman ([14], [10]).
Briefly, a projective structure \((\operatorname{dev},\operatorname{hol})\) on \(S\) is said to be _properly convex_ if \(\operatorname{dev}\) is a homeomorphism of \(\widetilde{S}\) onto a properly convex domain \(\Omega\) of \(\mathbb{RP}^{2}\). In this case, \(\Gamma\) acts properly discontinuously and without fixed-points on \(\Omega\) through \(\operatorname{hol}\).
A similar statement to Theorem 1.2 that is much easier to prove is the observation that in the above notation, \(\partial\Omega\) is topologically a circle and the map \(\operatorname{reg}:\partial\Omega\to(1,2]\) associating to \(x\in\partial\Omega\) the regularity of \(\partial\Omega\) at \(x\) (see e.g. §2) is a \(\Gamma\)-equivariant map that is constant on all orbits of \(\Gamma\), and is constant only if \(\operatorname{hol}\) is in the Fuchsian locus of \(\operatorname{Hit}_{3}(S)\).
Of course this is an imperfect analogue to Theorem 1.2 since the target, \((1,2]\), of \(\operatorname{reg}\) is much better-separated than \(\mathfrak{C}\), and there is no aspect of continuity present. Nevertheless, there is a theme here that the local projective geometry of domains of discontinuity for non-Fuchsian \(\operatorname{PSL}(n,\mathbb{R})\) Hitchin representations is quite complicated (c.f. also [31]).
The geometry of properly convex projective structures is well-studied, and much of the structure in this setting (e.g. [4][16]) is due to the presence of divisibility. It is not clear to what extent the geometry of leaves \(\mathfrak{s}_{\rho}(x)\) is similar.
#### 1.2.2. Geometric Structures and Hitchin Representations
For all split real forms \(G\) of complex simple centerless Lie groups, the \(G\)-Hitchin components are parametrized by holonomies of connected components of spaces of geometric structures on manifolds \(M_{G}\) associated to \(S\)[20]. Understanding the qualitative geometry of these geometric structures is a program within higher rank Teichmüller theory, into which this work falls. The basic question of the topological type of \(M_{G}\) has seen major recent progress in cases of special interest in [1] and more generally in [2] and [12]. There is no qualitative characterization of these connected components of geometric structures currently known in general.
In fact, the only Lie group \(G\) as above of rank at least \(3\) where \(M_{G}\) is known and the geometric structures corresponding to Hitchin representations are qualitatively characterized is \(\operatorname{PSL}(4,\mathbb{R})\). Since the analytic tools that are often used to study these geometric
structures in low rank (e.g. [11]) break down in rank 3 [32], the \(\mathrm{PSL}(4,\mathbb{R})\) Hitchin component is a natural candidate for study in developing expectations for the general geometry of Hitchin representations.
#### 1.2.3. The Mapping Class Group Action on Hitchin Components
A long-standing question in higher Teichmüller theory is to understand the structure of the action of the mapping class group \(\mathrm{Mod}(S)\) on Hitchin components. A conjecture that would have settled this question was due to Labourie [24]. Labourie's conjecture holds for Hitchin components for Lie groups \(G\) as above of rank 2 [25], and was disproved in rank at least 3 as the culmination of a series of papers by Markovic, Sagman, and Smillie [27][28][32].
However, the negative resolution to Labourie's conjecture does not appear to directly yield information about the \(\mathrm{Mod}(S)\) action on Hitchin components, and leaves open what we shall call the fibration conjecture ([34], Conjecture 14). To state the fibration conjecture, let \(\mathcal{Q}^{k}(S)\) denote the holomorphic bundle over Teichmüller space of holomorphic \(k\)-adic differentials (see e.g. [6]).
**Question 1.5** (Fibration Conjecture).: _Is the \(\mathrm{PSL}(n,\mathbb{R})\) Hitchin component naturally \(\mathrm{Mod}(S)\)-equivariantly diffeomorphic to the bundle sum \(\bigoplus_{k=3}^{n}\mathcal{Q}^{k}(S)\)?_
Work of the author [29] implies that a conjecture of Fock and Thomas on higher degree complex structures [13] is equivalent to the fibration conjecture. The connection of the fibration conjecture to this paper is through its prediction that there should be canonical projections \(\mathrm{Hit}_{n}(S)\to\mathrm{Hit}_{k}(S)\) for \(2\leq k<n\). The only known such projections have \(k=2\) (e.g. [23], [26], [21]).
In their paper [19] introducing properly convex foliated projective structures, Guichard and Wienhard suggest that perhaps these geometric objects could be used to approach the fibration conjecture for \(\mathrm{PSL}(4,\mathbb{R})\). The question that motivated the investigations leading to this paper was if examining the leaves of properly convex foliated projective structures gave rise to a projection \(\mathrm{Hit}_{4}(S)\to\mathrm{Hit}_{3}(S)\). This would have been evidence in favor of the Fock-Thomas and fibration conjectures.
More specifically, properly convex subsets of \(\mathbb{RP}^{2}\) are the setting of the geometric structures corresponding to the \(\mathrm{SL}(3,\mathbb{R})\) Hitchin component, and also appear as leaves of properly convex foliated projective structures. One might hope, after noticing Theorem 1.2.(1) and continuity of \(\mathfrak{s}_{\rho}\), that \(\mathfrak{s}_{\rho}\) was constant, \(\mathfrak{s}_{\rho}(x)\) was divisible, and examining the action of \(\rho\in\mathrm{Hit}_{4}(S)\) on the value of \(\mathfrak{s}_{\rho}(x)\) gave an element of \(\mathrm{Hit}_{3}(S)\). Theorems 1.1 and 1.2 show that this hope fails.
**Organization.** Following the introduction are two sections on background: §2 on convex domains in \(\mathbb{RP}^{2}\) and §3 on Hitchin representations and properly convex foliated projective structures. We prove Theorems 1.1 and 1.2 in §4.
**Acknowledgements.** This paper would not have been written were it not for the reading group on Anosov representations at Rice University in 2021 and Rice's RTG geometry-topology seminar. Among the participants of these, I would like to specifically thank Chris Leininger, Mike Wolf, Alan Reid, and Sara Edelman-Munoz.
This paper has benefitted a great deal from conversations with various mathematicians, in particular with Max Riesenberg, Jean-Philippe Burelle, Colin Davalo, and Teddy Weisman. It is my pleasure to further thank Mike Wolf for his support and guidance.
This material is based upon work supported by the National Science Foundation under Grant No. 1842494 and Grant No. 2005551.
## 2. Properly Convex Domains in \(\mathbb{RP}^{2}\)
In this section we recall the foundational facts about properly convex subsets of \(\mathbb{RP}^{2}\) that are essential to our later arguments. In particular, §2.1 discusses spaces of properly convex domains and Benzecri's compactness theorem, and §2.2 concerns a boundary regularity fact due to Benoist.
We begin by introducing definitions and notation. A set \(\Omega\subset\mathbb{RP}^{2}\) is _convex_ if for any pair of points \(p,q\in\Omega\) there is a line segment contained in \(\Omega\) between \(p\) and \(q\). A _domain_ is an open connected subset of \(\mathbb{RP}^{2}\). A convex domain \(\Omega\) is said to be _properly convex_ if \(\overline{\Omega}\) is contained in a single affine chart, and is said to be _strictly convex_ if for every \(p,q\in\overline{\Omega}\), a line segment connecting \(p\) and \(q\) in \(\overline{\Omega}\) can be taken to be contained in \(\Omega\) except at its endpoints.
### Spaces of Properly Convex Sets
Let \(\mathcal{C}\) denote the collection of properly convex domains in \(\mathbb{RP}^{2}\). Let \(\mathcal{C}^{*}\) denote the collection of pointed properly convex domains in \(\mathbb{RP}^{2}\), that is, pairs \((\Omega,p)\) where \(\Omega\in\mathcal{C}\) and \(p\in\Omega\). We give \(\mathcal{C}\) the topology induced by the Hausdorff topology on closures, and \(\mathcal{C}^{*}\) the topology induced from the product \(\mathcal{C}\times\mathbb{RP}^{2}\).
The group \(\operatorname{SL}(3,\mathbb{R})\) acts on \(\mathbb{R}^{3}\), which induces an action on \(\mathbb{RP}^{2}\) and hence on \(\mathcal{C}\) and \(\mathcal{C}^{*}\). All projective equivalences (bijections sending lines to lines) of domains in \(\mathbb{RP}^{2}\) arise from the action of \(\operatorname{SL}(3,\mathbb{R})\). We denote the quotients of \(\mathcal{C}\) and \(\mathcal{C}^{*}\) by the action of \(\operatorname{SL}(3,\mathbb{R})\) by \(\mathfrak{C}\) and \(\mathfrak{C}^{*}\), respectively.
The topology of \(\mathfrak{C}\) has poor separation properties--one-point sets in \(\mathfrak{C}\) need not be closed--which play a prominent role in this paper. A first example of non-closed points in \(\mathfrak{C}\) is as follows.
**Example 2.1**.: _Let \(e_{1},e_{2},e_{3}\) be a basis for \(\mathbb{R}^{3}\). Work in an affine chart containing \([e_{1}],[e_{2}]\), and \([e_{3}]\). Let \(\Omega\) be a strictly convex domain contained in this affine chart preserved by \(A=\operatorname{diag}(e^{\lambda},e^{\eta},e^{-\lambda-\eta})\) for some \(\lambda>\eta\geq 0\). For instance \(\Omega\) may be an ellipse if \(\eta=0\)._
_Let \(\ell\) denote the line segment from \([e_{1}]\) to \([e_{3}]\) in this affine chart and \(p\in\ell\). Let \(\ell^{\prime}\) denote the line determined by \([e_{2}]\) and \(p\). Then \(\ell^{\prime}\) bisects \(\Omega\). Let \(\Omega^{\prime}\) be the component of \(\Omega-\ell^{\prime}\) whose closure contains \([e_{3}]\). Then \(\Omega^{\prime}\) is not projectively equivalent to \(\Omega\) as its boundary contains a line segment, but \(A^{n}\overline{\Omega^{\prime}}\) converges to \(\overline{\Omega}\) in the Hausdorff topology. So \([\Omega]\in\overline{\{[\Omega^{\prime}]\}}\)._
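To make Example 2.1 completely explicit in one case (a routine computation supplied here for the reader's convenience, with \(\eta=0\)): take \(\Omega=\{[x:y:z]\mid y^{2}<xz\}\), the interior of a conic, which is preserved by \(A=\operatorname{diag}(e^{\lambda},1,e^{-\lambda})\). In the affine chart \(\{z=1\}\) we have \(\Omega=\{(x,y)\mid y^{2}<x\}\), the point \([e_{3}]\) is the origin, and \(A\) acts by \((x,y)\mapsto(e^{2\lambda}x,e^{\lambda}y)\). Taking \(p=(c,0)\) with \(c>0\), the line \(\ell^{\prime}\) is \(\{x=c\}\), the domain \(\Omega^{\prime}\) is \(\{(x,y)\mid y^{2}<x<c\}\), and

\[A^{n}\Omega^{\prime}=\{(x,y)\mid y^{2}<x<e^{2n\lambda}c\},\]

so \(A^{n}\overline{\Omega^{\prime}}\to\overline{\Omega}\) in the Hausdorff topology, while \(\partial\Omega^{\prime}\) contains a segment of \(\{x=c\}\).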
The closures of points in \(\mathfrak{C}\) vary a great deal: it is a consequence of Benzecri's compactness theorem below that all divisible domains are closed points, while Benzecri also showed ([5] §V.3, p. 321) that there exist dense one-point sets in \(\mathfrak{C}\). The topology of \(\mathfrak{C}\) is quite complicated, and is rich enough that the continuity of a map with target \(\mathfrak{C}\) has nontrivial content.
On the other hand, all of the poor separation in \(\mathfrak{C}\) is caused by divergent sequences of elements of \(\operatorname{SL}(3,\mathbb{R})\) for the tautological reason that if \(K\subset\operatorname{SL}(3,\mathbb{R})\) is compact and \(\Omega\in\mathcal{C}\), then the orbit of \(\Omega\) under \(K\) represents a single point in \(\mathfrak{C}\). As a consequence, if one is able to gain finer control on a sequence \(\Omega_{n}\in\mathcal{C}\) than convergence in \(\mathcal{C}\), it can be tractable to understand the limiting projective geometry of \(\Omega_{n}\) in spite of the non-separation of points in \(\mathfrak{C}\).
The typical way this is done in practice is by gaining control over a single point of the domains \(\Omega_{n}\) in question, working with the space \(\mathfrak{C}^{*}\) instead of \(\mathfrak{C}\). It follows from the below fundamental result of Benzecri that this is enough to guarantee uniqueness of limits.
**Theorem 2.2** (Benzecri Compactness).: \(\operatorname{SL}(3,\mathbb{R})\) _acts properly and co-compactly on \(\mathcal{C}^{*}\)._
As an immediate corollary, we have:
**Corollary 2.3**.: \(\mathfrak{C}^{*}\) _is a compact Hausdorff space._
### Regularity of Domains
In this subsection, we describe the notion of boundary regularity best adapted to our uses and a relevant circumstance in which the regularity of a boundary point of a properly convex domain may be computed explicitly.
**Definition 2.4**.: _Let \(C\) be a closed embedded \(C^{1}\) curve in \(\mathbb{R}^{2}\). For \(1<\alpha\leq 2\), we say that \(p\in C\) is a \(C^{\alpha}\) point of \(C\) if there is an open neighborhood \(U\) in \(C\) of \(p\) and a constant \(C_{U}>0\) so that for all \(y\in U\), \(d(y,T_{p}C)\leq C_{U}\,d(y,p)^{\alpha}\)._
_We say that \(C\) is exactly \(C^{\alpha}\) at \(p\) if \(C\) is \(C^{\alpha}\) at \(p\) and not \(C^{\alpha^{\prime}}\) at \(p\) for any \(\alpha^{\prime}>\alpha\)._
Here, the distance is the standard Euclidean distance. Note that \(\alpha\)-regularity of a point \(p\in C\) is invariant under projective transformations. We remark that this definition does not quite match up with usual definitions outside of projective geometry for the one value \(\alpha=2\), in which case what we call \(C^{2}\) points are more typically referred to as \(C^{1,1}\) points.
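For orientation, a standard one-variable example (recorded here; not needed later): for \(1<\beta\leq 2\), the graph \(C\) of \(y=|x|^{\beta}\) is exactly \(C^{\beta}\) at \(p=(0,0)\). Here \(T_{p}C\) is the \(x\)-axis, and for \(q=(x,|x|^{\beta})\) near \(p\),

\[d(q,T_{p}C)=|x|^{\beta},\qquad d(q,p)=\sqrt{x^{2}+|x|^{2\beta}}\asymp|x|,\]

so \(d(q,T_{p}C)\) is bounded by a constant multiple of \(d(q,p)^{\beta}\), and by no constant multiple of \(d(q,p)^{\beta^{\prime}}\) for any \(\beta^{\prime}>\beta\).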
The following lemma is essentially contained in work of Benoist ([4], proof of Corollaire 5.3). The form we use here is slightly stronger and more general than the version stated in [4], and follows from a close examination of the argument given there.
**Lemma 2.5** (Regularity at Fixed Points).: _Let \(\Omega\subset\mathbb{RP}^{2}\) be a properly convex, strictly convex domain preserved by \(A\in\mathrm{GL}(3,\mathbb{R})\) conjugate to \(\mathrm{diag}(\lambda_{1},\lambda_{2},\lambda_{3})\) with \(\lambda_{1}>\lambda_{2}>\lambda_{3}>0\). Write \(l_{i}=\log\lambda_{i}\) for \(i=1,2,3\) and let \(x_{A^{+}}\) denote the attracting fixed point of \(A\) in \(\mathbb{RP}^{2}\)._
_Then \(x_{A^{+}}\in\partial\Omega\) is exactly \(C^{\alpha}\) for_
\[\alpha=\frac{l_{1}-l_{3}}{l_{1}-l_{2}}.\]
In consideration of the importance of this lemma to the present work, and because divisibility is a standing assumption in [4] (an assumption which, as we will show, fails in general in our setting), we present a proof of Lemma 2.5 in the appendix.
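**Remark**.: _As a sanity check on the formula (our computation): the interior \(\Omega=\{[x:y:z]\mid y^{2}<xz\}\) of a conic is strictly convex and is preserved by \(A=\operatorname{diag}(e^{s},1,e^{-s})\) for any \(s>0\) (note \(1^{2}=e^{s}\cdot e^{-s}\)). Here \(l_{1}-l_{3}=2s\) and \(l_{1}-l_{2}=s\), so_

\[\alpha=\frac{l_{1}-l_{3}}{l_{1}-l_{2}}=\frac{2s}{s}=2,\]

_consistent with the real-analytic boundary conic being exactly \(C^{2}\) (i.e. \(C^{1,1}\)) in the convention above._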
## 3. Properly Convex Foliated Projective Structures and Hitchin Representations
In this section, we recall the features of Hitchin representations and of the theory of properly convex foliated projective structures, developed by Guichard and Wienhard in [19], that are relevant to our later discussion. We also prove a few basic lemmata, and set conventions for later use. §3.3 is the only portion of this section not contained in existing literature.
**Notation**.: _Let \(S\) be a closed, oriented surface of genus \(g\geq 2\), \(\Gamma=\pi_{1}(S),\) and \(\overline{\Gamma}=\pi_{1}(T^{1}(S))\). Let \(\mathcal{G},\mathcal{F}\) denote the stable and semi-stable geodesic foliations of \(T^{1}S\). Let \(\overline{\mathcal{F}},\overline{\mathcal{G}}\) denote the lifts of \(\mathcal{F},\mathcal{G}\) to \(T^{1}\widetilde{S}\), and \(\widetilde{\mathcal{G}},\widetilde{\mathcal{F}}\) the lifts of \(\mathcal{F},\mathcal{G}\) to the universal cover of \(T^{1}S\)._
### Hitchin Representations and Hyperconvex Frenet Curves
Hitchin representations \(\Gamma\to\mathrm{PSL}(n,\mathbb{R})\) are characterized in terms of the geometry of special equivariant curves by work of Labourie and Guichard [22], [18]. This perspective is central to our methods, and we recall it here.
For \(1\leq k\leq n\), denote the \(k\)-Grassmannian of \(\mathbb{R}^{n}\) by \(\mathrm{Gr}_{k}(\mathbb{R}^{n})\). A continuous curve \(\xi=(\xi^{1},...,\xi^{n-1}):\partial\Gamma\to\prod_{k=1}^{n-1}\mathrm{Gr}_{k}(\mathbb{R}^{n})\) is a _hyperconvex Frenet curve_ if:
1. (Convexity) For any \(k_{1},...,k_{j}\) with \(\sum_{l=1}^{j}k_{l}\leq n\), and distinct \(x_{1},...,x_{j}\in\partial\Gamma\), the vector space sum \(\xi^{k_{1}}(x_{1})+...+\xi^{k_{j}}(x_{j})\) is direct;
2. (Osculation) For any \(x\in\partial\Gamma\) and \(k_{1},...,k_{j}\) with \(K=\sum_{l=1}^{j}k_{l}<n\) we have that \(\xi^{K}(x)=\lim_{m\to\infty}\left[\xi^{k_{1}}(x_{1}^{m})\oplus...\oplus\xi^{k_{j}}(x_{j}^{m})\right]\) for any sequence \((x_{1}^{m},...,x_{j}^{m})\) of \(j\)-tuples of distinct points so that for all \(l\), the sequence \(x_{l}^{m}\) converges to \(x\).
A hyperconvex Frenet curve \((\xi^{1},...,\xi^{n-1})\) is entirely determined by \(\xi^{1}\). The standard example of such a curve is the Veronese curve \(\xi^{1}:\mathbb{RP}^{1}\rightarrow\mathbb{RP}^{n-1}\), given in the model of \(\mathbb{R}^{k}\) \((k=2,n)\) as the homogeneous polynomials on \(\mathbb{R}^{2}\) of degree \(k-1\) by \([f]\mapsto[f^{n-1}]\). The result relevant to us here, which serves as our working definition of a Hitchin representation, is:
**Theorem 3.1** (Labourie [22], Guichard [18]).: _A representation \(\rho:\Gamma\rightarrow\mathrm{PSL}(n,\mathbb{R})\) is Hitchin if and only if there exists a \(\rho\)-equivariant hyperconvex Frenet curve._
We denote the \(\mathrm{PSL}(n,\mathbb{R})\) Hitchin component(s) by \(\mathrm{Hit}_{n}(S)\). A fact that will be useful to us is that Hitchin representations \(\rho:\Gamma\rightarrow\mathrm{PSL}(n,\mathbb{R})\) may always be lifted to \(\mathrm{SL}(n,\mathbb{R})\).
Though the definition of a hyperconvex Frenet curve is stated in terms of sums of \(\xi^{k}\), work of Guichard [17] shows that intersections of \(\xi^{k}\) are also quite well-behaved, which is often the way in which we interact with the Frenet property.
**Proposition 3.2** (Guichard [17]).: _Let \(\xi=(\xi^{1},...,\xi^{n-1})\) be a hyperconvex Frenet curve. Then:_
1. _(General Position) If_ \(n=\sum_{i=1}^{j}k_{i}\) _and_ \(x_{1},...,x_{j}\in\partial\Gamma\) _are distinct, then_ \[\bigcap_{i=1}^{j}\xi^{n-k_{i}}(x_{i})=\{0\};\]
2. _(Dual Osculation) For any_ \(x\in\partial\Gamma\) _and_ \(k_{1},...,k_{j}\) _with_ \(K=\sum_{l=1}^{j}k_{l}<n\) _we have that for any sequence_ \((x_{1}^{m},...,x_{j}^{m})\) _of_ \(j\)_-tuples of distinct points in_ \(\partial\Gamma\) _so that_ \(x_{l}^{m}\) _converges to_ \(x\) _for each_ \(l\)_,_ \[\xi^{n-K}(x)=\lim_{m\rightarrow\infty}\bigcap_{i=1}^{j}\xi^{n-k_{i}}(x_{i}^{m}).\]
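To fix a concrete instance of dual osculation that recurs below (a routine specialization, recorded for the reader's convenience): for \(n=4\), taking \(k_{1}=1\) and \(k_{2}=2\) (so \(K=3\)) gives, for any sequences \(x_{1}^{m},x_{2}^{m}\to x\) with \(x_{1}^{m}\neq x_{2}^{m}\) for all \(m\),

\[\lim_{m\to\infty}\xi^{3}(x_{1}^{m})\cap\xi^{2}(x_{2}^{m})=\xi^{1}(x),\]

which is how Proposition 3.2 is used in §4.3.2.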
### Properly Convex Foliated Projective Structures
In this subsection, we recall the properties of geodesic foliations on surfaces that make the definition of properly convex foliated projective structures on surfaces well-defined, state their definitions and basic properties, and collect the main results of Guichard and Wienhard in [19]. Our notation and the content here follows [19].
#### 3.2.1. Geodesic Foliations
Fixing a hyperbolic metric on \(S\) identifies the geodesic foliations of \(T^{1}\widetilde{S}\) and \(T^{1}\mathbb{H}^{2}\), and identifies \(\partial\Gamma\) with \(\partial\mathbb{H}^{2}\). There is a well-known description of \(T^{1}\mathbb{H}^{2}\) as orientation-compatible triples \((t_{+},t_{0},t_{-})\) of distinct points in \(\partial\Gamma\). We denote the space of such triples \(\partial\Gamma^{(3)+}\). One obtains this identification by associating to \((p,v)\in T^{1}\mathbb{H}^{2}\) the endpoints at infinity of the geodesic \(\ell\) determined by \(v\) as \(t_{-},t_{+}\), and the endpoint \(t_{0}\) of the geodesic perpendicular to \(\ell\) at \(p\) that makes \((t_{+},t_{0},t_{-})\) orientation-compatible (see Figure 1).
Under this identification, the leaves of the semi-stable geodesic foliation \(\overline{\mathcal{F}}\) are the collections of elements of \(\partial\Gamma^{(3)+}\) with fixed \(t_{+}\) entry, and the leaves of the stable geodesic foliation \(\overline{\mathcal{G}}\) are the collections of elements of \(\partial\Gamma^{(3)+}\) with fixed \(t_{-}\) and \(t_{+}\) entries. So the leaf spaces of \(\overline{\mathcal{F}}\) and \(\overline{\mathcal{G}}\) are identified with \(\partial\Gamma\) and \(\partial\Gamma^{(2)}:=\partial\Gamma\times\partial\Gamma-\{(x,x)\mid x\in\partial\Gamma\}\), respectively. In the following, we shall identify elements of \(\partial\Gamma\) and \(\partial\Gamma^{(2)}\) and the corresponding leaves of \(\overline{\mathcal{F}},\overline{\mathcal{G}}\).
This identification between \(T^{1}\widetilde{S}\) and \(\partial\Gamma^{(3)+}\) is equivariant with respect to the natural actions of \(\Gamma\), and as a consequence, the topological type of the pair \((\mathcal{F},\mathcal{G})\) is independent of the choice of hyperbolic metric.
#### 3.2.2. Properly Convex Foliated Projective Structures
Consider \(T^{1}S\), together with its stable and semi-stable foliations \((\mathcal{F},\mathcal{G})\). Let \(\mathcal{P}(S)\) denote the collection of projective structures on \(T^{1}S\).
**Definition 3.3**.: _Let \(P\) be a projective structure on \(T^{1}S\), viewed as an atlas of charts \(\{(U,\varphi_{U})\}\) to \(\mathbb{RP}^{3}\) with projective transitions. Denote (a representative of) the developing data of \(P\) as \((\mathrm{dev},\mathrm{hol})\)._
1. \(P\) _is_ foliated _if given any chart \((U,\varphi_{U})\) and \(v\in U\) contained in the leaves \(g_{v}\cap U\in\mathcal{G}|_{U}\) and \(f_{v}\cap U\in\mathcal{F}|_{U}\), then \(\varphi_{U}(g_{v}\cap U)\) is contained in a projective line and \(\varphi_{U}(f_{v}\cap U)\) is contained in a projective plane._
2. _If for any leaf_ \(f\in\widetilde{\mathcal{F}}\)_, the developed image_ \(\mathrm{dev}(f)\) _is a properly convex domain in a projective plane, then we say_ \(P\) _is_ properly convex.
3. _Two foliated projective structures_ \(P,P^{\prime}\) _are said to be equivalent if there is a homeomorphism_ \(h\) _of_ \(T^{1}S\)_, isotopic to the identity, that is a projective equivalence of_ \(P\) _and_ \(P^{\prime}\) _and satisfies_ \(h^{*}\mathcal{F}=\mathcal{F}\) _and_ \(h^{*}\mathcal{G}=\mathcal{G}\)_._
4. _Let_ \(\mathcal{P}_{f}(S)\) _and_ \(\mathcal{P}_{pcf}(S)\) _denote the collections of equivalence classes of foliated and properly convex foliated projective structures on_ \(T^{1}S\)_, respectively._
Note that it is not clear that the natural mappings of \(\mathcal{P}_{f}(S)\) and \(\mathcal{P}_{pcf}(S)\) to \(\mathcal{P}(S)\) given by forgetting the extra structure are injective, since the equivalence relation is refined. Developing maps of properly convex foliated projective structures always factor through \(T^{1}\widetilde{S}\) as a consequence of [19], so we may work with \(\overline{\mathcal{F}}\) and \(\overline{\mathcal{G}}\) in place of \(\widetilde{\mathcal{F}}\) and \(\widetilde{\mathcal{G}}\).
Let \(p:\overline{\Gamma}\to\Gamma\) be the map induced by the projection \(T^{1}(S)\to S\). In [19], it is proved that for properly convex foliated projective structures \((\mathrm{dev},\mathrm{hol})\), the value of \(\mathrm{hol}(\gamma)\) (\(\gamma\in\overline{\Gamma}\)) depends only on \(p(\gamma)\). So any properly convex foliated projective structure induces a representation \(\mathrm{hol}_{*}:\Gamma\to\mathrm{PSL}(4,\mathbb{R})\), well-defined up to conjugacy. Denote by \([\mathrm{hol}_{*}]\) the associated conjugacy class of representations. In [19] the following characterization of properly convex foliated projective structures in terms of the \(\mathrm{PSL}(4,\mathbb{R})\) Hitchin component is proved.
**Theorem 3.4** (Guichard-Wienhard [19]).: _The holonomy map \(\mathcal{P}_{pcf}(S)\to\mathrm{Hit}_{4}(S)\) given by \((\mathrm{dev},\mathrm{hol})\mapsto[\mathrm{hol}_{*}]\) is a homeomorphism._
The main definition for our investigations is:
**Definition 3.5**.: _Given a properly convex foliated projective structure induced by a representation \(\rho\), under the natural identification of the leaf space of \(\overline{\mathcal{F}}\) and \(\partial\Gamma\), we denote by \(\mathfrak{s}_{\rho}(x)\in\mathfrak{C}\) the projective equivalence class of \(\mathrm{dev}(x)\). We call \(\mathfrak{s}_{\rho}\) the leaf map associated to \(\rho\)._
Figure 1. The unit tangent bundle \(T^{1}\mathbb{H}\).
One useful tool developed by Guichard and Wienhard in their proof that all Hitchin representations induce properly convex foliated projective structures is an explicit description of the developing map of the associated projective structure in terms of the hyperconvex Frenet curve \(\xi=(\xi^{1},\xi^{2},\xi^{3})\). See Figure 2, and discussion below.
To be more explicit, fix a Hitchin representation \(\rho:\Gamma\to\operatorname{PSL}(4,\mathbb{R})\), and denote the corresponding equivariant Frenet curve by \(\xi=(\xi^{1},\xi^{2},\xi^{3})\). Using the identification of \(\partial\Gamma\) with the leaf space of \(\overline{\mathcal{F}}\), denote semi-stable leaves of the geodesic foliation on \(T^{1}\widetilde{S}\) by \(x\in\partial\Gamma\).
Following the notation of Guichard-Wienhard, define the two-argument map \(\xi^{1}:\partial\Gamma\times\partial\Gamma\to\mathbb{R}\mathbb{P}^{3}\) by
\[\xi^{1}_{t}(t^{\prime})=\begin{cases}\xi^{3}(t)\cap\xi^{2}(t^{\prime})&t\neq t ^{\prime}\\ \xi^{1}(t)&t=t^{\prime}\end{cases}.\]
Then we can define the developing map of the projective structure we seek as
\[\begin{aligned}\operatorname{dev}:\ \partial\Gamma^{(3)+}&\to\mathbb{RP}^{3}\\ (t_{+},t_{0},t_{-})&\mapsto\overline{\xi^{1}(t_{+})\,\xi^{1}_{t_{+}}(t_{-})}\cap\overline{\xi^{1}_{t_{-}}(t_{+})\,\xi^{1}_{t_{+}}(t_{0})},\end{aligned}\]
where we denote the line in \(\mathbb{RP}^{3}\) determined by two points \(a\) and \(b\) by \(\overline{ab}\). Write \(\Omega_{\rho}:=\operatorname{dev}(\partial\Gamma^{(3)+})\).
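Note, as an immediate check, that both points spanning the first of these two lines lie in \(\xi^{3}(t_{+})\) (since \(\xi^{1}(t_{+})\subset\xi^{3}(t_{+})\) by osculation, and \(\xi^{1}_{t_{+}}(t_{-})\subset\xi^{3}(t_{+})\) by definition), so

\[\operatorname{dev}(t_{+},t_{0},t_{-})\in\xi^{3}(t_{+});\]

this is compatible with the description below of the leaves \(\operatorname{dev}(x)\) as \(\xi^{3}(x)\cap\Omega_{\rho}\).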
A few qualitative remarks are in order. Here the boundary of \(\Omega_{\rho}\) is given by \(\partial\Omega_{\rho}=\bigsqcup_{t\in\partial\Gamma}\xi^{2}(t)\), where disjointness is a consequence of hyperconvexity. For any \(x\in\partial\Gamma\), the leaf \(\operatorname{dev}(x)\) is \(\xi^{3}(x)\cap\Omega_{\rho}\). The boundary of \(\operatorname{dev}(x)\) is given by \(\{\xi^{2}(y)\cap\xi^{3}(x):y\neq x\}\cup\{\xi^{1}(x)\}\). A supporting line to \(\partial\operatorname{dev}(x)\) at \(\xi^{2}(y)\cap\xi^{3}(x)\) is \(\xi^{3}(y)\cap\xi^{3}(x)\), and a supporting line to \(\partial\operatorname{dev}(x)\) at \(\xi^{1}(x)\) is \(\xi^{2}(x)\). These lines do not intersect \(\operatorname{dev}(x)\) due to the general position property of Frenet curves and our description of the boundary of \(\operatorname{dev}(x)\). We shall show in §3.3 that these supporting lines are unique.
Figure 2. The developing map in terms of the Frenet curve.
### Two Remarks on Boundaries of Leaves
In this subsection, we describe two basic geometric features of the leaves \(\mathrm{dev}(x)\).
Our first observation is that the ruling of the boundary of \(\Omega\) by \(\xi^{2}(x)\) (\(x\in\partial\Gamma\)) gives rise to natural identifications of boundaries of leaves \(\partial\mathrm{dev}(x)\). Geometrically, any boundary point \(p\) of \(\mathrm{dev}(x)\) is contained in exactly one \(\xi^{2}(y)\) for \(y\in\partial\Gamma\). Given another \(x^{\prime}\in\partial\Gamma\), the identification of boundaries maps \(p\) to the unique intersection of \(\xi^{2}(y)\) with \(\partial\mathrm{dev}(x^{\prime})\) (see Figure 3). The below expresses this symbolically.
**Definition 3.6**.: _For \(x,x^{\prime}\in\partial\Gamma\), define the map \(\Xi_{x\to x^{\prime}}:\partial\mathrm{dev}(x)\to\partial\mathrm{dev}(x^{\prime})\) by \(\Xi_{x\to x^{\prime}}(\xi^{1}_{x}(y))=\xi^{1}_{x^{\prime}}(y)\) for \(y\in\partial\Gamma\)._
As a consequence of continuity of \(\xi^{1}_{x}(y)\), which follows from dual osculation in Proposition 3.2, the maps \(\Xi_{x\to x^{\prime}}(y)\) vary continuously in \(x,x^{\prime}\), and \(y\).
Our second observation concerns the structure of the boundaries \(\partial\mathrm{dev}(x)\) for \(x\in\partial\Gamma\): the leaves are strictly convex with \(C^{1}\) boundary. Strict convexity, in particular, is a tool that we use for some obstructions in later case analysis.
**Proposition 3.7** (Basic Regularity).: _For all \(x\in\partial\Gamma\), the leaf \(\mathrm{dev}(x)\) is strictly convex and has \(C^{1}\) boundary._
Figure 3. Sketch of two slices of a domain of discontinuity and the ruling of the boundary by lines.
Proof.: To show that \(\mathrm{dev}(x)\) has \(C^{1}\) boundary, we consider the dual properly convex domain \(\mathrm{dev}(x)^{*}\subset\xi^{3}(x)^{*}\). The boundary \(\partial\mathrm{dev}(x)^{*}\) is a topological circle consisting of supporting lines to \(\partial\mathrm{dev}(x)\). The path \(\partial\Gamma\to\partial\mathrm{dev}(x)^{*}\) given by
\[y\mapsto\begin{cases}(\xi^{3}(y)\cap\xi^{3}(x))^{*}&y\neq x\\ \xi^{2}(x)^{*}&y=x\end{cases}\]
is a continuous injection of \(\partial\Gamma\cong S^{1}\) into \(\partial\mathrm{dev}(x)^{*}\cong S^{1}\), and so must be surjective. So all supporting lines to \(\mathrm{dev}(x)\) must be of the form \(\xi^{3}(y)\cap\xi^{3}(x)\) or \(\xi^{2}(x)\). In particular, all boundary points of \(\mathrm{dev}(x)\) have unique tangent lines, which implies \(\partial\mathrm{dev}(x)\) is \(C^{1}\).
Strict convexity follows from the general position property of Frenet curves as follows. Supposing otherwise, \(\partial\mathrm{dev}(x)\) must contain an interval \(I\), contained in a line \(\ell_{I}\). For any \(y\neq x\in\partial\Gamma\) so that \(\xi^{1}_{x}(y)\) is in the interior of \(I\), we must have \(\xi^{3}(y)\cap\xi^{3}(x)=\ell_{I}\), as this is a supporting line to \(\partial\mathrm{dev}(x)\) at a point in \(I\). This is impossible by the general position property of Frenet curves, and proves strict convexity.
## 4. Proofs of the Main Theorems
In this section we prove Theorems 1.1 and 1.2. The vast majority of the effort is spent showing \(\mathfrak{s}_{\rho}\) is not constant unless the Hitchin representation \(\rho\) is \(4\)-Fuchsian. We begin by setting notation in §4.1. An outline of the structure of the core of our proofs is then given in §4.2, and the remainder of the paper is spent following this outline.
### Notation, Conventions, and Definitions
Let us begin by setting up notation to facilitate comparison of projective types of leaves.
The group \(\mathrm{SL}(3,\mathbb{R})\) acts simply transitively on \(4\)-tuples of points in general position in \(\mathbb{RP}^{2}\). So, by fixing a point \(t_{0}\in\partial\Gamma\) and a continuously varying family of \(4\) points
\[\{(p_{1}(t),p_{2}(t),p_{3}(t),p_{4}(t))\mid t\in\partial\Gamma\}\subset\mathbb{ RP}^{3}\]
so that \(p_{i}(t)\in\xi^{3}(t)\) \((i=1,...,4)\) and the points \((p_{1}(t),p_{2}(t),p_{3}(t),p_{4}(t))\) are in general position within \(\xi^{3}(t)\) for all \(t\in\partial\Gamma\), we induce well-determined projective equivalences \(\xi^{3}(t)\to\xi^{3}(t_{0})\) for all \(t\in\partial\Gamma\).
One way to produce such a normalization is to take \(4\) distinct points \(x_{1},...,x_{4}\in\partial\Gamma\) and let \(p_{i}(t)\)\((i=1,2,3,4)\) be the unique point of intersection between \(\xi^{2}(x_{i})\) and \(\partial\mathrm{dev}(t)\). The continuity of the points \(p_{i}(t)\) results in such a normalization being continuous in the sense that the induced mappings from a reference \(\mathbb{RP}^{2}\) with \(4\) fixed points in general position to \(\xi^{3}(t)\subset\mathbb{RP}^{3}\) vary continuously.
Throughout the following, we shall once and for all fix such a normalization and view all domains \(\mathrm{dev}(t)\) as subsets of \(\mathbb{RP}^{2}\cong\xi^{3}(t_{0})\). When relevant, we will denote the map \(\xi^{3}(t)\to\xi^{3}(t_{0})\) by \(N_{t\to t_{0}}\). We denote \(N_{t\to t_{0}}(\mathrm{dev}(t))\) by \(C_{t}\). At times when not doing so would make notation extremely cumbersome, we abuse notation to suppress the normalization used to identify \(\mathrm{dev}(t)\) and \(C_{t}\).
**Definition 4.1**.: _Given a Hitchin representation \(\rho\), domains \(C_{t}\) as above, and a subset \(S\subset\partial\Gamma\), a projective equivalence of leaves over \(S\) is a function \(f:S\to\mathrm{Aut}(\xi^{3}(t_{0}))\) so that \(f(t)C_{0}=C_{t}\) for all \(t\in S\), where \(C_{0}:=C_{t_{0}}\)._
Projective equivalences of leaves need not exist over a given subset \(S\subset\partial\Gamma\). The leaf map \(\mathfrak{s}_{\rho}\) is constant if and only if a family of projective equivalences over \(\partial\Gamma\) exists. We do not assume continuity or any sort of regularity, measurability, or the like of projective equivalences over sets \(S\) unless explicitly noted.
At times, it will be useful to consider projective equivalences of leaves as two-argument maps between leaves seen as subsets of \(\mathbb{RP}^{3}\), which the next bit of notation facilitates.
**Definition 4.2**.: _Given a projective equivalence \(f\) of leaves over \(S\) and \(t,t^{\prime}\in S\), define the projective equivalence \(f(t,t^{\prime}):\text{dev}(t)\to\text{dev}(t^{\prime})\) by_
\[f(t,t^{\prime})=N_{t^{\prime}\to t_{0}}^{-1}\circ f(t^{\prime})\circ f(t)^{-1} \circ N_{t\to t_{0}}.\]
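Unpacking Definition 4.2, \(f(t,t^{\prime})\) is the composition

\[\operatorname{dev}(t)\xrightarrow{\;N_{t\to t_{0}}\;}C_{t}\xrightarrow{\;f(t)^{-1}\;}C_{0}\xrightarrow{\;f(t^{\prime})\;}C_{t^{\prime}}\xrightarrow{\;N_{t^{\prime}\to t_{0}}^{-1}\;}\operatorname{dev}(t^{\prime}),\]

each arrow of which is a projective equivalence; in particular \(f(t,t)\) is the identity on \(\operatorname{dev}(t)\).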
We adopt one final piece of notation in the following: if \(x\in\partial\Gamma\) and \(p\in\partial C_{x}\), we denote the regularity of \(\partial C_{x}\) at \(p\) by \(\operatorname{reg}_{x}(p)\).
### Outline of Proof that non-Fuchsian Leaf Maps are Nonconstant
Our proof assumes that \(\mathfrak{s}_{\rho}\) is constant, so that there is a projective equivalence \(f\) over \(\partial\Gamma\), and proves that \(\rho\) is \(4\)-Fuchsian through obtaining constraints on the eigenvalues of \(\rho(\Gamma)\).
In order to get initial leverage for our arguments, we require some control on the automorphisms of individual leaves \(\mathfrak{s}_{\rho}(x)\). The dichotomy we use to get this control is the closed subgroup theorem, which in our setting implies that either every leaf \(\mathfrak{s}_{\rho}(x)\) \((x\in\partial\Gamma)\) has discrete projective automorphism group, or there is an \(x\in\partial\Gamma\) so that \(\operatorname{Aut}(\xi^{3}(t_{0}),C_{x})\subset\operatorname{SL}(3,\mathbb{R})\) contains a \(1\)-parameter subgroup. Our proof divides into these two cases.
The discrete case is the most involved. In it, we first show that though \(f\) may be everywhere discontinuous, we may modify \(f\) to obtain a _continuous_ family \(\widetilde{f}\) of projective equivalences over a nonempty open set \(U\subset\partial\Gamma\), which can be enlarged using equivariance of leaf maps. The informal idea of the phenomenon underlying why this is possible is that all of the discontinuity of \(f\) comes from two sources: projective automorphisms of \(\mathfrak{s}_{\rho}(x)\), and divergent families of projective equivalences \(A_{t}\) so that \(A_{t}\overline{C_{t_{0}}}\) converges to \(\overline{C_{t^{\prime}}}\) in the Hausdorff topology for some \(t^{\prime}\). This is exploited by carefully choosing countable covers \(S_{i}\) of \(\partial\Gamma\) so that \(f\) is well-behaved on each \(S_{i}\), then applying the Baire category theorem to show some \(S_{i}\) is large enough to be useful.
Next, we use a geometric "sliding" argument based on boundary regularity to show that if \(\gamma\in\Gamma\) and there is a continuous family of projective equivalences \(g\) over an appropriate open set \(U_{\gamma}\subset\partial\Gamma\), then the logarithms of the eigenvalues of \(\rho(\gamma)\) satisfy a homogeneous polynomial equation.
Finally, we apply the eigenvalue constraints obtained from the condition that \(\mathfrak{s}_{\rho}\) is constant to show that \(\rho\) must be \(4\)-Fuchsian. We use two tools here. Our starting point is that the classification of Zariski closures of Hitchin representations forces \(\rho(\Gamma)\) to be Zariski dense in an appropriate simple real linear Lie group. Inside this Lie group, we may apply work of Benoist on limit cones of Zariski-dense subgroups and find that our polynomial constraint is incompatible with the structure of limit cones unless \(\rho\) is \(4\)-Fuchsian.
The structure of ideas in the non-discrete case is similar, but the mechanisms through which we establish sufficient uniformity for our methods differ. In particular, the structure of a convex domain that is stable under a one-parameter family of automorphisms is extremely restricted, and such a domain has a rather smooth boundary. We remark that in this case we prove something stronger than is strictly necessary to show Theorem 1.2, but which is needed to establish the final characterization of the Fuchsian locus in Theorem 1.1: that this case can only occur if \(\rho\) is \(4\)-Fuchsian. In particular, we do not assume \(\mathfrak{s}_{\rho}\) is constant _a priori_ in this case.
The discrete case is the topic of §4.3. The matter of continuity is addressed in §4.3.1 and boundary regularity constraints in §4.3.2. In §4.3.3 we show that \(\rho\) is \(4\)-Fuchsian or \(\rho(\Gamma)\) is Zariski dense. In §4.3.4 we recall Benoist's theorem on limit cones and apply it to show that \(\rho\) is \(4\)-Fuchsian. The non-discrete case is the topic of §4.4. All sub-cases of leaf geometries to consider here are classified in §4.4.1. Analysis of all but one case of leaf geometry is carried out in §4.4.2. Analysis of the final case is completed in §4.4.3, which depends on §4.3.2 and §4.3.4. We explain how Theorems 1.1 and 1.2 follow in §4.5.
### The Discrete Case
In this subsection, we assume that the group \(\operatorname{Aut}(\xi^{3}(t_{0}),C_{x})\) of projective automorphisms of \(C_{x}\) is discrete for all \(x\in\partial\Gamma\).
#### 4.3.1. Continuity
We contend first with the poor separation of points in \(\mathfrak{C}\). This manifests in the phenomenon that a family of projectively equivalent closed domains \(A_{t}\overline{\Omega}\) may vary continuously in the Hausdorff topology while \(A_{t}\) varies discontinuously. Modifying a continuous family \(A_{t}\) by projective automorphisms of \(\Omega\) leads to one source of such examples.
We must also address the possibility that there is a continuous family of projective equivalences \(\{A_{t}\}_{0\leq t<1}\) that leaves all compact subsets of \(\operatorname{SL}(3,\mathbb{R})\) and never comes close to a projective automorphism of \(C\), yet \(A_{t}\overline{C}\) converges to \(\overline{C}\) in the Hausdorff topology. Such a family \(A_{t}\) extends discontinuously to \([0,1]\) by setting \(A_{1}=e\), while \(\{A_{t}\overline{C}\}_{0\leq t\leq 1}\) is continuous in the Hausdorff topology.
Some intuition from Benzecri's compactness theorem is that these examples should be the only ways discontinuities may appear in projective equivalences of leaves \(f\) over \(\partial\Gamma\). The idea of our proof of Proposition 4.4 below is that, in the present situation, all of the discontinuity of \(f\) comes from jumps of (locally) definite size. So we cover the target of \(f\) by sets \(K_{n}\) so small that no discontinuity is possible within \(K_{n}\), and use the Baire category theorem to find at least one \(f^{-1}(K_{n})\) that is large enough to be useful.
Before proving Proposition 4.4, it is useful to know that the domains \(C_{t}\) vary continuously in the Hausdorff topology. We record this here, along with the adjacent facts relevant to our main results.
**Lemma 4.3** (Leaf Map Basics).: _Let \(\rho\in\operatorname{Hit}_{4}(S)\). Then \(t\mapsto\overline{C_{t}}\) is continuous in the Hausdorff topology, \(\mathfrak{s}_{\rho}\) is continuous, and if \(x\in\partial\Gamma\) we have \(\mathfrak{s}_{\rho}(x)=\mathfrak{s}_{\rho}(\gamma x)\) for all \(\gamma\in\Gamma\)._
Note that orbits of the action of \(\Gamma\) on \(\partial\Gamma\) are dense, as this action is minimal. So for all \(x\in\partial\Gamma\), the leaf map \(\mathfrak{s}_{\rho}\) is constant on the dense set \(\Gamma x\).
Proof.: Observe that \(\overline{C_{t}}\) varies continuously in the Hausdorff topology on domains in \(\xi^{3}(t_{0})\), since \(\partial C_{t}\) is parametrized by the continuous function \(\partial\Gamma\to\xi^{3}(t_{0})\) given by \(x\mapsto N_{t\to t_{0}}\circ\Xi_{t_{0}\to t}(\xi^{1}_{t_{0}}(x))\), and \(\Xi_{t_{0}\to t}\) depends continuously on \(t\). So \(\mathfrak{s}_{\rho}(t)=[C_{t}]\in\mathfrak{C}\) varies continuously.
For the other claim, if \(\gamma\in\Gamma\) we have \(\mathfrak{s}_{\rho}(\gamma x)=[\rho(\gamma)(\operatorname{dev}(x))]\), where \(\rho(\gamma)|_{\xi^{3}(x)}:\xi^{3}(x)\to\xi^{3}(\gamma x)\) is induced by a linear map and hence a projective equivalence.
We are now ready to prove the main proposition of this paragraph.
**Proposition 4.4** (Modify to Continuity).: _Suppose that \(\mathfrak{s}_{\rho}\) is constant and every leaf \(\mathfrak{s}_{\rho}(x)\) has discrete automorphism group. Then there is a continuous projective equivalence \(\widetilde{f}\) of leaves over a non-empty open set \(U\subset\partial\Gamma\)._
Proof.: Let \(f\) be an arbitrary family of projective equivalences of leaves over \(\partial\Gamma\). To begin, let us fix a right-invariant metric \(d_{P}\) on \(\operatorname{SL}(3,\mathbb{R})\), and a metric \(d_{S}\) on \(\partial\Gamma\). Note that for all \(s\in\partial\Gamma\), we have \(\operatorname{Aut}(\xi^{3}(t_{0}),C_{s})=f(s)\operatorname{Aut}(\xi^{3}(t_{0}),C_{0})f(s)^{-1}\).
To proceed, we need locally uniform control in \(f(s)\) on the separation of \(\operatorname{Aut}(\xi^{3}(t_{0}),C_{s})\) from the identity. To this end, we adopt the notation that for \(\Lambda\) a discrete subgroup of a Lie group \(G\) equipped with a right-invariant metric we set \(\kappa(\Lambda):=\inf\{d(e,g)\mid g\in\Lambda-\{e\}\}\). Let us abbreviate conjugation by \(\Psi_{g}:h\mapsto ghg^{-1}\). We obtain control through the following fact, which is a straightforward consequence of differentiability of conjugation. We include a proof in the appendix for the convenience of the reader.
**Lemma 4.5** (Discreteness is Conjugation-Stable).: _Let \(G\) be a Lie group and \(\Lambda<G\) be a discrete subgroup. Consider the function \(\eta:g\mapsto\kappa(\Psi_{g}(\Lambda))\). Let \(g_{0}\in G\) be given. Then there is a neighborhood \(U\) of \(g_{0}\) so that \(\eta(h)>\kappa(\Psi_{g_{0}}(\Lambda))/3\) for all \(h\in U\)._
By Lemma 4.5 (Discreteness is Conjugation-Stable), to each \(g\in\operatorname{SL}(3,\mathbb{R})\), there exists a set \(K_{g}\) with the following properties:
1. \(K_{g}\) is compact and contains \(g\) in its interior,
2. Letting \(\kappa_{g}\) denote \(\inf_{h\in K_{g}}(\kappa(\Psi_{h}(\operatorname{Aut}(\xi^{3}(t_{0}),C_{0}))))=\inf_{h\in K_{g}}(\kappa(\operatorname{Aut}(\xi^{3}(t_{0}),hC_{0})))\), we have \(\kappa_{g}>0\),
3. The map \(K_{g}\times K_{g}\to\operatorname{SL}(3,\mathbb{R})\) given by \((h_{1},h_{2})\mapsto h_{1}h_{2}^{-1}\) has image contained in the ball \(B_{\kappa_{g}/2}(e)\).
Now let \(\{K_{g_{i}}\}\) be a countable cover of \(\operatorname{SL}(3,\mathbb{R})\) by such compact sets. Define \(S_{i}\subset\partial\Gamma\) as \(f^{-1}(K_{g_{i}})\). We show:
**Claim:** The restriction of \(f\) to \(S_{i}\) is uniformly continuous.
Proof of Claim.: Fix \(\epsilon>0\). We must show that there is some \(\delta>0\) so that if \(d_{S}(t,t^{\prime})<\delta\) and \(f(t),f(t^{\prime})\in K_{g_{i}}\), then \(d_{P}(f(t),f(t^{\prime}))<\epsilon\).
We first remark that the map \(\overline{B_{\kappa_{g_{i}}/2}(e)}\times K_{g_{i}}\to\mathbb{R}\) given by \((A,h)\mapsto d_{\operatorname{Haus}}(h\overline{C_{0}},Ah\overline{C_{0}})\) is continuous and has zero set exactly \(\{e\}\times K_{g_{i}}\) by construction of \(\kappa_{g_{i}}\). It follows from compactness that there is an \(\epsilon^{\prime}>0\) so that if \(h\in K_{g_{i}}\), \(A\in\overline{B_{\kappa_{g_{i}}/2}(e)}\), and \(d_{\operatorname{Haus}}(h\overline{C_{0}},Ah\overline{C_{0}})<\epsilon^{\prime}\), then \(A\in B_{\epsilon}(e)\).
As \(\partial\Gamma\) is compact, the map \(t\mapsto\overline{C_{t}}\) is uniformly continuous with respect to the Hausdorff topology on \(\xi^{3}(t_{0})\), hence there is a \(\delta>0\) so that if \(d_{S}(t,t^{\prime})<\delta\), then \(d_{\operatorname{Haus}}(\overline{C_{t}},\overline{C_{t^{\prime}}})<\epsilon^ {\prime}\). So if \(d_{S}(t,t^{\prime})<\delta\) and \(t,t^{\prime}\in S_{i}\), we have
\[\epsilon^{\prime}>d_{\operatorname{Haus}}(\overline{C_{t}},\overline{C_{t^{ \prime}}})=d_{\operatorname{Haus}}(\overline{C_{t}},f(t^{\prime})f(t)^{-1} \overline{C_{t}}).\]
As \(C_{t}=f(t)C_{0}\) with \(f(t)\in K_{g_{i}}\) and \(f(t^{\prime})f(t)^{-1}\in B_{\kappa_{g_{i}}/2}(e)\), we have from our previous observation that \(\epsilon>d_{P}(e,f(t^{\prime})f(t)^{-1})=d_{P}(f(t^{\prime}),f(t))\) by right-invariance.
The point of this claim is that for any \(i\), there exists a continuous extension \(\widetilde{f}_{i}\) of \(f|_{S_{i}}\) to \(\overline{S_{i}}\). So \(\widetilde{f}_{i}\) is a continuous projective equivalence of leaves over \(\overline{S_{i}}\).
Now, as the \(S_{i}\) cover \(\partial\Gamma\), the collection \(\{\overline{S_{i}}\}\) is a countable cover of \(\partial\Gamma\) by closed sets, and so by the Baire category theorem at least one \(\overline{S_{i}}\) has non-empty interior. For any such \(i\), setting \(\widetilde{f}=\widetilde{f}_{i}\) yields the desired continuous family of projective equivalences of leaves.
Using the action of \(\Gamma\) on \(\partial\Gamma\), we may enlarge the open sets where we have continuous families of projective equivalences.
**Corollary 4.6** (Enlarge Domains).: _Suppose \(\mathfrak{s}_{\rho}\) is constant and every leaf \(\mathfrak{s}_{\rho}(x)\) has discrete automorphism group. Let \(\gamma\in\Gamma-\{e\}\) have attracting and repelling fixed-points \(\gamma^{+},\gamma^{-}\in\partial\Gamma,\) respectively. Then there is a connected open set \(U\) containing \(\gamma^{+}\) and \(\gamma^{-}\) and a continuous projective equivalence of leaves \(f\) over \(U\)._
Proof.: Proposition 4.4 (Modify to Continuity) produces an open set \(U\subset\partial\Gamma\) and a continuous projective equivalence of leaves \(\widetilde{f}\) over \(U\). By equivariance of \(\operatorname{dev}\), for any \(\eta\in\Gamma\) we have
\[C_{\eta x}=N_{\eta x\to t_{0}}(\operatorname{dev}(\eta x))=N_{\eta x\to t_{0}}( \rho(\eta)\text{dev}(x))=N_{\eta x\to t_{0}}(\rho(\eta)(N_{x\to t_{0}}^{-1}(C_{x}) )).\]
So defining \(f:\eta U\to\operatorname{SL}(3,\mathbb{R})\) by
\[\eta x\mapsto N_{\eta x\to t_{0}}\circ\rho(\eta)\circ N_{x\to t_{0}}^{-1} \circ\widetilde{f}(x)\]
gives a continuous projective equivalence of leaves over \(\eta U\). The corollary now follows from North-South dynamics of the action of \(\Gamma\) on \(\partial\Gamma\).
#### 4.3.2. Boundary Regularity
Throughout this paragraph, we suppress uses of the normalization maps \(N_{x\to t_{0}}:\operatorname{dev}(x)\to\xi^{3}(t_{0})\) to make notation manageable. The goal of this paragraph is to prove the following claim.
**Proposition 4.7** (Regularity Constraints).: _Suppose that \(\rho\) is a Hitchin representation, \(\mathfrak{s}_{\rho}\) is constant, and \(\mathfrak{s}_{\rho}(x)\) has discrete automorphism group for all \(x\in\partial\Gamma\). Then for all \(\gamma\in\Gamma-\{e\}\),_
\[\operatorname{reg}_{\gamma^{+}}(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))= \operatorname{reg}_{\gamma^{-}}\xi^{1}(\gamma^{-}).\]
A key input to our proof of Proposition 4.7 is the following application of discreteness of automorphism groups of leaves, which allows us to determine the values of a continuous projective equivalence of leaves at specific points. It says that at specific points, continuous projective equivalences of leaves commute with \(\rho\) in an appropriate sense.
**Lemma 4.8** (Commutativity Lemma).: _Let \(\gamma\in\Gamma-\{e\}\). If \(f\) is a continuous projective equivalence of leaves over a connected open set \(U\) containing \(\gamma^{+}\), then for all \(s\) in the connected component of \(U\cap\gamma^{-1}U\) containing \(\gamma^{+}\) and all \(p\in\overline{\operatorname{dev}(\gamma^{+})}\), we have_

\[\rho(\gamma)(p)=[f(\gamma s,\gamma^{+})\circ\rho(\gamma)\circ f(\gamma^{+},s)](p).\]
Proof.: The maps \(\{A_{s}\}\), for \(s\) ranging over the connected component of \(U\cap\gamma^{-1}U\) containing \(\gamma^{+}\), given by
\[A_{s}:\operatorname{dev}(\gamma^{+}) \to\operatorname{dev}(\gamma^{+})\] \[p \mapsto[f(\gamma s,\gamma^{+})\circ\rho(\gamma)\circ f(\gamma^{+},s)](p)\]
are a continuous family of projective equivalences of \(\operatorname{dev}(\gamma^{+})\), and hence must be constant by discreteness of \(\operatorname{Aut}(\xi^{3}(t_{0}),C_{\gamma^{+}})\). At \(s=\gamma^{+}\) we have \(A_{s}=\rho(\gamma)\).
We are now prepared to prove Proposition 4.7.
Proof of Proposition 4.7.: Let \(\gamma\in\Gamma-\{e\}\) be given. By Corollary 4.6 there is a connected open set \(U\) containing \(\gamma^{+}\) and \(\gamma^{-}\) and a continuous projective equivalence of leaves \(f\) over \(U\). Let \(I\subset\partial\Gamma\) be a closed interval with endpoints \(\gamma^{+},\gamma^{-}\). Our strategy is to constrain \(f(\gamma^{+},s)\) at \(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\), then conclude an equality of regularity at controlled points.
**Claim (Stuck to \(\xi^{2}(\gamma^{-})\)).** For all \(s\in I-\{\gamma^{-}\}\), we have \(f(\gamma^{+},s)(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))=\xi^{2}(\gamma^{- })\cap\xi^{3}(s)\).
Proof of Claim.: We compute an auxiliary limit in two different ways. Fix \(s\in I-\{\gamma^{-}\}\). By Lemma 4.8, for all \(n\in\mathbb{N}\), we have
\[f(\gamma^{n}s,\gamma^{+})\circ\rho(\gamma^{n})\circ f(\gamma^{+},s)\big{[}\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\big{]}=\rho(\gamma)^{n}\big{[}\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\big{]}=\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}),\]
so that
\[\lim_{n\to\infty}f(\gamma^{n}s,\gamma^{+})\circ\rho(\gamma^{n})\circ f(\gamma^{+},s)\big{[}\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\big{]}=\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}).\]
On the other hand, let \(p\in\partial\text{dev}(s)-\{\xi^{2}(\gamma^{-})\cap\xi^{3}(s)\}\). Then \(p=\xi^{2}(t^{\prime})\cap\xi^{3}(s)\) for some \(t^{\prime}\neq\gamma^{-}\) or \(p=\xi^{1}(s)\) with \(s\neq\gamma^{-}\). In the first case, we then have (see Figure 4.3.2)
\[\lim_{n\to\infty}\gamma^{n}s=\gamma^{+},\qquad\lim_{n\to\infty}\rho(\gamma)^{n}p=\lim_{n\to\infty}\xi^{3}(\gamma^{n}s)\cap\xi^{2}(\gamma^{n}t^{\prime})=\xi^{1}(\gamma^{+}),\]
where we have used North-South dynamics of the action of \(\Gamma\) on \(\partial\Gamma\) and dual osculation. In the second case, we similarly have
\[\lim_{n\to\infty}\rho(\gamma)^{n}\xi^{1}(s)=\lim_{n\to\infty}\xi^{1}(\gamma^{ n}s)=\xi^{1}(\gamma^{+}).\]
So by continuity of \(f\) and that \(f(\gamma^{+},\gamma^{+})\) is the identity on \(\xi^{3}(\gamma^{+})\),
\[\lim_{n\to\infty}f(\gamma^{n}s,\gamma^{+})\circ\rho(\gamma^{n})(p)=\xi^{1}(\gamma ^{+}).\]
As \(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\neq\xi^{1}(\gamma^{+})\), the only possibility is that \(f(\gamma^{+},s)(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))=\xi^{2}(\gamma^{-} )\cap\xi^{3}(s)\).
We next observe that \(f(\gamma^{-},\gamma^{+})(\xi^{1}(\gamma^{-}))=\xi^{2}(\gamma^{-})\cap\xi^{3}( \gamma^{+})\). To see this, note that by dual osculation, for any sequence \(s_{n}\to\gamma^{-}\) with \(s_{n}\neq\gamma^{-}\) for all \(n\), we have
\[\lim_{n\to\infty}\xi^{2}(\gamma^{-})\cap\xi^{3}(s_{n})=\xi^{1}(\gamma^{-}).\]
Since \(f\) is continuous and \(f(s_{n},\gamma^{+})(\xi^{2}(\gamma^{-})\cap\xi^{3}(s_{n}))=\xi^{2}(\gamma^{-} )\cap\xi^{3}(\gamma^{+})\) for all \(n\), we must have \(f(\gamma^{-},\gamma^{+})(\xi^{1}(\gamma^{-}))=\xi^{2}(\gamma^{-})\cap\xi^{3}( \gamma^{+})\).
Since \(f(\gamma^{-},\gamma^{+}):\xi^{3}(\gamma^{-})\to\xi^{3}(\gamma^{+})\) is a projective equivalence sending \(\xi^{1}(\gamma^{-})\) to \(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\), we conclude that \(\operatorname{reg}_{\gamma^{+}}(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))= \operatorname{reg}_{\gamma^{-}}\xi^{1}(\gamma^{-})\).
#### 4.3.3. Zariski Density
In the following, we fix a lift of \(\rho:\Gamma\to\operatorname{PSL}(4,\mathbb{R})\) to \(\operatorname{SL}(4,\mathbb{R})\). Such lifts always exist, as mentioned in §3. In this paragraph, we examine the constraints on eigenvalues of \(\rho\) given by Proposition 4.7 (Regularity Constraints) through Lemma 2.5 (Regularity at Fixed Points), and prove a dichotomy for the Zariski closure of \(\rho\).
We obtain eigenvalue data as follows. Under the hypotheses of Proposition 4.7, let \(\gamma\in\Gamma-\{e\}\), write the eigenvalues of \(\rho(\gamma)\) as \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\) ordered by decreasing modulus, and denote \(\log|\lambda_{i}|\) by \(\ell_{i}\) for \(i=1,...,4\). As a consequence of \(\rho\) being Hitchin,
\(\ell_{1}>\ell_{2}>\ell_{3}>\ell_{4}\) and all \(\lambda_{i}\) have the same sign. Denote the corresponding eigenlines by \(e_{1},e_{2},e_{3},e_{4}\). We have \(e_{1}=\xi^{1}(\gamma^{+})\), \(e_{2}=\xi^{2}(\gamma^{+})\cap\xi^{3}(\gamma^{-})\), \(e_{3}=\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\), \(e_{4}=\xi^{1}(\gamma^{-})\) (see [19] §5, in particular Fig. 7 there). Applying Lemma 2.5 (Regularity at Fixed Points) to the restrictions of \(\rho(\gamma)\) to the invariant subspaces \(\xi^{3}(\gamma^{+})\) and \(\xi^{3}(\gamma^{-})\), together with the constraint \(\operatorname{reg}_{\gamma^{+}}(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))=\operatorname{reg}_{\gamma^{-}}\xi^{1}(\gamma^{-})\), shows
\[\frac{\ell_{1}-\ell_{3}}{\ell_{2}-\ell_{3}}=\frac{\ell_{2}-\ell_{4}}{\ell_{3} -\ell_{4}}, \tag{4.1}\]
or equivalently that
\[(\ell_{1}-\ell_{3})(\ell_{3}-\ell_{4})-(\ell_{2}-\ell_{4})(\ell_{2}-\ell_{3}) =0. \tag{4.2}\]
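**Remark**.: _To unwind how Lemma 2.5 (Regularity at Fixed Points) produces Equation 4.1 (a routine computation recorded for the reader's convenience): on \(\xi^{3}(\gamma^{+})=e_{1}\oplus e_{2}\oplus e_{3}\), the point \(e_{3}=\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+})\) is the attracting fixed point of \(\rho(\gamma)^{-1}\) (replaced by its negative if necessary so that the eigenvalues are positive; this does not affect the ratios below), whose eigenvalue logarithms in decreasing order are \((-\ell_{3},-\ell_{2},-\ell_{1})\). Lemma 2.5 applied to the strictly convex leaf \(\operatorname{dev}(\gamma^{+})\) then gives_

\[\operatorname{reg}_{\gamma^{+}}(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))=\frac{(-\ell_{3})-(-\ell_{1})}{(-\ell_{3})-(-\ell_{2})}=\frac{\ell_{1}-\ell_{3}}{\ell_{2}-\ell_{3}},\]

_and on \(\xi^{3}(\gamma^{-})=e_{2}\oplus e_{3}\oplus e_{4}\), where the attracting fixed point of \(\rho(\gamma)^{-1}\) is \(e_{4}=\xi^{1}(\gamma^{-})\) with logarithms \((-\ell_{4},-\ell_{3},-\ell_{2})\), it gives \(\operatorname{reg}_{\gamma^{-}}\xi^{1}(\gamma^{-})=(\ell_{2}-\ell_{4})/(\ell_{3}-\ell_{4})\). Equating the two via Proposition 4.7 yields Equation 4.1._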
**Remark**.: _The homogeneity of Equation 4.2 is responsible for much of the usefulness of this constraint. It is expected since the points where regularity is computed for \(\gamma\) and \(\gamma^{n}\) (\(n\in\mathbb{N}\)) are the same._
**Remark**.: _One may also apply the same argument to \(\gamma^{-1}\) in place of \(\gamma\), which establishes an equality of regularity between two different points than the argument for \(\gamma\). The equation so obtained appears distinct from Equation 4.1 at a glance, but the two may be shown to be equivalent. So this offers no new information._
Zariski closures of Hitchin representations have been classified [33]. For a lift of \(\rho\) in the \(\operatorname{PSL}(4,\mathbb{R})\) Hitchin component to \(\operatorname{SL}(4,\mathbb{R})\), the classification states that the Zariski closure of \(\rho(\Gamma)\) is conjugate to a principal \(\operatorname{SL}(2,\mathbb{R})\) (in which case \(\rho\) is \(4\)-Fuchsian), is conjugate to \(\operatorname{Sp}(4,\mathbb{R})\), or is \(\operatorname{SL}(4,\mathbb{R})\). We shall show that \(\rho\) is \(4\)-Fuchsian through this condition.
Figure 5. The zero locus of Equation 4.2. Note it is a cone. Image generated by Wolfram Mathematica.
We begin by showing that the Zariski closure of \(\rho(\Gamma)\) is not conjugate to \(\operatorname{Sp}(4,\mathbb{R})\). The linear algebra behind this case is contained in the next lemma.
**Lemma 4.9** (Diagonal Form).: _Suppose that \(A\in\operatorname{SL}(4,\mathbb{R})\) is diagonalizable with real eigenvalues \((\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})\) satisfying \(|\lambda_{1}|>|\lambda_{2}|>|\lambda_{3}|>|\lambda_{4}|>0\), that the \(\ell_{i}=\log|\lambda_{i}|\) satisfy Equation 4.1, and that \(A\) is conjugate to a matrix in \(\operatorname{Sp}(4,\mathbb{R})\). Then \(A\) is conjugate to a matrix of the form \(\operatorname{diag}(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\) for some \(\lambda\in\mathbb{R}-[-1,1]\)._
Proof.: This is a computation. Write \(\alpha=\ell_{1}-\ell_{2},\beta=\ell_{2}-\ell_{3},\gamma=\ell_{3}-\ell_{4}\). Then \(\alpha,\beta,\gamma>0\), and Equation 4.1 is equivalent to \((\alpha+\beta)\gamma=(\beta+\gamma)\beta\), which reduces to \(\beta^{2}=\alpha\gamma\).
Eigenvalues of semisimple symplectic matrices \(A\) come in inverse pairs, i.e. if \(\lambda\) is an eigenvalue of \(A\) with multiplicity \(m\), then \(1/\lambda\) is also an eigenvalue with multiplicity \(m\). For us, this means that \(\ell_{1}-\ell_{2}=\ell_{3}-\ell_{4}\), so that \(\alpha^{2}=\beta^{2}=\gamma^{2}\), and by positivity \(\alpha=\beta=\gamma\). That \(A\in\operatorname{SL}(4,\mathbb{R})\) is to say \(\ell_{1}+\ell_{2}+\ell_{3}+\ell_{4}=0\), which implies the claim.
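Conversely (a consistency check, immediate from the definitions): for \(A=\operatorname{diag}(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\) with \(t=\log|\lambda|>0\) we have \((\ell_{1},\ell_{2},\ell_{3},\ell_{4})=(3t,t,-t,-3t)\), so that

\[\frac{\ell_{1}-\ell_{3}}{\ell_{2}-\ell_{3}}=\frac{4t}{2t}=2=\frac{4t}{2t}=\frac{\ell_{2}-\ell_{4}}{\ell_{3}-\ell_{4}},\]

and Equation 4.1 indeed holds for matrices of this form; compare the value \(2\) appearing in Equation 4.3 in §4.4.2.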
Any \(4\)-Fuchsian representation \(\rho\) has, up to negation, \(\rho(\gamma)\) \((\gamma\in\Gamma-\{e\})\) conjugate to a matrix of the form \(\operatorname{diag}(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\) for some \(\lambda>1\). We next show that this property distinguishes \(4\)-Fuchsian representations. In particular, it is not possible for a non-Fuchsian representation to take values in a collection of distinct principal \(\operatorname{SL}(2,\mathbb{R})\) subgroups of \(\operatorname{SL}(4,\mathbb{R})\).
**Proposition 4.10** (Fuchsian from Eigenvalues).: _Suppose that \(\rho\) is a lift of a \(\operatorname{PSL}(4,\mathbb{R})\) Hitchin representation to \(\operatorname{SL}(4,\mathbb{R})\) so that for all \(\gamma\in\Gamma\), \(\rho(\gamma)\) is conjugate to a matrix of the form \(\operatorname{diag}(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\) for some positive \(\lambda=\lambda(\gamma)\). Then \(\rho\) is \(4\)-Fuchsian._
**Remark**.: _A shorter proof of the below is possible using the theorem of Benoist described in the next paragraph. The below proof is included due to its explicitness and its lack of direct reliance on such heavy machinery: in place of Benoist's limit cone theorem, it uses the fundamental theorem of symmetric polynomials._
Proof.: By the classification of Zariski closures of Hitchin representations [33], it suffices to show that the Zariski closure of \(\rho(\Gamma)\) is neither \(\operatorname{SL}(4,\mathbb{R})\) nor conjugate to \(\operatorname{Sp}(4,\mathbb{R})\).
We begin by recalling that if \(a_{1},...,a_{4}\) are the eigenvalues of \(A\in\operatorname{GL}(4,\mathbb{R})\), then the coefficients of the characteristic polynomial of \(A\) are, up to sign, the elementary symmetric polynomials \(\sigma_{1},...,\sigma_{4}\) in the variables \(a_{1},...,a_{4}\), and are all polynomials in the entries of \(A\). So let \(F(a_{1},a_{2},a_{3},a_{4})=\prod_{i,j\in\{1,...,4\}}(a_{i}-a_{j}^{3})\). Then \(F\) is a symmetric polynomial in \(\{a_{1},...,a_{4}\}\), and so is an element of the polynomial ring \(\mathbb{Z}[\sigma_{1},...,\sigma_{4}]\) by the fundamental theorem of symmetric polynomials. Consequently, \(F\) is a polynomial \(G\) in the entries of \(A\). As all \(\sigma_{i}\) are conjugation-invariant, so is \(G\).
Note, furthermore, that if \(A\) is conjugate to a matrix of the form \(\operatorname{diag}(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\), then \(F(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\) vanishes. So for a Hitchin representation \(\rho\) satisfying our hypotheses, the Zariski closure of \(\rho(\Gamma)\) is contained in the vanishing locus of \(G\).
On the other hand, for instance, the symplectic matrix \(A=\operatorname{diag}(3,2,1/2,1/3)\in\operatorname{Sp}(4,\mathbb{R})\) is not in the vanishing locus of \(G\), as \(F(3,2,1/2,1/3)\neq 0\). As \(G\) is conjugation-invariant, this shows that the Zariski closure of \(\rho(\Gamma)\) cannot contain any subgroup of \(\operatorname{SL}(4,\mathbb{R})\) conjugate to \(\operatorname{Sp}(4,\mathbb{R})\), which gives the claim.
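Explicitly (a finite arithmetic check, recorded here for convenience): with \((a_{1},a_{2},a_{3},a_{4})=(3,2,\tfrac{1}{2},\tfrac{1}{3})\), the cubes are \((27,8,\tfrac{1}{8},\tfrac{1}{27})\), and

\[\{3,2,\tfrac{1}{2},\tfrac{1}{3}\}\cap\{27,8,\tfrac{1}{8},\tfrac{1}{27}\}=\emptyset,\]

so every factor \(a_{i}-a_{j}^{3}\) of \(F\) is nonzero and \(F(3,2,1/2,1/3)\neq 0\).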
We immediately obtain:
**Corollary 4.11** (Zariski Closure Dichotomy).: _Suppose that for all \(\gamma\in\Gamma-\{e\}\), we have \(\operatorname{reg}_{\gamma^{+}}(\xi^{2}(\gamma^{-})\cap\xi^{3}(\gamma^{+}))=\operatorname{reg}_{\gamma^{-}}\xi^{1}(\gamma^{-})\). Then \(\rho\) is \(4\)-Fuchsian or \(\rho(\Gamma)\) is Zariski dense in \(\operatorname{SL}(4,\mathbb{R})\)._
Proof.: Combine Lemma 4.9, Proposition 4.10, and the classification of Zariski closures of Hitchin representations.
#### 4.3.4. Limit Cones
We finish the discrete case here by showing:
**Proposition 4.12** (Zariski Density Impossible).: _Suppose that \(\mathfrak{s}_{\rho}\) is constant and \(\mathfrak{s}_{\rho}(x)\) has discrete automorphism group for all \(x\in\partial\Gamma\). Then \(\rho(\Gamma)\) cannot be Zariski-dense in \(\operatorname{SL}(4,\mathbb{R})\)._
The source of our obstruction is an incompatibility of the eigenvalues of \(\rho\) with Zariski density. The perspective we take to demonstrate the incompatibility is to analyze the _limit cone_\(\ell_{\rho(\Gamma)}\), which has been studied for Zariski-dense representations by Benoist. We begin by recalling the definition of limit cones and Benoist's theorem. The relevant theory has been developed for connected real reductive linear semisimple Lie groups \(G\), but we shall deal exclusively with the cases \(G=\operatorname{SL}(4,\mathbb{R})\) and \(\operatorname{Sp}(4,\mathbb{R})\).
Let \(T\) be a Cartan subgroup of \(\operatorname{SL}(4,\mathbb{R})\), e.g. the diagonal matrices of determinant \(1\) with respect to a choice of basis of \(\mathbb{R}^{4}\), and \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}\) the corresponding Cartan subalgebra of \(\mathfrak{sl}(4,\mathbb{R})\). We identify \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}\) with the hyperplane
\[\{(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}^{4}\mid x_{1}+x_{2}+x_{3}+x_{4}=0\}\]
and take the closed Weyl chamber \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}^{+}\subset\mathfrak{a}_{\mathfrak{ sl}(4,\mathbb{R})}\) given by
\[\{(x_{1},x_{2},x_{3},x_{4})\in\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}\mid x _{1}\geq x_{2}\geq x_{3}\geq x_{4}\}.\]
Let \(T_{\operatorname{Sp}}<T\) be a Cartan subgroup of \(\operatorname{Sp}(4,\mathbb{R})\), e.g. the elements of \(T\) preserving the standard symplectic form, with corresponding Cartan subalgebra \(\mathfrak{a}_{\mathfrak{sp}(4,\mathbb{R})}\) identified with the elements \((x_{1},x_{2},x_{3},x_{4})\in\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}\) with \(x_{1}+x_{4}=x_{2}+x_{3}=0\).
For \(A\in\operatorname{SL}(4,\mathbb{R})\), and \(i=1,2,3,4\) denote by \(\lambda_{i}(A)\) the generalized eigenvalue of \(A\) with \(i^{\text{th}}\) largest modulus. We define
\[\begin{aligned}\Lambda:\operatorname{SL}(4,\mathbb{R})&\to\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}^{+}\\ A&\mapsto(\log|\lambda_{1}(A)|,\log|\lambda_{2}(A)|,\log|\lambda_{3}(A)|,\log|\lambda_{4}(A)|).\end{aligned}\]
**Definition 4.13**.: _Given a subgroup \(H<\operatorname{SL}(4,\mathbb{R})\) (resp. \(\operatorname{Sp}(4,\mathbb{R})\)), the limit cone\(\ell_{H}\) of H is the smallest closed cone in \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}^{+}\) (resp. \(\mathfrak{a}_{\mathfrak{sp}(4,\mathbb{R})}^{+}\)) containing \(\Lambda(H)\)._
For us, \(\ell_{\rho(\Gamma)}\) is the closure of the union of the half-lines spanned by the vectors \((\ell_{1},\ell_{2},\ell_{3},\ell_{4})\), \(\gamma\in\Gamma-\{e\}\), in the notation of the previous section. The following is due to Benoist:
**Theorem 4.14** (Benoist [3]).: _Suppose \(H<\operatorname{SL}(4,\mathbb{R})\) (resp. \(H<\operatorname{Sp}(4,\mathbb{R})\)) is Zariski dense. Then \(\ell_{H}\) is a convex cone with nonempty interior in \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}^{+}\) (resp. \(\mathfrak{a}_{\mathfrak{sp}(4,\mathbb{R})}^{+}\))._
In fact, Benoist proved much more in [3], such as realizability of convex cones with nonempty interior by Zariski-dense subgroups and equivalence of \(\ell_{H}\) and an analogous definition in terms of singular values. The above is what we need.
We are now ready to complete the discrete case.
Proof of Proposition 4.12.: For any \(\gamma\in\Gamma\), the logarithms \((\ell_{1},\ell_{2},\ell_{3},\ell_{4})\) of the absolute values of the eigenvalues of \(\rho(\gamma)\) must satisfy the _homogeneous_ degree \(2\) polynomial equation of Equation 4.2: \(F(x_{1},x_{2},x_{3},x_{4})=(x_{1}-x_{3})(x_{3}-x_{4})-(x_{2}-x_{4})(x_{2}-x_{3})=0\). This polynomial is not uniformly \(0\) on \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}^{+}\), and so by homogeneity has zero set \(X\) that is a closed cone of positive codimension. As \(\Lambda(\rho(\Gamma))\subset X\), the limit cone \(\ell_{\rho(\Gamma)}\) must have empty interior, which by Benoist's theorem is impossible if \(\rho(\Gamma)\) is Zariski-dense.
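For instance (one explicit point, verifying that \(F\) is not uniformly \(0\)): the point \((3,1,0,-4)\) lies in \(\mathfrak{a}_{\mathfrak{sl}(4,\mathbb{R})}^{+}\), and

\[F(3,1,0,-4)=(3-0)(0-(-4))-(1-(-4))(1-0)=12-5=7\neq 0.\]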
### The Non-Discrete Case
Over the course of §4.4.1-§4.4.3 we will show:
**Proposition 4.15** (Only Ellipses).: _Suppose there is some \(x\in\partial\Gamma\) so that \(\mathfrak{s}_{\rho}(x)\) has non-discrete automorphism group. Then \(\rho\) is \(4\)-Fuchsian and \(\mathfrak{s}_{\rho}(y)\) is the ellipse for all \(y\in\partial\Gamma\)._
The hypothesis of Proposition 4.15 stands throughout the subsection. Note that we do not assume _a priori_ that \(\mathfrak{s}_{\rho}(x)\) is constant. For the assumed \(x\in\partial\Gamma\) such that \(\operatorname{Aut}(\xi^{3}(x),C_{x})\) is not discrete, the closed subgroup theorem implies \(\operatorname{Aut}(\xi^{3}(x),C_{x})\) contains a one-parameter subgroup \(\{A_{t}\}_{t\in\mathbb{R}}\) with infinitesimal generator \(y\in\mathfrak{sl}(3,\mathbb{R})\).
The proof of Proposition 4.15 proceeds by case analysis of the Jordan form of the infinitesimal generator \(y\in\mathfrak{sl}(3,\mathbb{R})\). What makes the hypothesis of this section strong is the observation that if \(p\in\partial C_{x}\), then \(A_{t}p\in\partial C_{x}\) for all \(t\in\mathbb{R}\). For each Jordan form of \(y\), we obtain succinct classifications of possible shapes of the boundary using this observation.
#### 4.4.1. Case List
Up to conjugation, the infinitesimal generator \(y\) of \(\{A_{t}\}\) has one of the following forms, where \(\lambda,\eta,a,b\in\mathbb{R}\) and \(b\neq 0\):
\[(a).\begin{bmatrix}0&1\\ &0&1\\ &&0\end{bmatrix},\quad(b).\begin{bmatrix}a&-b\\ b&a\\ &&-2a\end{bmatrix},\quad(c).\begin{bmatrix}\lambda&1\\ &\lambda&\\ &&-2\lambda\end{bmatrix},\quad(d).\begin{bmatrix}\lambda&&\\ &\eta&\\ &&-(\lambda+\eta)\end{bmatrix}.\]
We break (b) into two sub-cases: (b).(i): \(a=0\) and (b).(ii): \(a\neq 0\). We break (c) into two sub-cases: (c).(i): \(\lambda=0\) and (c).(ii): \(\lambda\neq 0\). We break (d) into two sub-cases: (d).(i): \(\lambda=\eta\) and (d).(ii): \(\lambda\neq\eta\). We shall show that only cases (a), (b), and (d) are possible, and if any occurs then \(\rho\) is 4-Fuchsian. Case (d).(ii) is the most complicated case. Figure 6 summarizes the coming case analysis.
#### 4.4.2. Cases (a)-(d).(i)
In this section, we establish Proposition 4.15 for all cases except (d).(ii).
Proof.: **Case (a).** In this case, \(\exp(ty)=\begin{bmatrix}1&t&t^{2}/2\\ &1&t\\ &&1\end{bmatrix}\). One verifies that the only possibilities for \(C_{x}\) are an ellipse or a line; since \(C_{x}\) is a properly convex domain, it must be an ellipse. Since the projective class \([O]\) of the ellipse is a closed point of \(\mathfrak{C}\), by Lemma 4.3 (Leaf Map Basics) the pre-image \(\mathfrak{s}_{\rho}^{-1}(\{[O]\})\subset\partial\Gamma\) is closed and contains a dense subset of \(\partial\Gamma\), hence must be all of \(\partial\Gamma\). So for all \(t\in\partial\Gamma\), the leaf \(C_{t}\) is an ellipse.
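Since matrix exponentials of this sort are easy to garble, here is a quick symbolic confirmation (ours) of the displayed form of \(\exp(ty)\), using that \(y\) is nilpotent so the exponential series terminates:

```python
# Check (ours) that exp(ty) = I + ty + (ty)^2/2 has t^2/2 in the corner.
import sympy as sp

t = sp.symbols('t')
N = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # the generator y in case (a)
assert (t*N)**3 == sp.zeros(3, 3)                   # nilpotent: series terminates
expN = sp.eye(3) + t*N + (t*N)**2 / 2
print(expN)  # Matrix([[1, t, t**2/2], [0, 1, t], [0, 0, 1]])
```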
Let \(\gamma\in\Gamma-\{e\}\). Since we know \(C_{\gamma^{+}}\) and \(C_{\gamma^{-}}\) are ellipses, we may apply the regularity Lemma 2.5 to \(\xi^{1}(\gamma^{+})\) and \(\xi^{1}(\gamma^{-})\), which yields (in the notation of §4.3.3)
\[\frac{\ell_{1}-\ell_{3}}{\ell_{1}-\ell_{2}}=2=\frac{\ell_{2}-\ell_{4}}{\ell_{3 }-\ell_{4}}. \tag{4.3}\]
Write \(\alpha=\ell_{1}-\ell_{2}\), \(\beta=\ell_{2}-\ell_{3}\), \(\delta=\ell_{3}-\ell_{4}\) (reserving \(\gamma\) for the group element); all are positive.
The first equality of Equation 4.3 shows that \(\alpha=\beta\), and the second equality \(2=(\beta+\delta)/\delta\) shows that \(\beta=\delta\). So \(\alpha=\beta=\delta\), which together with the condition
Figure 6. Summary of subsequent analysis.
\(\ell_{1}+\ell_{2}+\ell_{3}+\ell_{4}=0\) (due to \(\rho(\gamma)\in\operatorname{SL}(4,\mathbb{R})\)) shows that \(\rho(\gamma)\) is conjugate to a matrix of the form \(\operatorname{diag}(\lambda^{3},\lambda,\lambda^{-1},\lambda^{-3})\) for some \(\lambda>1\). Proposition 4.10 (Fuchsian from Eigenvalues) shows \(\rho\) is \(4\)-Fuchsian.
**Case (b) Generalities.** Denote by \(\Lambda_{t}\) the \(2\times 2\) complex multiplication matrix of \(e^{t\zeta}\) where \(\zeta=a+ib\) corresponds to \(\begin{bmatrix}a&-b\\ b&a\end{bmatrix}\). We then have \(\exp(ty)=\begin{bmatrix}\Lambda_{t}&\\ &e^{-2at}\end{bmatrix}\).
**Case (b).(i).** If \(a=0\), then in an appropriate affine chart, the non-point orbits of \(\exp ty\) are circles, so that all orbits are either ellipses, points, or line segments. The analysis in case (a) handles the case of ellipses, which are the only boundaries of properly convex sets among the possibilities.
**Case (b).(ii).** If \(a\neq 0\), every orbit of \(\exp ty\) is a line segment, a point, or a spiral around \([e_{3}]\). Since spirals are not convex, this case is impossible.
**Case (c).(i).** If \(\lambda=0\), then all orbits of \(\exp(ty)\) are line segments or points, which are obstructed by strict convexity.
**Case (c).(ii).** Here \(\lambda\neq 0\). In this case, \(\exp(ty)=\begin{bmatrix}e^{\lambda t}&te^{\lambda t}\\ &e^{\lambda t}\\ &e^{-2\lambda t}\end{bmatrix}\). Work in the affine chart \(e_{3}\neq 0\), and write \(p=[a:b:1]\). Then \(A_{t}[a:b:1]=[ae^{3\lambda t}+tbe^{3\lambda t}:be^{3\lambda t}:1]\). If \(b=0\), this orbit is contained in the horizontal axis. If \(b\neq 0\), this orbit is the plane curve (see Figure 7)
\[x=\frac{ay}{b}+\frac{\log(y/b)y}{3\lambda}. \tag{4.4}\]
If \(C_{x}\) is stable under \(A_{t}\), then \(\partial C_{x}\) must contain a point \(p\) in this affine chart (otherwise \(C_{x}\) is not properly convex). This orbit \(\{A_{t}p\}\) is either a line segment, which cannot occur by strict convexity of \(\partial C_{x}\), or has the form of Equation 4.4. In the second case, consideration of the point \([0:0:1]\in\partial C_{x}\) shows that the only way for \(C_{x}\) to be convex and for \(\partial C_{x}\) to contain \(\{A_{t}p\}\) is for \(\partial C_{x}\) to contain a horizontal ray from \([0:0:1]\), which is obstructed by strict convexity. We conclude that this case is impossible.
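As a sanity check on Equation 4.4, one can verify symbolically (a quick sketch of ours) that the orbit parametrisation \(A_{t}[a:b:1]\) computed above satisfies the displayed plane curve:

```python
# Verify (ours) that (x(t), y(t)) = (a e^{3*lam*t} + t b e^{3*lam*t}, b e^{3*lam*t})
# satisfies x = a*y/b + log(y/b)*y/(3*lam), i.e. Equation 4.4.
import sympy as sp

# positivity assumptions let sympy simplify log(exp(.)) exactly
a, b, lam, t = sp.symbols('a b lambda t', positive=True)
x = a*sp.exp(3*lam*t) + t*b*sp.exp(3*lam*t)
y = b*sp.exp(3*lam*t)
assert sp.simplify(x - (a*y/b + sp.log(y/b)*y/(3*lam))) == 0
```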
**Case (d).(i).** We have \(\exp(ty)=\operatorname{diag}(e^{\lambda t},e^{\lambda t},e^{-2\lambda t})\). In the affine chart given by \(e_{3}\neq 0\), the orbit of \([a:b:1]\) is the set \(\{[e^{3\lambda t}a:e^{3\lambda t}b:1]\mid t\in\mathbb{R}\}\), i.e. the open ray from the origin through \((a,b)\). Strict convexity of \(C_{x}\) then obstructs this case.
Figure 7. Left: sample orbits in case (c).(ii), with horizontal axis scaled to emphasize relevant features. Right: an \((\alpha,\beta)\)-Bigon, with a dashed line between irregular points. Both generated in Wolfram Mathematica.
#### 4.4.3. Case (d).(ii): Bigons
Let \(e_{1},e_{2},e_{3}\) be eigenvectors corresponding to the eigenvalues \(\lambda,\eta,-(\lambda+\eta)\) respectively. Without loss of generality, assume \(\lambda>\eta\). We have \(\exp(ty)=\operatorname{diag}(e^{\lambda t},e^{\eta t},e^{-(\lambda+\eta)t})\). From the regularity Lemma 2.5, we see that any strictly convex domain \(\Omega\) in this case must be formed by gluing together two smooth segments with endpoints \([e_{1}],[e_{3}]\) so that
\[\operatorname{reg}_{\Omega}([e_{1}])=\frac{2\lambda+\eta}{\lambda-\eta}, \qquad\operatorname{reg}_{\Omega}([e_{3}])=\frac{2\lambda+\eta}{2\eta+\lambda}.\]
See Figure 7. Let \(1<\alpha,\beta\leq 2\) be the regularity of \(\partial\Omega\) at \([e_{1}],[e_{3}]\) respectively. We shall call any domain \(\Omega\) of this type with regularity \(\alpha,\beta\) at \([e_{1}],[e_{3}]\) an _\((\alpha,\beta)\)-bigon_, and denote by \(p_{\alpha}(\Omega),p_{\beta}(\Omega)\) the points of \(\alpha,\beta\) regularity, respectively.
We remark that ellipses are \((2,2)\)-bigons, but not the only \((2,2)\)-bigons: one may also glue together two segments of distinct ellipses stabilized by the same one-parameter family of projective transformations. That our notation (see §2) calls \(C^{1,1}\) boundary points \(C^{2}\) is of note here.
The following lemma allows us to convert information about a single \((\alpha,\beta)\)-bigon leaf \(\mathfrak{s}_{\rho}(x)\) to information about all leaves. Let \(O\) denote an ellipse in \(\mathbb{RP}^{2}\).
**Lemma 4.16** (Bigon Closures).: _If \(\Omega\) is an \((\alpha,\beta)\)-bigon, in \(\mathfrak{C}\) we have \(\overline{\{[\Omega]\}}=\{[\Omega],[O]\}\)._
The proof is a small variation of Benzecri's classical proof that divisible domains are closed points of \(\mathfrak{C}\) ([5] V.3.3, or e.g. [14], §4.5), modified to account for the symmetries of the situation.
Proof.: Let \(\Omega\) be an \((\alpha,\beta)\)-bigon and \(O\) an ellipse in \(\mathbb{RP}^{2}\). It suffices to show that the union of the orbits of \(\{\Omega\}\) and \(\{O\}\) is closed in \(\mathcal{C}\), which is equivalent to the closedness of the union of the orbits of the preimages \(\Pi^{-1}(\{\Omega\}):=\{\Omega\}\times\Omega\) and \(\Pi^{-1}(\{O\}):=\{O\}\times O\) in \(\mathcal{C}^{*}\). This is in turn equivalent to showing the image of \((\{\Omega\}\times\Omega)\cup(\{O\}\times O)\) under the projection \(\mathcal{C}^{*}\to\mathfrak{C}^{*}\) is closed.
We note that \(\{O\}\times O\) projects to a single point in \(\mathfrak{C}^{*}\), that we shall call \(\mathfrak{O}\), since the projective automorphism group of \(O\) is transitive. On the other hand, let \(p\) be any point on the intersection of the segment \(\overline{[e_{1}][e_{3}]}\) with \(\Omega\), and let \(\ell\) be the intersection of the line between \([e_{3}]\) and \([p]\) with \(\Omega\) with endpoints \(\ell_{1},\ell_{2}\). Then the orbit of \(\ell\) under \(\operatorname{Aut}(\mathbb{RP}^{2},\Omega)\) is \(\Omega\), and so the projections of \(\{\Omega\}\times\ell\) and \(\{\Omega\}\times\Omega\) to \(\mathfrak{C}^{*}\) agree.
Now, given a sequence of points \((\Omega,p_{n})\) with \(p_{n}\in\ell\), any divergent sequence \(p_{n}\) contains a subsequence converging to one of the smooth points \(\ell_{1}\) or \(\ell_{2}\) of \(\partial\Omega\). It is a standard example using osculating conics (e.g. [15] Ex. 4.5.2.3) that given a domain \(\Omega\) with smooth boundary point \(p\) and a sequence \(p_{n}\) in \(\Omega\) converging to \(p\) along a line, \((\Omega,p_{n})\) converges to \(\mathfrak{O}\) in \(\mathfrak{C}^{*}\).
We conclude that \(\mathfrak{O}\) is a limit point of the sequence \((\Omega,p_{n})\). So the image in \(\mathfrak{C}^{*}\) of \((\{\Omega\}\times\ell)\cup(\{O\}\times O)\) is compact,3 and hence closed as \(\mathfrak{C}^{*}\) is Hausdorff by Benzecri compactness. This establishes the claim.
Footnote 3: \(\mathfrak{C}^{*}\) is second countable, so that sequential compactness and compactness are equivalent.
We now prove the final component of the non-discrete case:
**Lemma 4.17**.: _Suppose that there is some \(x_{0}\in\partial\Gamma\) so that \(\mathfrak{s}_{\rho}(x_{0})\) is an \((\alpha,\beta)\)-bigon. Then \(\alpha=\beta=2\) and \(\rho\) is \(4\)-Fuchsian._
Proof.: Write \(\Omega=\mathfrak{s}_{\rho}(x_{0})\). By Lemma 4.16 (Bigon Closures), the hypothesis implies that every leaf \(\mathfrak{s}_{\rho}(x)\) is projectively equivalent to \(\Omega\) or \(O\), as in case (a): the pre-image \(\mathfrak{s}_{\rho}^{-1}(\{[\Omega],[O]\})\) is a closed subset of \(\partial\Gamma\) containing a dense subset, and hence is all of \(\partial\Gamma\). If a single leaf is projectively equivalent to \(O\), the analysis in case (a) completes the proof.
So we may assume that all leaves \(\mathfrak{s}_{\rho}(x)\) are projectively equivalent to \(\Omega\), and that \(\Omega\) is not an ellipse. In particular, \(\Omega\) has non-smooth points at \(p_{\alpha}(\Omega)\) and \(p_{\beta}(\Omega)\).
We next observe that for any \(\gamma\in\Gamma-\{e\}\), the set of attracting and repelling fixed points of \(\rho(\gamma)\) restricted to \(\xi^{3}(\gamma^{+})\) must be \(\{p_{\alpha}(C_{\gamma^{+}}),p_{\beta}(C_{\gamma^{+}})\}\) since \(\Omega\) has exactly \(2\) non-smooth points. A similar statement holds for \(\xi^{3}(\gamma^{-})\). The point of this is that we may apply Lemma 2.5 to deduce information about the eigenvalues of \(\rho(\gamma)\) from the pair \((\alpha,\beta)\).
As always, denote by \(\ell_{1}>\ell_{2}>\ell_{3}>\ell_{4}\) the logarithms of the absolute values of the eigenvalues of \(\rho(\gamma)\). From Lemma 2.5, we obtain that as sets
\[\{\alpha,\beta\}=\left\{\frac{\ell_{1}-\ell_{3}}{\ell_{1}-\ell_{2}},\frac{\ell _{1}-\ell_{3}}{\ell_{2}-\ell_{3}}\right\}=\left\{\frac{\ell_{2}-\ell_{4}}{\ell _{2}-\ell_{3}},\frac{\ell_{2}-\ell_{4}}{\ell_{3}-\ell_{4}}\right\}.\]
We conclude that \((\ell_{1},\ell_{2},\ell_{3},\ell_{4})\) must satisfy the homogeneous polynomials
\[0=(\ell_{1}-\ell_{3})^{2}(\beta(\ell_{2}-\ell_{3})-\alpha(\ell_{1}-\ell_{2}))(\alpha(\ell_{2}-\ell_{3})-\beta(\ell_{1}-\ell_{2})), \tag{4.5}\]
\[0=(\ell_{2}-\ell_{4})^{2}(\beta(\ell_{3}-\ell_{4})-\alpha(\ell_{2}-\ell_{3}))(\alpha(\ell_{3}-\ell_{4})-\beta(\ell_{2}-\ell_{3})). \tag{4.6}\]
This holds for all \(\gamma\in\Gamma\). For any choice of \(\alpha,\beta\), the above equations are homogeneous polynomials on the closed Weyl chamber \(\mathfrak{a}^{+}_{\mathfrak{sl}(4,\mathbb{R})}\), identified with the space of \(4\)-tuples \((x_{1},x_{2},x_{3},x_{4})\) such that \(x_{1}\geq x_{2}\geq x_{3}\geq x_{4}\) and \(x_{1}+x_{2}+x_{3}+x_{4}=0\). By restriction, they are polynomials on the closed Weyl chamber \(\mathfrak{a}^{+}_{\mathfrak{sp}(4,\mathbb{R})}\subset\mathfrak{a}^{+}_{\mathfrak{sl}(4,\mathbb{R})}\) of \(\mathfrak{sp}(4,\mathbb{R})\), identified with those \(4\)-tuples with \(x_{1}+x_{4}=x_{2}+x_{3}=0\). One may verify that for any choice of \(1<\alpha,\beta\leq 2\) these polynomials do not vanish identically on \(\mathfrak{a}^{+}_{\mathfrak{sl}(4,\mathbb{R})}\) or \(\mathfrak{a}^{+}_{\mathfrak{sp}(4,\mathbb{R})}\). In particular, the vanishing loci of these polynomials in both \(\mathfrak{a}^{+}_{\mathfrak{sl}(4,\mathbb{R})}\) and \(\mathfrak{a}^{+}_{\mathfrak{sp}(4,\mathbb{R})}\) are cones of positive codimension.
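The non-vanishing claim is also easy to spot-check numerically. The sketch below (ours, with arbitrary constants and random seed) samples the symplectic chamber, parametrised as \((x_{1},x_{2},-x_{2},-x_{1})\) with \(x_{1}>x_{2}>0\), and for each sampled \((\alpha,\beta)\) finds a point where both polynomials are nonzero:

```python
# Numerical spot-check (ours) that Equations 4.5-4.6 do not vanish
# identically on the sp(4, R) Weyl chamber for sampled 1 < alpha, beta <= 2.
import numpy as np

def polys(alpha, beta, x1, x2, x3, x4):
    p45 = (x1 - x3)**2 * (beta*(x2 - x3) - alpha*(x1 - x2)) \
                       * (alpha*(x2 - x3) - beta*(x1 - x2))
    p46 = (x2 - x4)**2 * (beta*(x3 - x4) - alpha*(x2 - x3)) \
                       * (alpha*(x3 - x4) - beta*(x2 - x3))
    return p45, p46

rng = np.random.default_rng(0)
for alpha, beta in rng.uniform(1.0, 2.0, size=(100, 2)):
    witnessed = False
    for _ in range(50):
        x2, gap = rng.uniform(0.1, 5.0, size=2)
        x1 = x2 + gap                    # chamber point (x1, x2, -x2, -x1)
        if all(abs(p) > 1e-9 for p in polys(alpha, beta, x1, x2, -x2, -x1)):
            witnessed = True
            break
    assert witnessed, (alpha, beta)
print("witnesses found for all 100 sampled (alpha, beta)")
```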
By the classification of Zariski closures of Hitchin representations, either \(\rho\) is \(4\)-Fuchsian, \(\rho(\Gamma)\) is Zariski dense in a subgroup \(H<\operatorname{SL}(4,\mathbb{R})\) conjugate to \(\operatorname{Sp}(4,\mathbb{R})\), or \(\rho(\Gamma)\) is Zariski dense in \(\operatorname{SL}(4,\mathbb{R})\). In either non-Fuchsian case, the above analysis shows that the limit cone \(\ell_{\rho(\Gamma)}\) cannot have nonempty interior in \(\mathfrak{a}^{+}_{\mathfrak{sl}(4,\mathbb{R})}\) or \(\mathfrak{a}^{+}_{\mathfrak{sp}(4,\mathbb{R})}\). Theorem 4.14 of Benoist on limit cones thus implies that \(\rho\) is \(4\)-Fuchsian.
### Deduction of Theorems 1.1 and 1.2
We end by documenting how Theorems 1.1 and 1.2 follow from the preceding. We first note:
**Theorem 4.18**.: _The leaf map \(\mathfrak{s}_{\rho}\) is constant if and only if \(\rho\) is \(4\)-Fuchsian. If \(\rho\) is \(4\)-Fuchsian, then \(\mathfrak{s}_{\rho}\) takes value the ellipse._
Proof.: The \(4\)-Fuchsian case is shown by Guichard and Wienhard in [19]. That \(\mathfrak{s}_{\rho}\) is not constant if \(\rho\) is not \(4\)-Fuchsian follows from Corollary 4.11, Proposition 4.12, and Proposition 4.15.
The main theorems follow:
Proof of Theorem 1.1.: The first equivalence is Theorem 4.18. The parts of the equivalences of (3) and (4) with (1) pertaining to \(4\)-Fuchsian representations follow from standard facts about ellipses. That (3) implies (2) follows from the fact that divisible domains are closed points of \(\mathfrak{C}\), together with Lemma 4.3 (Leaf Map Basics). That (4) implies (2) follows from Proposition 4.15, in particular the lack of the assumption that \(\mathfrak{s}_{\rho}\) is constant there.
Proof of Theorem 1.2.: Combine Theorem 4.18 with Lemma 4.3 (Leaf Map Basics).
## Appendix A Proofs of Some Useful Facts
In this appendix, we prove two facts that are used in our main proofs. We reproduce the statements from the body of the paper for the convenience of the reader.
The first is Lemma 2.5 on the regularity of fixed points of domains under appropriate linear maps.
**Lemma A.1** (Regularity at Fixed Points).: _Let \(\Omega\subset\mathbb{RP}^{2}\) be a properly convex, strictly convex domain preserved by \(A\in\operatorname{GL}(3,\mathbb{R})\) conjugate to \(\operatorname{diag}(\lambda_{1},\lambda_{2},\lambda_{3})\) with \(\lambda_{1}>\lambda_{2}>\lambda_{3}>0\). Write \(\ell_{i}=\log\lambda_{i}\) for \(i=1,2,3\) and let \(x_{A^{+}}\) denote the attracting fixed point of \(A\) in \(\mathbb{RP}^{2}\)._
_Then \(x_{A^{+}}\in\partial\Omega\) is exactly \(C^{\alpha}\) for_
\[\alpha=\frac{\ell_{1}-\ell_{3}}{\ell_{1}-\ell_{2}}.\]
Proof of Lemma 2.5.: We follow Benoist to show that \(\alpha\) is an upper bound for the regularity of \(\partial\Omega\) at \(x_{A^{+}}\). The lower bound follows from uniformity present in the argument for the upper bound.
Let \(e_{1},e_{2},e_{3}\) be the eigenvectors of \(A\) with eigenvalues \(\lambda_{1},\lambda_{2},\lambda_{3}\), respectively. Work in an affine chart for which the repelling hyperplane of \(A\) is the hyperplane at infinity, \(x_{A^{+}}\) is at the origin, the attracting hyperplane \(y_{A^{+}}\) of \(A^{+}\) is the horizontal axis, the intersection of the line \(\overline{[e_{1}][e_{3}]}\) with this affine chart is the vertical axis, and \(\Omega\) is contained in the upper half-plane. Strict convexity implies \(\partial\Omega\) meets the horizontal axis only at the origin and contains no line segment. Denote by \(x_{A^{-}}\) the repelling fixed-point of \(A\).
It suffices to produce constants \(C_{1},C_{2}\) so that for all \(x\neq x_{A^{+}}\) in a compact subset \(K\subset\partial\Omega\) containing \(x_{A^{+}}\) in its interior,
(A.1) \[C_{1}\leq\log d(x,x_{A^{+}})-\alpha^{-1}\log d(x,y_{A^{+}})\leq C_{2}.\]
In this coordinate system, the action of \(A\) is given by \(\operatorname{diag}(\lambda_{2}/\lambda_{1},\lambda_{3}/\lambda_{1})\). So if \(p=(a,b)\), we have \(d(A^{n}p,x_{A^{+}})=\frac{1}{\lambda_{1}^{n}}(\lambda_{2}^{2n}|a|^{2}+\lambda_{3}^{2n}|b|^{2})^{1/2}\) and \(d(A^{n}p,y_{A^{+}})=\lambda_{3}^{n}|b|/\lambda_{1}^{n}\). For \(n\) sufficiently large, we may assume \(\lambda_{2}^{2n}|a|^{2}\leq\lambda_{2}^{2n}|a|^{2}+\lambda_{3}^{2n}|b|^{2}\leq \max\{\lambda_{2}^{2n+2}|a|^{2},\lambda_{2}^{2n-2}|a|^{2}\}\). So we have that
\[\log d(A^{n}p,x_{A^{+}}) =-n\ell_{1}+\frac{1}{2}\log(\lambda_{2}^{2n}|a|^{2}+\lambda_{3}^{ 2n}|b|^{2})\] \[\leq n(\ell_{2}-\ell_{1})+|\ell_{2}|+\log|a|,\]
and the lower bound \(n(\ell_{2}-\ell_{1})+\log|a|\) follows similarly. Furthermore,
\[\alpha^{-1}\log d(A^{n}p,y_{A^{+}})=\alpha^{-1}\log\left(\frac{\lambda_{3}^{n }}{\lambda_{1}^{n}}|b|\right)=n(\ell_{2}-\ell_{1})+\alpha^{-1}\log|b|,\]
so that for this \(p\) we have Equation A.1, with \(C_{1}=\log|a|-\alpha^{-1}\log|b|\) and \(C_{2}=|\ell_{2}|+\log|a|-\alpha^{-1}\log|b|\).
Now, we observe that if \(p=(a,b)\in\partial\Omega-\{x_{A^{-}}\}\) is contained in this affine chart, convexity of \(\Omega\) implies that all points in \(\partial\Omega-\{x_{A^{-}}\}\) between \(p\) and \(Ap\) are in the compact box \(B=[(\lambda_{2}/\lambda_{1})a,a]\times[(\lambda_{3}/\lambda_{1})b,b]\). In particular, the segment of \(\partial\Omega\) between \(p\) and \(x_{A^{+}}\) is contained in \(\{x_{A^{+}}\}\cup(\bigcup_{n=0}^{\infty}A^{n}B)\). On \(B\), all estimates in the above can be taken uniformly, and this produces the desired constants.
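The boundedness asserted in Equation A.1 can also be illustrated numerically along a single orbit; the following small sketch (ours, with arbitrary sample values) tracks the quantity \(\log d(A^{n}p,x_{A^{+}})-\alpha^{-1}\log d(A^{n}p,y_{A^{+}})\):

```python
# Numeric illustration (ours) of Equation A.1 along an A-orbit in the chart
# where A acts by diag(lambda2/lambda1, lambda3/lambda1) and x_+ is the origin.
import numpy as np

lam1, lam2, lam3 = 4.0, 2.0, 0.5          # lambda_1 > lambda_2 > lambda_3 > 0
l1, l2, l3 = np.log([lam1, lam2, lam3])
alpha = (l1 - l3) / (l1 - l2)             # here alpha = 3

a, b = 0.7, 1.3                           # a sample point p = (a, b) off both axes
vals = []
for n in range(1, 40):
    pn = np.array([(lam2/lam1)**n * a, (lam3/lam1)**n * b])
    d_x = np.linalg.norm(pn)              # distance to the fixed point x_+
    d_y = abs(pn[1])                      # distance to the axis y_+
    vals.append(np.log(d_x) - np.log(d_y) / alpha)
print(min(vals), max(vals))               # stays in a fixed bounded window
```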
The second and final fact we prove here is the local control on quantitative discreteness of conjugate discrete subgroups of Lie groups that plays a role in our proof of Proposition 4.4 (Modify to Continuity).
Recall our notation that \(\Lambda\) is a discrete subgroup of a Lie group \(G\) equipped with a right-invariant metric, \(\kappa(\Lambda)=\inf\{d(e,g)\mid g\in\Lambda-\{e\}\}\), and conjugation is denoted by \(\Psi_{g}:h\mapsto ghg^{-1}\).
**Lemma A.2** (Discreteness is Conjugation-Stable).: _Let \(G\) be a Lie group and \(\Lambda<G\) be a discrete subgroup of \(G\). Consider the function \(\eta:g\mapsto\kappa(\Psi_{g}(\Lambda))\). Let \(g_{0}\in G\) be given. Then there is a neighborhood \(U\) of \(g_{0}\) so that \(\eta(h)>\kappa(\Psi_{g_{0}}(\Lambda))/3\) for all \(h\in U\)._
Proof of Lemma a.2.: Of course for \(h\in G\) we have \(\Psi_{h}(\Lambda)=\Psi_{hg_{0}^{-1}}(\Psi_{g_{0}}(\Lambda)).\) By using this, we work with the group \(\Lambda^{\prime}=\Psi_{g_{0}}(\Lambda)\). It suffices to show:
**Claim.** There is a neighborhood \(U\) of \(e\) so that \(\kappa(\Psi_{h}(\Lambda^{\prime}))>\kappa(\Lambda^{\prime})/3\) for \(h\in U\).
For a fixed \(R>0\), let \(L_{R}=\overline{B_{R}(e)}\) be the closed ball of radius \(R\) around the identity, and let \(K\subset G\) be compact. Then \((g,p)\mapsto||D_{p}\Psi_{g}||\) is continuous. So there is some \(C=C(K,R)\) so that \(||D_{p}\Psi_{g}||\leq C\) for all \(g\in L_{R}\) and \(p\in K\), and in particular \(\Psi_{\cdot}(p)\) is \(C\)-Lipschitz on \(L_{R}\).
Let \(r>0\) and \(\epsilon>0\). Then for all \(h\in B_{r}(e)\) and \(g\in B_{\epsilon}(e)\), we have \(\Psi_{g}(h)\in B_{r+C\epsilon}(e)\). So let \(\epsilon<\min(\kappa(\Lambda^{\prime})/3C,R)\) and \(r<\kappa(\Lambda^{\prime})/3\). Then for all \(g\in B_{\epsilon}(e)\) and \(h\in B_{r}(e)\), we have \(\Psi_{g}(h)\in B_{2\kappa(\Lambda^{\prime})/3}(e)\). Next, we note that if \(\gamma\in G\), \(g\in B_{\epsilon}(e)\), and \(\Psi_{g}(\gamma)\in B_{r}(e)\), then \(\gamma=\Psi_{g^{-1}}(\Psi_{g}(\gamma))\in B_{2\kappa(\Lambda^{\prime})/3}(e)\), and in particular \(\gamma\notin\Lambda^{\prime}-\{e\}\). We conclude that for all \(g\in B_{\epsilon}(e)\) and \(\gamma\in\Lambda^{\prime}-\{e\}\) we have \(\Psi_{g}(\gamma)\notin B_{\kappa(\Lambda^{\prime})/3}(e)\), and hence on \(U=B_{\epsilon}(e)\) we have \(\kappa(\Psi_{g}(\Lambda^{\prime}))>\kappa(\Lambda^{\prime})/3\).
|
2304.11193 | Combining Vision and Tactile Sensation for Video Prediction | In this paper, we explore the impact of adding tactile sensation to video
prediction models for physical robot interactions. Predicting the impact of
robotic actions on the environment is a fundamental challenge in robotics.
Current methods leverage visual and robot action data to generate video
predictions over a given time period, which can then be used to adjust robot
actions. However, humans rely on both visual and tactile feedback to develop
and maintain a mental model of their physical surroundings. In this paper, we
investigate the impact of integrating tactile feedback into video prediction
models for physical robot interactions. We propose three multi-modal
integration approaches and compare the performance of these tactile-enhanced
video prediction models. Additionally, we introduce two new datasets of robot
pushing that use a magnetic-based tactile sensor for unsupervised learning. The
first dataset contains visually identical objects with different physical
properties, while the second dataset mimics existing robot-pushing datasets of
household object clusters. Our results demonstrate that incorporating tactile
feedback into video prediction models improves scene prediction accuracy and
enhances the agent's perception of physical interactions and understanding of
cause-effect relationships during physical robot interactions. | Willow Mandil, Amir Ghalamzan-E | 2023-04-21T18:02:15Z | http://arxiv.org/abs/2304.11193v1 | # Combining Vision and Tactile Sensation for Video Prediction
###### Abstract
In this paper, we explore the impact of adding tactile sensation to video prediction models for physical robot interactions. Predicting the impact of robotic actions on the environment is a fundamental challenge in robotics. Current methods leverage visual and robot action data to generate video predictions over a given time period, which can then be used to adjust robot actions. However, humans rely on both visual and tactile feedback to develop and maintain a mental model of their physical surroundings. In this paper, we investigate the impact of integrating tactile feedback into video prediction models for physical robot interactions. We propose three multi-modal integration approaches and compare the performance of these tactile-enhanced video prediction models. Additionally, we introduce two new datasets of robot pushing that use a magnetic-based tactile sensor for unsupervised learning. The first dataset contains visually identical objects with different physical properties, while the second dataset mimics existing robot-pushing datasets of household object clusters. Our results demonstrate that incorporating tactile feedback into video prediction models improves scene prediction accuracy and enhances the agent's perception of physical interactions and understanding of cause-effect relationships during physical robot interactions.
Deep learning in robotics and automation, perception for grasping and manipulation, physical robot interaction, video prediction, force and tactile sensing.
## I Introduction
Physical interaction is an essential aspect of human life. As robots are pushed into the real world and their tasks become more complex, physical robot interactions (**PRI**) will become an increasingly essential feature. An agent's understanding of physical cause-effect relationships underpins its performance in PRI tasks. Without this cause-effect understanding, an agent is unable to distinguish and filter between promising or unfavourable candidate actions.
Visual and tactile sensations are essential to building physical interaction perception in humans [14]. In particular, human tactile cognition supports a range of interactive tasks [22], such as grasping, manipulating an object, in-hand manipulation, tactile exploration, object pushing, and human-to-human physical collaboration. Tseng et al. [34] and Thoroughman et al. [32] showed that humans use predictive models to perform such complex physical interaction tasks.
In robotics, physical interaction tasks are typically performed with deep neural network video prediction models [25] and benchmarked with object-pushing datasets such as the BAIR dataset [8]. These prediction architectures use optical sensation to perceive their environment. However, unlike the multi-modal systems humans use for environment perception, this single-modality approach results in more latent variables and greater prediction uncertainty. We believe the integration of other sensing modalities, such as tactile sensation, into these physical interaction perception models will improve an agent's cause-effect predictions during PRI.
This work aims to explore and develop physical perception forward models that take advantage of both visual and tactile sensations. A forward model in the context of human neuro-cognitive science [39] refers to a predictive model of a physically interactive task. In robotics, this is an action-conditioned predictive model that uses a history of sensory readings and robot states, as well as planned future robot movements, to generate the predicted sensory readings in
Fig. 1: (a) The interactions between the thalamus and the primary somatosensory and visual cortexes in the mammalian brain [11] that integrate touch and sight; (b) simultaneous prediction of optical and tactile sensation mimics this to integrate tactile and optical sensation; (d) the SPOTS architecture has improved physical interaction perception over its vision-only equivalent and is capable of predicting the location of a previously unseen object with complex physical properties during pushing tasks (c). The yellow mask in (d) shows the true future state of the object.
the prediction horizon. To combine these two modalities, we introduce tactile sensation to a state-of-the-art video prediction architecture (Stochastic Video Generator [7]) with a variety of approaches. We explore these approaches within the context of object pushing, where the prediction system must predict the future image frames of a scene, given previously seen frames and a known robot action. We believe our unsupervised tactile-visual learning approach could be beneficial to other aspects of physical-robot interaction such as grasping, in-hand manipulation, human-robot interaction, and soft tissue manipulation.
**Contribution 1**: _In this article, we propose a novel array of action-conditioned multi-modal (tactile and visual) prediction models for physical interaction perception. Our proposed model, **S**imultaneous **P**rediction of **O**ptical and **T**actile **S**ensations (**SPOTS**), outperforms other state-of-the-art models. It uses a dual pipeline prediction architecture that enables two bespoke network architectures dedicated to the prediction of the individual sensation. Crossover connections between the two pipelines capture the correlation between tactile and optical sensation and enable multi-modal learning. This bio-inspired approach mimics the structure and interaction between the visual, somatosensory and auditory primary cortexes in the mammalian brain [11] (Fig. 1). For instance, humans have individual cortexes for processing a given sensing modality, but crossover between the cortexes enables cross-sensation processing [11]._
**Contribution 2**: _We generate two novel datasets1 as there are no available PRI datasets that contain tactile and visual data. Our first novel dataset contains visually identical objects with different friction properties. The second is a large household object clusters dataset replicating standard vision-only benchmarks. The datasets contain RGB-D image data of the scene, robot state and tactile sensations from a magnetic-based sensor (Xela uSkin XR1944 [28])._
Footnote 1: The two datasets and the model code is available for download and use here: [https://github.com/imanlab/SPOTS_IML](https://github.com/imanlab/SPOTS_IML)
**Contribution 3**: _We present a set of comparative studies using the datasets to test the different potential multi-modal models presented in this article, exploring the quantitative and qualitative performance impact of integrating tactile sensation to PRI video prediction models. We use this comparison study to explore how best to perform the integration of tactile and visual sensation within recurrent neural networks. We show that within these datasets, the tactile-enabled prediction models outperform their vision-only counterparts both quantitatively and qualitatively. Further, we show that the multi-modal prediction system also enables accurate tactile predictions during physical interactions. The results shown in this paper indicate that as robotics pushes into the real-world, accurate and safe PRI should be rooted in multi-modal physical perception models._
## II Related Works
**Video Prediction.** Video prediction, the task of predicting future video frames, is a core technology challenge in enhancing robotic systems to perform human-like manipulation tasks, and it poses an interesting scientific problem. Early video prediction models focused on predicting raw pixel intensities without modelling the scene dynamics [27]. To perform predictions over longer time horizons, Srivastava et al. [31] introduced autoencoders and LSTM units to model the temporal coherence. Action-conditioned video prediction models provide further information to the prediction model [9; 23] for use in reinforcement learning and enable model predictive control with video prediction systems [8]. Villegas et al. [36] split frames into content and motion streams and reduced the problem to predicting the pose and dynamics of landmarks [37]. More recent methods have applied a stochastic assumption to the video prediction problem, stating that there are multiple possible outputs for a single given input, due to a set of latent variables. To reduce uncertainty, which manifests as image blur [1], and produce sharper prediction image quality, recent models have estimated and sampled from these latent variables. Babaeizadeh et al. [1] applied this method to the optical flow method proposed in [9]. Denton et al. [7] proposed a similar approach but with a simpler model using only basic layers, which was later built upon by [35] for larger high-fidelity video prediction. Lee et al. [16] also built upon the method in [9] by appending adversarial training techniques. The existing video prediction models do not use the tactile sensation and assume the changes caused by PRI are fully observed in visual information and/or the models provide a prediction with high uncertainty.
**Physical Robot Interaction.** Oprea et al. [25] considered three categories for video prediction model benchmarks: (i) human prediction tests (e.g. [2; 13; 29]), (ii) driving and road tests, where the objective is to predict how the state of a road might change (e.g. [3; 10; 12]), and (iii) robot pushing datasets (e.g. [6; 8; 9]), where the objective is to predict the environment change from a robot's actions during physical robot interaction. Unlike the other video prediction benchmark tests, robot-pushing datasets contain physical interaction. The tactile sensation can provide features important to video prediction models that are not available from visual sensation. Current state-of-the-art methods apply a stochastic assumption to video prediction, assuming that variables such as the centre of mass, object friction, and object dynamics are unknowable. Tactile sensation during physical interaction may give access to many of these latent variables, which is the concept behind integrating tactile sensation into video prediction models. In this article, we build new datasets as the available pushing datasets lack tactile sensation.
**Tactile Sensation.** Tactile sensors are hardware devices that obtain tactile information through physical interaction with the environment. Tactile information typically includes attributes such as temperature, vibration, softness, texture, shape, composition and normal & shear forces [33]. In the context of object pushing, we require normal and shear force features. Tactile sensors available from both industry and literature that can generate these features typically make trade-offs between resolution, affordability and sensitivity. _Image-based_ tactile sensors2, such as the optical wave-guide-based sensors [24, 41] and marker-based sensors such as the TacTip [38], are high-resolution tactile sensors. However, they require significant processing [30].
Footnote 2: Such technology includes a camera capturing the deformation of a membrane.
_Magnetic-based_ sensors [42], like the Xela uSkin3, provide low-spatial-resolution, high-frequency data with tri-axial readings at each taxel. The Xela uSkin sensor has magnetic-based cells, each measuring non-calibrated normal and shear forces. In this work, we use the Xela uSkin magnetic sensor as it is (1) simple and easy to use, (2) low cost, and (3) able to generate high-frequency readings. This sensor has been used for tactile predictive models [20] and for data-driven model predictive control for slip-free robotic manipulation [21].
Footnote 3: The uSkin sensor by velarobotics.com
**Vision and Touch.** The combination of touch and vision is in its infancy. The relationship between vision and touch during PRI has been explored with translation tasks. For instance, [17] and [4] used adversarial networks to translate between material surfaces and touch with a vision-based touch sensor and a pen accelerometer, respectively. Li et al. [19] used ResNet encoders and adversarial training to (i) synthesise plausible temporal tactile signals from visual inputs of touch with a static scene and (ii) translate from tactile signals to a single image output of the scene. Lee et al. [18] combined vision, haptic (wrist force/torque sensor) and proprioceptive data to encode a multi-modal representation using a set of surrogate tasks. Encoded data is then used to generate a policy for reinforcement learning (a peg-in-hole task). This work showed that the multi-modal representation outperforms single-modality models. Pinto et al. [26] show how physical interactions enable agents to better classify and categorise objects through pushing, poking and grasping. However, a model that captures the correlation between touch and visual sensing to predict visual _and/or_ tactile sensing during PRI has not been explored in the literature.
We present the combined tactile and video prediction models in this paper for effective PRI task completion (Fig. 2).
## III Problem Formulation
The objective of this work is to provide improved video prediction via a model that simultaneously predicts tactile and video frames during physical robot interactions. This improved video prediction through integrated tactile sensation can be used for effective control of highly non-linear PRI tasks where the existing methods fail [21]. We build our models based on our previous work on tactile prediction models [20].
Given (i) a set of context frames \(\textbf{x}_{0:c-1}=\{x_{0},\ldots,x_{c-1}\}\), which are the previously seen images during the trial, with a context sequence length of \(c\), and (ii) a prediction horizon with a length of \(T-c\) (i.e., how many frames into the future a model will predict), a video prediction model can be defined as \(\mathcal{F}(\textbf{x}_{0:c-1})=\hat{\textbf{x}}_{c:T}\)4, where \(\hat{\textbf{x}}_{c:T}=\{\hat{x}_{c},\ldots,\hat{x}_{T}\}\) is the predicted video frames. The aim is to optimise the following objective function:
Footnote 4: We use \(x\), **x** and \(\hat{\textbf{x}}\) to denote a variable (vector or matrix), a set of such variables, and the corresponding predicted values, respectively.
\[\min\sum_{i=c}^{T}\mathcal{D}\left(\hat{x}_{i},x_{i}\right) \tag{1}\]
for each prediction horizon \(T-c\), where \(\mathcal{D}\) is the pixel space loss function, for example \(\mathcal{L}_{1}\) or \(\mathcal{L}_{2}\), defining the difference between predicted and observed video frames.
As we are predicting within the physical robot interaction space, we are focused on the model developing a cause-effect understanding of the robot. To do so, we action-condition the prediction model with past actions \(\{a_{0},\ldots,a_{c-1}\}\) and known future robot actions \(\{a_{c},\ldots,a_{T}\}\). Within the context of visual model predictive control, the future robot actions will be a batch of candidate actions, allowing a discriminator to decide on the best action based on the most desirable predicted scene. The prediction model is therefore:
\[\mathcal{F}(\textbf{x}_{0:c-1},\textbf{a}_{0:T})=\hat{\textbf{x}}_{c:T} \tag{2}\]
We chose the stochastic video prediction model SVG (presented in [7]) as the baseline architecture from which to build our multi-modal system. SVG does not make the assumptions about the input data that other video prediction models like SAVP and SV2P do, making the system more generalisable [35] and hence more suitable for a multi-modal approach. Its simple architecture also allows us to modify the model's structure without a negative or destructive impact.
SVG applies a stochastic assumption to the prediction model, where the objective is to sample from \(p(\hat{\textbf{x}}_{c:T}|\textbf{x}_{0:c-1},\textbf{a}_{0:T})\). The base video prediction architecture we build our models from is split into sub-modules: (i) a frame prediction network, (ii) a prior network, and (iii) a posterior network, which is used only to train the prior network [7].
Within video prediction for PRI, latent variables are used to estimate unknown physical properties: 'when a robot's arm pushes a toy on a table, the unknown weight of that toy affects how it moves [1]'. Intuitively, we believe that tactile sensation should provide the model with more accurate representations
Fig. 2: Possible methods of tactile integration into video prediction systems: (a) standard video prediction without tactile sensation; (b) including the context tactile data as a conditioning input for video prediction; (c) predicting both touch and vision sensation with a single video prediction model; (d) using two separate prediction modules for each sensing modality, with a crossover link
of the object's physical values. However, we still use the stochastic assumption and include the use of latent variable estimation as other features in the environment are still difficult to estimate even with tactile sensation.
SVG conditions the frame prediction network on the estimated latent variables **z**, \(p(\hat{\mathbf{x}}_{c:T}|\mathbf{x}_{0:c-1},\mathbf{a}_{0:T},\mathbf{z}_{0:c-1})\). The latent variables are distributed according to the prior network \(p_{\psi}(\mathbf{z}_{t}|\mathbf{x}_{0:t-1})\), where \(t\) is the current time-step in the prediction sequence. Learning then involves training the parameters of factors \(\theta\) of the factorised model:
\[\prod_{t=c}^{T}p_{\theta}(\mathbf{\hat{x}}_{t}|\mathbf{x}_{0:t-1},\mathbf{a}_ {0:t},\mathbf{z}) \tag{3}\]
The learned prior network \(p_{\psi}(\mathbf{z}_{t}|\mathbf{x}_{0:t-1})\) is trained using Kullback-Leibler divergence [15] on the output of the posterior network \(q_{\phi}(\mathbf{z}_{t}|\mathbf{x}_{0:t})\). Both networks output the parameters of a conditional Gaussian distribution \(\mathcal{N}(\mu_{\psi}(\mathbf{x}_{0:t-1}),\,\sigma_{\psi}(\mathbf{x}_{0:t-1}))\).
The prior network can then be trained jointly with the frame prediction model by minimising Eq. 4. For further information on stochastic video prediction, see [1] and [7].
\[\begin{split}\mathcal{L}_{\theta,\phi,\psi}(\mathbf{x})&=-\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x}_{0:T})}\big[\log p_{\theta}(\mathbf{x}_{t:T}|\mathbf{x}_{0:t-1},\mathbf{a}_{0:T},\mathbf{z})\big]\\ &+D_{KL}\big(q_{\phi}(\mathbf{z}_{t}|\mathbf{x}_{0:t})\,||\,p_{\psi}(\mathbf{z}_{t}|\mathbf{x}_{0:t-1})\big)\end{split} \tag{4}\]
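For concreteness, the per-step training objective can be sketched in PyTorch as below. This is our schematic reading of Eq. 4, not the authors' code: the \(\beta\) weighting on the KL term and the Gaussian parameterisation are standard choices in stochastic video prediction and are assumptions here.

```python
# Schematic sketch (ours) of the Eq. 4 objective: reconstruction plus the KL
# between posterior q_phi(z_t | x_{0:t}) and learned prior p_psi(z_t | x_{0:t-1}).
import torch
import torch.nn.functional as F

def kl_normal(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

def svg_step_loss(x_hat, x, mu_q, logvar_q, mu_p, logvar_p, beta=1e-4):
    recon = F.mse_loss(x_hat, x)          # stands in for -log p_theta
    return recon + beta * kl_normal(mu_q, logvar_q, mu_p, logvar_p).mean()
```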
To integrate tactile sensation, \(\mathbf{d}_{0:c-1}=\{d_{0},\dots,d_{c-1}\}\), into video prediction models, there are a variety of potential methods. In the following sections, we discuss the different methods we used to integrate tactile sensation into the above video prediction architecture (Fig. 2).
## IV Combining Vision and Tactile Sensation for Video Prediction
As both the environment observations and the tactile readings can be viewed as images, we state a useful change of variable names to remove ambiguity: we will refer to the visual image of the environment as the _scene image_ and to the _tactile image_ as such.
To introduce tactile sensation to video prediction models we have three base approaches (shown in Fig. 2): (A) the SVG model is conditioned with the context tactile data (Fig. 3); (B.1) the scene and tactile data can be concatenated together and then passed to the prediction model, which predicts both touch and scene (Fig. 4 (a)); (B.2) the scene and tactile data can be passed to bespoke modality prediction models, with a crossover between the two (Fig. 4 (b)). In the following sections, we describe the three key integration approaches within the SVG model architecture. We describe the overall design of these methods, the key features, and the potential layers that make up our comparison study in Section VI.
In all models, the robot action data, \(a_{t}\in\mathbb{R}^{7}\), and the robot state (robot start position), \(a_{0}\in\mathbb{R}^{7}\) (we use \(a\) both for the robot's planned actions in the prediction horizon and for its past states), are concatenated together [9] and input to the LSTM chain in the Frame Prediction Model alongside the other feature vectors.
### _Tactile-Conditioned Video Prediction_
The simplest method of integration is to flatten the context tactile frames, \(\mathbf{d}_{0:c-1}=\{d_{0},\dots,d_{c-1}\}\), from \(d_{t}\in\mathbb{R}^{4\times 4\times 3}\) into \(d_{t}\in\mathbb{R}^{48}\), then encode them into a feature vector and pass it as input to the Frame Prediction Model's LSTM chain through concatenation with the robot action data, the learned latent values and the scene feature vector. This model, titled Tactile Enhanced Stochastic Video Generation (SVG-TE), is shown in Fig. 3. In this model, the tactile feature vector is not used in the latent variable calculation.
Training this model amounts to learning the frame prediction network's weights \(\theta\), the learned prior network's weights \(\psi\) and the posterior network's weights \(\phi\). The final factorised frame prediction model is shown in Eq. 5, with the optimisation function to learn \(\theta,\psi\) and \(\phi\) shown in Eq. 6.
\[\prod_{t=c}^{T}p_{\theta}(\hat{\mathbf{x}}_{t}|\mathbf{x}_{0:t-1},\mathbf{a}_{0:t},\mathbf{d}_{0:c-1},\mathbf{z}) \tag{5}\]
\[\begin{split}\mathcal{L}_{\theta,\phi,\psi}(\mathbf{x})&=-\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x}_{0:T})}\big[\log p_{\theta}(\mathbf{x}_{t:T}|\mathbf{x}_{0:t-1},\mathbf{a}_{0:T},\mathbf{d}_{0:c-1},\mathbf{z})\big]\\ &+D_{KL}\big(q_{\phi}(\mathbf{z}_{t}|\mathbf{x}_{0:t})\,||\,p_{\psi}(\mathbf{z}_{t}|\mathbf{x}_{0:t-1})\big)\end{split} \tag{6}\]
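A minimal sketch (ours; layer sizes and names are illustrative, not taken from the released code) of the SVG-TE conditioning step described above looks as follows:

```python
# Sketch (ours) of SVG-TE's tactile conditioning: flatten each 4x4x3 context
# tactile frame to a 48-d vector, encode it, and concatenate it with the scene
# features, action/state vector, and sampled latent before the LSTM chain.
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    def __init__(self, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(48, 64), nn.Tanh(),
                                 nn.Linear(64, out_dim), nn.Tanh())

    def forward(self, d):                  # d: (batch, 4, 4, 3)
        return self.net(d.flatten(start_dim=1))

def lstm_input(scene_feat, tactile_feat, action_state, z):
    # everything the SVG-TE frame predictor sees at a single time step
    return torch.cat([scene_feat, tactile_feat, action_state, z], dim=-1)
```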
### _Simultaneous Tactile and Video Prediction_
The following two architectures predict both tactile and scene frames simultaneously. We adapt our model to sample from \(p_{\theta}(\mathbf{x}_{c:T},\mathbf{d}_{c:T}|\mathbf{x}_{0:c-1},\mathbf{a}_{0:T},\mathbf{d}_{0:c-1},\mathbf{z})\), as shown in Fig. 2 (c) and 2 (d). With this style of approach, we can factorise these models as in Eq. 7, with the full model training being a function of the loss shown in Eq. 8:
Fig. 3: Stochastic video prediction architecture SVG [7] with tactile sensation integrated. Tactile-enhanced video prediction: this model uses encoded tactile context data to enhance prediction accuracy (**SVG-TE**). The test-time architecture is shown in this diagram.
\[\prod_{t=c}^{T}p_{\theta}(\hat{\mathbf{x}}_{t},\hat{\mathbf{d}}_{t}|\mathbf{x}_{0:t-1},\mathbf{a}_{0:t},\mathbf{d}_{0:t-1},\mathbf{z}) \tag{7}\]
\[\begin{split}\mathcal{L}_{\theta,\phi,\psi}(\mathbf{x},\mathbf{d})&=-\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x}_{0:T},\mathbf{d}_{0:T})}\big[\log p_{\theta}(\mathbf{x}_{t:T},\mathbf{d}_{t:T}|\mathbf{x}_{0:t-1},\mathbf{d}_{0:t-1},\mathbf{a}_{0:T},\mathbf{z})\big]\\ &+D_{KL}\big(q_{\phi}(\mathbf{z}_{t}|\mathbf{x}_{0:t},\mathbf{d}_{0:t})\,||\,p_{\psi}(\mathbf{z}_{t}|\mathbf{x}_{0:t-1},\mathbf{d}_{0:t-1})\big)\end{split} \tag{8}\]
We hypothesise that these predicted tactile frames can be utilised by the scene frame predictor network to improve prediction performance, beyond the boost provided through the context tactile data. Furthermore, predicting tactile sensation allows for more complex use within a model predictive control scenario, enabling, for example, proactive slip control [21].
We perform the above optimisation through the two proposed architectures presented in Fig. 4 (a) Stochastic Video and Tactile Generator (**SVTG**) and Fig. 4 (b) Simultaneous Prediction of Optic and Touch Sensations (**SPOTS**). Below we discuss additional layers and key features of the models we developed with this approach.
#### III-B1 Stochastic Video and Tactile Generation (SVTG)
This architecture concatenates the scene and tactile data together before encoding (Fig. 4 (a)). The tactile data is reshaped from \(d\in\mathbb{R}^{48}\) to \(d\in\mathbb{R}^{64\times 64\times 3}\) where the three channels represent the normal, shear x and shear y forces5. Although this architecture is simple to implement, the SVG architecture has been shown to predict re-scaled tactile data poorly [20]. To enable the processing of the tactile data with a separate, more tactile data-oriented architecture, we also implemented SPOTS, shown below.
Footnote 5: The Xela sensor used in this paper has 4 x 4 sensing cells, each providing normal, shear x and shear y readings.
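One plausible implementation of this reshaping (ours; the paper does not specify the interpolation scheme, so nearest-neighbour upsampling is an assumption) treats the three force axes as channels of a \(4\times 4\) map:

```python
# Sketch (ours) of the SVTG input fusion: lift the 48-d Xela reading to a
# 64x64x3 "tactile image" and stack it with the 64x64 scene image channels.
import torch
import torch.nn.functional as F

def tactile_to_image(d):                   # d: (batch, 48)
    maps = d.view(-1, 4, 4, 3).permute(0, 3, 1, 2)       # (batch, 3, 4, 4)
    return F.interpolate(maps, size=(64, 64), mode='nearest')

def fuse_inputs(scene, tactile):           # scene: (batch, 3, 64, 64)
    return torch.cat([scene, tactile_to_image(tactile)], dim=1)  # 6 channels
```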
#### III-B2 Simultaneous Prediction of Scene and Touch Sensation (SPOTS)
This model uses two Frame Predictor Models, one for each modality, as shown in Fig. 4 (b). The SVG's frame predictor model is used for the scene and a bespoke tactile prediction architecture (Action-Conditioned Tactile Prediction [20]) is used for the tactile frame predictor model. Crossover connections between the encoded tactile data and the encoded scene data give each pipeline access to the other sensor modality.
We split the prediction network into a scene pipeline and a tactile pipeline, for the following reasons: (i) It enables changes to model structure that can only be applied to a single sensation, for example using optical flow for video prediction but not for tactile prediction; (ii) The structure enables the integration of more modalities that may require unique architectures, for example auditory and olfactory sensations [11]; (iii) The split architecture is easier to adjust for
Fig. 4: Stochastic video prediction architecture SVG [7] with tactile sensation integrated. Each model shown is the test architecture. (a) Stochastic Video and Tactile Generation (**SVTG**) predicts scene and tactile data using a single frame prediction model; this network learns to predict tactile sensation to provide the scene prediction features with more information. (b) Simultaneous Prediction of Optic and Tactile Sensation (**SPOTS**): this network uses two frame predictors, one for each modality. This approach allows the tactile prediction network architecture to differ from the scene prediction architecture, enabling more accurate tactile prediction.
specific domain problems: if one aspect of the scene prediction network requires change, it will not impact the tactile prediction performance, and vice versa.
There are a few adjustments we made to support this change; each key change is discussed below:
**Multi-Modal Fusion Model (MMFM).** Inspired by [18], the combination of the two sensing modalities can be performed with an MMFM. The MMFM layer consists of two simple linear layers, with batch normalisation and _tanh_ activation functions. The best multi-modal representation may differ for a given network modality, so in the SPOTS architecture each pipeline has its own MMFM layer. The MMFM layer is included prior to the LSTM chain in the two pipelines and takes as input the encoded scene and tactile values.
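Concretely, a minimal version of the layer (ours; hidden sizes are illustrative) can be written as:

```python
# Sketch (ours) of an MMFM block: two linear layers with batch normalisation
# and tanh, fusing the encoded scene and tactile feature vectors.
import torch
import torch.nn as nn

class MMFM(nn.Module):
    def __init__(self, scene_dim, tactile_dim, out_dim):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(scene_dim + tactile_dim, out_dim),
            nn.BatchNorm1d(out_dim), nn.Tanh(),
            nn.Linear(out_dim, out_dim),
            nn.BatchNorm1d(out_dim), nn.Tanh())

    def forward(self, scene_feat, tactile_feat):
        return self.fuse(torch.cat([scene_feat, tactile_feat], dim=-1))
```

In SPOTS, one such block would sit in front of each pipeline's LSTM chain, so the scene and tactile predictors each learn their own fused representation.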
**Scene-only learned prior.** The latent variables generated through the learned prior network reduce scene prediction blur by estimating latent values. Although this produces results that perform better on metrics that correlate well with human-based image similarity scores, such as the Structural Similarity Index (**SSIM**) and Peak Signal-to-Noise Ratio (**PSNR**), it is unknown whether this is required for the tactile prediction network, and the encoded uncertainty produced by deterministic tactile predictions may be more beneficial for the scene predictions. We test this option with **SPOTS-SOP** (SPOTS Scene-Only Prior).
**Action-Conditioned Tactile Prediction Network.** The dual pipeline system enables tailoring each pipeline to a given modality and allows a more detailed exploration of the integration problem. Tactile prediction (of the same Xela uSkin sensor) during physical robot interaction can be performed with video prediction architectures like SVG by resizing the tactile data; however, this was shown to produce poor results. We adjust the tactile prediction pipeline to use the best-performing tactile prediction model, i.e. the Action-Conditioned Tactile Predictor (ACTP) [20].
## V Experiment setup: Robot and Task
To train and test the tactile-integrated models shown above, we built two new object-pushing datasets that contain both tactile sensation and scene videos. Object-pushing datasets have been widely used to benchmark video prediction [25]; unlike other existing benchmarks like driving and urban scene understanding datasets, they specifically test physical robot interaction and so are well suited to testing our prediction models.
Previous PRI video prediction research focuses on datasets that test generalisation across household objects and clusters [6; 8; 9]. We mimic the process of these datasets, performing random robot pushing actions through household object clusters. The datasets consist of: (i) robot proprioception data in joint and task space, enabling action conditioning; (ii) tactile data from the pushing finger of the gripper; and (iii) RGBD video frames from 3 perspectives of the scene (\(x\in\mathbb{R}^{64\times 64\times 4}\)). The synchronised data was collected at 10 frames per second.
We introduce tactile sensation by attaching the Xela uSkin magnetic tactile sensor to the pushing fingertip. The Xela uSkin tactile sensor contains 16 sensing elements arranged in a _4 by 4_ square grid, each outputting non-calibrated shear x, shear y and normal forces, i.e. the readings are proportional to a normal and two shear forces. The Xela sensor has previously been used to predict tactile sensation during pick-and-move tasks [20] and so is appropriate for use in our scene and tactile prediction models. A vision-based sensor such as the GelSight sensor [40] could also be used in this setting.
We used the Franka Emika Panda robot for pushing and collected scene frames with the Intel RealSense D345 camera. Some previous robot-pushing datasets contain random object pushes. However, for these datasets we used straight-line pushes because: (i) the straight-line push ensures that the tactile sensor is facing the objects, and (ii) straight-line pushes provide more continuous interactions with an object over time. For example, during random motions, the objects are often touched but not completely pushed through. Pushing
Fig. 5: (a) The robot and its environment are shown, containing the Panda Franka Emika 7 degrees of freedom collaborative robot, the 4x4 Xela uSkin tactile sensor attached to the pushing fingertip, the household objects on the pushing surface and the object reset system which enabled semi-automated dataset collection. (b) The objects used in the household object clusters dataset.
through the object ensures the maximum change in the object's location, and thus provides the complex pushing actions we require for testing.
### _Pushing dataset_
**Household object clusters:** This dataset contains 5,500 pushing trials of clusters drawn from hundreds of household objects (Fig. 5 (b)). The dataset consists of objects from the YCB dataset [5], with other objects added for more thorough generalisation testing. The objects used are shown in Fig. 5 (b). Each pushing trial lasts 4 seconds. The test data is split into two sets: (i) seen object clusters, which contain new clusters made of objects within the training dataset, and (ii) unseen object clusters, which contain objects not present in the training dataset. Each test set contains 250 trials.
The collection of this dataset was semi-automated. Every 20 pushes of the object clusters were followed by an automated resetting procedure, in which the robot pulled an ellipse-shaped barrier toward itself, pushing all objects back toward a central location. This ensured that a higher percentage of pushes involved object contact. This setup is shown in Fig. 5 (a). The arena also contained boundaries outside of the pushing range, resulting in a low number of pushes where the object is forced into a barrier (typically only the largest objects could hit the edges).
**Visually Identical Dataset:** In this dataset we use the same object, with friction markers placed at different places on the object's contact surface. This test is bespoke to our task and allows us to qualitatively test the impact of tactile integration on an agent's understanding of the scene. If the model is unable to utilise the tactile sensation, it will be unable to predict the correct direction of the object's motion during pushing, as there are no visual indications of the high-friction location.
This dataset contains 1000 training interactions and 600 test interactions. It contains a single heavy object (1.1 [kg], 16.1 x 10.1 x 4.9 cm (LxWxH)); friction is altered by applying _'60 grit'_ sandpaper to different parts of the object, resulting in different outcomes of pushing interactions for the same visual scene (Fig. 6). The different locations of the sandpaper are the centre, the 4 corners, and the middle of the 4 edges of the box. The test set consists of 550 pushes with unseen friction locations and 50 pushes with previously seen friction locations. We use coloured markers to quantify object location and orientation during performance evaluation.
**Edge case subset:** We identified that in many cases the impact of the object's friction location does not have a large enough effect on the future position and orientation of the object during short pushing actions. Nonetheless, we speculate that the impact of friction on longer interactive tasks will be significant. It
\begin{table}
\begin{tabular}{l c c c c c} \hline Model name & TE & TP & SOP & MMFM & Model size \\ \hline SVG & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 18,010,027 \\ SVG-TE & ✓ & \(\times\) & \(\times\) & \(\times\) & 18,115,327 \\ SVTG & \(\times\) & ✓ & ✓ & \(\times\) & 38,766,766 \\ SPOTS & \(\times\) & ✓ & \(\times\) & ✓ & 21,051,051 \\ SPOTS-small & \(\times\) & ✓ & \(\times\) & ✓ & 18,250,180 \\ SPOTS-SOP & \(\times\) & ✓ & ✓ & ✓ & 21,026,475 \\ \hline \end{tabular}
\end{table} TABLE I: Key features of the models tested in the comparison study, where: TE means the model performs tactile-enhanced scene predictions; TP means the model performs tactile prediction as well as scene prediction; SOP means only the scene data is input to the learned prior; MMFM means the model uses a Multi-Modal Fusion Model layer. Model size is the number of parameters within the network.
Fig. 6: Two trials from the edge case subset are shown, with both the scene video frames and 3 normalised example taxel values (one normal force, shear X and shear Y) over the two trials. The friction location in the two trials is different, and so, despite having the same starting position, the final position of the object is different. This scenario provides a complex physical interaction perception task for a prediction agent.
is in edge cases, typically where the object is pushed through its centre for the full 4 seconds, that the friction location creates a scenario unknowable to a vision-alone system. These edge cases are essential for the exploration of our problem, and so we create a simple subset of 4 test cases shown in Fig. 6. The test cases contain the object at the same location, with the friction location at each corner of the box respectively. Despite the exact same robot action, the final location of the object is drastically different, producing a good test case for qualitative analysis.
## VI Evaluation
In this section, we aim to answer whether the inclusion of tactile sensation during physical robot interactions can improve an agent's cause-effect understanding during PRI. To test this, we compare the same video prediction system, SVG, with and without tactile integration. The tactile-integrated versions of SVG are described in Section IV, and all the model variants we test are summarised in Table I. With these models, we perform a comparative study on the key features and layers of the developed prediction models.
The key objective of this research is to investigate whether the inclusion of tactile sensation can improve an agent's scene predictions. To do so, we use the two action-conditioned tactile pushing datasets to evaluate the different models proposed above, making comparisons to methods without tactile input to assess improved performance. The goals of our experiments are to:
1. Evaluate the overall performance of the proposed video prediction models in comparison to the non-tactile video prediction model counterpart.
2. Test how the models generalise to new, unseen objects.
3. Compare the different multi-modal prediction models and evaluate which architecture develops the best physical interaction perception.
4. Explore the impact of tactile sensation during test cases through anaesthetisation of the multi-modal models.
5. Evaluate the predicted tactile features of the multi-modal prediction models.
_Evaluation metrics:_ We perform an evaluation using 3 different metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Mean Absolute Error (MAE). These provide a pixel-wise comparison between predicted frames and ground truth frames. The marked-object dataset would enable a performance metric that uses the ground truth and predicted marker centroids. In practice, however, the model predictions do not recreate these markers within their predicted scenes, so a marker-based performance metric is not applicable. In addition, we present and evaluate the performance of models qualitatively. For this, we focus on the edge case subset, which enables specific analysis of key details that amplify and highlight differences between the models.
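To make these metrics concrete, the following is a minimal sketch of how the three pixel-wise scores could be computed for a single predicted frame; the \([0,1]\) pixel range and the use of scikit-image's SSIM are our assumptions, and the exact PSNR scaling behind the values reported in our tables may differ.

```python
import numpy as np
from skimage.metrics import structural_similarity

def frame_metrics(pred, true):
    # pred, true: float arrays in [0, 1] with shape (H, W, C).
    # MAE: mean absolute pixel error between prediction and ground truth.
    mae = float(np.mean(np.abs(pred - true)))
    # PSNR: 10 * log10(MAX^2 / MSE), with MAX = 1.0 under the assumed range.
    mse = float(np.mean((pred - true) ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)
    # SSIM: structural similarity computed over the colour channels.
    ssim = structural_similarity(true, pred, channel_axis=-1, data_range=1.0)
    return mae, psnr, ssim
```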
_Training and test procedure:_ The scene images are resized to \(x\in\mathbb{R}^{64\times 64\times 3}\). The models are trained end-to-end with the method shown in [7], replacing the inference model used during training with the learned prior for testing. Models were written in PyTorch, with training and testing performed on two Nvidia RTX A6000 GPUs.
### _Quantitative Scene Analysis_
We show the performance of each model in Tables II and III for the Household Cluster and Visually Identical datasets
\begin{table}
\begin{tabular}{l l l l} \hline Model & MAE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline SVG & 0.0100 \(\pm\) 4.5\(e^{-3}\) & 81.1243 \(\pm\) 2.8\(e^{-1}\) & 0.9809 \(\pm\) 1.4\(e^{-3}\) \\ SVG-TE & 0.0100 \(\pm\) 3.9\(e^{-4}\) & 81.1274 \(\pm\) 5.4\(e^{-1}\) & 0.9088 \(\pm\) 2.9\(e^{-3}\) \\ SVTG & 0.0109 \(\pm\) 3.2\(e^{-4}\) & 80.3639 \(\pm\) 1.7\(e^{-1}\) & 0.9783 \(\pm\) 1.0\(e^{-3}\) \\ SPOTS & **0.0099** \(\pm\) 4.3\(e^{-4}\) & 81.1979 \(\pm\) 2.5\(e^{-1}\) & 0.9812 \(\pm\) 1.3\(e^{-3}\) \\ SPOTS-small & 0.0099 \(\pm\) 1.7\(e^{-3}\) & **81.2247** \(\pm\) 3.8\(e^{-1}\) & **0.9812** \(\pm\) 5.1\(e^{-3}\) \\ SPOTS-SOP & 0.0114 \(\pm\) 4.5\(e^{-4}\) & 81.1424 \(\pm\) 2.6\(e^{-1}\) & 0.9809 \(\pm\) 1.3\(e^{-3}\) \\ \hline Model & MAE t+5 \(\downarrow\) & PSNR t+5 \(\uparrow\) & SSIM t+5 \(\uparrow\) \\ \hline SVG & 0.0112 \(\pm\) 5.9\(e^{-5}\) & 79.5207 \(\pm\) 3.3\(e^{-2}\) & 0.9766 \(\pm\) 1.8\(e^{-4}\) \\ SVG-TE & 0.0112 \(\pm\) 2.2\(e^{-5}\) & 79.5484 \(\pm\) 2.2\(e^{-2}\) & **0.9767** \(\pm\) 8.9\(e^{-5}\) \\ SVTG & 0.0129 \(\pm\) 5.5\(e^{-5}\) & 78.7614 \(\pm\) 1.7\(e^{-2}\) & 0.9715 \(\pm\) 2.0\(e^{-4}\) \\ SPOTS & 0.0113 \(\pm\) 4.8\(e^{-5}\) & 79.5417 \(\pm\) 1.5\(e^{-2}\) & 0.9766 \(\pm\) 1.4\(e^{-4}\) \\ SPOTS-small & **0.0112** \(\pm\) 5.0\(e^{-5}\) & **79.5703** \(\pm\) 3.7\(e^{-2}\) & 0.9767 \(\pm\) 1.1\(e^{-4}\) \\ SPOTS-SOP & 0.0114 \(\pm\) 7.8\(e^{-5}\) & 79.5014 \(\pm\) 3.5\(e^{-2}\) & 0.9764 \(\pm\) 2.5\(e^{-4}\) \\ \hline \end{tabular}
\end{table} TABLE II: Average scene prediction performance on both the combined seen and unseen household object cluster test datasets. Alongside the prediction scores are the 95% confidence intervals.
Fig. 7: The Mean Absolute Error performance metric for models on seen (left) and unseen (right) objects and clusters in the household object clusters dataset; each model was trained 10 times and its performance statistics are shown in these box and whisker plots. Red models are without tactile sensation. Green models do not predict tactile sensation. Blue and purple models predict both touch and vision, in single and dual-pipeline architectures respectively. Test performance on previously seen objects shows similar performance between tactile-enabled and disabled models. However, for previously unseen objects, the dual-pipeline prediction models achieve improved performance.
respectively. Each table shows both the average performance over the prediction horizon and the prediction performance at the last time step in the prediction horizon. Each model is trained 10 times with different seeds, and the 95% confidence intervals are shown alongside.
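As a reference for how the reported intervals can be produced, below is a small sketch computing a mean and a 95% confidence half-width from per-seed scores (a Student-t interval is assumed; the exact construction used for the tables may differ).

```python
import numpy as np
from scipy import stats

def mean_ci95(per_seed_scores):
    # per_seed_scores: one test-set score per training seed (10 runs here).
    v = np.asarray(per_seed_scores, dtype=float)
    # Half-width of the two-sided 95% Student-t interval on the mean.
    half_width = stats.t.ppf(0.975, df=len(v) - 1) * stats.sem(v)
    return v.mean(), half_width  # report as mean +/- half_width
```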
Small deviations in test performance on the Household Cluster Dataset indicate little difference between the tactile-enabled models and the vision-only model. SPOTS-small performs best over the whole prediction horizon and the last time frame.
The performance of the models is broken down in Fig. 7. Here the tests are split into clusters with seen and unseen objects. For objects seen in the training set, SVG, SVG-TE, and the SPOTS models perform to a similar level. However, for new object clusters (meaning clusters consisting of unseen objects), the SPOTS systems outperform SVG and SVG-TE. This suggests that the tactile-enabled and disabled models are capable of similar predictions on objects they have already seen, because they have an understanding of the objects' physical properties from experience, but the SPOTS models are also capable of generating this physical understanding for new objects they have not seen before, thus producing more accurate predictions. These test results also highlight the importance of tactile prediction (SPOTS) over tactile conditioning (SVG-TE) when an agent is predicting with new objects. The results also show a significant negative impact from performing simultaneous tactile and scene prediction with a single-pipeline approach (SVTG).
The prediction results in Table III are across the Visually Identical test dataset and the Edge Case test subset. These results complement the Household Cluster dataset results, showing that the multi-modal models outperform the vision-only model (other than SVTG). Testing on the edge case subset shows significant increases in the performance of the same tactile-enabled models, suggesting the tactile-enabled models are capable of perceiving physical interactions in visually identical scenes that have different physical properties. The parameter size differences between the tactile-enabled models appear to have little impact: SPOTS-small has roughly the same number of weights as SVG but is capable of prediction in these visually identical settings. Furthermore, the results suggest a smaller model size has a beneficial impact, as the smaller SPOTS model has
Fig. 8: The Mean Absolute Error performance for prediction models on the Visually Identical edge case subset; each model was trained 10 times to generate these performance statistics. Red models are without tactile sensation. Green models do not predict tactile sensation. Blue and purple models predict both touch and vision, in single and dual-pipeline architectures respectively. The tactile-enabled models all outperform the vision-only model, and SPOTS-small achieves the best performance overall.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Model & MAE \(\downarrow\) & MAE t+5 \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline \multicolumn{5}{l}{Visually Identical Dataset} \\ \hline SVG & 0.0100 \(\pm\) 9.73\(e^{-5}\) & 0.0122 \(\pm\) 1.99\(e^{-4}\) & 78.7258 \(\pm\) 5.65\(e^{-2}\) & 0.9647 \(\pm\) 3.52\(e^{-4}\) \\ SVG-TE & **0.0098** \(\pm\) 9.37\(e^{-5}\) & 0.0120 \(\pm\) 1.96\(e^{-4}\) & 78.9524 \(\pm\) 6.61\(e^{-2}\) & 0.9659 \(\pm\) 4.05\(e^{-4}\) \\ SVTG & 0.0109 \(\pm\) 1.33\(e^{-4}\) & 0.0129 \(\pm\) 3.33\(e^{-4}\) & 79.0638 \(\pm\) 6.99\(e^{-2}\) & 0.9630 \(\pm\) 3.14\(e^{-4}\) \\ SPOTS & 0.0099 \(\pm\) 1.89\(e^{-4}\) & 0.0119 \(\pm\) 2.39\(e^{-4}\) & 79.0778 \(\pm\) 1.20\(e^{-1}\) & 0.9661 \(\pm\) 8.34\(e^{-4}\) \\ SPOTS-small & 0.0099 \(\pm\) 1.39\(e^{-4}\) & 0.0120 \(\pm\) 2.19\(e^{-4}\) & 79.0938 \(\pm\) 1.32\(e^{-1}\) & 0.9660 \(\pm\) 7.93\(e^{-4}\) \\ SPOTS-SOP & 0.0099 \(\pm\) 6.82\(e^{-5}\) & **0.0119** \(\pm\) 1.37\(e^{-4}\) & **79.1263** \(\pm\) 8.09\(e^{-2}\) & **0.9662** \(\pm\) 5.50\(e^{-4}\) \\ \hline \multicolumn{5}{l}{Visually Identical Edge Case Subset} \\ \hline SVG & 0.0104 \(\pm\) 1.19\(e^{-4}\) & 0.0129 \(\pm\) 4.71\(e^{-4}\) & 77.5338 \(\pm\) 1.18\(e^{-2}\) & 0.9586 \(\pm\) 9.95\(e^{-4}\) \\ SVG-TE & 0.0095 \(\pm\) 2.34\(e^{-4}\) & 0.0116 \(\pm\) 4.10\(e^{-4}\) & 78.6620 \(\pm\) 2.69\(e^{-1}\) & 0.9650 \(\pm\) 1.62\(e^{-3}\) \\ SVTG & 0.0101 \(\pm\) 2.30\(e^{-4}\) & 0.0115 \(\pm\) 3.04\(e^{-4}\) & 78.8688 \(\pm\) 4.18\(e^{-1}\) & 0.9627 \(\pm\) 2.20\(e^{-3}\) \\ SPOTS & 0.0093 \(\pm\) 3.33\(e^{-4}\) & 0.0113 \(\pm\) 5.49\(e^{-4}\) & 79.1245 \(\pm\) 2.32\(e^{-1}\) & 0.9666 \(\pm\) 1.24\(e^{-3}\) \\ SPOTS-small & **0.0091** \(\pm\) 2.30\(e^{-4}\) & **0.0112** \(\pm\) 1.14\(e^{-4}\) & **79.2577** \(\pm\) 2.38\(e^{-1}\) & **0.9671** \(\pm\) 1.07\(e^{-3}\) \\ SPOTS-SOP & 0.0092 \(\pm\) 1.66\(e^{-4}\) & 0.0111 \(\pm\) 3.11\(e^{-4}\) & 79.1381 \(\pm\) 1.54\(e^{-1}\) & 0.9666 \(\pm\) 5.37\(e^{-4}\) \\ \hline \multicolumn{5}{l}{Anaesthetised Edge Case Subset} \\ \hline SVG & **0.0104** \(\pm\) 1.19\(e^{-4}\) & **0.0129** \(\pm\) 4.71\(e^{-4}\) & **77.5335** \(\pm\) 1.18\(e^{-2}\) & **0.9586** \(\pm\) 9.95\(e^{-4}\) \\ SVG-TE & 0.0113 \(\pm\) 6.34\(e^{-4}\) & 0.0145 \(\pm\) 1.13\(e^{-3}\) & 76.8562 \(\pm\) 5.58\(e^{-1}\) & 0.9542 \(\pm\) 4.10\(e^{-3}\) \\ SVTG & 0.0217 \(\pm\) 4.69\(e^{-3}\) & 0.0271 \(\pm\) 9.61\(e^{-3}\) & 72.4685 \(\pm\) 1.80\(e^{-00}\) & 0.9014 \(\pm\) 2.12\(e^{-2}\) \\ SPOTS & 0.0108 \(\pm\) 2.86\(e^{-4}\) & 0.0131 \(\pm\) 6.85\(e^{-4}\) & 77.5330 \(\pm\) 2.59\(e^{-0}\) & 0.9575 \(\pm\) 1.73\(e^{-3}\) \\ SPOTS-small & 0.0113 \(\pm\) 6.27\(e^{-4}\) & 0.0135 \(\pm\) 8.44\(e^{-4}\) & 77.1867 \(\pm\) 4.48\(e^{-0}\) & 0.9549 \(\pm\) 3.60\(e^{-3}\) \\ SPOTS-SOP & 0.0110 \(\pm\) 8.38\(e^{-4}\) & 0.0132 \(\pm\) 1.04\(e^{-3}\) & 77.4204 \(\pm\) 6.48\(e^{-0}\) & 0.9568 \(\pm\) 4.59\(e^{-3}\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Average scene prediction performance on the visually identical dataset, the edge case subset, and the anaesthetised test case; alongside these scores are the 95% confidence intervals.
improved performance in comparison to its larger but otherwise identical SPOTS counterpart.
The single-pipeline multi-modal prediction system SVTG performs worse than SVG across the whole household dataset but outperforms SVG on the visually identical dataset. This indicates that although the model produces poor visual predictions in scenes where tactile sensation is not required, the system is still capable of utilising tactile sensation for scenes where it is required. Thus, the poor visual prediction performance from this model indicates high object blur, not poor physical interaction understanding. These results are highlighted by the box and whisker plots in Fig. 8, showing the ability of the dual-pipeline systems (SPOTS, SPOTS-SOP, and SPOTS-small) to outperform other scene prediction methods, demonstrating an improved understanding of the physical dynamics in the system.
We show the test performance over extended prediction horizons in Fig. 9. These graphs cover the edge case
Fig. 10: Comparison of the different prediction models on the edge case test subset shown in Fig. 6. The prediction models used are the highest validation-scoring models from the set of 10 identical models trained. Predictions are shown for time-step t+5 for the different prediction models. The masked rows show the ground truth object's location in yellow with the predicted object image overlaid.
Fig. 9: These diagrams show the prediction performance over a long time-series horizon (15 prediction frames). The bold line represents the mean performance of the prediction models at each time-step, with the 95% confidence interval shaded. The models were trained up to 5 prediction frames, represented by the vertical black bar. The MAE, PSNR and SSIM performance metrics show that over longer prediction horizons the tactile-enabled models produce increasingly better performance in comparison to the non-tactile model SVG. The results shown are for the edge case test subset.
subset and show that the predictions of the tactile-enabled models become progressively more robust in comparison to the non-tactile model SVG. As uncertainty increases over the time horizon, these results show that although the non-tactile-enabled model produces similar performance at the beginning, its limited cause-effect understanding creates worsening predictions at extended prediction lengths. Likewise, the plots show that our model SVTG, despite producing significantly worse predictions during early time steps, has an improved cause-effect understanding that enables the system to outperform SVG over longer time horizons.
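The per-time-step curves in Fig. 9 can be produced by evaluating a metric at every step of the rollout rather than averaging over the horizon; a minimal sketch for MAE is given below (array shapes are assumptions).

```python
import numpy as np

def per_step_mae(pred_seq, true_seq):
    # pred_seq, true_seq: [T, H, W, C] predicted and ground-truth frames.
    # Returns the MAE at each of the T rollout steps, e.g. T = 15 when
    # predicting beyond the 5 frames used during training.
    return np.mean(np.abs(pred_seq - true_seq), axis=(1, 2, 3))
```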
Overall, quantitative analysis across the two datasets shows:
1. The inclusion of tactile sensation into physical robot interaction prediction models improves the agent's physical cause-effect understanding and interaction perception.
2. The bio-inspired dual-pipeline approach, SPOTS, outperforms other methods of tactile integration, even with the same number of parameters.
3. Tactile integration, done in the right way, leads to better generalisation to new objects and better prediction in edge cases such as visually identical objects with different physical properties.
4. Tactile integration enables models to achieve increasingly better prediction performance in comparison to the non-tactile-enabled model over extended prediction horizons.
### _Qualitative Scene Analysis_
Although quantitative analysis provides general insight into prediction performance, the performance metrics evaluate the whole image on a pixel-to-pixel comparison. To evaluate an agent's physical interaction perception and understanding we can perform qualitative analysis. The key performance feature we are looking to observe is the location of the interacted object. Although in some applications, the crispness and overall look of an object are more important than its location in the scene, for physical interaction perception, predicted object location is the essential task.
Fig. 11 shows the performance of models on the household clusters dataset. As in the quantitative analysis of the household cluster dataset, there is little deviation between prediction performances, which makes qualitative analysis unreliable. However, the visually identical dataset and the edge case subset provide a clearer insight into the models' physical interaction perception. Shown in Fig. 10 are the prediction results for each model at t+5 for the edge case trials shown in Fig. 6. Rows 2 and 4 show the ground truth object location at t+5 in the prediction horizon highlighted in yellow, overlaid with the given model's predicted object location. These rows highlight the error in the predicted object location: the smaller the yellow region, the more overlap there is between the predicted object location and the true object location.
Within this task, we can visually see that the tactile-enabled models are capable of creating more accurate object location predictions. SVG is incapable of understanding the physical properties of the object. Despite poor performance metric scores, SVTG predicts the most accurate object locations, suggesting that although the model produces more inaccurate predictions with respect to our performance metrics, its overall physical interaction perception is more accurate.
As shown in the quantitative analysis, model size appears to have little impact on physical interaction perception, with both SPOTS and SPOTS-small producing similarly strong predictions.
Overall, we observe that tactile sensation has a significant positive impact on video prediction performance. Despite the better prediction scores of SPOTS, SVTG shows a slightly better performance qualitatively. Nonetheless, all tactile-enabled models display significant improvements in comparison to the non-tactile video prediction model. The comparison on the household object cluster dataset indicates
Fig. 11: This figure shows a comparison of the different prediction models on the household cluster test set for time-steps \(\{t+1,t+3,t+5\}\) into the prediction horizon. The models used are the highest validation scoring models from their batch. We observe little qualitative difference between the models in this dataset.
that the tactile-enabled models perform to a similar standard within the context of predicting object location.
### _Anaesthetisation of Prediction Models_
To further analyse the impact of tactile sensation on physical interaction perception at test time, we test each model with the tactile data replaced by tactile values recorded without contact. By anaesthetising the agent's fingers, the system is unable to utilise the tactile features. The qualitative results of this test are shown in Fig. 12 and the quantitative results in Table III. We show qualitative analysis of the predictions at test time for the final time-step in the prediction horizon, as well as the predictions overlaid with the ground truth object location masked in yellow. Both quantitative and qualitative results show that the prediction performance of the tactile-enabled models is equal to that of the non-tactile-enabled comparison model, SVG.
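A minimal sketch of the anaesthetisation step, assuming the tactile stream is stored as a per-frame taxel tensor and that a mean no-contact reading has been recorded beforehand (shapes and names below are illustrative):

```python
import torch

def anaesthetise(tactile_seq: torch.Tensor,
                 no_contact_mean: torch.Tensor) -> torch.Tensor:
    # tactile_seq: [T, D] taxel readings over a trial (e.g. D = 48 for a
    # 4x4 sensor with normal, shear-X and shear-Y values per taxel).
    # no_contact_mean: [D] mean sensor reading when nothing is touched.
    # Every frame is overwritten with the no-contact baseline so the model
    # receives validly shaped but information-free tactile input.
    out = tactile_seq.clone()
    out[:] = no_contact_mean
    return out
```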
This experiment shows that the impact of tactile sensation on an agent's cause-effect understanding of the hidden state of robot-object interaction comes from its ability to update its internal understanding in real time. Given the tactile sensation from the context frames, the tactile-enabled models are capable of performing improved predictions. The experiment also shows that the unsupervised learning process does not improve the tactile-enabled models' ability to estimate the physical properties of an object from visual sensation alone. For evidence of improved visual understanding, we would expect to see the tactile-enabled models still performing more accurate predictions during tactile occlusion.
This experiment indicates the requirement of tactile sensation for an agent's physical interaction perception at test time, not just during the training process.
### _Tactile Prediction in Physical Robot Interactions_
In this section, we explore the tactile prediction aspect of some of the models implemented. Tactile prediction, although not the purpose of the simultaneous scene and tactile prediction models, is a key extracted feature that we wish to explore. Tactile prediction is a new area of research and has been shown to enable physical robot interactions to proactively adapt during manipulation tasks [21]. When analysing tactile prediction with the Xela uSkin tactile sensor, quantitative metrics like MAE, PSNR, and SSIM do not provide insight into tactile prediction performance, and qualitative analysis gives a more realistic representation of prediction performance [20]. We show both, but the discussion is focused on qualitative analysis.
Fig. 13 (c) shows the Mean Absolute Error, for the models that predict tactile sensation, over an extended time horizon of 15 frames on the edge-case subset. This problem provides a complex setting for tactile prediction as the physical properties of the object have not been previously seen. The tactile prediction performances are similar; however, there is a slight
Fig. 12: Comparison of the different prediction models on the edge case test subset shown in Fig. 6, with the mean tactile signal values recorded when the sensor is not being touched used as the tactile input (anaesthetisation). The models used are the highest validation-scoring models from their batch. Predictions are shown for time-step \(t+1\). By providing the tactile-enabled models with anaesthetised tactile readings (tactile occlusion), we can explore the impact of tactile data during test predictions. These results, in comparison with Fig. 10, show that the unsupervised training approach enables models to update their understanding of the object being interacted with from the sequence's context data alone, suggesting that the models do not have improved visual perception because of training with tactile sensation, but produce better results due to their ability to update their understanding of the physical properties of an object during the interaction.
decrease in the performance of the single-pipeline prediction system SVTG. This is most likely due to the impact of using the same prediction pipeline for both modalities. SVG has been shown to be a worse tactile prediction model than the ACTP model used in SPOTS [20]. Although multi-modality appears to positively impact tactile prediction with the SVG architecture, it is still worse than the ACTP system. This finding highlights the benefit of the dual-pipeline architectures, which allow separate architectures for each modality. In future work, if other sensations like sound are to be integrated, we believe the same approach would both aid auditory prediction and have a positive impact on scene prediction.
Qualitative comparison of tactile predictions between SVTG and the SPOTS architectures is not possible, as the two use different representations of the tactile data (SVTG uses a scaled-up image representation whereas SPOTS uses a flattened vector). Fig. 13 (a) and 13 (b) show tactile prediction of a single feature in the predicted tactile feature vector representing a normal force. Although over the longer time horizons the predictions are not accurate to the ground truth, for the trained length (5 frames) and a few frames after, the predictions show an ability to predict tactile sensation. Predicting both peaks (Fig. 13 (b)) and troughs (Fig. 13 (a)) in tactile sensation shows the models can predict the impact their future pushing action will have on the sensation they will feel. SPOTS-SOP appears to have slightly worse predictions during the early stages of the prediction horizon. This may be because the learned prior is a representation of the scene data only, rather than of both scene and tactile data; this lack of latent variable prediction in the tactile modality may negatively impact general tactile understanding and hence tactile prediction.
Overall, we find that both quantitatively and qualitatively the SPOTS model is capable of generating realistic tactile predictions across short prediction horizons, indicating that our multi-modal approach may further enable robotics in physical interaction tasks, as predicted tactile sensation can be used for more robust and safer physical interactions through model predictive tactile control approaches [21].
### _Discussion and Limitations_
Although we use a simple low-resolution sensor, we believe research into which features and attributes of tactile sensation are useful for physical interaction perception is an interesting and required avenue for future research.
Both quantitative and qualitative results show that the inclusion of tactile sensation in visual prediction models produces more accurate scene predictions and an overall improved physical interaction perception.
Quantitative analysis, which assesses the accuracy of images on a pixel-by-pixel basis, shows the SPOTS multi-modal prediction architecture produces the most accurate predictions. SPOTS was also capable of better generalisation to new objects.
We believe SPOTS performs well at this level because the dual pipeline system enables the processing of tactile sensation to be external to the visual processing system, thus enabling the visual processing pipeline to produce more crisp predictions. This is evident as the single pipeline version, SVTG, produces significantly worse results by attempting to combine each of the modalities into one multi-modal prediction architecture.
Qualitative analysis allows us to assess the physical interaction perception of the models by analysing predicted object location. We observed that all the tactile-integrated prediction architectures were capable of more accurate prediction when presented with visually identical scenes containing objects of different physical properties. In the qualitative analysis section, we observe that SVTG, the worst-performing architecture in the quantitative analysis, performed best, suggesting that there is
Fig. 13: (a, b) Tactile predictions on the edge-case subset for two separate cases. Each graph shows a single normal-force taxel from the centre of the tactile sensor. The models are trained to 5 prediction steps, represented by the bold vertical black line; we show an extended prediction horizon of 15 future time steps. SVTG predicts a visual representation of the tactile data, so we omit its tactile image prediction results from these plots. (c) This plot shows the Mean Absolute Error tactile prediction performance over the same extended time horizon (15 frames) for the edge-case subset. The bold lines represent the mean performance of the prediction models at each time-step, with the 95% confidence interval shaded.
a trade-off between realistic visual predictions and physical interaction perception. SVTG's object location predictions are the best, but its predicted object structure is the least visually realistic.
Overall, the best architecture depends on the downstream application. For realistic visual predictions, the SPOTS architecture is preferable, whereas for physical interaction perception, SVTG performs best. Independent of the downstream application, these experiments show that an agent performing physical robot interactions should utilise tactile sensation, as it improves the agent's physical interaction perception.
The household cluster dataset produced poor prediction results from all models tested; the conclusion about the benefits of tactile sensation in physical interaction perception was mostly drawn from the second dataset. Improved models to predict the future scene frames for the household cluster dataset remain open for future work. Moreover, increasing the size of this dataset may contribute to better performance of the models. The household cluster dataset contains 240'000 frames; however, similar datasets can be as large as 1'500'000 frames [8]. Furthermore, although objects in the dataset had complex geometry, they often had uni-material surfaces with evenly distributed weight; including other objects may make the task more representative. Finally, as dataset size can impact the rate of development and the multi-modal approach to physical interaction perception is novel, reducing the number of objects could enable meaningful comparison whilst allowing fast training for development.
Parameter size has an impact on prediction performance. The SPOTS-small model achieved better performance in comparison to SPOTS, suggesting that a reduced parameter size can benefit prediction. SVTG has a larger model size to account for the new modality (the hidden layers were doubled from 256 to 512 within the frame prediction network). Unlike SVTG, the SPOTS system uses the ACTP prediction model for its tactile prediction pipeline, which is a very small network (making up \(\approx 14\%\) of the network parameters). One limitation of this study is that the large parameter size of the SVTG model may be negatively impacting its qualitative performance; in future work, a smaller hidden layer size may produce better results. Moreover, future work includes investigating an optimal model size.
## VII Conclusion
In conclusion, we presented a novel approach to improve video prediction accuracy in physical robot interactions by utilising tactile sensation as a second sensory modality. Our multi-modal approach was explored with a variety of possible model architectures, and we showed that the inclusion of tactile sensation has a positive impact on video prediction accuracy during robot pushing. Moreover, we demonstrated that the simultaneous prediction of tactile and image data has the greatest positive impact, suggesting that multi-modal prediction models are able to utilise the predictions of the opposite modality to boost prediction performance and physical interaction perception.
While our work presents baseline approaches, we believe that the increased benefit of prediction understanding with multi-modal prediction systems can be explored in all physical robot interaction tasks, such as object grasping and manipulation, human-robot interaction, and tactile exploration. We also believe that the introduction of auditory sensation to these prediction systems may further increase an agent's physical interaction perception and cause-effect understanding, enabling interaction in even more complex scenes. Furthermore, another direction for future work is exploring the wide range of tactile sensing devices and their attributes, such as temperature, vibration, and texture sensing.
Our approach to video prediction in physical robot interaction enables a wide range of future works, and we believe that the unsupervised learning approach makes the development of models in this setting simple and cost-effective, further enabling future work in physical interaction domains such as robotic surgery, human-robot interaction, elderly care, and food processing and harvesting. Overall, our work highlights the potential of multi-modal prediction models for physical interaction perception and presents exciting opportunities for future research in this field.
## Acknowledgments
Thank you to Jon Flynn, Karoline Heiwoolt and Kiyanoush Nazari for your important discussions on this work.
|
2306.07188 | Inference-time Stochastic Ranking with Risk Control | Learning to Rank (LTR) methods are vital in online economies, affecting users
and item providers. Fairness in LTR models is crucial to allocate exposure
proportionally to item relevance. Widely used deterministic LTR models can lead
to unfair exposure distribution, especially when items with the same relevance
receive slightly different ranking scores. Stochastic LTR models, incorporating
the Plackett-Luce (PL) ranking model, address fairness issues but suffer from
high training cost. In addition, they cannot provide guarantees on the utility
or fairness, which can lead to dramatic degraded utility when optimized for
fairness. To overcome these limitations, we propose Inference-time Stochastic
Ranking with Risk Control (ISRR), a novel method that performs stochastic
ranking at inference time with guaranteed utility or fairness given pretrained
scoring functions from deterministic or stochastic LTR models. Comprehensive
experimental results on three widely adopted datasets demonstrate that our
proposed method achieves utility and fairness comparable to existing stochastic
ranking methods with much lower computational cost. In addition, results verify
that our method provides finite-sample guarantee on utility and fairness. This
advancement represents a significant contribution to the field of stochastic
ranking and fair LTR with promising real-world applications. | Ruocheng Guo, Jean-François Ton, Yang Liu, Hang Li | 2023-06-12T15:44:58Z | http://arxiv.org/abs/2306.07188v3 | # Fair Learning to Rank with Distribution-free Risk Control
###### Abstract.
Learning to Rank (LTR) methods are vital in online economies, affecting users and item providers. Fairness in LTR models is crucial to allocate exposure proportionally to item relevance. Deterministic ranking models can lead to unfair exposure distribution when items with the same relevance receive slightly different scores. Stochastic LTR models, incorporating the Plackett-Luce (PL) model, address fairness issues but have limitations in computational cost and performance guarantees. To overcome these limitations, we propose FairLTR-RC, a novel post-hoc, model-agnostic method. FairLTR-RC leverages a pretrained scoring function to create a stochastic LTR model, eliminating the need for expensive training. Furthermore, FairLTR-RC provides finite-sample guarantees on a user-specified utility using the distribution-free risk control framework. By additionally incorporating the Thresholded PL (TPL) model, we are able to achieve an effective trade-off between utility and fairness. Experimental results on several benchmark datasets demonstrate that FairLTR-RC significantly improves fairness in widely-used deterministic LTR models while guaranteeing a specified level of utility.
Ruocheng Guo, Jean-Francois Ton and Yang Liu. 2023. Fair Learning to Rank with Distribution-free Risk Control. In Proceedings of (Conference acronym 2X), ACM, New York, NY, USA, 13 pages. [https://doi.org/XXXXXXXX.XXXXXX](https://doi.org/XXXXXXXX.XXXXXX)
## 1. Introduction
Learning to rank (LTR) relies on machine learning to optimize rankings of items in applications such as search and recommendation (Grover et al., 2016; Chen et al., 2017). LTR models play a vital role in online multi-sided economies involving users, item providers, and the platform (e.g., an e-commerce website), where they have an impact on the exposure of items. They influence the economic outcomes of entities such as sellers, job candidates, and content creators (Bianchi et al., 2017; Chen et al., 2017).
A LTR model is typically composed of two components. The first component is a scoring function. Given a query and a set of candidate items to be recommended, it predicts ranking scores for these items based on their predicted relevance to the user's query. The second component is a ranking model, which generates a ranked list of items using the scores from the first stage. Traditional LTR models generally employ deterministic ranking models, such as sorting items according to their ranking scores.
Given the growing impact of LTR on online platforms, the demand for fair allocation of exposure among items (Bianchi et al., 2017; Chen et al., 2017; Chen et al., 2017; Chen et al., 2017) has significantly increased. In the current literature, fair allocation dictates that the exposure of an item in ranked lists should be proportional to its relevance to the query. However, deterministic ranking models can often result in unfair distribution of exposure. For instance, with a pretrained scoring function that is not 100% accurate, two products with identical relevance can have slightly different ranking scores. With deterministic ranking models, this would result in severely unequal allocation of exposure, as the item with the higher ranking score will always be ranked at a higher position (Bianchi et al., 2017).
In response to the inherent issue of deterministic LTR models w.r.t. exposure-based fairness, there has been a shift towards stochastic LTR models. One representative approach incorporates the Plackett-Luce (PL) ranking model (Chen et al., 2017). The PL ranking model predicts a distribution over ranked lists based on ranking scores. This enables us to sample multiple ranked lists from it, significantly improving exposure fairness, especially in cases where multiple items have slightly different scores but the same relevance (Bianchi et al., 2017).
However, challenges arise when integrating scoring functions from deterministic models into the PL model, as they are not designed to optimize the expected performance under predicted ranking distributions. In addition, training scoring functions with the PL model is computationally intensive, requiring computing gradients from numerous sampled ranking lists. Finally, the lack of guarantees on ranking performance when we replace deterministic LTR models with the existing stochastic ones presents a significant obstacle to their widespread adoption in real-world applications.
To address these challenges, we present Fair Learning to Rank with Distribution-free Risk Control (FairLTR-RC), a post-hoc, model-agnostic approach for exposure-based fairness in LTR that incorporates the framework of conformal prediction (Krause et al., 2017; Li et al., 2017) into the LTR setting. Our proposed method incorporates a novel partially stochastic ranking model - the Thresholded PL (TPL) ranking model - which offers a delicate trade-off between fairness and utility in a post-hoc manner. TPL can work with a pretrained scoring function from deterministic LTR models, which circumvents the expensive training needed by existing stochastic LTR models. In addition, FairLTR-RC provides theoretically supported finite-sample guarantees on utility, ensuring that the utility of our LTR models will not fall below a predetermined threshold even in constrained data settings.
The contributions of this paper are as follows:
* First, we propose FairLTR-RC, a post-hoc, model-agnostic method that efficiently transforms pretrained scoring functions from deterministic LTR models into stochastic ones, thus avoiding expensive training procedures.
* Second, our method extends distribution-free risk control to LTR. FairLTR-RC achieves a specified level of utility with high probability, despite its stochastic nature.
* Third, extensive experimental results on popular LTR datasets show that FairLTR-RC enhances fairness of various scoring functions (CatBoost (Cai et al., 2017), LightGBM (Cai et al., 2018), and Neural Networks) pretrained with deterministic ranking models, while maintaining the specified level of utility.
## 2. Preliminaries
In this section, we begin by outlining the notation used throughout the paper. Next, we provide a formal definition of a LTR model, which consists of a scoring function and a ranking model. Following that, we introduce definitions for utility and exposure-based fairness measures within the context of Learning to Rank (LTR). Lastly, we conclude this section by presenting our problem statement.
**Notations.** For a query \(q\), there exist \(n_{q}\) candidate documents \(\mathcal{D}^{q}=\{d_{1}^{q},...,d_{n_{q}}^{q}\}\) to be ranked. Each document \(d_{i}^{q}\) is described by a tuple \((\mathbf{x}_{i}^{q},\rho(d_{i}^{q}))\), where the feature vector \(\mathbf{x}_{i}^{q}\in\mathcal{X}\) describes the item and its relationship to the query \(q\). For example, features used in e-commerce search can include the price of the item and the average price of items clicked from the query. \(\rho(d_{i}^{q})\) is the relevance of document \(d_{i}^{q}\) annotated by human experts. We assume that the relevance is given for each item corresponding to the queries in the training, validation, and calibration sets, but unknown for the test set. For simplicity of notation, we will omit the subscript \(i\) and the superscript \(q\) when they are not necessary. A top-K ranking \(\mathbf{y}=[y_{1},...,y_{K}]\in\mathcal{Y}\) is a sorted list of \(K\) items, where \(y_{k}=d\) means item \(d\) is ranked at the \(k\)-th position in \(\mathbf{y}\), and \(\mathcal{Y}\) is the space of permutations. Let \(\mathbf{y}_{1:k}\) be the sublist including the first \(k\leq K\) elements of \(\mathbf{y}\).
**Scoring Function and Ranking Model.** Here, we formally define the scoring function and the ranking model of a LTR model. First, given query \(q\) and its item set \(\mathcal{D}^{q}\), a scoring function \(f:\mathcal{X}\rightarrow\mathbb{R}\) maps the feature vector of each item \(d\) to its ranking score \(s(d)\). In this work, we assume that the scoring function \(f\) is fixed and the ranking scores \(s(d)\) are given for \(d\in\mathcal{D}^{q}\). Second, a ranking model \(\pi:\mathbb{R}^{n_{q}}\times\mathcal{Y}\rightarrow[0,1]\) maps the scores of all items in \(\mathcal{D}^{q}\) and a ranking \(\mathbf{y}\) to the probability of \(\mathbf{y}\) being sampled. Thus, a ranking model \(\pi(\{s(1),...,s(n_{q})\},\mathbf{y})\) predicts a distribution of rankings for each query \(q\). For simplicity of notation, we denote the predicted distribution of rankings for query \(q\) as \(\pi^{q}(\mathbf{y})\). A deterministic LTR model has \(\pi^{q}(\mathbf{y})=1\) for a certain ranking \(\mathbf{y}\) and \(\pi^{q}(\mathbf{y}^{\prime})=0\) for \(\mathbf{y}^{\prime}\neq\mathbf{y}\), while a stochastic model can have \(\pi^{q}(\mathbf{y})>0\) for multiple different rankings \(\mathbf{y}\).
The PL ranking model is adopted for improving exposure-based fairness (Fan et al., 2015; Chen et al., 2016; Chen et al., 2016). PL models predict a distribution of rankings for fairer allocation of exposure among items.
Given query \(q\) and the set of items \(\mathcal{D}^{q}\), their ranking scores \(s(d),d\in\mathcal{D}^{q}\), and the sampled items for positions \(1,...,k-1\), denoted by \(\mathbf{y}_{1:k-1}\), the PL ranking model (Chen et al., 2016) samples an item \(d\) for position \(k\) from \(\pi_{PL}(\mathbf{y})=\prod_{k=1}^{K}p_{PL}(y_{k}|\mathbf{y}_{1:k-1})\) with:
\[p_{PL}(d|\mathbf{y}_{1:k-1})=\frac{\mathds{1}(d\notin\mathbf{y}_{1:k-1})\exp(s(d)/\tau)}{\sum_{d^{\prime}\in\mathcal{D}^{q}\setminus\mathbf{y}_{1:k-1}}\exp(s(d^{\prime})/\tau)}, \tag{1}\]
where \(\tau\) is the temperature. However, training such stochastic LTR models is expensive: it requires sampling at least 100 ranking lists for each query (Fan et al., 2015; Chen et al., 2016; Chen et al., 2016) and computing gradients of model parameters based on these samples.
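As an illustration, the sketch below samples one top-K ranking from the PL model by sequentially drawing items without replacement according to Eq. (1); this is a plain NumPy rendering for exposition, not the implementation of any particular LTR library.

```python
import numpy as np

def sample_pl_ranking(scores, K, tau=1.0, rng=None):
    # scores: ranking scores s(d) for all candidate documents of one query.
    rng = rng if rng is not None else np.random.default_rng()
    remaining = list(range(len(scores)))
    ranking = []
    for _ in range(K):
        # Softmax over the items not yet placed, as in Eq. (1).
        logits = np.array([scores[d] for d in remaining]) / tau
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        chosen = int(rng.choice(remaining, p=probs))
        ranking.append(chosen)
        remaining.remove(chosen)
    return ranking
```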
**Utility and Fairness Metrics.** Given the definitions above, here, we define the utility and exposure-based fairness for a LTR model. In LTR, the utility function considers the ranking of each item by weighting each position \(k\) with weight \(\theta_{k}\). The utility of a ranking model \(\pi\) on query \(q\) can be defined as (Chen et al., 2016):
\[U^{q}(\pi)=\sum_{\mathbf{y}\in\mathcal{Y}}\pi^{q}(\mathbf{y})\sum_{k=1}^{K} \theta_{k}\cdot\rho(y_{k}),\]
which leads to the overall utility \(U(\pi)=\mathbb{E}_{q}[U^{q}(\pi)]\). If we choose \(\theta_{k}=\frac{\mathds{1}[k\leq K]}{\log_{2}(1+k)}\), then \(U^{q}(\pi)\) is DCG@K. Let iDCG@K be the maximal DCG@K for a given query \(q\); then \(U^{q}(\pi)\) is NDCG@K if \(\theta_{k}=\frac{\mathds{1}[k\leq K]}{\log_{2}(1+k)\times\text{iDCG@K}}\), which measures the normalized exposure of items ranked at position \(k\). In this work, we consider a bounded utility function \(U^{q}(\pi)\in[0,1]\). Thus, the utility risk to be controlled is \(R_{util}(\pi)=1-U(\pi)\), e.g., \(1-\text{NDCG@K}\).
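For concreteness, a small sketch of DCG@K and NDCG@K under the position weights above (helper names are ours; ties in the ideal ranking are broken arbitrarily):

```python
import numpy as np

def dcg_at_k(rels_in_rank_order, K):
    # Sum of rel / log2(1 + k) over the top-K positions (k starts at 1).
    rel = np.asarray(rels_in_rank_order, dtype=float)[:K]
    return float(np.sum(rel / np.log2(np.arange(2, len(rel) + 2))))

def ndcg_at_k(ranking, relevance, K):
    # ranking: top-K list of document ids; relevance: relevance per document.
    dcg = dcg_at_k([relevance[d] for d in ranking], K)
    ideal = dcg_at_k(sorted(relevance, reverse=True), K)
    return dcg / ideal if ideal > 0 else 0.0
```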
Fairness in ranking deals with the allocation of exposure over items. Exposure measures the probability that users examine a certain position. The widely used utility metric NDCG@K is based on a logarithmic reduction of exposure with position. To measure item exposure fairness, we first define the exposure of item \(d\) under the ranking model \(\pi\) as
\[\mathcal{E}^{q}(d;\pi)=\sum_{\mathbf{y}}\pi^{q}(\mathbf{y})\sum_{k=1}^{K} \theta_{k}\cdot\mathds{1}[y_{k}=d],\]
where \(\mathds{1}[y_{k}=d]\theta_{k}\) is the exposure of item \(d\) in the ranking \(\mathbf{y}\). Intuitively, it measures the mean exposure of item \(d\) in the rankings sampled from the predicted distribution \(\pi^{q}(\mathbf{y})\). Let \(\mathcal{E}(d)\) denote \(\mathcal{E}^{q}(d;\pi)\) when \(q\) and \(\pi\) can be dropped. Based on this, we can define a disparity measure for exposure-based fairness in ranking.
_Exposure-based Fairness in Ranking (Fan et al., 2015; Chen et al., 2016; Chen et al., 2016)_. In this work, we focus on the fair allocation of exposure to items. Singh and Joachims (Fan et al., 2015) first propose a fairness notion: the exposure of an item \(\mathcal{E}(d)\) should be proportional to its relevance \(\rho(d)\). They compute the average difference of the exposure-relevance ratio \(\frac{\mathcal{E}(d)}{\rho(d)}-\frac{\mathcal{E}(d^{\prime})}{\rho(d^{\prime})}\) between each pair of items for each query. Oosterhuis (Oosterhuis, 2015) proposes a variant of this disparity metric, which handles the case of items that have \(0\) relevance but are ranked in the top-K. Given the ranking model \(\pi\), the disparity measure for exposure-based fairness is defined as (Fan et al., 2015; Chen et al., 2016)
\[R_{fair}^{q}(\pi)=\frac{2\sum_{d\in\mathcal{D}^{q}}\sum_{d^{\prime}\in\mathcal{D} ^{q}_{vd}}\ell(\mathcal{E}^{q}(d;\pi)\rho(d^{\prime}),\mathcal{E}^{q}(d^{\prime}; \pi)\rho(d))}{|\mathcal{D}^{q}|(|\mathcal{D}^{q}|-1)},\]
where \(\mathcal{D}^{q}_{\neg d}\) denotes \(\mathcal{D}^{q}\setminus\{d\}\) and \(\ell(a,b)\) is \((a-b)^{2}\). Let \(R_{fair}(\pi)=\mathbb{E}_{q}[R_{fair}^{q}(\pi)]\) be the expectation over queries. Intuitively, \(R_{fair}^{q}(\pi)\) measures how the exposure of items under the ranking model \(\pi\) differs from the ideal case where exposure is proportional to relevance, i.e., \(\frac{\mathcal{E}(d)}{\rho(d)}=\frac{\mathcal{E}(d^{\prime})}{\rho(d^{\prime})}\) for all pairs of items \(d,d^{\prime}\in\mathcal{D}^{q}\).
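A sketch of how \(R_{fair}^{q}\) can be estimated in practice: exposures are approximated by averaging position weights over rankings sampled from \(\pi^{q}\), and the squared disparity is accumulated over ordered item pairs exactly as in the formula above (function names are illustrative):

```python
import numpy as np

def estimate_exposure(sampled_rankings, n_docs, K):
    # Average position weight theta_k received by each document across
    # rankings sampled from the (stochastic) ranking model.
    theta = 1.0 / np.log2(np.arange(2, K + 2))
    exposure = np.zeros(n_docs)
    for y in sampled_rankings:
        for k, d in enumerate(y[:K]):
            exposure[d] += theta[k]
    return exposure / len(sampled_rankings)

def disparity(exposure, rho):
    # Squared cross-ratio disparity, accumulated over ordered pairs d != d'.
    n = len(rho)
    total = sum((exposure[d] * rho[e] - exposure[e] * rho[d]) ** 2
                for d in range(n) for e in range(n) if e != d)
    return 2.0 * total / (n * (n - 1))
```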
**Problem Statement.** This work examines a practical setting: for any pretrained scoring function \(f\), we aim to improve the exposure-based fairness of the LTR model via a post-hoc method while maintaining a satisfactory level of utility (e.g., NDCG@K no less than a certain level) with high probability.
Given a query \(q\), its candidate items \(\mathcal{D}^{q}\), and a fixed scoring function \(f\), the goal is to optimize the ranking model \(\pi\) to minimize the disparity with a simultaneous guarantee on the utility, i.e.,
\[\min_{\pi}R_{fair}(\pi)\ s.t.\ P(R_{util}(\pi)\leq\alpha)\geq 1-\delta, \tag{2}\]
where \(\alpha\in(0,1)\ (1-\alpha)\) is the desired risk (utility) level, and \(1-\delta\in(0,1)\) is the desired coverage rate.
## 3. Methodology
In this section, the background of distribution-free risk control is introduced, followed by the proposed framework.
### Background: Distribution-free Risk Control
Distribution-free risk control is a post-hoc, model-agnostic method based on split conformal prediction. It uses a calibration set \(\mathcal{Q}_{cal}\) to determine the values of a set of model parameters such that a specified risk is controlled.
Let \(\mathcal{T}(\lambda)\) be a set-valued function with a scalar parameter \(\lambda\) (e.g., a threshold on item scores) that predicts a set of items. Given a bounded risk function \(R(\mathcal{T}(\lambda))\in[0,B]\) that measures the expected loss of \(\mathcal{T}(\lambda)\) over queries, we define a Risk Controlling Prediction Set (Han et al., 2017). For simplicity, let \(R(\lambda)\) denote \(R(\mathcal{T}(\lambda))\). Distribution-free risk control (Han et al., 2017) uses an independent and identically distributed (i.i.d.) data split as the calibration set \(\mathcal{Q}_{cal}\) to select \(\lambda\) for the set-valued functions \(\mathcal{T}(\lambda)\) s.t. the risk function \(R\) is guaranteed on the test set \(\mathcal{Q}_{test}\). In our setting, set-valued functions predict a set of items for each position in the ranking. The connection between set-valued functions and ranking models is shown in Section 3.2.
**Risk Controlling Prediction Set (Han et al., 2017).** Given a desired risk level \(\alpha\in[0,B]\) and tolerance rate \(\delta\in(0,1)\), a set-valued function \(\mathcal{T}(\lambda)\) is a \((\alpha,\delta)\) risk-controlling prediction set iff
\[P(R(\lambda)\leq\alpha)\geq 1-\delta \tag{3}\]
Intuitively, in our setting, this means the probability of observing the risk function \(R\leq\alpha\) is at least \(1-\delta\) across repeated runs with different random data splits when the set-valued function \(\mathcal{T}(\lambda)\) is applied. Then, the following assumptions are employed.
* Nesting Properties, \[\lambda<\lambda^{\prime}\Rightarrow\mathcal{T}(\lambda)\subset\mathcal{T}( \lambda^{\prime}),\ \mathcal{S}\subset\mathcal{S}^{\prime}\Rightarrow R(\mathcal{S})\geq R( \mathcal{S}^{\prime})\]
* Existence of an upper confidence bound (UCB) \(\hat{R}^{+}(\lambda)\) for the risk function. It satisfies \(P(R(\lambda)\leq\hat{R}^{+}(\lambda))\geq 1-\delta\).
Under the aforementioned assumptions, Bates et al. (Han et al., 2017) propose to select the threshold \(\hat{\lambda}\) on the calibration set \(\mathcal{Q}_{cal}\) s.t. \(\mathcal{T}(\hat{\lambda})\) is a \((\alpha,\delta)\) risk-controlling prediction set. Intuitively, they select the \(\lambda\) s.t. any \(\lambda^{\prime}\geq\lambda\) leads to a UCB smaller than the desired level \(\alpha\), as
\[\hat{\lambda}=\inf\{\lambda\in\Lambda:\hat{R}^{+}(\lambda^{\prime})<\alpha, \forall\lambda^{\prime}\geq\lambda\}, \tag{4}\]
Then, (Han et al., 2017) extends risk control to cases where the nesting properties are violated, which also allows multi-dimensional thresholds. The crux of (Han et al., 2017) is hypothesis testing via the duality of p-values and concentration inequalities: it selects a threshold by rejecting the corresponding null hypothesis \(R(\mathbf{\lambda})>\alpha\) with a p-value smaller than \(\delta\), where \(\mathbf{\lambda}\) is the vector representing a multi-dimensional threshold. In Section 3.3, we present concrete instantiations of both UCB-based and p-value-based risk control for ranking.
It can be infeasible to control risk at every level of \(\alpha\) for every data distribution (Han et al., 2017). For example, guaranteeing NDCG@K \(\geq 0.9\) may be unattainable given a subpar fixed scoring function \(f\), in which case risk control methods should abstain from returning a threshold.
### Thresholded PL Ranking Model
Here, we propose the Thresholded PL (TPL) ranking model. Applying TPL on top of pretrained scoring functions from deterministic LTR models achieves an effective utility-fairness trade-off. The TPL model is built upon set-valued functions, which enables distribution-free risk control for LTR. With parameters of the set-valued functions obtained from risk control algorithms, the TPL model provides a guarantee on a specified risk function.
Suppose we have access to a risk control score \(\tilde{s}(d)\) for each document \(d\), which approximates the relevance of the item. We let \(\tilde{s}(d)\) be a function of the provided ranking score \(s(d)\) from the fixed scoring function \(f\); specifically, we choose it to be the probability of sampling item \(d\) at the first position in the PL model, \(p(d|\emptyset,0)\). A more detailed discussion on the choice of risk control score can be found in Appendix D. For each position \(k\), the TPL ranking model uses a set-valued function \(\mathcal{T}(\lambda_{k})\) to select items whose predicted scores are high enough, where \(\lambda_{k}\) is the threshold parameter for position \(k\):
\[\mathcal{T}(\lambda_{k})=\{d\in\mathcal{D}^{q}\setminus\mathbf{y}_{1:k-1}\,|\,\tilde{s}(d)\geq\lambda_{k}\}, \tag{5}\]
where \(\mathbf{y}_{1:k-1}=\emptyset\) if \(k=1\). For each position \(k\), TPL creates a distribution of the items selected based on the set-valued function \(\mathcal{T}(\lambda_{k})\) defined in Eq. (5) and then combines them to predict a distribution of rankings as:
\[\begin{split} p(d|\mathbf{y}_{1:k-1},\lambda_{k})&=\frac{\mathds{1}(d\in\mathcal{T}(\lambda_{k}))\exp(s(d)/\tau)}{\sum_{d^{\prime}\in\mathcal{T}(\lambda_{k})}\exp(s(d^{\prime})/\tau)},\\ \pi(\mathbf{y})&=\prod_{k=1}^{K}p(y_{k}|\mathbf{y}_{1:k-1},\lambda_{k}),\end{split} \tag{6}\]
When \(\lambda_{k}\) takes extreme values, the TPL model will reduce to the PL and the deterministic ranking model. Specifically, when \(\lambda_{k}=0\) and \(\lambda_{k}\geq\max(\{\tilde{s}_{d}\}_{d\in\mathcal{D}^{q}\setminus\mathbf{y}_ {1:k-1}})\) for \(k=1,...,K\), TPL is equivalent to PL and the deterministic ranking model, respectively. We verify this empirically in Section 4.2 (see Fig. 1).
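A TPL sampler differs from a PL sampler only in restricting each position's softmax to the thresholded candidate set of Eq. (5); a sketch follows (it assumes each candidate set is non-empty, e.g., that thresholds never exceed the largest remaining risk-control score):

```python
import numpy as np

def sample_tpl_ranking(scores, rc_scores, lambdas, tau=1.0, rng=None):
    # scores: ranking scores s(d); rc_scores: risk-control scores s~(d);
    # lambdas: per-position thresholds [lambda_1, ..., lambda_K].
    rng = rng if rng is not None else np.random.default_rng()
    placed, ranking = set(), []
    for lam in lambdas:
        # Eq. (5): remaining items whose risk-control score clears lambda_k.
        cand = [d for d in range(len(scores))
                if d not in placed and rc_scores[d] >= lam]
        assert cand, "threshold leaves no candidate for this position"
        # Eq. (6): PL softmax restricted to the thresholded candidate set.
        logits = np.array([scores[d] for d in cand]) / tau
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        chosen = int(rng.choice(cand, p=probs))
        ranking.append(chosen)
        placed.add(chosen)
    return ranking
```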
In conformal prediction, it is often desired to have a small prediction set size \(|\mathcal{T}(\lambda_{k})|\), which is not the case here. To achieve the goal described by the problem statement (Eq. (2)), our method first finds a set of \(\lambda_{k}\) that is large enough for maintaining a guaranteed level of utility then adopts the minimal \(\lambda_{k}\) in the set for optimizing exposure-based fairness. More specifically, we aim to minimize \(R_{fair}\) under the constraint that the utility is guaranteed, which is equivalent to maximizing \(|\mathcal{T}(\lambda_{k})|\) with items whose scores are at least \(\lambda_{k}\). We discuss selecting \(\lambda_{k}\) through distribution-free risk control in Section 3.3. With the set-valued function \(\mathcal{T}(\lambda_{k})\), TPL can adapt to the distribution of scores to achieve this goal, compared to the PL model which samples any item from \(\mathcal{D}^{q}\setminus\mathbf{y}_{1:k-1}\) and the deterministic model which only takes the item with the highest score. When there are multiple items with high and similar scores, it is desired to include all of them in the prediction set. When there are two items with scores much higher than others, the prediction set should only include them.
### Risk Control with Thresholded PL Model
Here, we describe the distribution-free risk control algorithm that selects thresholds for the TPL model to provide provable guarantees on the bounded risk function \(R_{util}\). Given user-specified desired utility level \(1-\alpha\) for a bounded list-wise utility function \(U\) (e.g., NDCG@K), our method utilizes the calibration set \(\mathbf{Q}_{cal}\) to find thresholds that provide guaranteed utility. Note that existing post-hoc methods for exposure-based fairness (Beng et al., 2019) are not able to provide such guarantees as they blindly optimize a weighted combination of utility and fairness objectives.
**Selecting Thresholds via Risk Control.** We leverage distribution-free risk control (Krause et al., 2019; Krause et al., 2019) to learn thresholds \(\mathbf{\lambda}=[\lambda_{1},...,\lambda_{K}]\) for top-K positions s.t. the risk of the ranking model is under control, i.e., \(P(R_{util}(\mathbf{\pi})\leq\alpha)\geq 1-\delta\), where \(R_{util}(\mathbf{\pi})=R_{util}(\mathbf{\pi}(\mathbf{\lambda}))\). When the threshold is a vector, the nesting properties may not hold.
The risk control algorithm works as follows. First, we specify a search space \(\Lambda\) for the thresholds. Each value \(\mathbf{\lambda}\in\Lambda\) corresponds to a null hypothesis \(R_{util}(\mathbf{\lambda})>\alpha\). Then, for each value of \(\mathbf{\lambda}\), we test the null hypothesis on the calibration set \(\mathbf{Q}_{cal}\), which is assumed to be exchangeable with the test data \(\mathbf{Q}_{test}\) (Krause et al., 2019). Specifically, we aim to obtain the rejected values of \(\mathbf{\lambda}\) through the hypothesis testing, which is associated with a specified UCB \(\hat{R}^{+}\) for the risk \(R_{util}\) as
\[\hat{\Lambda}=\{\mathbf{\lambda}\in\Lambda|\hat{R}^{+}(\mathbf{\lambda})<\alpha\} \tag{8}\]
Finally, we choose \(\hat{\mathbf{\lambda}}=\arg\min_{\mathbf{\lambda}\in\hat{\Lambda}}R_{fair}(\mathbf{ \lambda})\) to optimize fairness.
However, the computation can be prohibitive if a brute-force grid search is performed to test all possible values of \(\mathbf{\lambda}\) from a predefined grid with \(M\) values for each \(\lambda_{k}\): this requires computing \(\hat{R}^{+}(\mathbf{\lambda})\) for \(M^{K}\) combinations, each of which includes computing the risk \(R_{util}\) on the calibration set. In this work, we overcome this issue by limiting the search space of \(\mathbf{\lambda}\) with a heuristic: we simply let each position use the same threshold \(\lambda\), which empirically performs well.
**Risk Control for LTR.** Here, we provide concrete instantiations of the UCB (Krause et al., 2019) and p-value-based risk control (Krause et al., 2019), which are crucial for determining the risk-controlling thresholds \(\hat{\Lambda}\).
First, using the duality of concentration inequalities and p-values in hypothesis testing, we adopt the widely used Hoeffding-Bentkus (HB) inequality (Krause et al., 2019; Krause et al., 2019; Krause et al., 2019), which combines the Hoeffding and Bentkus inequalities by taking the minimum of their p-values. The p-value associated with the HB inequality is a function of the mean risk \(\hat{R}_{util}(\lambda)\) on the calibration set and the number of calibration queries \(|\mathbf{Q}_{cal}|\):
\[\begin{split} p^{HB}(\lambda)=\min\big(&\exp(-|\mathbf{Q}_{cal}|h_{1}(\hat{R}_{util}(\lambda)\wedge\alpha,\alpha)),\\ &\exp(1)\times P(\mathrm{Bin}(|\mathbf{Q}_{cal}|,\alpha)\leq\lceil|\mathbf{Q}_{cal}|\hat{R}_{util}(\lambda)\rceil)\big),\end{split} \tag{9}\]
where \(h_{1}(a,b)=a\log(\frac{a}{b})+(1-a)\log(\frac{1-a}{1-b})\), \(\mathrm{Bin}(|\mathbf{Q}_{cal}|,\alpha)\) is the Binomial distribution and \(\lceil a\rceil\) takes the ceiling of the scalar \(a\). Given the Hoeffding-Bentkus p-values computed by Eq. (9), we can obtain the set of selected thresholds \(\hat{\Lambda}=\{\lambda\in\Lambda|p^{HB}(\lambda)<\delta\}\) based on the p-value of each hypothesis, and then take the minimal \(\lambda\in\hat{\Lambda}\), as it heuristically minimizes the disparity measure \(\hat{R}_{fair}\).
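For concreteness, a sketch of the HB p-value of Eq. (9) and the resulting threshold selection (our illustration; the function names are ours, and only `scipy.stats.binom` is assumed from the library):

```python
import numpy as np
from scipy.stats import binom

def h1(a, b):
    # Binary KL divergence: h1(a, b) = a*log(a/b) + (1-a)*log((1-a)/(1-b)).
    eps = 1e-12
    a, b = np.clip(a, eps, 1 - eps), np.clip(b, eps, 1 - eps)
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

def hb_p_value(r_hat, n_cal, alpha):
    """Hoeffding-Bentkus p-value for the null hypothesis R_util(lambda) > alpha."""
    p_hoeffding = np.exp(-n_cal * h1(min(r_hat, alpha), alpha))
    p_bentkus = np.e * binom.cdf(np.ceil(n_cal * r_hat), n_cal, alpha)
    return min(p_hoeffding, p_bentkus)

def select_threshold_hb(lambdas, empirical_risks, n_cal, alpha, delta):
    """Return the minimal lambda whose null hypothesis is rejected (None = abstain)."""
    rejected = [lam for lam, r in zip(lambdas, empirical_risks)
                if hb_p_value(r, n_cal, alpha) < delta]
    return min(rejected) if rejected else None
```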
Second, besides the UCBs introduced in (Krause et al., 2019), we adopt a theory-backed UCB based on the DKWM inequality (Krause et al., 2019) for risk functions taking discrete values (e.g., \(R_{util}=1-\text{NDCG@K}\)).
\[\hat{R}^{+}(\lambda)=\hat{R}_{util}(\lambda)+\text{const.}\cdot\sqrt{\frac{\ln(2/\delta)}{2\cdot|\mathbf{Q}_{cal}|}}, \tag{10}\]
where \(\text{const.}\) is a constant that depends on the set of loss values; we specify the details and provide the proof in Appendix A. With such a UCB, we can search for the minimal \(\lambda\in\Lambda\) that satisfies \(\hat{R}^{+}(\lambda)<\alpha\) (as in Eq. (5)) on the calibration set and obtain the guarantee by Theorem 1 of (Krause et al., 2019). \(R_{util}(\lambda)\) may not satisfy the nesting assumption (Eq. (4)), but we can find \(\bar{R}_{util}(\lambda)=\max_{\lambda^{\prime}\leq\lambda}R_{util}(\lambda^{\prime})\geq R_{util}(\lambda)\), which satisfies the assumption (Krause et al., 2019). Then we can apply the UCBs to \(\bar{R}_{util}\).
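Analogously, a sketch of threshold selection with the DKWM-style UCB of Eq. (10), including the monotonization \(\bar{R}_{util}\) described above (our illustration; `const` stands for the loss-dependent constant from Appendix A and is taken here as a given input):

```python
import numpy as np

def select_threshold_ucb(lambdas, empirical_risks, n_cal, alpha, delta, const=1.0):
    """Minimal lambda whose UCB (Eq. (10)) on the monotonized risk is below alpha.

    lambdas:         candidate grid, sorted in ascending order.
    empirical_risks: empirical risk (e.g., 1 - NDCG@K) on Q_cal per candidate.
    """
    margin = const * np.sqrt(np.log(2.0 / delta) / (2.0 * n_cal))
    # Monotonize the risk so that the nesting assumption holds:
    # r_bar(lambda) = max over lambda' <= lambda of r(lambda').
    risks_bar = np.maximum.accumulate(np.asarray(empirical_risks, dtype=float))
    passing = np.where(risks_bar + margin < alpha)[0]
    return lambdas[passing[0]] if passing.size else None  # None = abstain
```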
## 4. Experiments
In this section, we perform experiments on popular LTR benchmark datasets with various pretrained scoring functions to answer the following research questions:
* RQ1: Can FairLTR-RC achieve effective trade-off between utility and exposure-based fairness?
* RQ2: Can FairLTR-RC achieve high marginal coverage rate on the risk function and improve exposure-based fairness at the same time?
### Experimental Setup
**Datasets.** We consider two popular publicly available datasets for LTR: Yahoo!Webscope (Yahoo) (Yahoo, 2018) and MSLR-WEB30k (MSLR) (Krause et al., 2019). Table 1 shows the statistics describing the widely used datasets for evaluating LTR models. We observe that Yahoo has more features and MSLR has more queries and much more items per query. These datasets consist of queries, their associated documents, and relevance labels in \(0-4\) indicating the expert-judged relevance between an item and a query. Each feature vector represents a query-document pair.
Similar to (Krause et al., 2019), to compute the coverage rate, we repeat the experiment 50 times by randomly splitting the original test set into calibration (25%) and test sets (75%). The scoring functions are pretrained on the training set, and model selection is done by maximizing NDCG@5 on the validation set. Then, for risk control, the threshold \(\lambda\) is selected based on the UCB or p-value computed on the calibration set. We compare the proposed FairLTR-RC with the deterministic and PL ranking models, reporting the mean and standard deviation of the test performance over these 50 runs.
Finally, we follow (Krause et al., 2019) and ignore all queries with no relevant documents for fair evaluation: any ranking of such a query would have to be arbitrarily assigned \(\text{NDCG@K}=1\) or \(0\), which makes comparisons unfair. Thus, the NDCG@K values reported in this work are lower than those in the literature.
**Scoring Functions.** We use CatBoost (Krause et al., 2019), LightGBM (LGB) (Krause et al., 2019) and a Neural Network (NN) as pretrained scoring functions. The NN is a three-layer MLP with sigmoid activation trained with LambdaLoss (Krause et al., 2019). On top of the pretrained scoring functions, we apply the ranking models (deterministic, PL and TPL). More details about the experimental setup can be found in Appendix B.
To make the coverage results more comprehensive, we also apply our method to state-of-the-art stochastic LTR models that train the scoring function on top of the PL model, including PL-Rank-3 (Krause et al., 2019), StochasticRank (Krause et al., 2019), and Policy Gradient (Beng et al., 2019); the results are in Appendix C.
**Evaluation Metrics.** We consider the widely used NDCG@5 as the utility metric. For exposure-based fairness, we follow (Beng et al., 2019) and adopt Eq. (2) with \(\ell(a,b)=(a-b)^{2}\) to measure the mean squared disparity \(R_{sq-fair}\). It measures how far the exposure assigned to an item deviates from being proportional to its relevance.
For the guarantee, we repeat the experiment 50 times and report the marginal coverage on test sets, with the threshold \(\hat{\lambda}\) selected by the risk control algorithm, as \(\sum_{t=1}^{T}\mathds{1}(R_{util}(\hat{\lambda})\leq\alpha)/T\). We choose \(\alpha\) based on the performance of the original deterministic model.
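The reported coverage statistic is then a simple average over the repeated splits; a minimal sketch (ours):

```python
import numpy as np

def marginal_coverage(test_risks, alpha):
    """Fraction of the T repeated splits whose realized test risk
    R_util(lambda_hat) stays below alpha (T = 50 in our experiments)."""
    return float(np.mean(np.asarray(test_risks) <= alpha))
```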
### Experimental Results
**Trade-off Results.** We first verify that the proposed TPL ranking model can achieve an effective trade-off between utility and fairness. As shown in Fig. 1, when the threshold \(\lambda\) increases, the utility (risk) of the TPL ranking model increases (decreases), while the disparity measure increases. In addition, we verify the claim that the TPL model reduces to the PL and deterministic models: when \(\lambda=0\), the TPL model reduces to the PL ranking model (Sto); when \(\lambda\geq\max(\tilde{s}_{d})\), TPL reduces to the deterministic ranking model (Det).
**Coverage and Fairness Improvement.** Let \(U^{*}\) and \(R^{*}_{sq-fair}\) be the NDCG@5 and mean squared disparity of the pretrained deterministic LTR model, respectively. Results in Table 2 show that, with risk control based on Hoeffding-Bentkus, in at least 48 out of 50 runs (\(\leq 2\) abstentions), our method achieves 100% coverage (NDCG@5\(\geq 1-\alpha\)). At the same time, our method improves fairness significantly, with a drop of at least 13.29% in \(R_{sq-fair}\). In practice, when the risk control method abstains from selecting any threshold, we can set the threshold \(\lambda=1\) to make TPL reduce to the deterministic ranking model. Fig. 3 in Appendix C shows the distribution of NDCG@5 with thresholds selected by the Hoeffding-Bentkus and DKWM inequalities.
## 5. Related Work
**Stochastic Ranking and Exposure-based Fairness.** Stochastic LTR models were initially adopted to address the challenge of optimizing ranking metrics, which are flat or discontinuous (Han et al., 2017; Wang et al., 2018; Wang et al., 2019), where it is shown that training scoring functions with the PL ranking model improves their ranking performance. In addition, the PL ranking model can also enable exploration, as it explicitly estimates the uncertainty of the scoring function through the probability distribution of the PL ranking model (Chen et al., 2018). This can be helpful when there exist samples (e.g., users, queries and items) with few interactions. Recently, stochastic LTR models have been adopted to improve exposure-based fairness. Singh et al. (Singh et al., 2020) proposed the notion of exposure-based fairness and a policy gradient algorithm to train LTR models to optimize a combination of utility and fairness. (Singh et al., 2020) evaluated two types of stochastic ranking models, including the PL model, w.r.t. exposure-based fairness based on various click models. (Chen et al., 2018; Wang et al., 2019) improve the efficiency of training scoring functions with the PL model for relevance and fairness. Different from them, our method transforms a pretrained scoring function from a deterministic LTR model into a stochastic one.
**Distribution-free Risk Control for Ranking.** Distribution-free risk control is based on split conformal prediction (Bates et al., 2018; Wang et al., 2019). Bates et al. (Bates et al., 2018) propose a method to predict an interval of a pair-wise score of items with guaranteed confidence; it abstains from predicting unconfident pairs, but does not directly provide guarantees on list-wise ranking performance. Angelopoulos et al. (Angelopoulos et al., 2019) apply Learn then Test (Chen et al., 2019) to the recall stage of recommendation systems, which predicts sets of items whose scores exceed a threshold, with a guarantee on the expected ratio of false positives. Wang et al. (Wang et al., 2019) propose a method based on (Bates et al., 2018) to select a threshold for a marginal guarantee on the number of candidates from each group while minimizing the prediction set size for each query, which is further extended to the scenario with noisy and biased implicit feedback (e.g., clicks) (Wang et al., 2019). Different from the existing work, this work focuses on providing guarantees on widely used list-wise ranking metrics (e.g., NDCG@K).
## 6. Conclusion
In this work, we propose FairLTR-RC, a post-hoc model-agnostic method for Fair Learning to Rank (LTR). It can create a stochastic LTR model with improved exposure-based fairness from the scoring function of a pretrained deterministic LTR model. With distribution-free risk control, our method can provide a guarantee on a user-specified utility function. The integration of the Thresholded Plackett-Luce (TPL) model balances utility and fairness. FairLTR-RC avoids expensive training and provides guarantees on a specified metric based on distribution-free risk control. Results on benchmark datasets verify the effectiveness of our proposed method, improving the fairness of state-of-the-art deterministic models while ensuring a predefined level of utility.
Despite its promising results, this work is not without limitations. FairLTR-RC may abstain from selecting thresholds given subpar
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{\# queries (\# items)} & \multicolumn{1}{c|}{\# features} \\ \hline \hline & training & validation & original test & \\ \hline Yahoo & 19,944 (473,134) & 6,983 (165,660) & 2,994 (71,083) & 700 \\ \hline MSLR-WEB30K & 18,919 (2,270,296) & 6,306 (753,611) & 12,581 (747,218) & 136 \\ \hline \end{tabular}
\end{table}
Table 1. Statistics of the benchmark datasets
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{5}{c|}{Yahoo} \\ \hline & Coverage & 1-\(\alpha\) & NDCG@5 & 1 - \(\frac{R_{sq-fair}}{R^{*}_{sq-fair}}\) & \# Abstain \\ \hline CatBoost & 100\% & 0.687 & 0.727 & 23.99\% & 2 \\ \hline LGB & 100\% & 0.687 & 0.727 & 20.77\% & 0 \\ \hline NN & 100\% & 0.641 & 0.673 & 35.05\% & 0 \\ \hline \multicolumn{6}{|c|}{MSLR} \\ \hline CatBoost & 100\% & 0.449 & 0.481 & 16.08\% & 0 \\ \hline LGB & 100\% & 0.449 & 0.480 & 13.29\% & 2 \\ \hline NN & 100\% & 0.405 & 0.430 & 26.14\% & 0 \\ \hline \end{tabular}
\end{table}
Table 2. Coverage results with thresholds \(\lambda\) selected by the p-value of Hoeffding-Bentkus. Similar results for DKWM can be found in Appendix C.
pretrained scoring functions, small calibration sets, or conservative bounds in the risk control method, and when \(\alpha\) is too small.
|
2310.02716 | Transfinite version of the Mittag-Leffler condition for the vanishing of
the derived limit | We give a necessary and sufficient condition for an inverse sequence $S_0
\leftarrow S_1 \leftarrow \dots$ indexed by natural numbers to have ${\rm
lim}^1S=0$. This condition can be treated as a transfinite version of the
Mittag-Leffler condition. We consider inverse sequences in an arbitrary abelian
category having a generator and satisfying Grothendieck axioms ${\rm (AB3)}$
and ${\rm (AB4^*)}.$ We also show that the class of inverse sequences $S$ such
that ${\rm lim}\: S={\rm lim}^1 S=0$ is the least class of inverse sequences
containing the trivial inverse sequence and closed with respect to small limits
and a certain type of extensions. | Mishel Carelli, Sergei O. Ivanov | 2023-10-04T10:39:07Z | http://arxiv.org/abs/2310.02716v4 | # Transfinite version of the Mittag-Leffler condition for the vanishing of the derived limit
###### Abstract.
We give a necessary and sufficient condition for an inverse sequence \(S_{0}\gets S_{1}\leftarrow\dots\) indexed by natural numbers to have \(\lim^{1}\!S=0\). This condition can be treated as a transfinite version of the Mittag-Leffler condition. We consider inverse sequences in an arbitrary abelian category having a generator and satisfying Grothendieck axioms (AB3) and (AB4\({}^{*}\)). We also show that the class of inverse sequences \(S\) such that \(\lim S=\lim^{1}\!S=0\) is the least class of inverse sequences containing the trivial inverse sequence and closed with respect to small limits and a certain type of extensions.
The work is supported by Yanqi Lake Beijing Institute of Mathematical Sciences and Applications (BI MSA)
## Introduction
The study of the right derived functors of the functor of limit was initiated in the works of Yeh, Milnor, Roos and Grothendieck [24, 18, 23, 12] (see also [21, 14, 16]). In particular, it was shown that for inverse sequences of abelian groups, \(\lim^{i}=0\) for \(i>1\), but the functor \(\lim^{1}\) turned out to be non-trivial. This functor is referred to as the derived limit. Milnor emphasized the significant role of this functor in algebraic topology by introducing what is now known as Milnor exact sequence for homotopy groups. It was also proven that if the inverse sequence consists of epimorphisms, the derived limit is trivial. Furthermore, Grothendieck in [12] introduced the Mittag-Leffler condition for an inverse sequence, which generalizes the epimorphism condition, and proved that it also implies the vanishing of the derived limit.
If all components of an inverse sequence are at most countable abelian groups, then the Mittag-Leffler condition becomes necessary and sufficient for the vanishing of the derived limit. However, for arbitrary inverse sequences of abelian groups, it is not a necessary condition. There arose a need to find a necessary and sufficient variant of this condition. In [7], Emmanouil showed that for an inverse sequence \(S\) consisting of \(R\)-modules, the Mittag-Leffler condition is equivalent to \(\lim^{1}\!(S\otimes_{R}M)=0\) for any \(R\)-module \(M\). However, we took a different approach and began to study how the Mittag-Leffler condition can be modified to become both necessary and sufficient for the derived limit to be zero.
It is meaningful to consider the derived limit in arbitrary abelian categories with small limits. From now on, we will always assume that \(\mathcal{A}\) is an abelian category with small direct sums (Grothendieck axiom AB3), exact small products (Grothendieck axiom AB4\({}^{*}\)), and a generator. Roos proved that in such a category, the derived limit of an epimorphic inverse sequence is trivial [22]. Neeman showed that the assumption of having a generator in \(\mathcal{A}\) cannot be omitted [19].
Our work is devoted to description of inverse sequences with trivial derived limit in the category \(\mathcal{A}\). The main result of our work consists of providing a necessary and sufficient condition for the vanishing of the inverse limit, which can be interpreted as a transfinite version of the Mittag-Leffler condition. Among the inverse sequences with a zero derived limit, a special place is occupied by the inverse sequences \(L\) for which \(\lim L=\lim^{1}L=0\). We call such inverse sequences local. The second main result of this work is that we describe the class of local inverse sequences as the least class containing the trivial inverse sequence and closed under small limits and certain extensions. In this work, we deliberately limit ourselves to considering only inverse sequences indexed by natural numbers, without considering more general posets or categories as indices.
For an inverse sequence \(S\) we denote by \(I(S)\) an inverse sequence, consisting of the images \(I(S)_{i}=\operatorname{Im}(S_{i+1}\to S_{i}).\) Furthermore, we define \(I^{n}(S)\) recursively as \(I^{n}(S)=I(I^{n-1}(S)).\) An inverse sequence \(S\) satisfies the Mittag-Leffler condition if, for any given \(i\), the decreasing sequence of subobjects \(S_{i}\supseteq I^{1}(S)_{i}\supseteq I^{2}(S)_{i}\ni\ldots\) stabilizes. This condition is sufficient for the vanishing of the derived limit, but it easy to construct an example that shows that it is not necessarily. For example, if we denote by \(\mathbb{Z}_{p}\) the group of \(p\)-adic integers, then the inverse sequence \(p^{i}\mathbb{Z}_{p}\) has a trivial derived limit but it does not satisfy the Mittag-Leffler condition. More generally, if \(A=A_{0}\supseteq A_{1}\supseteq...\) is a complete Hausdorff filtration on an abelian group \(A\) (i.e. \(A\cong\lim A/A_{i}\)), then \(\lim^{1}A_{i}=0.\)
If we consider the completion of an inverse sequence \(S\) with respect to the image filtration
\[\widehat{S}=\lim_{n}S/I^{n}(S)\]
then in both the case of the Mittag-Leffler condition and the case of a complete Hausdorff filtration, the morphism of inverse sequences \(S\to\widehat{S}\) is an epimorphism. We prove that this morphism is an epimorphism for any inverse sequence with a trivial derived limit. However, it was insufficient for us to consider this completion for formulating a necessary and sufficient condition. So we introduced the concept of \(\lambda\)-completions of an inverse sequence for any limit ordinal \(\lambda.\)
For an ordinal \(\alpha\) and an inverse sequence \(S\), we define \(I^{\alpha}(S)\) such that \(I^{\alpha+1}(S)=I(I^{\alpha}(S))\) and \(I^{\lambda}(S)=\lim_{\alpha<\lambda}I^{\alpha}(S)\) for a limit ordinal \(\lambda.\) It turns out that, despite the inverse sequence being indexed by ordinary natural numbers, this transfinite filtration can stabilize at any ordinal. The \(\lambda\)-completion of the inverse sequence \(S\) is defined as
\[\widehat{S}^{\lambda}=\lim_{\alpha<\lambda}S/I^{\alpha}(S).\]
We say that an inverse sequence \(S\) is \(\lambda\)_-complete_, if the morphism \(S\to\widehat{S}^{\lambda}\) is an epimorphism. The main result of our work is the following theorem (Th. 3.1).
**Theorem**.: _Let \(\mathcal{A}\) be an abelian category with small direct sums, exact small products, and a generator. Then for an inverse sequence \(S\) in \(\mathcal{A}\) the following statements are equivalent:_
1. \(\lim^{1}S=0;\)__
2. \(\lim\operatorname{Coker}(S\to\widehat{S}^{\lambda})=0\) _for any limit ordinal_ \(\lambda\)_;_
3. _for every limit ordinal_ \(\lambda\)_: if the cofinality of_ \(\lambda\) _is countable, then_ \(S\) _is_ \(\lambda\)_-complete; if the cofinality of_ \(\lambda\) _is uncountable, then_ \(\lim\operatorname{Coker}(S\to\widehat{S}^{\lambda})=0.\)__
This theorem implies that if \(S\) is \(\lambda\)-complete for any limit ordinal \(\lambda\), then \(\lim^{1}S=0.\) On the other hand, if \(\lim^{1}S=0\) and \(\lambda\) is a limit ordinal of countable cofinality, then \(S\) is \(\lambda\)-complete.
When studying the class of inverse sequences with trivial derived limits, it becomes clear that a more convenient class to investigate is the class of _local inverse sequences_ i.e. inverse sequences \(L\) such that \(\lim L=\lim^{1}L=0\). It turns out that any inverse sequence \(S\) such that \(\lim^{1}S=0\) can be uniquely decomposed into a short exact sequence \(E\rightarrowtail S\twoheadrightarrow L\), where \(E\) is epimorphic, and \(L\) is local (Cor. 1.5). Thus, many questions regarding inverse sequences with trivial derived limit are reduced to questions about local inverse sequences.
The advantage of the class of local inverse sequences is that it is closed with respect to small limits. Moreover, under some additional assumptions on the abelian category \(\mathcal{A}\) there is a functor of localization of inverse sequences. Namely, if we, in addition to the above assumptions, assume that \(\mathcal{A}\) has a \(\kappa\)-compact generator for some regular cardinal \(\kappa\), then for any inverse sequence \(S\) there is a universal morphism to a local inverse sequence
\[S\longrightarrow\mathcal{L}(S).\]
In other words, the subcategory of local inverse sequences is reflective (Prop. 2.5).
The choice of the term "local" is related to the fact that for any category \(\mathcal{C}\) and its morphism \(\theta:c\to c^{\prime}\), an object \(l\) is called \(\theta\)-local if the map \(\theta^{*}:\mathcal{C}(c^{\prime},l)\rightarrow\mathcal{C}(c,l)\) is a bijection (see [4, 8, 17, 2]). We show that an inverse sequence is local if and only if it is a \(\theta\)-local object in the category of inverse sequences with respect to some particular choice of \(\theta\) (Prop. 2.3).
The simplest example of a local inverse sequence is a _null inverse sequence_, i.e. an inverse sequence \(N\) such that all the morphisms \(N_{i+1}\to N_{i}\) are zero. If there is a short exact sequence of inverse sequences \(N\rightarrowtail S^{\prime}\twoheadrightarrow S\), where \(N\) is null, then \(S^{\prime}\) is called a null-extension of \(S\). The second main result of our work is the following theorem (Th. 4.1).
**Theorem**.: _Let \(\mathcal{A}\) be an abelian category with small direct sums, exact small products, and a generator. Then the class of local inverse sequences in \(\mathcal{A}\) is the least class of inverse sequences containing the trivial inverse sequence and closed with respect to small limits and null-extensions._
We draw an analogy between the category of groups and the category of inverse sequences. In this view, abelian groups are analogous to null inverse sequences, and central extensions are analogous to null-extensions. With this perspective, the theorem is analogous to Bousfield's description of the class of \(H\mathbb{Z}\)-local groups as the least class containing the trivial group and closed with respect to small limits and central extensions [3, Th. 3.10].
At the end of the paper, in order to illustrate the complexity of the class of local inverse sequences and emphasize the reasonableness of the statements of these theorems, we provide two types of examples of inverse sequences of abelian groups. Firstly, for each ordinal \(\alpha\), we construct a local inverse sequence \(S\) such that \(I^{\beta}(S)\neq 0\) for any \(\beta<\alpha\), but \(I^{\alpha}(S)=0\) (Th. 5.1). Secondly, for each regular uncountable cardinal \(\kappa\), we construct a local inverse sequence \(S\) which is not \(\kappa\)-complete (Th. 5.8). In particular, for \(\kappa=\aleph_{1}\) we obtain an example of a local inverse sequence which is \(\lambda\)-complete for all limit ordinals \(\lambda\) except \(\lambda=\aleph_{1}\).
In the course of our work, we raised the question: could it be the case that the condition \(\lim^{1}S=0\) is equivalent to the fact that \(S\) is \(\omega\)-complete, so that we do not need all the higher ordinals to formulate a necessary and sufficient condition? We believe that this cannot be true, even in the category of abelian groups, but we have not been able to provide a counterexample. Therefore, we leave this question open for further investigation.
**Question**.: Is there an \(\omega\)-complete inverse sequence of abelian groups with nonzero derived limit?
We expect that, under some additional assumptions on the abelian category \(\mathcal{A}\), the analogy with Bousfield's theory of \(H\mathbb{Z}\)-localization of groups can be continued further. We think that there is a transfinite construction of the localization functor similar to the construction given in [3], using the relative universal null-extensions similar to relative universal central extensions described in [13] and [9]. However, we decided to leave this direction for further research.
## Acknowledgements
We are very grateful to Ekaterina Borodinova for useful discussions.
## 1. Transfinite image filtration
### Inverse sequences
Further we will always assume that \(\mathcal{A}\) is an abelian category with a generator \(G\), small direct sums (AB3) and exact small products (\(\mathrm{AB4}^{*}\)). These assumptions imply that all small limits exist and are left exact. An inverse sequence \(S\) in \(\mathcal{A}\) is a couple consisting of two families \(S=((S_{i}),(f_{i}))\) indexed by natural numbers \(i\in\omega\), where \(S_{i}\) is an object in \(\mathcal{A}\) and \(f_{i}:S_{i+1}\to S_{i}\) is a morphism. One can say that inverse sequences are functors \(S:\omega^{\mathrm{op}}\to\mathcal{A}.\) In particular, inverse sequences form an abelian category \(\mathcal{A}^{\omega^{\mathrm{op}}}\) with small direct sums, exact small products (and a generator, Lemma 2.2). Under these assumptions on \(\mathcal{A}\), for an inverse sequence \(S\) in \(\mathcal{A}\) we have an exact sequence
\[0\longrightarrow\lim S\longrightarrow\prod_{i}S_{i}\xrightarrow{1-F}\prod_{i }S_{i}\longrightarrow\lim^{1}S\longrightarrow 0, \tag{1.1}\]
where \(\mathrm{pr}_{j}F=f_{j}\mathrm{pr}_{j+1}\) (here \(\mathrm{pr}_{j}:\prod_{i}S_{i}\to S_{j}\) denotes the canonical projection) and \(\lim^{n}S=0\) for \(n\geq 2\)[20, Remark A.3.6.]. We say that an inverse sequence is epimorphic, if the maps \(f_{i}\) are epimorphisms. Roos proved that for an epimorphic inverse sequence \(S\) we have \(\lim^{1}S=0\)[22, Th. 3.1.]. We say that an inverse sequence \(S\) is null, if \(f_{i}=0\) for each \(i.\) It is easy to see that for a null inverse sequence \(S\) we have \(\lim S=\lim^{1}S=0.\)
Further we will take not only limits of inverse sequences, but also limits of some functors to the category of inverse sequences \(F:J\to\mathcal{A}^{\omega^{\mathrm{op}}}.\) In this case we always use a subscript
\[\lim_{J}:(\mathcal{A}^{\omega^{\mathrm{op}}})^{J}\longrightarrow\mathcal{A}^{\omega^{\mathrm{op}}}. \tag{1.2}\]
"\(\lim\)" without any subscript always means a limit of an inverse sequence
\[\lim:\mathcal{A}^{\omega^{\mathrm{op}}}\longrightarrow\mathcal{A}. \tag{1.3}\]
### Transfinite image filtration and completion
For an inverse sequence \(S\) we denote by \(S^{\mathrm{sh}}\) the shifted inverse sequence such that \(S^{\mathrm{sh}}_{i}=S_{i+1}\) and \(f^{S^{\mathrm{sh}}}_{i}=f^{S}_{i+1}\). Then there is a morphism of inverse sequences
\[\tilde{f}:S^{\mathrm{sh}}\to S \tag{1.4}\]
defined by \(\tilde{f}_{i}=f^{S}_{i}:S_{i+1}\to S_{i}.\) It is easy to check that the kernel and cokernel of \(\tilde{f}\) are null inverse sequences. Therefore, \(\tilde{f}\) induces isomorphisms
\[\lim S^{\mathrm{sh}}\cong\lim S,\qquad\quad\lim^{1}S^{\mathrm{sh}}\cong\lim^{1 }S. \tag{1.5}\]
The inverse sequences defined by the image and the cokernel of \(\tilde{f}\) are denoted by \(I(S)\) and \(S^{1}\) respectively. Then we have a short exact sequence
\[I(S)\rightarrowtail S\twoheadrightarrow S^{1}. \tag{1.6}\]
It is easy to check that the morphism \(S\twoheadrightarrow S^{1}\) is a universal morphism from \(S\) to a null inverse sequence. Note that \(S^{1}=0\) if and only if \(S\) is an epimorphic inverse sequence.
Further for any ordinal number \(\alpha\) we define \(I^{\alpha}(S)\) such that \(I^{0}(S)=S,\)
\[I^{\alpha+1}(S)=I(I^{\alpha}(S))\quad\text{ and }\quad I^{\lambda}(S)=\lim_{ \alpha<\lambda}I^{\alpha}(S) \tag{1.7}\]
for a limit ordinal \(\lambda.\) Since the functor of limit is left exact, the limit of monomorphisms is a monomorphism. So we get a transfinite tower of monomorphisms \(I^{\alpha}(S)\rightarrowtail S,\) which is called the transfinite image filtration of \(S.\) For any ordinal \(\alpha\) we denote by \(S^{\alpha}\) the cokernel of the monomorphism \(I^{\alpha}(S)\rightarrowtail S.\) So there is a short exact sequence
\[I^{\alpha}(S)\rightarrowtail S\twoheadrightarrow S^{\alpha}. \tag{1.8}\]
In proofs, when \(S\) is fixed, we will simplify notation \(I^{\alpha}=I^{\alpha}(S).\) Using the snake lemma, it is easy to check that there is a short exact sequence
\[(I^{\alpha}(S))^{1}\rightarrowtail S^{\alpha+1}\twoheadrightarrow S^{\alpha}. \tag{1.9}\]
For any limit ordinal \(\lambda\) the _\(\lambda\)-completion_ of an inverse sequence \(S\) is defined as
\[\widehat{S}^{\lambda}=\lim_{\beta<\lambda}S^{\beta}. \tag{1.10}\]
The canonical projections \(S\twoheadrightarrow S^{\beta}\) define a natural map \(S\to\widehat{S}^{\lambda}.\)\(S\) is called _\(\lambda\)-complete_, if the morphism \(S\to\widehat{S}^{\lambda}\) is an epimorphism. Take a limit ordinal \(\lambda.\) Since the functor \(\lim_{\alpha<\lambda}\) is left exact, applying it to the short exact sequence \(I^{\alpha}(S)\rightarrowtail S\twoheadrightarrow S^{\alpha},\) we obtain \(I^{\lambda}(S)=\mathrm{Ker}(S\to\widehat{S}^{\lambda}).\) It follows that for any limit ordinal \(\lambda\) the morphism \(S\to\widehat{S}^{\lambda}\) induces a monomorphism
\[S^{\lambda}\rightarrowtail\widehat{S}^{\lambda}. \tag{1.11}\]
**Proposition 1.1**.: _For any ordinal \(\alpha\) there are isomorphisms_
\[\lim S^{\alpha}=\lim\widehat{S}^{\alpha}=0,\quad\lim I^{\alpha}(S)\cong\lim S, \tag{1.12}\]
_where the last isomorphism is induced by the monomorphism \(I^{\alpha}(S)\rightarrowtail S.\) Moreover, there is a short exact sequence_
\[\lim^{1}I^{\alpha}(S)\rightarrowtail\lim^{1}S\twoheadrightarrow\lim^{1}S^{\alpha}. \tag{1.13}\]
Proof.: The isomorphism \(\lim I^{\alpha}(S)\cong\lim S\) follows from the equation \(\lim S^{\alpha}=0\), the short exact sequence \(I^{\alpha}(S)\rightarrowtail S\twoheadrightarrow S^{\alpha}\) and the fact that the functor of limit is left exact. So it is sufficient to prove the equations \(\lim S^{\alpha}=\lim\widehat{S}^{\alpha}=0.\) The proof is by transfinite induction. For \(\alpha=0\) the statement is obvious.
Assume that \(\lim S^{\alpha}=\lim\widehat{S}^{\alpha}=0\) and prove \(\lim S^{\alpha+1}=\lim\widehat{S}^{\alpha+1}=0.\) We have \(\lim\widehat{S}^{\alpha+1}=0\) because \(\widehat{S}^{\alpha+1}=S^{\alpha}.\) Using the short exact sequence \((I^{\alpha})^{1}\rightarrowtail S^{\alpha+1}\twoheadrightarrow S^{\alpha}\) (1.9), the fact that \((I^{\alpha})^{1}\) is null, and the left exactness of the functor of limit, we obtain that \(\lim S^{\alpha+1}=0.\)
Now assume that \(\lambda\) is a limit ordinal and for any \(\alpha<\lambda\) we have \(\lim S^{\alpha}=\lim\widehat{S}^{\alpha}=0.\) Prove that \(\lim S^{\lambda}=\lim\widehat{S}^{\lambda}=0.\) Since limits commute with limits, we obtain \(\lim\widehat{S}^{\lambda}\cong\lim_{\alpha<\lambda}\lim S^{\alpha}=0.\) The equation \(\lim S^{\lambda}=0\) follows from the embedding \(S^{\lambda}\rightarrowtail\widehat{S}^{\lambda}\) (1.11) and left exactness of the limit.
### Length of the transfinite image filtration
The _length of the transfinite image filtration_ of \(S\) is the least ordinal \(\operatorname{len}(S)\) such that for any \(\alpha>\operatorname{len}(S)\) the monomorphism \(I^{\alpha}(S)\rightarrowtail I^{\operatorname{len}(S)}(S)\) is an isomorphism. Further in Proposition 1.2 we will show that it is well defined for any \(S\).
**Proposition 1.2**.: _For an inverse sequence \(S\) the ordinal \(\operatorname{len}(S)\) is well defined. Moreover, in this case \(I^{\operatorname{len}(S)}(S)\) is an epimorphic inverse sequence, and the canonical morphisms \(I^{\operatorname{len}(S)}(S)\rightarrowtail S\twoheadrightarrow S^{\operatorname{len}(S)}\) induce isomorphisms_
\[\lim^{1}\!S\cong\lim^{1}\!S^{\operatorname{len}(S)},\hskip 28.452756pt\lim S \cong\lim I^{\operatorname{len}(S)}(S). \tag{1.14}\]
Proof.: Since \(\mathcal{A}\) has a generator, it is well-powered [10, Prop. 3.35]. It follows that the transfinite decreasing sequence of subobjects \(I^{\alpha}\rightarrowtail S\) stabilises, and there is an ordinal \(\mu\) such that for any \(\alpha>\mu\) the monomorphism \(I^{\alpha}\rightarrowtail I^{\mu}\) is an isomorphism. Therefore, we can take the least ordinal with this property and denote it by \(\operatorname{len}(S).\) This property implies that \(I(I^{\mu})\to I^{\mu}\) is an isomorphism. Hence \(I^{\mu}\) is an epimorphic inverse sequence. The result of Roos [22, Th. 3.1.] implies that \(\lim^{1}I^{\mu}=0.\) Then the first isomorphism follows from the short exact sequence (1.13), and the second one follows from (1.12).
**Corollary 1.3**.: _For an inverse sequence \(S\) the following statements are equivalent_
1. \(\lim S=0;\)__
2. \(I^{\operatorname{len}(S)}(S)=0.\)__
Proof.: \((2)\Rightarrow(1).\) Follows from Proposition 1.2 directly.
\((1)\Rightarrow(2).\) Set \(T:=I^{\operatorname{len}(S)}(S).\) By Proposition 1.2 we know that \(T\) is epimorphic and \(\lim T=0.\) Since the shift of an inverse sequence does not change \(\lim S\) (see (1.5)), it is sufficient to prove that \(T_{0}=0.\) Consider the kernel \(K_{i}=\operatorname{Ker}(T_{i}\to T_{0}).\) Then the morphisms \(T_{i+1}\to T_{i}\) induce morphisms \(K_{i+1}\to K_{i}\) and we obtain a short exact sequence of inverse sequences \(K\rightarrowtail T\twoheadrightarrow T_{0}.\) Using the snake lemma, we see that \(K_{i+1}\to K_{i}\) is an epimorphism. Therefore \(K\) is epimorphic. Then by the result of Roos we have \(\lim^{1}K=0.\) Thus we get a short exact sequence \(\lim K\rightarrowtail\lim T\twoheadrightarrow T_{0}.\) Using that \(\lim T=0,\) we obtain \(T_{0}=0.\)
**Corollary 1.4**.: _If \(S\) is an epimorphic inverse sequence and \(\lim S=0,\) then \(S=0.\)_
**Corollary 1.5**.: _For an inverse sequence \(S\) there exists a unique (up to isomorphism) short exact sequence_
\[E\rightarrowtail S\twoheadrightarrow S^{\prime} \tag{1.15}\]
_such that \(E\) is epimorphic and \(\lim S^{\prime}=0.\) Moreover, for such an exact sequence we have \(\lim^{1}S\cong\lim^{1}S^{\prime}.\)_
Proof.: Propositions 1.2 and 1.1 imply that \(I^{\operatorname{len}(S)}(S)\rightarrowtail S\twoheadrightarrow S^{\operatorname{len}(S)}\) satisfies this property. Assume that \(E\rightarrowtail S\twoheadrightarrow S^{\prime}\) is such a short exact sequence and prove that it is isomorphic to \(I^{\operatorname{len}(S)}(S)\rightarrowtail S\twoheadrightarrow S^{\operatorname{len}(S)}.\) Consider an ordinal number \(\alpha\) such that \(\alpha\geq\operatorname{len}(S),\operatorname{len}(S^{\prime}).\) Then \(I^{\alpha}(E)=E,\) \(I^{\alpha}(S)=I^{\operatorname{len}(S)}(S)\) and \(I^{\alpha}(S^{\prime})=I^{\operatorname{len}(S^{\prime})}(S^{\prime}).\) Corollary 1.3 implies that \(I^{\alpha}(S^{\prime})=0.\) Therefore the composition \(I^{\alpha}(S)\to S\to S^{\prime}\) is trivial. Therefore, there is a morphism \(I^{\alpha}(S)\to E.\) The assumption \(\lim S^{\prime}=0\) implies that the morphism \(\lim E\to\lim S\) is an isomorphism. By Proposition 1.2 we have that \(\lim I^{\alpha}(S)\to\lim S\) is an isomorphism. Therefore the map \(\lim I^{\alpha}(S)\to\lim E\) is an isomorphism. Since \(E\) and \(I^{\alpha}(S)\) are epimorphic, we obtain that \(\operatorname{Coker}(I^{\alpha}(S)\to E)\) is epimorphic and \(\lim\operatorname{Coker}(I^{\alpha}(S)\to E)=0.\) Then Corollary 1.4 implies that \(\operatorname{Coker}(I^{\alpha}(S)\to E)=0\) and the morphism \(I^{\alpha}(S)\to E\) is an isomorphism.
## 2. Local inverse sequences
An inverse sequence \(S\) is called _local_ if \(\lim S=\lim^{1}S=0.\) The exact sequence (1.1) implies that \(S\) is local if and only if the morphism \(1-F:\prod S_{i}\rightarrow\prod S_{i}\) is an isomorphism. It is easy to see that the category of local inverse sequences is a Serre subcategory of the category of all inverse sequences. In particular, the class of local inverse sequences is closed with respect to extensions. Null inverse sequences are local. An extension of two null inverse sequences is not necessarily null but it is still an example of a local inverse sequence.
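For example, for a null inverse sequence we have \(F=0\), so \(1-F\) is the identity and the criterion is immediate. For an extension of two null inverse sequences we have \(F^{2}=0\), since the composite of two consecutive structure morphisms vanishes, so \(1-F\) is invertible with inverse \(1+F\).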
Next, we explain the choice of the term "local" and prove some properties of local inverse sequences. For a general category \(\mathcal{C}\) and a morphism \(\theta:c^{\prime}\to c\) an object \(l\) of \(\mathcal{C}\) is called \(\theta\)_-local_, if the map \(\theta^{*}:\mathcal{C}(c,l)\rightarrow\mathcal{C}(c^{\prime},l)\) is a bijection. The class of \(\theta\)-local objects is closed with respect to small limits [17, \S 1.5]. Further we show that local inverse sequences are \(\theta\)-local objects with respect to a particular morphism \(\theta\) in \(\mathcal{A}^{\omega^{\mathrm{op}}}.\) As a corollary we obtain that the class of local inverse sequences is closed with respect to small limits and extensions.
For an object \(A\) in \(\mathcal{A}\) and a natural number \(n\) we denote by \(A(n)\) the inverse sequence such that \(A(n)_{i}=A\) for \(i\leq n\), \(A(n)_{i}=0\) for \(i>n\), and \(f_{i}=1_{A}\) for \(i<n\). We denote by \(\iota(n):A(n)\to A(n+1)\) the morphism of inverse sequences such that \(\iota(n)_{i}=1_{A}\) for \(i\leq n\) and \(\iota(n)_{i}=0\) for \(i>n\).
\[\iota(n):A(n)\longrightarrow A(n+1). \tag{2.1}\]
**Lemma 2.1**.: _For any object \(A\) and any inverse sequence \(S\) there is an natural (adjunction) isomorphism_
\[\mathcal{A}^{\omega^{\mathrm{op}}}(A(n),S)\cong\mathcal{A}(A,S_{n}),\qquad\varphi\mapsto\varphi_{n}. \tag{2.2}\]
_Moreover, the diagram_
\[\begin{CD}\mathcal{A}^{\omega^{\mathrm{op}}}(A(n+1),S)@>{\cong}>{}>\mathcal{A}(A,S_{n+1})\\ @V{\iota(n)^{*}}VV@VV{(f_{n})_{*}}V\\ \mathcal{A}^{\omega^{\mathrm{op}}}(A(n),S)@>{\cong}>{}>\mathcal{A}(A,S_{n})\end{CD} \tag{2.3}\]
_is commutative._
Proof.: Straightforward.
Further we fix a generator \(G\) of \(\mathcal{A}\) and consider an inverse sequence defined by
\[\tilde{G}\coloneqq\bigoplus_{i<\omega}G(i). \tag{2.4}\]
**Lemma 2.2**.: \(\tilde{G}\) _is a generator of \(\mathcal{A}^{\omega^{\mathrm{op}}}\) and there is an isomorphism_
\[\mathcal{A}^{\omega^{\mathrm{op}}}(\tilde{G},S)\cong\prod_{i}\mathcal{A}(G,S_{i}). \tag{2.5}\]
Proof.: The isomorphism (2.5) follows from the isomorphism \(\mathcal{A}^{\omega^{\mathrm{op}}}(\bigoplus_{i}G(i),S)\cong\prod_{i}\mathcal{A}^{\omega^{\mathrm{op}}}(G(i),S)\) and the isomorphism (2.2). The fact that \(\tilde{G}\) is a generator follows from the fact that \(G\) is a generator, and the isomorphism (2.5).
Consider a morphism of inverse sequences
\[1-I:\tilde{G}\longrightarrow\tilde{G}, \tag{2.6}\]
where \(I\mathrm{em}_{n}=\mathrm{em}_{n+1}\iota(n)\) and \(\mathrm{em}_{n}:G(n)\rightarrow\tilde{G}\) is the canonical embedding.
**Proposition 2.3**.: _An inverse sequence \(S\) is local if and only if it is a \((1-I)\)-local object of \(\mathcal{A}^{\omega^{\mathrm{op}}}\)._
Proof.: Lemma 2.1 implies that there is an isomorphism
\[\mathcal{A}^{\omega^{\mathrm{op}}}\left(\tilde{G},S\right)\cong\prod_{i}\mathcal{A}(G,S_{i})\cong\mathcal{A}\left(G,\prod_{i}S_{i}\right) \tag{2.7}\]
and the diagram
\[\begin{CD}\mathcal{A}^{\omega^{\mathrm{op}}}\left(\tilde{G},S\right)@>{\cong}>{}>\mathcal{A}\left(G,\prod_{i}S_{i}\right)\\ @V{(1-I)^{*}}VV@VV{(1-F)_{*}}V\\ \mathcal{A}^{\omega^{\mathrm{op}}}\left(\tilde{G},S\right)@>{\cong}>{}>\mathcal{A}\left(G,\prod_{i}S_{i}\right)\end{CD} \tag{2.8}\]
is commutative. Therefore, \(S\) is \((1-I)\)-local if and only if the map \((1-F)_{*}\) is an isomorphism. Since \(G\) is a generator, \((1-F)_{*}\) is an isomorphism if and only if \(1-F\) is an isomorphism, which is equivalent to the fact that \(S\) is local.
**Corollary 2.4**.: _The class of local inverse sequences is closed with respect to small limits._
Let \(\kappa\) be a regular cardinal. We say that a poset is \(\kappa\)-directed if any of its subsets of cardinality \(<\kappa\) has an upper bound. An object \(c\) of a category \(\mathcal{C}\) is called \(\kappa\)-compact (or \(\kappa\)-presentable), if the hom-functor \(\mathcal{C}(c,-)\) commutes with colimits over \(\kappa\)-directed posets. Note that if \(\kappa<\kappa^{\prime}\), then a \(\kappa\)-compact object is also \(\kappa^{\prime}\)-compact.
**Proposition 2.5**.: _Assume that \(\mathcal{A}\) has a \(\kappa\)-compact generator \(G\) for some regular cardinal \(\kappa.\) Then for any inverse sequence \(S\) in \(\mathcal{A}\) there exists a universal initial morphism to a local inverse sequence_
\[S\longrightarrow\mathcal{L}(S). \tag{2.9}\]
_In other words, the full subcategory of local inverse sequences is a reflective subcategory of \(\mathcal{A}^{\omega^{\mathrm{op}}}.\)_
Proof.: Without loss of generality we can assume that \(\kappa\) is uncountable. Then any countable direct sum of \(\kappa\)-compact objects is \(\kappa\)-compact [1, Prop.1.16]. Since \(G\) is \(\kappa\)-compact and colimits of inverse sequences are computed level-wise, using Lemma 2.1 we obtain that \(G(i)\) is also \(\kappa\)-compact. Therefore \(\tilde{G}=\bigoplus_{i}G(i)\) is \(\kappa\)-compact as well. Hence the assertion follows from Proposition 2.3 combined with the result of Casacuberta, Peschke and Pfenniger [5, Cor. 1.7].
## 3. Transfinite Mittag-Leffler condition
**Theorem 3.1**.: _Let \(S\) be an inverse sequence in \(\mathcal{A}.\) Then the following statements are equivalent:_
1. \(\lim^{1}S=0;\)__
2. \(\lim\mathrm{Coker}(S\to\widehat{S}^{\lambda})=0\) _for any limit ordinal_ \(\lambda\)_;_
3. _for every limit ordinal_ \(\lambda\)_: if the cofinality of_ \(\lambda\) _is countable, then_ \(S\) _is_ \(\lambda\)_-complete; if the cofinality of_ \(\lambda\) _is uncountable, then_ \(\lim\operatorname{Coker}(S\to\widehat{S}^{\lambda})=0.\)__
Proof.: For the sake of convenience we set \(C^{\lambda}=\mathrm{Coker}(S\to\widehat{S}^{\lambda}).\)
\((1)\Rightarrow(2).\) Using the short exact sequence \(S^{\lambda}\rightarrowtail\widehat{S}^{\lambda}\twoheadrightarrow C^{\lambda}\) (see (1.11)), we obtain that there is an exact sequence \(\lim\widehat{S}^{\lambda}\to\lim C^{\lambda}\to\lim^{1}S^{\lambda}.\) So it is sufficient to check that \(\lim\widehat{S}^{\lambda}=0\) and \(\lim^{1}S^{\lambda}=0.\) The first equality follows from Proposition 1.1. The second equality follows from the fact that there is an epimorphism \(S\twoheadrightarrow S^{\lambda}\) and \(\lim^{1}S=0\) (see (1.13)).
\((2)\Rightarrow(1).\) By Proposition 1.2 it is sufficient to prove that \(S^{\alpha}\) is local for any \(\alpha.\) We prove this by transfinite induction. For \(\alpha=0\) it is obvious. Assume that \(S^{\alpha}\) is local, and prove that \(S^{\alpha+1}\) is local. It follows from the short exact sequence \((I^{\alpha})^{1}\rightarrowtail S^{\alpha+1}\twoheadrightarrow S^{\alpha}\) and the fact that \((I^{\alpha})^{1}\) is null.
Now assume that \(\lambda\) is a limit ordinal and \(S^{\alpha}\) is local for any \(\alpha<\lambda,\) and prove that \(S^{\lambda}\) is local. Since local inverse sequences are closed with respect to small limits (Corollary 2.4), \(\widehat{S}^{\lambda}\) is local. The short exact sequence \(S^{\lambda}\rightarrowtail\widehat{S}^{\lambda}\twoheadrightarrow C^{\lambda}\) implies that there is an exact sequence \(\lim C^{\lambda}\to\lim^{1}S^{\lambda}\to\lim^{1}\widehat{S}^{\lambda}.\) By the assumption \(\lim C^{\lambda}=0.\) Since \(\widehat{S}^{\lambda}\) is local, \(\lim^{1}\widehat{S}^{\lambda}=0.\) Therefore \(\lim^{1}S^{\lambda}=0.\) By Proposition 1.1 we have \(\lim S^{\lambda}=0.\) Therefore \(S^{\lambda}\) is local.
\((3)\Rightarrow(2).\) Obvious.
\((1)\&(2)\Rightarrow(3).\) Take a limit ordinal \(\lambda\) with countable cofinality. Since a shifted inverse sequence has the same \(\lim\) and \(\lim^{1}\) (see (1.5)), it is sufficient to prove that \(S_{0}\to\widehat{S}_{0}^{\lambda}\) is an epimorphism. Let \(\alpha_{i}\) be a strictly increasing sequence of ordinals that tends to \(\lambda.\) Then the sequence \(\alpha_{i}+i\) is also strictly increasing and tends to \(\lambda.\) Therefore \(\widehat{S}_{0}^{\lambda}=\lim_{i}S_{0}^{\alpha_{i}+i}.\) The short exact sequence \(I_{0}^{\alpha_{i}+i}\rightarrowtail S_{0}\twoheadrightarrow S_{0}^{\alpha_{i}+i}\) implies that there is an exact sequence \(S_{0}\to\widehat{S}_{0}^{\lambda}\to\lim_{i}^{1}I_{0}^{\alpha_{i}+i}.\) Therefore, it is sufficient to prove that \(\lim_{i}^{1}I_{0}^{\alpha_{i}+i}=0.\) There is an epimorphism \(I_{i}^{\alpha_{i}}\twoheadrightarrow I_{0}^{\alpha_{i}+i}.\) Therefore, it is sufficient to prove that \(\lim_{i}^{1}I_{i}^{\alpha_{i}}=0.\) Consider the short exact sequence \(I_{i}^{\alpha_{i}}\rightarrowtail S_{i}\twoheadrightarrow S_{i}^{\alpha_{i}}.\) Then we
have an exact sequence \(\lim_{i}S_{i}^{\alpha_{i}}\to\lim_{i}^{1}I_{i}^{\alpha_{i}}\to\lim^{1}S.\) By the assumption \(\lim^{1}S=0.\) Then it is sufficient to show that \(\lim_{i}S_{i}^{\alpha_{i}}=0.\) Since the diagonal \(\{(i,i)\mid i\in\omega\}\) is cofinal in \(\omega\times\omega,\) we have \(\lim_{i}S_{i}^{\alpha_{i}}=\lim_{(i,j)\in\omega\times\omega}S_{i}^{\alpha_{j} }=\lim_{i}\lim_{j}S_{i}^{\alpha_{j}}=\lim_{i}\widehat{S}_{i}^{\lambda}=\lim \widehat{S}^{\lambda}.\) Then the assertion follows from Proposition 1.1.
**Corollary 3.2**.: _If \(S\) is \(\lambda\)-complete for any limit ordinal \(\lambda,\) then \(\lim^{1}S=0.\)_
**Corollary 3.3**.: _If an inverse sequence \(S\) (in an abelian category \(\mathcal{A}\) with a generator, small direct sums and exact small products) satisfies the Mittag-Leffler condition, then \(\lim^{1}S=0.\)_
**Corollary 3.4**.: _If \(\lim^{1}S=0\) and \(\lambda\) is a limit ordinal of countable cofinality, then \(S\) is \(\lambda\)-complete._
## 4. A description of the class of local inverse sequences
If we have a short exact sequence of inverse sequences \(N\rightarrowtail S^{\prime}\twoheadrightarrow S\) such that \(N\) is null, we say that \(S^{\prime}\) is a _null extension_ of \(S.\) We think about null extensions of inverse sequences as analogues of central extensions of groups.
**Theorem 4.1**.: _The class of local inverse sequences in \(\mathcal{A}\) is the least class containing the zero inverse sequence and closed with respect to small limits and null-extensions._
Proof.: Corollary 2.4 says that the class of local inverse sequences is closed under limits. It is obviously closed with respect to null extensions. Let us prove that it is the least class satisfying these properties. For this we fix a class \(\mathcal{L}\) satisfying these properties and prove that any local inverse sequence \(S\) is in \(\mathcal{L}.\) Note that, since \(\mathcal{L}\) is closed with respect to null extensions and contains the zero inverse sequence, all null inverse sequences are in \(\mathcal{L}.\) Also note that an isomorphism \(S\cong S^{\prime}\) can be treated as an extension with zero kernel. So if \(S\cong S^{\prime}\) and \(S\in\mathcal{L},\) then \(S^{\prime}\in\mathcal{L}.\)
Assume that \(S\) is a local inverse sequence and prove that \(S\in\mathcal{L}\). By Corollary 1.3 we have \(S=S^{\operatorname{len}(S)}.\) Then it is sufficient to prove by transfinite induction that for any ordinal \(\alpha\) we have \(S^{\alpha}\in\mathcal{L}.\) For \(\alpha=0\) it is obvious. If \(S^{\alpha}\in\mathcal{L}\) then the short exact sequence (1.9) implies that \(S^{\alpha+1}\in\mathcal{L}.\)
Let \(\lambda\) be a limit ordinal and assume that for any \(\alpha<\lambda\) we have \(S^{\alpha}\in\mathcal{L}.\) The rest of the proof is devoted to proving that \(S^{\lambda}\in\mathcal{L}.\) Since \(\mathcal{L}\) is closed under small limits, we get \(\widehat{S}^{\lambda}\in\mathcal{L}.\) Recall that we have a monomorphism \(S^{\lambda}\rightarrowtail\widehat{S}^{\lambda}\) and set \(C^{\lambda}:=\operatorname{Coker}(S^{\lambda}\rightarrowtail\widehat{S}^{\lambda}).\) For any ordinal \(\beta\) we define a decomposition of this monomorphism into two monomorphisms
\[S^{\lambda}\rightarrowtail J^{\beta}\rightarrowtail\widehat{S}^{\lambda}, \tag{4.1}\]
where \(J^{\beta}\) is defined as the pullback
\[\begin{CD}J^{\beta}@>>>\widehat{S}^{\lambda}\\ @VVV@VVV\\ I^{\beta}(C^{\lambda})@>>>C^{\lambda}\end{CD} \tag{4.2}\]
By Theorem 3.1 we have \(\lim C^{\lambda}=0.\) By Corollary 1.3 we obtain \(I^{\operatorname{len}(C^{\lambda})}(C^{\lambda})=0.\) Therefore
\[S^{\lambda}=J^{\operatorname{len}(C^{\lambda})}. \tag{4.3}\]
It follows that in order to complete the proof it is sufficient to check that \(J^{\beta}\in\mathcal{L}\) for any \(\beta.\) Let us do it.
For \(\beta=0\) we have \(J^{0}=\widehat{S}^{\lambda}\in\mathcal{L}.\) Assume that \(J^{\beta}\in\mathcal{L}\) and prove that \(J^{\beta+1}\in\mathcal{L}.\) Since (4.2) is a pullback, the kernels of the vertical arrows are isomorphic
\[\operatorname{Ker}(J^{\beta}\twoheadrightarrow I^{\beta}(C^{\lambda})) \cong S^{\lambda}. \tag{4.4}\]
It follows that \(\operatorname{Ker}(J^{\beta}\to I^{\beta}(C^{\lambda}))\cong\operatorname{Ker}(J^{\beta+1}\to I^{\beta+1}(C^{\lambda})).\) Therefore, the snake lemma yields an isomorphism
\[\operatorname{Coker}\big(J^{\beta+1}\rightarrowtail J^{\beta}\big)\cong\operatorname{Coker}\big(I^{\beta+1}(C^{\lambda})\rightarrowtail I^{\beta}(C^{\lambda})\big)=(I^{\beta}(C^{\lambda}))^{1}. \tag{4.5}\]
Since \((I^{\beta}(C^{\lambda}))^{1}\) is null, we have \((I^{\beta}(C^{\lambda}))^{1}\in\mathcal{L}.\) By the assumption \(J^{\beta}\in\mathcal{L}.\) Therefore \(J^{\beta+1}\) is a kernel of a morphism between two objects of \(\mathcal{L}.\) Since the kernel is a limit, we obtain \(J^{\beta+1}\in\mathcal{L}.\)
Now assume that for a limit ordinal \(\mu\) and any \(\beta<\mu\) we have \(J^{\beta}\in\mathcal{L}.\) Since limits commute with limits, the equality \(I^{\mu}(C^{\lambda})=\lim_{\beta<\mu}I^{\beta}(C^{\lambda})\) implies the isomorphism \(J^{\mu}\cong\lim_{\beta<\mu}J^{\beta}\). Therefore \(J^{\mu}\in\mathcal{L}.\) So we proved that \(J^{\beta}\in\mathcal{L}\) for any ordinal \(\beta.\)
## 5. Examples of inverse sequences of abelian groups
### Inverse sequences defined by one abelian group
Further we fix a prime \(p.\) For an abelian group \(A\) and an ordinal \(\alpha\) we denote by \(p^{\alpha}A\) a subgroup of \(A\) defined so that \(p^{0}A=A,\)\(p^{\alpha+1}A=p\cdot(p^{\alpha}A)\) and \(p^{\lambda}A=\bigcap_{\alpha<\lambda}p^{\alpha}A\) for a limit ordinal \(\lambda.\) If \(A\) is a \(p\)-group, the group \(p^{\omega\alpha}A\) is known as the \(\alpha\)-th Ulm subgroup. The least \(\alpha\) such that \(p^{\alpha}A=p^{\alpha+1}A\) is called the \(p\)-length of \(A\) and denoted by \(\operatorname{len}_{p}(A)\). We will also use the notation \(p^{\infty}A=p^{\operatorname{len}_{p}(A)}A.\) It is known that for any ordinal \(\alpha\) there exists an abelian \(p\)-group \(A\), whose length is equal to \(\alpha\)[15, SS11, Exercise 43], [11, Ch.11, Example 3.2]
\[\operatorname{len}_{p}(A)=\alpha. \tag{5.1}\]
Note that for any homomorphism \(f:A\to B\) there is an inclusion \(f(p^{\alpha}A)\subseteq p^{\alpha}B.\) However, in general \(f(p^{\alpha}A)\neq p^{\alpha}B,\) even if \(f\) is an epimorphism.
Consider an inverse sequence \(S(A)\) such that \(S(A)_{i}=A\) and \(f_{i}(a)=pa.\)
\[S(A):\qquad\quad A\xleftarrow{p^{\cdot}}A\xleftarrow{p^{\cdot}}A\xleftarrow{ p^{\cdot}}\dots \tag{5.2}\]
It is easy to see that
\[I^{\alpha}(S(A))=S(p^{\alpha}A). \tag{5.3}\]
Therefore the length of the image filtration is equal to the \(p\)-length of \(A.\)
\[\operatorname{len}(S(A))=\operatorname{len}_{p}(A). \tag{5.4}\]
It follows that for any ordinal \(\alpha\) there exists an inverse sequence of abelian groups \(S\) such that \(\operatorname{len}(S)=\alpha.\) In the next subsection we will construct an explicit abelian group \(A\) such that \(\operatorname{len}(S(A))=\alpha\) and \(S(A)\) is local. Corollary 1.3 implies that
\[\lim S(A)=0\quad\quad\Leftrightarrow\quad\quad p^{\infty}A=0. \tag{5.5}\]
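For illustration (a standard computation): the inverse sequence \(S(\mathbb{Z})\) is isomorphic to the sequence of subgroups \(p^{i}\mathbb{Z}\subseteq\mathbb{Z}\) with inclusions, and applying the limit to the short exact sequences \(p^{i}\mathbb{Z}\rightarrowtail\mathbb{Z}\twoheadrightarrow\mathbb{Z}/p^{i}\mathbb{Z}\) gives the exact sequence
\[0\longrightarrow\lim S(\mathbb{Z})\longrightarrow\mathbb{Z}\longrightarrow\mathbb{Z}_{p}\longrightarrow\lim{}^{1}S(\mathbb{Z})\longrightarrow 0,\]
so \(\lim S(\mathbb{Z})=0\) but \(\lim^{1}S(\mathbb{Z})\cong\mathbb{Z}_{p}/\mathbb{Z}\neq 0.\) In contrast, \(S(\mathbb{Z}_{p})\) is local: \(\lim S(\mathbb{Z}_{p})=\bigcap_{i}p^{i}\mathbb{Z}_{p}=0,\) and \(\lim^{1}S(\mathbb{Z}_{p})=0\) since \(p^{i}\mathbb{Z}_{p}\) is a complete Hausdorff filtration of \(\mathbb{Z}_{p}.\)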
Further in this section we construct some examples of abelian groups \(A\) such that inverse sequences \(S(A)\) satisfy certain properties that confirm the reasonableness of Theorem 3.1.
### A local inverse sequence with a long image filtration
In this subsection, for any ordinal \(\alpha\), we construct an abelian group \(E_{\alpha}\) such that \(S(E_{\alpha})\) is local and \(\operatorname{len}(S(E_{\alpha}))=\alpha.\) The group \(E_{\alpha}\) is a variant of Walker's group [6], [11, Ch.11, Example 3.2]. Further in this subsection we fix an ordinal \(\alpha\).
Denote by \(\alpha^{\diamond}\) the set of all finite increasing sequences of ordinals \((\alpha_{1},\ldots,\alpha_{n})\) such that \(\alpha_{1}<\cdots<\alpha_{n}<\alpha,\)\(n\geq 1.\) We endow \(\alpha^{\diamond}\) with the deg-lex order: \((\alpha_{1},\ldots,\alpha_{n})<(\alpha_{1}^{\prime},\ldots,\alpha_{n^{\prime}}^{\prime})\) if and only if either \(n<n^{\prime},\) or \(n=n^{\prime}\) and there exists \(1\leq m\leq n\) such that \(\alpha_{m}<\alpha_{m}^{\prime}\) and \(\alpha_{i}=\alpha_{i}^{\prime}\) for any \(1\leq i<m.\) It is easy to check that this is a well order.
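For instance, for \(\alpha=3\) the deg-lex order on \(\alpha^{\diamond}\) reads
\[(0)<(1)<(2)<(0,1)<(0,2)<(1,2)<(0,1,2).\]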
Consider the direct product and the direct sum of the group of \(p\)-adic integers \(\mathbb{Z}_{p}\) indexed by \(\alpha^{\diamond}\)
\[P_{\alpha}:=\mathbb{Z}_{p}^{\Pi\,\alpha^{\diamond}},\quad\quad\quad P_{\alpha}^{\prime}:=\mathbb{Z}_{p}^{\oplus\,\alpha^{\diamond}}. \tag{5.6}\]
We will treat the abelian groups \(P_{\alpha}\) and \(P_{\alpha}^{\prime}\) as \(\mathbb{Z}_{p}\)-modules. Note that \(P_{\alpha}^{\prime}\) is a free \(\mathbb{Z}_{p}\)-module.
We denote by \((e_{\sigma})_{\sigma\in\alpha^{\diamond}}\) the standard basis of \(P_{\alpha}^{\prime}\) over \(\mathbb{Z}_{p}.\) Consider the \(\mathbb{Z}_{p}\)-submodule \(R_{\alpha}\subseteq P_{\alpha}^{\prime}\) generated by the elements of the form
\[r_{\alpha_{1}}:=pe_{\alpha_{1}},\quad\quad\quad r_{\alpha_{1},\ldots,\alpha_{n }}:=pe_{\alpha_{1},\ldots,\alpha_{n}}-e_{\alpha_{2},\ldots,\alpha_{n}},\quad \quad\quad n\geq 2, \tag{5.7}\]
and set
\[D_{\alpha}:=P_{\alpha}/R_{\alpha},\quad\quad D_{\alpha}^{\prime}:=P_{\alpha}^ {\prime}/R_{\alpha},\quad\quad E_{\alpha}:=D_{\alpha}/p^{\alpha}D_{\alpha}. \tag{5.8}\]
**Theorem 5.1**.: _For any ordinal \(\alpha\) the inverse sequence \(S(E_{\alpha})\) is local and_
\[\operatorname{len}(S(E_{\alpha}))=\alpha. \tag{5.9}\]
The rest of this subsection is devoted to the proof of the theorem. We will need some additional constructions and lemmas for this.
For \(t\in P_{\alpha}\) the elements \(\operatorname{pr}_{\sigma}(t),\) where \(\sigma\in\alpha^{\diamond},\) will be called coordinates of \(t.\) The leading index \(\operatorname{li}(t)\) is the maximal \(\sigma\) (with respect to the deg-lex order) such that \(\operatorname{pr}_{\sigma}(t)\neq 0.\) The corresponding coefficient is called the leading coordinate of \(t.\)
For any ordinal \(\beta\) we denote by \(P_{[\beta,\alpha)}^{\prime}\) the free \(\mathbb{Z}_{p}\)-submodule of \(P_{\alpha}^{\prime}\) generated by the elements \(e_{\alpha_{1},\ldots,\alpha_{n}}\) such that \(\alpha_{1}\geq\beta.\) In other words, \(P_{[\beta,\alpha)}^{\prime}\) is the submodule of \(P_{\alpha}^{\prime}\) consisting of elements whose coordinates with indexes \((\alpha_{1},\ldots,\alpha_{n})\) such that \(\alpha_{1}<\beta\) are trivial.
**Lemma 5.2**.: _The leading coordinate of an element of \(R_{\alpha}\setminus\{0\}\) is in \(p\mathbb{Z}_{p}\setminus\{0\}.\)_
Proof.: Any element \(r\in R_{\alpha}\setminus\{0\}\) can be presented as \(\sum_{i=1}^{n}x_{i}r_{\sigma_{i}}\) such that \(n\geq 1,\) \(x_{i}\in\mathbb{Z}_{p}\setminus\{0\}\) and \(\sigma_{1}<\cdots<\sigma_{n}.\) Then \(\operatorname{li}(r)=\sigma_{n}\) and the leading coordinate is equal to \(px_{n}\in p\mathbb{Z}_{p}\setminus\{0\}.\)
**Lemma 5.3**.: _Any element \(t\in P_{\alpha}^{\prime}\) can be uniquely presented as_
\[t=t_{0}+r \tag{5.10}\]
_such that \(r\in R_{\alpha},t_{0}\in P_{\alpha}^{\prime}\) and all coordinates of \(t_{0}\) are in \(\{0,\ldots,p-1\}.\) Moreover, for this presentation we have \(\operatorname{li}(t_{0}),\operatorname{li}(r)\leq\operatorname{li}(t)\) and, if \(t\in P_{[\beta,\alpha)}^{\prime},\) then \(t_{0}\in P_{[\beta,\alpha)}^{\prime}.\)_
Proof.: First we prove the existence of a presentation \(t=t_{0}+r\) such that \(r\in R_{\alpha},\) \(t_{0}\in P^{\prime}_{\alpha},\) all coordinates of \(t_{0}\) are in \(\{0,\ldots,p-1\}\) and \(\operatorname{li}(t_{0}),\operatorname{li}(r)\leq\operatorname{li}(t)\). Assume the contrary: there exists \(t\in P^{\prime}_{\alpha}\) with no such presentation. Choose such a \(t\) so that \(\operatorname{li}(t)\) is the least possible (this is possible because \(\alpha^{\diamond}\) is well ordered). Then \(t=xe_{\operatorname{li}(t)}+\tilde{t}\), where \(x\in\mathbb{Z}_{p}\setminus\{0\}\) and either \(\tilde{t}=0\), or \(\tilde{t}\in P^{\prime}_{\alpha}\) with \(\operatorname{li}(\tilde{t})<\operatorname{li}(t).\) Note that if \(t\in P^{\prime}_{[\beta,\alpha)},\) then \(\tilde{t}\in P^{\prime}_{[\beta,\alpha)}.\) Let us present \(x\) as \(x=x_{0}+p\tilde{x}\), where \(x_{0}\in\{0,\ldots,p-1\}\) and \(\tilde{x}\in\mathbb{Z}_{p}.\) Then \(t=x_{0}e_{\operatorname{li}(t)}+p\tilde{x}e_{\operatorname{li}(t)}+\tilde{t}.\) If \(\operatorname{li}(t)=(\alpha_{1}),\) then we set \(r:=p\tilde{x}e_{\alpha_{1}}\in R_{\alpha}\) and \(\tilde{t}^{\prime}:=\tilde{t}.\) If \(\operatorname{li}(t)=(\alpha_{1},\ldots,\alpha_{n})\) for \(n\geq 2,\) then we set \(r:=p\tilde{x}e_{\alpha_{1},\ldots,\alpha_{n}}-\tilde{x}e_{\alpha_{2},\ldots,\alpha_{n}}\in R_{\alpha}\) and \(\tilde{t}^{\prime}:=\tilde{t}+\tilde{x}e_{\alpha_{2},\ldots,\alpha_{n}}.\) In both cases we obtain \(t=x_{0}e_{\operatorname{li}(t)}+\tilde{t}^{\prime}+r,\) where \(r\in R_{\alpha}\) and \(\tilde{t}^{\prime}\in P^{\prime}_{\alpha}\) with \(\operatorname{li}(\tilde{t}^{\prime})<\operatorname{li}(t)\) and \(\operatorname{li}(r)\leq\operatorname{li}(t).\) Note that, if \(t\in P^{\prime}_{[\beta,\alpha)},\) then \(\tilde{t}^{\prime}\in P^{\prime}_{[\beta,\alpha)}.\) By the minimality of \(\operatorname{li}(t)\) we can present \(\tilde{t}^{\prime}\) as \(\tilde{t}^{\prime}=\tilde{t}_{0}+\tilde{r},\) where \(\tilde{r}\in R_{\alpha},\) \(\tilde{t}_{0}\in P^{\prime}_{\alpha},\) \(\operatorname{li}(\tilde{r}),\operatorname{li}(\tilde{t}_{0})\leq\operatorname{li}(\tilde{t}^{\prime})\) and all coordinates of \(\tilde{t}_{0}\) are from \(\{0,\ldots,p-1\}.\) Therefore \(t=(x_{0}e_{\operatorname{li}(t)}+\tilde{t}_{0})+(r+\tilde{r}).\) We claim that this presentation satisfies all the assumptions. Indeed, since \(\operatorname{li}(\tilde{t}_{0})<\operatorname{li}(t),\) the coordinates of \(x_{0}e_{\operatorname{li}(t)}+\tilde{t}_{0}\) are in \(\{0,\ldots,p-1\}.\) The other conditions are obvious. This is a contradiction. So we proved the existence.
Let us prove the uniqueness. If we have two presentations \(t_{0}+r=t^{\prime}_{0}+r^{\prime}\) such that \(r,r^{\prime}\in R_{\alpha}\) and all coordinates of \(t_{0},t^{\prime}_{0}\) are in \(\{0,\ldots,p-1\},\) then \(t_{0}-t^{\prime}_{0}=r^{\prime}-r\in R_{\alpha}\) and all coordinates of \(t_{0}-t^{\prime}_{0}\) are in \(\{-(p-1),\ldots,p-1\}.\) Using Lemma 5.2 and the fact that the sets \(\{-(p-1),\ldots,p-1\}\) and \(p\mathbb{Z}_{p}\setminus\{0\}\) do not intersect, we obtain \(t_{0}=t^{\prime}_{0}.\)
**Lemma 5.4**.: _For any ordinal \(\beta\) we have_
\[p^{\beta}D^{\prime}_{\alpha}=(p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}= \operatorname{Im}(P^{\prime}_{[\beta,\alpha)}\to D^{\prime}_{\alpha}). \tag{5.11}\]
Proof.: First we prove that for any ordinal \(\beta\) we have
\[pP^{\prime}_{[\beta,\alpha)}+R_{\alpha}=P^{\prime}_{[\beta+1,\alpha)}+R_{\alpha}. \tag{5.12}\]
In order to prove this equation, we need to check two inclusions \(pP^{\prime}_{[\beta,\alpha)}\subseteq P^{\prime}_{[\beta+1,\alpha)}+R_{\alpha}\) and \(P^{\prime}_{[\beta+1,\alpha)}\subseteq pP^{\prime}_{[\beta,\alpha)}+R_{\alpha}.\) Both of them follow from the fact that \(pe_{\beta}\in R_{\alpha}\) and \(pe_{\beta,\alpha_{2},\ldots,\alpha_{n}}-e_{\alpha_{2},\ldots,\alpha_{n}}\in R _{\alpha}\) for any \(\beta+1\leq\alpha_{2}<\cdots<\alpha_{n}<\alpha,n\geq 2.\)
It is easy to see that \(\bigcap_{\beta<\lambda}P^{\prime}_{[\beta,\alpha)}=P^{\prime}_{[\lambda,\alpha)}\) for any limit ordinal \(\lambda.\) Further we claim that for any limit ordinal \(\lambda\) we have
\[\bigcap_{\beta<\lambda}(P^{\prime}_{[\beta,\alpha)}+R_{\alpha})=P^{\prime}_{[ \lambda,\alpha)}+R_{\alpha}. \tag{5.13}\]
The inclusion \(\supseteq\) is obvious. Let us check the inclusion \(\subseteq.\) Take an element \(t\) from the intersection. By Lemma 5.3 there is a unique presentation \(t=t_{0}+r\) such that \(r\in R_{\alpha},\)\(t_{0}\in P^{\prime}_{\alpha},\)\(\operatorname{li}(t_{0}),\operatorname{li}(r)\leq\operatorname{li}(t)\) and the coordinates of \(t_{0}\) are in \(\{0,\ldots,p-1\}.\) Moreover, since \(t\in P^{\prime}_{[\beta,\alpha)}+R_{\alpha},\) the uniqueness in Lemma 5.3 implies \(t_{0}\in P^{\prime}_{[\beta,\alpha)}\) for any \(\beta<\lambda.\) Therefore, \(t_{0}\in P^{\prime}_{[\lambda,\alpha)}\) and \(t=t_{0}+r\in P^{\prime}_{[\lambda,\alpha)}+R_{\alpha}.\) The assertion follows.
Let us prove that \(p^{\beta}D^{\prime}_{\alpha}=\operatorname{Im}(P^{\prime}_{[\beta,\alpha)} \to D^{\prime}_{\alpha}).\) The lattice of submodules of \(D^{\prime}_{\alpha}\) is isomorphic to the lattice of submodules of \(P^{\prime}_{\alpha}\) containing \(R_{\alpha}.\) The isomorphism is given by taking the preimage. Comparing the definition of \(p^{\beta}D^{\prime}_{\alpha}\) and the formulas (5.12), (5.13), we see that \(p^{\beta}D^{\prime}_{\alpha}\) corresponds to \(P^{\prime}_{[\beta,\alpha)}+R_{\alpha}.\)
Now let us prove by induction that \(p^{\beta}D^{\prime}_{\alpha}=(p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}.\) For \(\beta=0\) it is obvious. Assume that \(p^{\beta}D^{\prime}_{\alpha}=(p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}\) and prove that \(p^{\beta+1}D^{\prime}_{\alpha}=(p^{\beta+1}D_{\alpha})\cap D^{\prime}_{\alpha}.\) The inclusion \(\subseteq\) is obvious. Let us prove \(\supseteq.\) Take \(b\in(p^{\beta+1}D_{\alpha})\cap D^{\prime}_{\alpha}.\) Then \(b=p\tilde{b}\) for some \(\tilde{b}\in p^{\beta}D_{\alpha}\). Take a preimage \(\tilde{t}\in P\) of \(\tilde{b}\). Then \(p\tilde{t}\) is a preimage of \(b\in D^{\prime}_{\alpha}\), so \(p\tilde{t}\in P^{\prime}_{\alpha}\) and therefore \(\tilde{t}\in P^{\prime}_{\alpha}\). It follows that \(\tilde{b}\in D^{\prime}_{\alpha}\). Hence \(\tilde{b}\in(p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}=p^{\beta}D^{\prime}_{\alpha}\) by the induction hypothesis, and so \(b=p\tilde{b}\in p^{\beta+1}D^{\prime}_{\alpha}\). So we proved \(p^{\beta+1}D^{\prime}_{\alpha}=(p^{\beta+1}D_{\alpha})\cap D^{\prime}_{\alpha}\). Now assume that \(\lambda\) is a limit ordinal and that \(p^{\beta}D^{\prime}_{\alpha}=(p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}\) holds for all \(\beta<\lambda\). Since intersecting with \(D^{\prime}_{\alpha}\) commutes with taking the intersection over \(\beta<\lambda\), we obtain \(p^{\lambda}D^{\prime}_{\alpha}=(p^{\lambda}D_{\alpha})\cap D^{\prime}_{\alpha}\).
**Remark 5.5**.: Lemma 5.4 implies that \((p^{\alpha}D_{\alpha})\cap D^{\prime}_{\alpha}=0\). However, in general \(p^{\alpha}D_{\alpha}\neq 0\). Moreover, we claim that
\[p^{\infty}D_{\omega}\neq 0. \tag{5.14}\]
Let us give a sketch of a proof here. Consider the group \(A=\mathbb{Z}^{\omega}/\mathbb{Z}^{\oplus\omega}\). It is easy to see that the element \((p,p^{2},p^{3},\dots)\) of \(A\) lies in \(p^{\infty}A\). Hence \(p^{\infty}A\neq 0\). Consider a map \(\varphi:A\to D_{\omega}\) defined by \(\varphi(n_{0},n_{1},\dots)=\sum_{i\in\omega}n_{i}p^{i+1}e_{0,1,\dots,i}\). Here each summand of the "infinite sum" is equal to zero, \(p^{i+1}e_{0,1,\dots,i}=0\), but the whole "infinite sum" is not zero. It is easy to check that \(\varphi\) is a well-defined monomorphism. Therefore \(p^{\infty}D_{\omega}\neq 0\).
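One way to see that \((p,p^{2},p^{3},\dots)\) lies in \(p^{\infty}A\) (under the above description of \(A\) as the product modulo the direct sum) is to note that modulo \(\mathbb{Z}^{\oplus\omega}\) we have
\[(p,p^{2},p^{3},\dots)=p\cdot(0,p,p^{2},\dots)+(p,0,0,\dots)\equiv p\cdot(0,p,p^{2},\dots),\]
so the class of \((p,p^{2},p^{3},\dots)\) equals \(p\) times the class of its shift. Since the shifted sequences are of the same form, the subgroup of \(A\) generated by all such shifts is \(p\)-divisible, and a \(p\)-divisible subgroup is contained in \(p^{\beta}A\) for every ordinal \(\beta\).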
Proof of Theorem 5.1.: First we prove that \(S(E_{\alpha})\) is local. By the definition of \(E_{\alpha}\) we have \(p^{\alpha}E_{\alpha}=0\). Then (5.5) implies that \(\lim S(E_{\alpha})=0\). The inverse sequence \(S(\mathbb{Z}_{p})\) is local, because \(\operatorname{len}(S(\mathbb{Z}_{p}))=\omega\), \(I^{\omega}(S(\mathbb{Z}_{p}))=0\), and \(S(\mathbb{Z}_{p})\to\widehat{\mathcal{S}}^{\omega}(\mathbb{Z}_{p})\) is an isomorphism (Theorem 3.1, Proposition 1.2). Since the class of local inverse sequences is closed with respect to small limits, we obtain that \(S(P_{\alpha})\) is also local. Using that there is an epimorphism \(P_{\alpha}\twoheadrightarrow E_{\alpha}\), we obtain \(\lim^{1}S(E_{\alpha})=0\). Now we prove that \(\operatorname{len}(S(E_{\alpha}))=\alpha\). Equivalently, we need to prove that \(\operatorname{len}_{p}(E_{\alpha})=\alpha\). Since \(p^{\alpha}E_{\alpha}=0\), we get \(\operatorname{len}_{p}(E_{\alpha})\leq\alpha\). Now we need to prove that \(p^{\beta}E_{\alpha}\neq 0\) for any \(\beta<\alpha\). It is sufficient to prove that \(p^{\beta}D_{\alpha}\neq p^{\alpha}D_{\alpha}\) for any \(\beta<\alpha\). By Lemma 5.4 we have \(e_{\beta}+R_{\alpha}\in(p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}\) and \((p^{\alpha}D_{\alpha})\cap D^{\prime}_{\alpha}=0\). Lemma 5.3 implies that \(e_{\beta}\notin R_{\alpha}\). Therefore \((p^{\beta}D_{\alpha})\cap D^{\prime}_{\alpha}\neq(p^{\alpha}D_{\alpha})\cap D^{\prime}_{\alpha}\).
### A local inverse sequence, which is not complete with respect to a regular cardinal
In this subsection for any regular uncountable cardinal \(\kappa\) we will construct an abelian group \(A\) such that \(S(A)\) is local and not \(\kappa\)-complete.
Let \(\kappa\) be a regular cardinal. For a family of abelian groups \((A_{x})_{x\in X}\) we define its \(\kappa\)-supported product as the subgroup \(\prod_{x\in X}^{(\kappa)}A_{x}\) of the product \(\prod_{x\in X}A_{x}\) consisting of elements whose support has cardinality \(<\kappa\). In more categorical terms, we can define the \(\kappa\)-supported product as the \(\kappa\)-filtered colimit of the products taken over all subsets of \(X\) of cardinality \(<\kappa\)
\[\prod_{x\in X}^{(\kappa)}A_{x}=\operatorname*{colim}_{K\subseteq X,\,|K|<\kappa}\Bigl{(}\prod_{x\in K}A_{x}\Bigr{)}. \tag{5.15}\]
For example, the direct sum is the \(\aleph_{0}\)-supported product.
**Lemma 5.6**.: _The \(\kappa\)-supported product satisfies the following elementary properties._
1. _For any ordinal_ \(\alpha\) _the following holds_ \[p^{\alpha}\left(\prod_{x\in X}^{(\kappa)}A_{x}\right)=\prod_{x\in X}^{(\kappa)} (p^{\alpha}A_{x}).\]
2. _If_ \((A_{x,y})_{(x,y)\in X\times Y}\) _is a family indexed by product_ \(X\times Y\)_, then there is an isomorphism_ \[\prod_{x\in X}^{(\kappa)}\left(\prod_{y\in Y}^{(\kappa)}A_{x,y}\right)\cong \prod_{(x,y)\in X\times Y}^{(\kappa)}A_{x,y}\cong\prod_{y\in Y}^{(\kappa)} \left(\prod_{x\in X}^{(\kappa)}A_{x,y}\right).\]
3. _If_ \(|Y|<\kappa,\) _then_ \[\prod_{x\in X}^{(\kappa)}\left(\prod_{y\in Y}A_{x,y}\right)\cong\prod_{y\in Y} \left(\prod_{x\in X}^{(\kappa)}A_{x,y}\right).\]
4. _If_ \(J\) _is a small category such that_ \(|\mathrm{Ob}(J)|,|\mathrm{Mor}(J)|<\kappa,\) _then for any family of functors to the category of abelian groups_ \((A_{x}:J\to\mathrm{Ab})_{x\in X}\) _there is a natural isomorphism_ \[\prod_{x\in X}^{(\kappa)}\left(\lim_{J}A_{x}\right)\cong\lim_{J}\left(\prod_{x\in X}^{(\kappa)}A_{x}\right).\]
5. _If_ \(\kappa\) _is an uncountable cardinal and_ \((S_{x})_{x\in X}\) _is a family of inverse sequences, then there is a natural isomorphism_ \[\lim^{1}\left(\prod_{x\in X}^{(\kappa)}S_{x}\right)\cong\prod_{x\in X}^{(\kappa)}\left(\lim{}^{1}S_{x}\right).\]
Proof.: (1). Straightforward proof by transfinite induction.
(2). Follows from the fact that \(\kappa\) is regular, and hence, a union of sets of cardinality \(<\kappa\) indexed by a set of cardinality \(<\kappa\) has cardinality \(<\kappa.\)
(3). Follows from (2).
(4). It follows from (3) and the fact that \(\lim_{J}\) can be presented as the equalizer of two natural maps \(\prod_{j\in\mathrm{Ob}(J)}A_{x}(j)\to\prod_{\alpha\in\mathrm{Mor}(J)}A_{x} \big{(}\mathrm{cod}(\alpha)\big{)}.\)
(5). It follows from (3) and the exact sequence (1.1).
\(\kappa\)-supported products of inverse sequences are defined component-wise.
**Corollary 5.7**.: _Let \(\kappa\) be a regular uncountable cardinal and \((S_{x})_{x\in X}\) be a family of local inverse sequences of abelian groups. Then the \(\kappa\)-supported product \(\prod_{x\in X}^{(\kappa)}S_{x}\) is local._
**Theorem 5.8**.: _Let \(\kappa\) be a regular uncountable cardinal, \(A=\prod_{\alpha<\kappa}^{(\kappa)}E_{\alpha+1}\) and \(S=S(A)\). Then \(S\) is local, not \(\kappa\)-complete and \(\mathrm{len}(S)=\kappa.\)_
Proof.: In the proof we use the simplified notations \(E_{\alpha}^{\beta}=E_{\alpha}/p^{\beta}E_{\alpha}\) and \(A^{\beta}=A/p^{\beta}A.\) Corollary 5.7 and Theorem 5.1 imply that \(S(A)\) is local. Lemma 5.6(1) implies that for any ordinal \(\beta\) we have \(p^{\beta}A=\prod_{\alpha<\kappa}^{(\kappa)}p^{\beta}E_{\alpha+1}\) and \(A^{\beta}=\prod_{\alpha<\kappa}^{(\kappa)}E_{\alpha+1}^{\beta}\). Since \(\operatorname{len}_{p}\big{(}E_{\alpha+1}\big{)}=\alpha+1,\) we obtain that \(\operatorname{len}_{p}(A)=\kappa.\) We need to prove that \(A\to\widehat{A}^{\kappa}\) is not surjective. Any element of \(A\) is a family \((e_{\alpha})_{\alpha<\kappa},\) where \(e_{\alpha}\in E_{\alpha+1}\) and \(|\{\alpha\mid e_{\alpha}\neq 0\}|<\kappa.\) Using the isomorphism
\[\widehat{A}^{\kappa}\cong\lim_{\beta<\kappa}\left(\prod_{\alpha<\kappa}^{( \kappa)}E_{\alpha+1}^{\beta}\right) \tag{5.16}\]
we can present elements of \(\widehat{A}^{\kappa}\) as families \((f_{\alpha,\beta})_{\alpha,\beta<\kappa}\) such that
1. \(f_{\alpha,\beta}\in E_{\alpha+1}^{\beta};\)
2. for any \(\beta^{\prime}<\beta<\kappa\) and any \(\alpha\) the map \(E_{\alpha+1}^{\beta}\to E_{\alpha+1}^{\beta^{\prime}}\) sends \(f_{\alpha,\beta}\) to \(f_{\alpha,\beta^{\prime}};\)
3. for any \(\beta<\kappa\) we have \(|\{\alpha\mid f_{\alpha,\beta}\neq 0\}|<\kappa.\)
Then the map \(A\to\widehat{A}^{\kappa}\) sends \((e_{\alpha})_{\alpha<\kappa}\) to \((e_{\alpha}+p^{\beta}E_{\alpha+1})_{\alpha,\beta}.\) Using this description, it is easy to see that any element \((f_{\alpha,\beta})\) in the image of \(A\to\widehat{A}^{\kappa}\) satisfies the following property
4. \(|\{\alpha\mid\exists\beta:f_{\alpha,\beta}\neq 0\}|<\kappa.\)
Now we construct an element of \(\widehat{A}^{\kappa}\) which is not in the image of \(A\to\widehat{A}^{\kappa}.\) For each \(\alpha<\kappa\) take a non-zero element \(x_{\alpha}\in p^{\alpha}E_{\alpha+1}\) (which exists because \(\operatorname{len}_{p}(E_{\alpha+1})=\alpha+1\)) and consider the family \((g_{\alpha,\beta})\) with \(g_{\alpha,\beta}=x_{\alpha}+p^{\beta}E_{\alpha+1}.\) Note that \(g_{\alpha,\beta}=0\) whenever \(\beta\leq\alpha,\) so for a fixed \(\beta\) the set \(\{\alpha\mid g_{\alpha,\beta}\neq 0\}\) has cardinality \(<\kappa.\) The family \((g_{\alpha,\beta})\) lies in \(\widehat{A}^{\kappa}\) because it satisfies (i), (ii), (iii). But it does not lie in the image of \(A\to\widehat{A}^{\kappa}\) because \(\{\alpha\mid\exists\beta:g_{\alpha,\beta}\neq 0\}=\kappa,\) and hence it does not satisfy (iv).
**Corollary 5.9**.: _If \(A=\prod_{\alpha<\aleph_{1}}^{(\aleph_{1})}E_{\alpha+1},\) then \(S(A)\) is local and \(\lambda\)-complete for any limit ordinal \(\lambda\neq\aleph_{1},\) but not \(\aleph_{1}\)-complete._
|
2305.17813 | Meerkat: A framework for Dynamic Graph Algorithms on GPUs | Graph algorithms are challenging to implement due to their varying topology
and irregular access patterns. Real-world graphs are dynamic in nature and
routinely undergo edge and vertex additions, as well as, deletions. Typical
examples of dynamic graphs are social networks, collaboration networks, and
road networks. Applying static algorithms repeatedly on dynamic graphs is
inefficient. Unfortunately, we know little about how to efficiently process
dynamic graphs on massively parallel architectures such as GPUs. Existing
approaches to represent and process dynamic graphs are either not general or
inefficient. In this work, we propose a library-based framework for dynamic
graph algorithms that proposes a GPU-tailored graph representation and exploits
the warp-cooperative execution model. The library, named Meerkat, builds upon a
recently proposed dynamic graph representation on GPUs. This representation
exploits a hashtable-based mechanism to store a vertex's neighborhood. Meerkat
also enables fast iteration through a group of vertices, such as the whole set
of vertices or the neighbors of a vertex. Based on the efficient iterative
patterns encoded in Meerkat, we implement dynamic versions of the popular graph
algorithms such as breadth-first search, single-source shortest paths, triangle
counting, weakly connected components, and PageRank. Compared to the
state-of-the-art dynamic graph analytics framework Hornet, Meerkat is
$12.6\times$, $12.94\times$, and $6.1\times$ faster, for query, insert, and
delete operations, respectively. Using a variety of real-world graphs, we
observe that Meerkat significantly improves the efficiency of the underlying
dynamic graph algorithm. Meerkat performs $1.17\times$ for BFS, $1.32\times$
for SSSP, $1.74\times$ for PageRank, and $6.08\times$ for WCC, better than
Hornet on average. | Kevin Jude Concessao, Unnikrishnan Cheramangalath, MJ Ricky Dev, Rupesh Nasre | 2023-05-28T21:10:31Z | http://arxiv.org/abs/2305.17813v2 | # Meerkat: A Framework for Dynamic Graph Algorithms on GPUs
###### Abstract
Graph algorithms are challenging to implement due to their varying topology and irregular access patterns. Real-world graphs are dynamic in nature and routinely undergo edge and vertex additions, as well as deletions. Typical examples of dynamic graphs are social networks, collaboration networks, and road networks. Applying static algorithms repeatedly on dynamic graphs is inefficient. Further, due to the rapid growth of unstructured and semi-structured data, graph algorithms demand efficient parallel processing. Unfortunately, we know only a little about how to efficiently process dynamic graphs on massively parallel architectures such as GPUs. Existing approaches to represent and process dynamic graphs are either not general or inefficient. In this work, we propose a library-based framework for dynamic graph algorithms that offers a GPU-tailored graph representation and exploits the warp-cooperative execution model. The library, named Meerkat, builds upon a recently proposed dynamic graph representation on GPUs. This representation exploits a hashtable-based mechanism to store a vertex's neighborhood. Meerkat also enables fast iteration through a group of vertices, such as the whole set of vertices or the neighbors of a vertex. We find that these two iteration patterns are common, and optimizing them is crucial for achieving performance. Meerkat supports dynamic edge additions and edge deletions, along with their batched versions. Based on the efficient iterative patterns encoded in Meerkat, we implement dynamic versions of popular graph algorithms such as breadth-first search, single-source shortest paths, triangle counting, weakly connected components, and PageRank. Compared to the state-of-the-art dynamic graph analytics framework Hornet, Meerkat is 12.6\(\times\), 12.94\(\times\), and 6.1\(\times\) faster, for query, insert, and delete operations, respectively. Using a variety of real-world graphs, we observe that Meerkat significantly improves the efficiency of the underlying dynamic graph algorithm. Meerkat performs 1.17\(\times\) better for BFS, 1.32\(\times\) for SSSP, 1.74\(\times\) for PageRank, and 6.08\(\times\) for WCC, compared to Hornet on average.
+
Footnote †: This project is supported by National Supercomputing Mission, India
## 1 Introduction
Real-world graphs undergo structural changes: nodes and edges get deleted, and new nodes and edges are added. Handling dynamic updates poses new challenges compared to a static graph algorithm. Efficient handling of these dynamic changes necessitates (i) how to represent a dynamically changing graph, (ii) how to update only the relevant part of the graph depending upon the underlying algorithm, and (iii)
how to map this update effectively on the underlying hardware. These issues are exacerbated on massively parallel hardware such as GPUs due to SIMD-style execution, the need to exploit on-chip cache for optimal performance, and the nuances of the synchronization protocols needed to deal with hundreds of thousands of threads. Effectively addressing these issues demands new graph representations, a binding of theoretical and systemic graph processing, and tuning of the implementation in a GPU-centric manner. Prior research has proposed multiple graph representations in diff-CSR [28], SlabGraph [7], faimGraph [37], Hornet [10] and cuStinger [19] to maintain the changing graph structure. The SlabGraph framework [7] proposes the _SlabHash_ [5]-based graph data structure and follows a warp-based execution model.
Dynamic graph algorithms can be categorized as (i) incremental wherein nodes and edges are only added, (ii) decremental wherein nodes and edges are only deleted, and (iii) fully dynamic which involves both the incremental and the decremental updates. A few prior works deal primarily with one of these types. For instance, Lacki in [24] proposes a decremental strongly connected components algorithm.
Existing solutions to deal with dynamic graphs are plagued with one of the two issues: they apply to certain types of graphs, or they are inefficient at scale. Thus, the solutions may work well for small-world graphs such as social networks but are expensive on road networks which are characterized by large diameters. Central to solving these issues lie two fundamental questions related to storage and compute: how to represent a dynamic graph and how to enumerate through a set of graph elements (such as vertices). Graph representation is crucial because the optimal representation for static processing quickly goes awry with dynamic updates. Thus, due to dynamic edge addition, memory coalescing on GPUs can be adversely affected, resulting in reduced performance. Similarly, two types of iterators are common in graph processing: through all the current graph vertices, and through the latest neighbors of a vertex (which change across updates). Both these operations are so common that we treat them like primitives, whose performance crucially affects that of the underlying dynamic graph algorithm. Note that unlike in the case of a static graph algorithm which may suffer from load imbalance due to different threads working on vertices having differently-sized neighborhoods, the issue of load imbalance is severe in a dynamic graph algorithm, as the load imbalance itself may vary across structural updates, leading to unpredictable performance results. This makes applying optimizations in a blanket manner difficult for dynamic graphs and demands a more careful custom processing. Such customization allows the techniques to apply to different algorithms as well as to different kinds of updates for the same dynamic graph algorithm.
This paper makes the following contributions:
1. We illustrate mechanisms to represent and manipulate large graphs in GPU memory using a hashtable based data-structure. Our proposed dynamic graph framework Meerkat, makes primitive operations efficient, such as iterating through the current neighbors of a node, iterating through the newly added neighbors of a node, etc.
2. Using the efficient primitives in Meerkat, we demonstrate dynamic versions of popular graph algorithms on GPUs: breadth-first search (BFS), single source shortest paths (SSSP), triangle counting, PageRank, and weakly connected components (WCC). Apart from the common patterns among these algorithms, we highlight their differences and how to efficiently map those for GPU processing.
3. We qualitatively and quantitatively analyze the efficiency of our proposed techniques implemented in Meerkat using a suite of large real-world graphs and four dynamic algorithms (namely, BFS, SSSP, PageRank, and Triangle Counting) and one incremental-only algorithm (namely, WCC). Meerkat eases programming the dynamic algorithms and readily handles both the bulk and the small batch updates to the graph object. We illustrate that the dynamic algorithms built on Meerkat significantly outperform their static counterparts.
4. Compared to the state-of-the-art dynamic graph analytics framework Hornet, Meerkat is 12.6\(\times\) faster for query, 12.94\(\times\) faster for bulk insert, and 6.1\(\times\) faster for bulk delete operations. Meerkat performs 6.08\(\times\) better for weakly connected components, 1.17\(\times\) for BFS, 1.32\(\times\) for SSSP, and 1.74\(\times\) for PageRank, relative to Hornet on average.
## 2 Motivation
Awad et al. [7] propose a dynamic graph data structure (which we shall refer to as SlabGraph) that uses the SlabHash data structure [6] for maintaining the vertex adjacencies. SlabGraph exploits a concurrent hashtable per vertex to store adjacency lists using a form of chaining. The data structure is designed and optimized for warp-based execution on the GPU. SlabGraph allocates a SlabHash object for each vertex. A SlabHash object has a fixed number of _buckets_, determined a priori by the _load-factor_ and the number of adjacent vertices. Each bucket corresponds to a slab list: a linked list of _slabs_. Each slab is 128 bytes long, to closely match the L1 cache line size, for coalesced memory access within a single warp [6]. The adjacent vertices of a source vertex are stored in one of the buckets, determined by a hashing function. The 128 bytes in a slab form 32 lanes with 4 bytes per lane (32 is the GPU's warp size). Each lane is processed by the corresponding thread in the warp. SlabHash's ConcurrentSet (ConcurrentMap, respectively) is used for unweighted (weighted, respectively) graphs to store the adjacent neighbours of each vertex. Every slab in a ConcurrentSet can store up to 31 neighbouring vertices; the last lane is reserved for storing the address of the next slab. While all 32 threads participate in retrieving a ConcurrentSet slab, only 31 threads participate actively in query/traversal operations, since only their corresponding slab lanes can hold vertex data. The last thread fetches the next slab's address and is used for traversing to the next slab. A ConcurrentMap slab, used for a weighted graph, can store up to 15 pairs of neighbouring vertices and their respective edge weights. When a slab is retrieved by a warp, every pair of a neighbouring vertex and an edge weight is fetched by a pair of threads. Thus, while 30 threads are involved in fetching edge-related data, only 15 pairs are processed per ConcurrentMap slab. As with a ConcurrentSet slab, the last thread fetches the next slab's address and is used for traversing to the next slab. A graph with an average degree greater than 15 would allocate at least 2.1\(\times\) more slabs for the weighted SlabGraph representation (with ConcurrentMap) than for the unweighted representation (with ConcurrentSet), requiring at least 2.1\(\times\) more slab retrievals when processing the weighted representation. The weighted representation has 48.4% processing efficiency compared to the unweighted representation in a full graph traversal operation. An EMPTY_KEY1 is stored in a slab lane if it has not been populated with an adjacent vertex previously, and a special TOMBSTONE_KEY2 if the slab lane previously held a valid vertex that is now deleted. Elements within a slab are _unordered_, allowing efficient concurrent access. A slab can be processed efficiently by all the threads of a warp by using warp-wide communication intrinsics such as __ballot_sync, __shfl_sync, and __shfl_down_sync [29]. The warp-cooperative work strategy (WCWS) for searching in a SlabHash hash table is described in [5], and the same is used in the Meerkat framework.
Footnote 1: EMPTY_KEY is defined as UINT32_MAX-1 for ConcurrentSet and UINT64_MAX-1 for ConcurrentMap
Footnote 2: TOMBSTONE_KEY is defined as UINT32_MAX-2 for ConcurrentSet and a 64-bit pair (UINT32_MAX-2, UINT32_MAX-2) for ConcurrentMap
The SlabGraph data structure provides efficient ways for the insertion and deletion of edges on dynamic graph objects. Unlike other dynamic graph data structures, such as Stinger [15] and Hornet [10], only SlabGraph relies on the Warp-Cooperative Work Sharing (WCWS) execution model [6]. However, the SlabGraph data structure has the following shortcomings.
**Memory Allocation**: The original SlabHash data structure assumes the responsibility of allocating the head slabs via cudaMalloc in ConcurrentMap or ConcurrentSet. Thus, when a SlabHash object is associated with each vertex, at least one head slab is allocated per vertex, even when the vertex has no incident edges. This is required for maintaining the dynamic graph capabilities of SlabGraph. Since real-world input graphs often have millions of vertices, we observed that a large number of cudaMalloc calls (as many as the number of vertices) for slabs of size 128 bytes results in a significant explosion in the total memory allocated.
**Traversal of Edges**: Programming dynamic algorithm becomes easy when the underlying framework provides different iterators to traverse through all the edges in the graph. This is missing in SlabGraph. Developing an edge-centric algorithm (such as single-source shortest path or triangle counting) without such a facility is difficult.
**Warp-Level APIs**: Understanding and using low-level warp primitives is involved. WCWS can be made easy with abstractions for primitive operations such as reduction and communication within a warp. This is missing in SlabGraph.
**Auxiliary Data Structures**: Programming different dynamic algorithms is made easy with the help of auxiliary data structures that work on top of the dynamic graph data structure. This is missing in SlabGraph.
These shortcomings in SlabGraph motivated us to implement Meerkat. Meerkat makes programming dynamic graph algorithms easy, and we were able to program efficient dynamic graph algorithms for SSSP, BFS, PR, TC, and WCC. We performed a quantitative and qualitative comparison of our results with Hornet, a state-of-the-art dynamic graph framework.
## 3 Meerkat Framework
Our work Meerkat builds and improves upon SlabGraph[7] by extending the publicly available source code for SlabHash3. Figure 1 shows the extensions done by us to SlabGraph in our framework Meerkat.
Footnote 3: [https://github.com/owensgroup/SlabHash](https://github.com/owensgroup/SlabHash)
Dynamic graph algorithms on GPUs demand two crucial considerations: memory efficiency due to dynamic updates, and computation efficiency, since the dynamic processing should be faster than rerunning the static algorithm on the modified graph. Based on this goal, Meerkat offers a three-pronged approach. First, in Meerkat, we move the responsibility of allocating the head slabs in SlabHash outside (to the SlabGraph part of Meerkat) for all the vertices, as the framework has a better picture of the overall allocation. Section 6 demonstrates the significant memory savings with this approach.

Second, Meerkat provides a set of iterators for traversing through the neighbors of a vertex, which is a fundamental requirement for almost all graph algorithms, such as BFS and SSSP. SlabGraph [7] focuses mainly on the representation and operations of dynamic graphs. In many incremental algorithms, such as weakly connected components, it is sufficient to process the updates performed on the graph representation. Our iterators in Meerkat (see Section 3.4) enable us to traverse through individual buckets selectively, through all the slab lists for a vertex, or only through those slabs holding new updates, depending on the requirements of the underlying dynamic graph algorithm.

Third, the Meerkat framework provides abstractions for _warp-level_ APIs such as _reduction_ and _broadcast_. The Meerkat framework also comes with auxiliary data structures, such as Frontier and Union-Find, to ease programming dynamic graph algorithms.
### Memory Management in Meerkat
Meerkat moves the responsibility of allocating the head slabs from SlabHash to the SlabGraph object, which decides the number of slabs required per vertex according to the load factor (see Figure 1). A single large array of head_slabs is allocated using a single cudaMalloc() function call. Each vertex is assigned a specific number of head slabs according to its initial degree. We maintain an array (bucket_count) such that bucket_count[v] is the initial number of head slabs allocated for a given vertex \(v\). By performing an exclusive_scan operation on the entries of the bucket_count array, we can determine the offset to the head slab for each vertex within the head_slabs array. Each SlabHash object for a vertex maintains a unique context object, which stores device pointers to these allocated slab lists, and is, therefore, shallowly copied to the device's global memory. Every graph search/operation indexes into an array of SlabHash context objects to retrieve the object for the source vertex. Using the hash function, the particular slab list which is to store the destination vertex is determined. This slab list is then linearly traversed by the warp which has the source vertex in its work queue. When a bucket is full, the underlying SlabHash data structure invokes a custom allocator to obtain a new slab.
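As an illustration, the offset computation described above can be sketched with Thrust as follows; the function and variable names here (compute_head_slab_offsets, slab_offset) are ours and not part of Meerkat's actual API:

```
#include <cstdint>
#include <thrust/device_vector.h>
#include <thrust/scan.h>

// bucket_count[v] = number of head slabs assigned to vertex v.
// After the exclusive scan, slab_offset[v] is the index of v's first head
// slab inside the single head_slabs array allocated with one cudaMalloc().
void compute_head_slab_offsets(
        const thrust::device_vector<uint32_t>& bucket_count,
        thrust::device_vector<uint32_t>& slab_offset) {
    slab_offset.resize(bucket_count.size());
    thrust::exclusive_scan(bucket_count.begin(), bucket_count.end(),
                           slab_offset.begin());
}
```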
Similar to the SlabHash implementation, a SlabGraph object maintains a device context object that is shallowly copied into the device memory. While the vertex adjacencies are represented and accessible through these SlabHash context objects, the SlabGraph context object provides clean API access for vertex adjacency access and graph manipulation operations inside a device kernel by utilizing a warp-cooperative work strategy. The SlabHash context object supports methods such as Insert() and Delete() that execute in a warp-cooperative fashion. These methods are internally used by SlabGraph's device APIs, such as InsertEdge() and DeleteEdge(), for inserting and removing adjacent vertices for a specific vertex, respectively.
Figure 1: Our proposed Meerkat framework and its dependencies. Teal colored text shows our extensions.
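A minimal sketch of how a batch of edges might be pushed through this device API is shown below; the kernel name and the exact InsertEdge() signature are assumptions for illustration, not Meerkat's verbatim interface:

```
// Each thread owns one (src, dst) pair; the warp inserts the pairs
// cooperatively, one elected edge at a time, via the context's InsertEdge().
__global__ void insert_batch(GraphCtxt G, const uint32_t* src,
                             const uint32_t* dst, uint32_t edges_n) {
    uint32_t tid = blockIdx.x * blockDim.x + threadIdx.x;
    bool to_insert = (tid < edges_n);
    // All 32 lanes must enter InsertEdge(); lanes with to_insert == false
    // only assist in the cooperative hash-table probes of their peers.
    uint32_t u = to_insert ? src[tid] : 0;
    uint32_t v = to_insert ? dst[tid] : 0;
    G.InsertEdge(to_insert, u, v);   // signature assumed for illustration
}
```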
### Warp Level APIs in Meerkat
The warp-cooperative work sharing execution model relies on each warp processing the neighbours of the same vertex, using warp intrinsics. The warp maintains a queue of such vertices, which are elected in turn, in First-In-First-Out (FIFO) fashion, using the lane-ids of the threads in the warp. Meerkat provides an abstraction for such a queue (FIFO) for each warp. This is implemented using the warp-level primitives __ballot_sync() and __ffs(). The pseudocode for the warpdequeue() operation of the warp-private queue is given in Algorithm 1. The explanation of the warpdequeue() operation is given in Section 3.4 along with Algorithm 3. These API functions abstract the warp-level primitives and ease programming dynamic graph algorithms in Meerkat. The Meerkat framework also provides APIs for warp-level reduction and broadcast (see Algorithms 1 and 2).
```
1   __device__ int warpdequeue(bool *to_process) {
2       int work_queue = __ballot_sync(0xFFFFFFFF, *to_process);
3       int index = __ffs(work_queue) - 1;
4       if (lane_id() == index) then
5           *to_process = false;
6       end if
7       return index;
8   }
9   __device__ T warpreducesum(T *val) {
10      int i = 1;
11      while (i < 32) do
12          *val += __shfl_xor_sync(0xFFFFFFFF, *val, i);
13          i = i * 2;
14      end while
15      return *val;
16  }
```
**Algorithm 1** Meerkat warp device APIs: warpdequeue() and warpreducesum()
The Meerkat framework targets GPUs, and the _device_ API comes with detailed abstractions for programming dynamic graph algorithms. The neighbors of a vertex _src_ can be obtained by calling the API function GetEdgeHashCtxts() with _src_ as the argument. The Meerkat framework provides different types of iterators to traverse over the adjacent vertices of a vertex. These iterators are named SlabIterator, BucketIterator, and UpdateIterator (see Section 3.4). They come with the functions begin(), end(), beginAt(), and endAt(). These functions are described in Table 1.
### Auxiliary Data Structures
#### 3.3.1 Union-Find
In Meerkat, the static and incremental WCC implementations use the union-find approach for discovering weakly-connected components. Meerkat provides the Union-Async strategy [2] for the union operation on the adjacent edges discovered, and full path compression for determining the representative elements of the vertices in the find operation.
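A minimal sketch of these two device routines, under our own naming (uf_find, uf_union_async) and simplified from the Union-Async strategy of [2], could look as follows:

```
// find() with full path compression: first locate the root, then point
// every vertex on the traversed path directly at it.
__device__ int uf_find(int* parent, int x) {
    int root = x;
    while (parent[root] != root) root = parent[root];
    while (parent[x] != root) {          // full path compression
        int next = parent[x];
        parent[x] = root;
        x = next;
    }
    return root;
}

// Asynchronous union: repeatedly find both roots and try to link one
// under the other with a single atomicCAS; retry on contention.
__device__ void uf_union_async(int* parent, int u, int v) {
    while (true) {
        u = uf_find(parent, u);
        v = uf_find(parent, v);
        if (u == v) return;              // already in the same component
        if (u < v) { int t = u; u = v; v = t; }
        if (atomicCAS(&parent[u], u, v) == u) return;
    }
}
```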
#### 3.3.2 Frontier
A frontier of type F<T> is internally an array of elements of type \(T\). Each frontier object supports integer-based indexing for accessing its elements by our kernel threads. Every frontier object maintains a _size_ attribute to indicate the number of elements in the frontier array. Insertion of elements into a frontier object is performed by the warp-cooperative warpenqueuefrontier() function. All threads in the warp must be active for its correct invocation. The function warpenqueuefrontier() takes a \(frontier\) object, a _value_ to be inserted into \(frontier\), and a boolean \(to\_enqueue\) predicate to indicate whether the invoking thread participates in inserting an element into the \(frontier\). After the bitset of participating threads is computed with a __ballot_sync (line 2), the first warp thread increments the size of the frontier by the number of elements to be inserted by the warp (lines 4-5). The base offset obtained by the first thread (line 5) is broadcast to all the warp threads (line 7). Each participating thread writes its element into the frontier (line 10) by counting the number of participating threads present in the warp positionally before itself (line 9).
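A minimal sketch of this enqueue, matching the steps just described (ballot, a single atomicAdd by the first thread, broadcast, and a popcount-based rank), might read as follows; it is illustrative rather than Meerkat's verbatim code:

```
// All 32 warp threads must call this together; to_enqueue marks the
// threads that actually have an element to insert.
template <typename T>
__device__ void warpenqueuefrontier(T* frontier, int* size,
                                    T value, bool to_enqueue) {
    unsigned mask = __ballot_sync(0xFFFFFFFF, to_enqueue); // participants
    int lane = threadIdx.x & 31;
    int base = 0;
    if (lane == 0)                            // one atomicAdd per warp
        base = atomicAdd(size, __popc(mask));
    base = __shfl_sync(0xFFFFFFFF, base, 0);  // broadcast base offset
    if (to_enqueue) {
        int rank = __popc(mask & ((1u << lane) - 1)); // peers before me
        frontier[base + rank] = value;
    }
}
```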
Our implementations for the BFS and SSSP algorithms on Meerkat, both static and dynamic, rely on using a pair of frontiers for driving their iterations: a frontier \(f_{current}\) holding a set of edges whose destination vertices must be inspected; the outgoing edges from these destination vertices which have been updated populate the frontier \(f_{next}\) to be used for the next iteration.
### Graph Primitives
One of the primitive graph operations is to iterate through the neighbors of each vertex. Our Meerkat framework maintains three types of iterators: \(\texttt{SlabIterator}\), \(\texttt{BucketIterator}\), and \(\texttt{UpdateIterator}\) (see Table 2). UpdateIterator is an optimized version of \(\texttt{SlabIterator}\) customized for incremental-only graph processing (no deletions).
\begin{table}
\begin{tabular}{p{71.1pt} p{142.3pt} p{142.3pt}} \hline API & Parameters & Description \\ \hline begin() & - & Returns a \(\texttt{SlabIterator}\) to the first slab, in the first slab list \\ end() & - & Returns an invalid \(\texttt{SlabIterator}\) to a logically invalid slab (that is, at \(\texttt{INVALID\_ADDRESS}\)) \\ \hline beginAt() & index & Returns a \(\texttt{BucketIterator}\) to the first slab in the index’th slab list for the source vertex \\ \hline endAt() & index & Returns an invalid \(\texttt{BucketIterator}\) to a logically invalid slab in the index’th slab list \\ \hline updateBegin() & - & Returns an UpdateIterator to the first slab holding incremental updates. The iterator is invalid if updates are not available for the source vertex. \\ \hline updateEnd() & - & Returns an invalid \(\texttt{UpdateIterator}\) to a logically invalid slab (that is, at \(\texttt{INVALID\_ADDRESS}\)) \\ \hline \end{tabular}
\end{table}
Table 1: Meerkat: Iterator API of \(\texttt{SlabHashCtxt}\) representing a source vertex.
The unit of access for all our iterator variants is a slab. Both the ConcurrentSet slab and the ConcurrentMap have the same size and store the next slab's address at identical locations. Consequently, our iterators have been designed to be decoupled from the implementation of our backing stores (ConcurrentSet/ConcurrentMap): they expose the same API (see iterator-specific methods in Table 3) for both the weighted and unweighted representation of Meerkat. Our iterators behave identically in the manner of traversal of slabs and in the retrieval of slab content, regardless of whether ConcurrentSet or ConcurrentMap is used for storing the neighbors of a vertex.
A BucketIterator is constructed for a specific slab list in the slab-hash table. The SlabIterator is an abstraction over the BucketIterator: the SlabIterator internally maintains a BucketIterator for the first slab list on construction. When the first slab list has been traversed, it maintains an iterator for the second slab list, and so on, until all the slab lists for a vertex have been fully traversed. The begin_at(bucket_id) on a slab hash table constructs an iterator to the first slab of the slab list indexed with bucket_id. The end_at() method returns an iterator for a logical sentinel slab for the slab list. The slab hash tables expose begin() and end() methods for retrieving the iterators to the first slab, and to a sentinel slab respectively, for the entire hash table, storing the adjacent vertices.
Both types of iterators are equality-comparable and support the increment operation. The increment
\begin{table}
\begin{tabular}{l l} \hline Iterator & \multicolumn{1}{c}{Description} \\ \hline SlabIterator & traverses through all the slabs contained in the slab lists for a given vertex, one slab \\ & list at a time \\ BucketIterator & our primitive form of SlabHash iterator; traverses through all the slabs of a single slab-list only \\ UpdateIterator & traverses through only those slabs containing new adjacent vertices, contained in updated slab-lists \\ \hline \end{tabular}
\end{table}
Table 2: Meerkat iterators
\begin{table}
\begin{tabular}{l l} \hline Function & Description \\ \hline \hline iter.operator++() & Advances the iterator to the next slab in sequence. \\ iter.get\_pointer() & accepts a lane-id (0-31), returns a pointer to an element within a slab with an offset of lane-id \\ iter.first\_lane\_id() & used when iter is an UpdateIterator; returns laneid of the first new neighbor in the slab \\ \hline \hline \end{tabular}
\begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{Meerkat context object-specific methods} \\ \hline G.get\_vertex\_adjacencies() & returns pointer to a device vector of SlabHash objects; i’th element has neighbors of i’th vertex \\ begin() & returns a SlabIterator to the first slab, in the first slab list \\ begin\_at(i) & returns a BucketIterator to the first slab in the i’th slab list \\ update\_begin() & returns an UpdateIterator to the first slab holding incremental updates \\ \hline \hline \end{tabular}
\end{table}
Table 3: Meerkat: iterator-specific and context object-specific methods
operator changes its internal state to refer to the elements in the next slab in sequence. Both iterators support the get_pointer(lane_id) method to obtain the element stored at a given lane_id of the slab. We use is_valid_vertex() to determine if the value returned by iterator.get_pointer(lane_id) is suitable for processing by our algorithms using these iterators (see Table 3).
```
1   function IterationScheme1(Graph G, Vertex V[vertex_n]) {
2       Vertex_Dictionary *vert_adjs[] = G.get_vertex_adjacencies();
3       if ((thread_id() - lane_id()) < vertex_n) then
4           bool to_process = (thread_id() < vertex_n);
5           int dequeue_lane = 0;   /* warp work queue; queue size is warp size (i.e., 32) */
6           while ((dequeue_lane = warpdequeue(&to_process)) != -1) do
7               int common_tid = thread_id() - lane_id() + dequeue_lane;
8               Vertex src = V[common_tid];   /* all warp threads process neighbours of vertex src */
9               SlabIterator iter = G.vert_adjs[src].begin();
10              SlabIterator last = G.vert_adjs[src].end();
11              while (iter != last) do   /* warp-cooperative processing of adjacency slabs of src */
12                  Vertex v = iter.get_pointer(lane_id());   /* each warp thread indexes a different slab entry */
13                  if (is_valid_vertex(v)) then /* process adjacent vertex v; slab entry is not a TOMBSTONE_KEY */ end if
14                  ++iter;
15              end while
16              /* post-processing for vertex src */
17          end while
18      end if
19  }
```
**Algorithm 3** IterationScheme1: warp-cooperative traversal using SlabIterators
The _vertex-id_ of the graph object ranges from 0 to \((vertex\_n-1)\). At line 4, we identify those threads whose thread-ids are less than _vertex_n_ and can validly index into \(V\), the array storing the list of vertices to process.
The warpdequeue() function (see Lines 1-7, Algorithm 1) identifies those threads within the warp having a vertex remaining to be processed and stores the resulting bitmask in the variable _work_queue_ (see Line 2, Algorithm 1). Each set bit in the work queue corresponds to one unique thread within the warp that needs to be processed, i.e., whose _to_process_ value is true. Using the CUDA function __ffs(), we elect the first outstanding thread from _work_queue_ and store it in the local variable _index_ (see Line 3, Algorithm 1). The first outstanding bit is the first set bit starting from the least significant bit position. If all the bits in the variable _work_queue_ are zero, then the variable _index_ gets a value of -1. The local variable _to_process_, passed by reference to the warpdequeue() function, is set to false for the warp thread at lane-id _index_. The warpdequeue() function then returns the value of the variable _index_ (see Line 7, Algorithm 1).
The value returned by the warpdequeue() function is assigned to the variable _dequeue_lane_ (see Line 6). The while loop terminates when the value returned by the warpdequeue() function is -1. Thus the loop at Lines 6-17 continues as long as there is an outstanding thread within the warp whose associated vertex is left to process. All the threads within the warp index into the same position of the Vertex array \(V\), and the Vertex variable _src_ has the same value for all threads in the warp (see Lines 7-8). A pair of SlabIterators, namely iter and last, are constructed (Lines 9-10) to traverse through the slabs storing the adjacent vertices of the Vertex _src_ (within the loop at Lines 11-15). All the threads within the warp perform a coalesced memory access to the contents of the slab represented by iter (Line 12). If the value fetched by the thread from the current slab represents a valid vertex (Line 13), the thread processes it as the adjacent vertex. After processing the current slab, the iterator iter is incremented so that it refers to the next slab in the sequence.
Unlike _IterationScheme1_, which uses SlabIterators, _IterationScheme2_ (presented in Algorithm 4) uses BucketIterators and eliminates the use of a work queue of vertices, instead operating with a grid-stride loop. This iteration scheme imposes no restriction on the number of thread blocks. However, for the warp-level primitives (such as __shfl_sync) to work correctly on the slabs, the number of active threads within a thread block must be a multiple of the warp size (32 on our GPU). Since the adjacencies of a vertex are distributed among multiple slab lists, a slab list can be identified with a \(\langle v,i\rangle\) pair, which refers to the \(i^{th}\) slab list of a vertex \(v\). Such pairs are stored in the bucket_vertex and bucket_index device vectors. Each loop iteration within a warp traverses and processes all the slabs contained in one slab list, uniquely identified by its \(\langle v,i\rangle\) pair. For example, suppose a vertex frontier contains two vertices \(v_{i}\) and \(v_{j}\), with 3 and 2 slab lists respectively. To enable _IterationScheme2_ to traverse through all their respective slabs, \(bucket\_vertex\)[] is initialized as \(\big{[}v_{i},v_{i},v_{i},v_{j},v_{j}\big{]}\), and \(bucket\_index\)[] is initialized as \([0,1,2,0,1]\).
The total number of warps in the kernel is computed and stored in _warps_n_ (at line 3). Each warp is uniquely identified with a global warp id (computed at line 4). By using its _global_warp_id_ as the initial value for the index variable \(i\), each warp identifies the bucket \(\langle v,i\rangle\) to process by indexing into the bucket_vertex and bucket_index vectors (see Lines 7-8). This index is incremented at a stride of the total number of warps in the grid for the CUDA kernel (Line 17).
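The skeleton of this grid-stride warp loop can be sketched as follows; the kernel name and the omitted slab-list processing are placeholders, and only the indexing mirrors the description above:

```
// Each warp starts at the bucket pair whose index equals its global warp
// id and advances by the total number of warps in the grid.
__global__ void iteration_scheme2(const uint32_t* bucket_vertex,
                                  const uint32_t* bucket_index,
                                  uint32_t buckets_n) {
    uint32_t warps_n = (gridDim.x * blockDim.x) >> 5;   // warps in the grid
    uint32_t global_warp_id = (blockIdx.x * blockDim.x + threadIdx.x) >> 5;
    for (uint32_t i = global_warp_id; i < buckets_n; i += warps_n) {
        uint32_t v = bucket_vertex[i];
        uint32_t b = bucket_index[i];
        // construct a BucketIterator for the b'th slab list of vertex v and
        // traverse its slabs, all 32 lanes reading one slab cooperatively
        (void)v; (void)b;
    }
}
```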
_IterationScheme1_ exploits the warp-based processing and schedules one vertex to a warp for processing the neighboring vertices. Although the adjacent vertices within a slab are accessed in a coalesced fashion, the number of slabs inspected is ultimately determined by the degree of the vertex. This leads to an imbalance in the amount of work assigned to each warp if the vertex degree variance is high. This concern is alleviated significantly with _IterationScheme2_ since the initial number of buckets for each vertex is determined using the load factor and the initial degree of that vertex. Further, hashing attempts to distribute
the elements uniformly among the buckets. On average, Meerkat can distribute the work equally among the warps.
In several incremental graph algorithms, such as incremental WCC, it is sufficient to iterate over the slabs into which new adjacent vertices have been inserted. To facilitate iteration over the updated slabs alone, Meerkat maintains the following fields per slab list. Each slab list is augmented with a bool value is_updated, which is set to true if new edges are inserted into the slab list. Each slab list stores an allocator address field alloc_addr holding the allocator address of the first slab in which new edges have been inserted. Since head slabs are allocated through cudaMalloc(), we use a special value A_INDEX_POINTER in the alloc_addr field to distinguish the head slab from other slabs returned by the Meerkat allocator. Each slab list also stores the lane-id of the first updated value in the first updated slab.
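Organized as a struct, this per-slab-list bookkeeping might look as follows (the struct and field names are ours; Meerkat's actual layout may differ):

```
// Per-slab-list metadata that lets an UpdateIterator jump directly to the
// first slab holding newly inserted adjacent vertices.
struct SlabListUpdateInfo {
    bool     is_updated;  // set by InsertEdge() when it appends new edges
    uint32_t alloc_addr;  // allocator address of the first updated slab;
                          // A_INDEX_POINTER marks the cudaMalloc'ed head slab
    uint32_t lane;        // lane-id of the first new value in that slab;
                          // INVALID_LANE if the slab list is completely full
};
```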
Initially, is_updated for a slab list is set to false. The InsertEdge() device method is responsible for setting is_updated for a slab list to true if an insertion occurs at the end of the slab list. For every SlabHash object associated with a vertex, we define UpdateIterators to iterate over only the slabs storing new vertices. In other words, we can traverse only those slabs in which new vertices have been inserted. An UpdateIterator skips over slab lists for which is_updated is false. Once the updates have been processed, Graph.UpdateSlabPointers() sets the is_updated field to false for all the slab lists previously set to true. For such slab lists, Graph.UpdateSlabPointers() sets alloc_addr to the last slab in the slab list, and the lane field to the next lane, where subsequent insertions of adjacent vertices are to take place (see Figure 2a). If the slab list is completely full, the lane field is assigned the special value INVALID_LANE to denote that the updates would occur at newly allocated slabs, chained at the end of the last slab (see Figure 2b).
Essentially, an UpdateIterator behaves like a SlabIterator, but over slabs that are recognized to be holding incremental updates. Hence, like the \(\mathsf{SlabIterator}\), the use of \(\mathsf{UpdateIterators}\) is only compatible with _IterationScheme1_. In our experiments involving _traversal_ of adjacencies _of all vertices_, _IterationScheme1_ (with \(\mathsf{SlabIterators}\)) outperforms _IterationScheme2_ (with \(\mathsf{BucketIterators}\)) by \(1.24\)-\(1.48\times\). _IterationScheme2_ performs marginally better with algorithms where the working set of vertices is small.
## 4 Dynamic Algorithms using Meerkat
We evaluate \(\mathsf{Meerkat}\) using the dynamic versions of five fundamental graph algorithms: Breadth First Search (BFS), Single Source Shortest Path (SSSP), Triangle Counting (TC), PageRank (PR), and Weakly Connected Components (WCC). BFS, SSSP, TC, and PR are developed for both incremental and decremental processing, whereas WCC is programmed only for incremental processing. The BFS, PR, TC, and WCC algorithms operate on unweighted graphs, and \(\mathsf{Meerkat}\) uses ConcurrentSet for storing the adjacencies of every vertex. On the other hand, the SSSP algorithm requires a weighted graph representation, and hence ConcurrentMap is used for representing the adjacencies of every vertex and their respective edge weights. The fully-dynamic versions are implemented with incremental and decremental processing as two computation steps.
### Dynamic PageRank
The PageRank algorithm assigns a score to every vertex in the range \([0,1]\), which determines its importance in the input graph object. The PageRank value of a vertex can be understood as the probability that a random walk in the graph (with \(N\) vertices) will arrive at that vertex; it is computed by an iterative application of Equation 1 for all vertices in a sequence of super-steps until a steady-state/convergence condition is met [4].
\[PR_{i}[v]=\frac{1-d}{N}+d\cdot\sum_{u\to v}\frac{PR_{i-1}\left[u\right]}{ out\left[u\right]} \tag{1}\]
The pseudocode for static/dynamic PageRank is given in Algorithm 5. Algorithm 5 accepts a dynamic graph object \(G\), and an array \(PR[vertex\_n]\) which holds the PageRank value for each vertex in the input graph object. In the case of the static algorithm, each element in the array \(PR[vertex\_n]\) is initialized with the value \(\frac{1}{vertex\_n}\). In the incremental/decremental case, the array element \(PR[v]\) contains the PageRank value of the vertex \(v\) computed before insertion/deletion. Each iteration of the loop (in lines 7-18) represents a "super-step". The PageRank values of iteration \(i\) are determined from those computed in
Figure 2: UpdateSlabPointers()
iteration \(i-1\). The maximum number of iterations is upper bounded by \(max\_iter\). The iterations continue as long as \(delta=\sum_{v\in V}|PR_{i}(v)-PR_{i-1}(v)|>error\_margin\). In other words, \(delta\) is the L1-norm between the PageRank vectors \(\mathbf{PR}_{i}\) and \(\mathbf{PR}_{i-1}\), and is computed at line 16. Ordinarily, computing the ratio \(\frac{PR_{i-1}[u]}{out[u]}\) requires two divergent memory accesses per incoming edge \(u\to v\) when computing the PageRank \(PR_{i}[v]\) for vertex \(v\). This ratio does not change within a "super-step". By caching these ratios for every vertex, we can reduce the divergent memory accesses down to one during the PageRank computation. The _FindContributionPerVertex_ GPU kernel in line 8 initializes \(Contribution_{i}[u]=\frac{PR_{i-1}[u]}{out[u]}\) for each vertex \(u\), which can be performed with coalesced memory access.
The new PageRank values are computed in line 9 according to Equation 1 and are adjusted to account for teleportation from zero-outdegree vertices to any other vertex in the input graph object, at lines 10-13, on the GPU. The teleportation probability is added to the PageRank value of every vertex (lines 12-13) if there exists any vertex \(v_{z}\) whose out-degree is zero (line 10). The teleportation probability to be added at iteration \(i\), given by \(\sum_{v_{z}}\frac{PR_{i-1}[v_{z}]}{vertex\_n}\), is computed in the _FindTeleportProb_ GPU kernel.
The _Compute_ GPU kernel is invoked with a dynamic graph object \(G\) storing incoming edges, an array of PageRank contributions for each vertex, the damping factor, and an array of new PageRank values. Each thread, with thread-id equal to \(v\), represents a unique vertex \(v\) in the graph object \(G\). Hence, each thread maintains a local variable to hold the new PageRank value for the vertex \(v\) it represents. A pair of _SlabIterators_ are constructed to traverse the slabs holding the in-edges of vertex \(v\). The accumulation of the contribution of the in-edges to the PageRank of vertex \(v\) is commutative. Hence, the _Compute_ kernel maintains a thread-local variable _local_prsum_ to accumulate the PageRank contributions of the neighboring vertices along the incoming edges encountered by the warp threads for vertex \(v\).
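A minimal sketch of this accumulation for one vertex, reusing the iterator and warp-reduction primitives introduced in Section 3 (the helper name pagerank_of is ours), might look as follows:

```
// The warp walks the slabs of in-edges of vertex v; each lane adds the
// cached contribution (PR[u]/out[u]) of the neighbour in its slab lane,
// and the 32 lane-local sums are combined with warpreducesum().
__device__ float pagerank_of(SlabHashCtxt& in_edges_of_v,
                             const float* contribution,
                             float damping, uint32_t vertex_n) {
    float local_prsum = 0.0f;
    for (SlabIterator it = in_edges_of_v.begin();
         it != in_edges_of_v.end(); ++it) {
        uint32_t u = *it.get_pointer(lane_id());   // coalesced slab read
        if (is_valid_vertex(u))
            local_prsum += contribution[u];
    }
    warpreducesum(&local_prsum);                   // combine lane sums
    return (1.0f - damping) / vertex_n + damping * local_prsum;
}
```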
### Dynamic Single Source Shortest Path and Breadth First Search
```
input: Graph G, Vertex SRC, EdgeBatches batches
/* Static SSSP */
1   Frontier<Edge> F_current, F_next
2   tree_node nodes[vertex_n]
3   InitializeDistance(SRC, nodes)
4   CreateFrontier(G, F_current, SRC, G.get_bucket_count()[SRC])
5   while (F_current.size != 0) do
6       SSSP_Kernel(G, nodes, F_current, F_next)
7       swap(F_current, F_next)
8       zero F_next
9   end while
/* Dynamic SSSP */
10  zero F_current, F_next
11  for b in batches do
12      if (b.is_insertion()) then
13          /* Incremental Algorithm Prologue */ F_current <- b
14      end if
        ...
```
**Algorithm 6** SSSP - Incremental / Decremental
The single-source shortest path (SSSP) algorithm, described in Algorithm 6, takes a dynamic graph object \(G\) and a single source vertex SRC, and computes the shortest path to all other vertices from SRC. In the dynamic setting, Algorithm 6 is batch-dynamic in nature: it takes a sequence of edge \(batches\), where each batch is either an incremental or a decremental batch. The graph object \(G\) undergoes modifications through the application of an insertion/deletion edge batch; the incremental/decremental SSSP algorithm re-computes the shortest paths/distances, from the vertex SRC, for the affected vertices in the graph. For each node \(v\), let \(P_{v}=(SRC\leadsto\cdots\leadsto parent(v)\to v)\) be the shortest path from the source vertex SRC. Our SSSP algorithm is responsible for computing the pair \(\langle distance_{v},parent(v)\rangle\)4, where \(distance_{v}\) is the length of the shortest path \(P_{v}\), and \(parent(v)\) is the predecessor of the vertex \(v\) in path \(P_{v}\). Every vertex \(v\) must have a unique \(parent(v)\) in its shortest path \(P_{v}\), which implies that every vertex \(v\) has a unique shortest path to the source \(SRC\). It is therefore understood that, by identifying the \(parent(v)\) for every vertex \(v\) in its shortest path \(P_{v}\), we are implicitly maintaining a directed tree \(T_{G}\) rooted at \(SRC\), such that for each edge \(e=(u,v)\in T_{G}\), \(u\) is the \(parent\) of \(v\) in \(P_{v}\). Our batch-dynamic incremental/decremental algorithm is responsible for maintaining this dependence tree. In the ensuing discussion, a subtree in \(T_{G}\), rooted at vertex \(v\), will be represented by \(T_{v}\). A formal discussion on value dependence in shortest distance computation and its representation as a _dependence tree_ can be found in [35].
#### Incremental SSSP
The addition of a new edge \((u,v)\) can result in \(distance_{new}(v)<distance_{old}(v)\) only if \(distance(u)<distance(parent(v))\). In such a case, the sub-tree \(T_{v}\) is transplanted under a new parent \(u\) in \(T_{G}\). All such shortest paths \(P_{x}=(SRC\leadsto parent_{old}(v)\to v\leadsto x)\) are now \(P_{x}=(SRC\leadsto u\to v\leadsto x)\). Therefore, it is necessary to re-compute the shortest-path distances for all the vertices in the sub-tree \(T_{v}\). Our incremental SSSP algorithm takes an incremental batch of edges as the initial frontier for our static SSSP algorithm.
#### Decremental SSSP
The deletion of an edge \((u,v)\) from the graph \(G\) invalidates \(distance(v)\) from the source vertex \(\mathsf{SRC}\), and the shortest paths \(P_{x}\) for all vertices \(x\) in the subtree \(T_{v}\), if the edge \((u,v)\) in \(G\) is also an edge in \(T_{G}\). If \(distance(v)\) is invalidated on the deletion of an edge \((u,v)\), vertex \(u\) ceases to be \(parent(v)\). This prompts a propagation of invalidations for the shortest distances (from \(\mathsf{SRC}\)) and the parent vertices determined for all vertices in \(T_{v}\). In effect, the previously computed \(T_{v}\) ceases to exist in \(T_{G}\). At this juncture, there are three types of vertices in the graph: (i) a set of vertices \(V_{valid}\) whose shortest distances and \(parent\) information have not been invalidated (ii) a set of vertices \(V_{invalid}\) whose shortest distances and \(parent\) information have suffered invalidation, as a direct consequence of being destination vertices of deleted edges present in \(T_{G}\), or indirectly, as a consequence of the propagation of invalidation, and (iii) a set of vertices \(V_{unreachable}\) which were not part of \(T_{G}\) owing to an absence of a path from the vertex \(\mathsf{SRC}\) in \(G\). Such vertices in \(V_{unreachable}\) will continue to remain unreachable even after a batch of edge deletions. Thus, the shortest paths for vertices in the \(V_{invalid}\), still reachable from \(\mathsf{SRC}\), can be computed by taking all edges \((u,v)\) such that \(u\in V_{valid}\) and \(v\in V_{invalid}\), as the initial frontier for our static SSSP algorithm.
The specific details of the implementation of the static SSSP, and the incremental/decremental algorithms, presented in Algorithm 6 are explained below:
The static SSSP computation kernel performs frontier-based computation: it accepts a frontier of edges \(F_{current}\) and produces a new frontier \(F_{next}\) for the next invocation. The SSSP kernel is repeatedly invoked until it produces an empty frontier \(F_{next}\). A frontier of type F<T> is internally an array of elements of type \(T\). Each frontier object supports integer-based indexing for accessing its elements by our kernel threads. Every frontier object maintains a \(size\) attribute to indicate the number of elements in the frontier array. Insertion of elements into a frontier object is performed by the warp-cooperative \(\mathsf{warp\_enqueue\_frontier()}\) function (see Algorithm 2). All threads in the warp must be active for its correct invocation. Line 3 initializes the tree nodes for all vertices: it sets the shortest path distance of every vertex to \(\mathsf{INF}\) (infinity), and to \(0\) for the source vertex \(\mathsf{SRC}\); it sets every \(parent(v)\) to the \(\mathsf{INVALID}\) vertex (and to \(\mathsf{SRC}\) for the source vertex \(\mathsf{SRC}\) itself). Line 4 initializes the initial frontier \(F_{current}\) with the outgoing edges of the source vertex \(\mathsf{SRC}\).
Lines 5-8 contain the iterative application of the SSSP kernel on the current edge frontier \(F_{current}\) to produce the next edge frontier \(F_{next}\). Subsequently, \(F_{current}\) is initialized with the newly produced frontier \(F_{next}\) (line 7). The iterative process continues as long as the current frontier of an iteration is non-empty, that is, \(F_{current}.size\neq 0\) (line 5). Line 6 invokes the SSSP kernel on a frontier of edges \(F_{current}\). Each GPU thread
\(t_{i}\) of the SSSP kernel is assigned one frontier edge \(e_{f}=(e_{f}.src,e_{f}.dst)=(u_{f},v_{f})\), where \(0\leq i<F_{current}.size\). The SSSP kernel updates the tree node of vertex \(v_{f}\) to store the distance \(d_{new}=distance(u_{f})+e_{f}.weight\) of \(v_{f}\) along the edge \(e_{f}\), together with the parent vertex \(u_{f}\). The tree node of vertex \(v_{f}\) is _atomically_ updated to \(\langle d_{new},u_{f}\rangle\) if the old shortest path distance \(d_{old}\) of \(v_{f}\) is greater than \(d_{new}\), or if \(d_{old}=d_{new}\) and \(parent(v_{f})<u_{f}\). Using a pair of SlabIterators over the adjacencies of \(v_{f}\), valid outgoing edges are identified and enqueued into the next edge frontier.
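Since the distance and the parent must be updated as one unit, the 64-bit tree node lends itself to a compare-and-swap loop. The sketch below illustrates the idea; the packing layout, field widths, and function names are our assumptions for exposition, not Meerkat's actual code.

```cpp
#include <cuda_runtime.h>

// Illustrative packing (assumed layout): distance in the upper 32 bits and the
// parent id in the lower 32 bits, so both fields change atomically together.
__device__ inline unsigned long long pack(unsigned dist, unsigned parent) {
  return (static_cast<unsigned long long>(dist) << 32) | parent;
}
__device__ inline unsigned dist_of(unsigned long long node)   { return node >> 32; }
__device__ inline unsigned parent_of(unsigned long long node) { return node & 0xffffffffu; }

// Relax vertex v with the candidate pair <d_new, u>. Returns true if the tree
// node improved, i.e., v must be enqueued into the next frontier.
__device__ bool relax(unsigned long long* tree, unsigned v,
                      unsigned d_new, unsigned u) {
  unsigned long long cand = pack(d_new, u);
  unsigned long long old  = tree[v];
  while (dist_of(old) > d_new ||
         (dist_of(old) == d_new && parent_of(old) < u)) {  // the tie-break above
    unsigned long long prev = atomicCAS(&tree[v], old, cand);
    if (prev == old) return true;   // our pair was installed
    old = prev;                     // another thread won; re-check the condition
  }
  return false;
}
```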
Lines 12-14 define the prologue for the incremental SSSP algorithm, and Lines 16-20 define the prologue for the decremental SSSP algorithm. Lines 23-26 define the common epilogue for both prologues. The epilogue accepts the frontiers produced by the incremental/decremental prologues for each incremental/decremental batch and iteratively applies the static SSSP computation kernel until convergence.
The incremental/decremental BFS algorithms use the same kernels as those of the incremental/decremental SSSP algorithms described in Algorithm 6 (lines 11-27). However, the static BFS algorithm uses a fast _level-based_ approach.
### Dynamic Triangle Counting
Our library's dynamic triangle counting algorithm is adapted from [27], which is based on an inclusion-exclusion formulation. The algorithm consumes a pair of undirected graphs, namely \(G_{1}\) and \(G_{2}\), and a sequence of _edges_. For each such edge \((u,v)\in edges\), the cardinality of the intersection of \(adjacency(u)\) in \(G_{1}\) and \(adjacency(v)\) in \(G_{2}\) is computed in a warp-cooperative fashion. Each thread is assigned an edge; the edges assigned to a warp of threads are processed using the warp-cooperative work strategy. After electing the thread whose edge needs processing (using the warpdequeue function of Meerkat), the end-points are broadcast to the warp threads using the _warp_broadcast()_ function of Meerkat. A pair of SlabIterators is constructed to iterate over the neighbours of vertex \(v\) in \(G_{2}\). For each such adjacent vertex \(adj\_v\), we check if the edge \(u\to adj\_v\) exists. Such an edge indicates the presence of the triangle comprising the vertices \(\langle u,adj\_v,v\rangle\), and the thread-local triangle count is incremented by one. Note that each thread in the warp sees a different \(adj\_v\), and hence detects different triangles at the same time. The thread-local triangle counts are finally accumulated at warp level using the _warp_reduce_sum()_ API of Meerkat and then added to the global variable storing the total number of triangles.
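The kernel below sketches this warp-cooperative processing; the `Graph` accessors and `has_edge()` are illustrative stand-ins for Meerkat's slab iterators and `SearchEdge()` lookup, and the intrinsic-based election mirrors the described warp-cooperative work strategy.

```cpp
struct Edge { int src, dst; };
struct Graph {                                 // stand-in for Meerkat's SlabGraph
  __device__ int degree(int v) const;
  __device__ int neighbour(int v, int i) const;
};
__device__ bool has_edge(const Graph& g, int u, int w);  // stand-in for SearchEdge()

__global__ void count_triangles(Graph G1, Graph G2, const Edge* edges,
                                int num_edges, unsigned long long* total) {
  int tid  = blockIdx.x * blockDim.x + threadIdx.x;
  int lane = threadIdx.x % 32;
  bool has_work = (tid < num_edges);
  Edge my{0, 0};
  if (has_work) my = edges[tid];
  unsigned local = 0;

  // Warp threads form a work queue and elect one pending edge at a time.
  unsigned queue = __ballot_sync(0xffffffff, has_work);
  while (queue) {
    int src_lane = __ffs(queue) - 1;           // elect the next edge to process
    queue &= queue - 1;                        // pop it from the queue
    int u = __shfl_sync(0xffffffff, my.src, src_lane);  // broadcast endpoints
    int v = __shfl_sync(0xffffffff, my.dst, src_lane);
    // All 32 lanes sweep adjacency(v) in G2 cooperatively; each lane sees a
    // different adj_v per step, so different triangles are found concurrently.
    for (int i = lane; i < G2.degree(v); i += 32) {
      int adj_v = G2.neighbour(v, i);
      if (has_edge(G1, u, adj_v)) ++local;     // triangle <u, adj_v, v>
    }
  }
  // Accumulate lane-local counts within the warp, then once into global memory.
  for (int off = 16; off; off >>= 1)
    local += __shfl_down_sync(0xffffffff, local, off);
  if (lane == 0) atomicAdd(total, (unsigned long long)local);
}
```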
### Incremental WCC
A Weakly Connected Component (WCC) of a directed graph is a maximal subgraph in which every vertex is reachable from every other vertex when the edge directions are ignored. An efficient way to compute the set of all WCCs in a graph object is to use the Union-Find data structure [2]. A root-based union-find tree, followed by full path compression, can be used efficiently for computing the labels of the vertices, which are representatives of their WCCs, in both the static and the incremental computation.
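A minimal sketch of such a union-find is given below (illustrative, not Meerkat's exact implementation): `uf_find` performs full path compression, and `uf_union` links roots with an `atomicCAS` so that concurrent unions issued by many GPU threads remain safe; the unsynchronized compression writes are a benign race that is standard in GPU union-find implementations.

```cpp
// find(): locate the root, then compress the whole path onto it. Concurrent
// writes to parent[] during compression only ever store valid ancestors.
__device__ int uf_find(int* parent, int x) {
  int root = x;
  while (parent[root] != root) root = parent[root];  // locate the root
  while (parent[x] != root) {                        // full path compression
    int next = parent[x];
    parent[x] = root;
    x = next;
  }
  return root;
}

// union(): link the larger root under the smaller; retry if another thread
// relinked one of the roots between our find() and the CAS.
__device__ void uf_union(int* parent, int a, int b) {
  while (true) {
    a = uf_find(parent, a);
    b = uf_find(parent, b);
    if (a == b) return;                              // already one component
    if (a > b) { int t = a; a = b; b = t; }
    if (atomicCAS(&parent[b], b, a) == b) return;
  }
}
```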
## 5 Related Work
In the recent past, multiple frameworks have addressed the challenges of dealing with dynamic graphs on GPUs: Hornet [10], cuSTINGER [19], GPMA [32], LPMA [40], aimGraph [38], and faimGraph [37]. The cuSTINGER data structure uses a structure-of-arrays (SoA) representation for maintaining edges, and large over-provisioned arrays for maintaining the vertex adjacency lists. Hornet [10] maintains several block
arrays. Each block has a size of some power of two. A vertex maintains its adjacency list within one such fitting block. On insertion, if the allocated block cannot accommodate the new edges, the adjacency list is migrated to a larger block in another block array. It maintains a vectorized bit-tree for determining if a block array has empty blocks, which is used in reclamation. Further, to identify blocks of a particular block size, Hornet maintains an array of B+ trees.
The Packed Memory Array (PMA) [9] is a data structure that stores elements in sorted order, leaving gaps within the array to accommodate future updates. PMA is maintained as a self-balancing binary tree [16].
GPMA [32] extends the PMA data structure to the GPU. It suffers from uncoalesced memory accesses, overheads in obtaining locks, and lower parallelism when threads conflict on the same segment. GPMA+ [32] overcomes these limitations by first sorting the batch updates by their keys and grouping the updates that fall into the same leaf segment, which are then processed from the leaves to the root. LPMA [40] overcomes the array expansion problem of GPMA+ by using a leveled array for maintaining the dynamic graph updates.
The aimGraph [38] data structure mainly focuses on memory management for handling updates to a dynamic graph. By allocating a single large block of global memory, aimGraph eliminates round trips between the CPU and the GPU for memory allocations. Like aimGraph, faimGraph [37] also pre-allocates a single large block of GPU memory and manages the vertex and edge data within it. In Hornet, the bit-tree of a block array indicates whether free blocks
are available in the block array or not. A new block array for a given block size may be allocated if no free blocks are available. For each new edge of a vertex, a GPU thread searches for free space within the vertex's edge block; this is warp-divergent in nature. However, if there are additional edges that cannot be added to the block, the new edges are separately queued, a new edge block of a larger size is allocated, the old adjacencies and the additional edges are added to the new block, and the old block is reclaimed. Similarly, on the deletion of an edge, if the number of valid edges in a block becomes smaller than half its capacity, these edges are migrated to a smaller block, and the original block is reclaimed. In Hornet, the edges to be processed are divided equally among thread blocks and equally among the threads within each thread block. Since all threads within a warp have the same number of edges to process, warp divergence is avoided. However, coalesced memory access is not achieved in Hornet.
## 6 Experimental Evaluation
The experimental evaluation was performed on an NVIDIA RTX 2080 Ti GPU. The GPU is equipped with 11GB of global memory with a memory bandwidth of 616GB/s, and 4352 CUDA cores (68 SMs and 64 cores/SM). All the implementations were compiled with the -O3 and -use_fast_math flags using the nvcc version 11.7 compiler. Table 5 shows the seven publicly available graphs used for our comparison of the static and dynamic graph algorithms, their average/maximum vertex degrees, and their diameters.
The last two columns of Table 5 compare the memory space requirements (in GiB) when the memory allocation for the graph is performed inside SlabHash objects versus inside the dynamic graph object in the Meerkat framework. We observe \(1.4-3.67\times\) (\(2.33\times\) on average) memory savings by using the latter strategy. When the memory allocation is handled inside SlabHash, the number of cudaMalloc() calls is equal to the number of slab-lists, which is at least the total number of vertices in the input graph object, resulting in significantly larger memory consumption. The Meerkat framework moves the responsibility of allocating the head slabs from SlabHash to the SlabGraph object, which decides the number of slabs required per vertex according to the load factor. A single large array of head slabs is allocated using a single cudaMalloc() call, resulting in better memory utilization.
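The difference between the two strategies can be sketched as follows; the `Slab` type and function names are illustrative, not Meerkat's definitions.

```cpp
#include <cuda_runtime.h>
#include <cstddef>

struct Slab { char bytes[128]; };   // one slab matches the 128-byte L1 line size

// (a) Per-slab-list allocation, as when SlabHash owns the memory:
// at least one cudaMalloc() per vertex, each with heavy per-call overhead.
void allocate_per_vertex(Slab** head_slab, int num_vertices) {
  for (int v = 0; v < num_vertices; ++v)
    cudaMalloc(&head_slab[v], sizeof(Slab));
}

// (b) One bulk allocation, as when the SlabGraph object owns the memory:
// a single cudaMalloc() for every head slab; the head slab of vertex v is then
// located via a per-vertex offset derived from the load factor.
Slab* allocate_bulk(size_t num_slabs_total) {
  Slab* head_slabs = nullptr;
  cudaMalloc(&head_slabs, num_slabs_total * sizeof(Slab));
  return head_slabs;
}
```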
We evaluate our implementation for _five_ dynamic graph algorithms: Breadth First Search (BFS), Single Source Shortest Path (SSSP), Triangle Counting (TC), PageRank (PR), and Weakly Connected Components (WCC). We have compared the performance of the static versions of these algorithms on Meerkat against Hornet, a dynamic graph data structure for GPU. The Hornet data structure comes with the implementation of static graph algorithms on full graphs and does not have implementations for _incremental_ and
| Graph | #Nodes | #Edges | Average Degree | Maximum Degree | Diameter | Memory in SlabHash | Memory in Meerkat |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LJournal [8] | 4.85M | 69M | 14 | 20293 | 16 | 2.56 | 0.97 |
| Rand10M | 10M | 80M | 8 | 27 | 11 | 4.92 | 1.34 |
| BerkStan [26] | 685K | 7.6M | 11 | 249 | 573 | 0.48 | 0.25 |
| Wiki-talk [25] | 2.4M | 5M | 2 | 100022 | 9 | 1.32 | 0.46 |
| Wikipedia | 3.4M | 93.4M | 27 | 5333 | 262 | 2.02 | 1 |
| Orkut [39] | 3.1M | 234.4M | 76 | 33313 | 9 | 2.37 | 1.69 |
| USAfull [1] | 23.9M | 58.3M | 2 | 9 | 6261 | OOM | 6.1 |

Table 5: Input Graphs and Memory Requirement (in GiB) in Meerkat
_decremental_ dynamic graph algorithms\({}^{5}\). To compare the performance benefit of our dynamic algorithms over their static counterparts, we computed the _self-relative speedup_ on Meerkat, \(s_{b}^{n}=\frac{static_{b}^{n}}{dynamic_{b}^{n}}\). Here, \(static_{b}^{n}\) is the cumulative execution time of the static algorithm applied over a sequence of \(n\) incremental/decremental edge updates of size \(b\), measured after each batch, and \(dynamic_{b}^{n}\) is the cumulative execution time of the incremental/decremental algorithm applied over the same sequence, measured after each batch. We report the speedups \(s_{10k}^{10}\) for our dynamic BFS, SSSP, PageRank, and Triangle Counting algorithms in Figures 7(a), 7(b), 8(b), and 11(b), respectively.
Footnote 5: We were unable to compare cuSTINGER [19] with our work, since the publicly available source code ([https://github.com/cuStinger/cuStinger](https://github.com/cuStinger/cuStinger)) does not compile for our GPU’s architecture (with CUDA compute capability 7.5).
The warp-cooperative work strategy (WCWS) adopted by the Meerkat framework in the insertion, deletion, and query operations, and in our algorithms, requires that warp divergence be avoided. Data is exchanged among threads using warp-cooperative functions such as ballot_sync, ffs, etc., which are fast as they work only with registers. In WCWS, the threads within a warp have different tasks (vertices/edges) assigned to them. The warp threads form a queue for processing these tasks (using ballot_sync); a task to be processed is collectively elected by the warp (using ffs). Each slab occupies 128 bytes, which closely matches the GPU's \(L1\) cache line size. All warp threads perform coalesced vectorized memory accesses on a slab storing a vertex's adjacent neighbors. In Hornet, load balance is achieved by sequentially distributing the edges equally among all the threads spawned for the kernel grid. However, this method sacrifices coalesced memory access, resulting in poorer performance.
As described in Section 5, Hornet migrates the adjacent neighbors of a vertex to a larger edge block if the current block cannot accommodate the incoming edges. In the case of deletion, if the adjacent edges become fewer than a threshold, the edges are migrated to a smaller block. This migration of adjacent neighbors does not happen in Meerkat. For every new edge \(\langle u,v\rangle\) to be inserted, Meerkat applies a hashing function to the destination vertex \(v\) to choose which slab-list of the source vertex \(u\) must store its neighbor \(v\). An edge is stored at the end of the slab list if it is not already present in the slab list. This requires a traversal to the end of the slab list to check for previously added identical edges. Hashing distributes the destination vertices among multiple slab lists, implicitly reducing the number of slabs to be retrieved for checking duplication. If the last slab in the slab list cannot accommodate a new edge, Meerkat obtains a new slab from the pool of pre-allocated slabs by invoking the slab allocator. The new slab is linked to the end of the slab list, and the new edge is recorded in it.
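A simplified, single-thread view of this insertion path is sketched below; all types and methods are illustrative stand-ins, and the actual Meerkat operation is warp-cooperative with coalesced slab reads rather than the per-thread pointer chasing shown here.

```cpp
struct Slab {
  __device__ bool contains(int v) const;   // scan this slab's entries for v
  __device__ bool try_append(int v);       // fails if the slab is already full
  Slab* next;
};
struct SlabList  { Slab* head; };
struct VertexAdj { SlabList* lists; int num_lists; };
struct SlabAllocator { __device__ Slab* allocate(); };  // pre-allocated pool
__device__ int hash(int v);                              // stand-in hash function

// Insert edge <u, v>: hash(v) selects one of u's slab lists, which is scanned
// for duplicates; the edge is appended at the end, growing the list if needed.
__device__ void insert_edge(VertexAdj* adj, int u, int v, SlabAllocator alloc) {
  SlabList& list = adj[u].lists[hash(v) % adj[u].num_lists];
  Slab* slab = list.head;
  while (true) {
    if (slab->contains(v)) return;         // duplicate edge: nothing to do
    if (slab->next == nullptr) break;      // reached the last slab
    slab = slab->next;
  }
  if (!slab->try_append(v)) {              // last slab is full:
    Slab* fresh = alloc.allocate();        //   take a pre-allocated slab,
    slab->next = fresh;                    //   link it at the end of the list,
    fresh->try_append(v);                  //   and record the new edge there
  }
}
```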
Figure 3: Insertion - Speedup Over Hornet
Figure 3 compares the insertion throughput for loading an entire graph and for small-batch insertions (2K, 4K, 8K). For small batch insertions of 2K, 4K, and 8K edges, Meerkat performs \(3.7\times\)-\(11.04\times\) better than Hornet. Figure 5 compares the performance of a query benchmark for batches of randomly generated edges of sizes ranging from \(2^{16}\) to \(2^{20}\); Meerkat performs \(2.16-6.37\times\) better than Hornet. Figure 4 compares the throughput of the deletion operation for the whole graph and for small batches (2K, 4K, 8K). For small batch deletions, Meerkat performs \(10.48\times\)-\(16.15\times\) better than Hornet. The performance improvement is due to better coalesced memory accesses and the lack of memory block migrations in Meerkat. The deletion benchmark shows better performance because the deletion operation only flips a valid entry to TOMBSTONE_KEY. Unlike in the insertion operation, the adjacent neighbor to be deleted could occur anywhere within a slab list; the traversal of the slab list halts once the incident edge to be deleted is found. The insertion operation additionally requires adding new slabs once a slab list becomes full. In the Meerkat framework, each thread in a warp processes slabs holding neighbors of the same vertex, resulting in better load balance and coalesced memory access.
Disabling hashing has a direct consequence of improving slab occupancy, especially in graphs that have a high average out-degree (such as Orkut, Higgs, and Wikipedia). On disabling hashing, the average slab occupancy improved by 24% for Orkut, 14.35% for Higgs, 8% for Pokec, and 5% for LJournal, with an overall 6.26% improvement across all our benchmark graphs. In traversal algorithms such as BFS, SSSP, PageRank, and WCC, where retrieving all the neighbours of a vertex is of interest, all slabs associated with the vertex must be visited. The improvement in slab occupancy has two direct consequences. First, fewer slabs per vertex translate to fewer memory accesses needed to retrieve all its neighbours. Second, more neighbours of a vertex are retrieved per slab, resulting in a better workload for warps.
Figure 4: Deletion - Speedup Over Hornet
Figure 5: Query - Speedup Over Hornet
If the operation of interest is SearchEdge(), enabling hashing gives a performance improvement, as is the case for the Triangle Counting algorithm. Enabling hashing distributes the neighbours of a vertex among several slab lists, reducing the length of each slab list. Out of the several slab lists associated with a vertex, it is sufficient to inspect the one that could be holding the neighbouring vertex we are looking for; due to hashing, the slabs in the other slab lists need not be inspected during the query operation.
### BFS and SSSP
The BFS and SSSP algorithms are programmed in Meerkat using two approaches: the vanilla implementation uses 32-bit atomics, while the tree-based implementation uses 64-bit atomics. The vanilla implementation computes only the shortest distances of the vertices reachable from the source vertex. The tree variant, however, also computes a dependency tree tracking how these distances have been computed. The maintenance of this dependency tree is necessary for the correct working of our incremental/decremental BFS and SSSP algorithms. The BFS and SSSP algorithms in Meerkat and Hornet follow an iterative approach using a pair of frontiers [18].
Disabling hashing for the BFS algorithm produces an average improvement of 10.78% (up to 28.1%) in performance, and an average improvement of 9.1% (up to 23.28%) for the tree-based variant. Similarly, disabling hashing for the SSSP benchmark produces an average improvement of 9.9% (up to 35%); the tree-based variant shows a similar average improvement of 11% (up to 28.95%).
#### 6.1.1 Static BFS and SSSP comparison with _Hornet_
Figure 6(a) shows the speedup of the static vanilla and tree-based implementations of the BFS algorithm in Meerkat over the static implementation in Hornet. Figure 6(b) shows the speedup of the static vanilla and tree-based SSSP algorithms on Meerkat against those of Hornet. The tree-based approach is necessary for setting up the initial data structures for the incremental/decremental variants of our BFS/SSSP implementations on Meerkat. Our vanilla BFS algorithm on Meerkat is, on average, \(1.17\times\) (up to \(1.82\times\)) faster than that of Hornet. The vanilla SSSP algorithm on Meerkat is, on average, \(1.32\times\) (up to \(1.85\times\))
Figure 6: Static BFS / SSSP
faster than Hornet's implementation. The reasons for the speedup in Meerkat are better coalesced memory accesses and the lack of migration of memory blocks.
The tree-based BFS and SSSP algorithms on Meerkat, in contrast to the vanilla implementations, initialize tree nodes (64-bit \(\langle distance_{SRC},parent\rangle\) pairs) using 64-bit atomics. The tree-based BFS has an average overhead of 17.2% in execution time over the vanilla version. A similar overhead of \(\approx\) 14% was seen in the case of tree-based SSSP.
#### 6.1.2 Dynamic BFS and SSSP in Meerkat
Figures 7(a) and 7(b) show the \(s^{10}_{10k}\) incremental and decremental speedups with respect to the static algorithms on Meerkat, for the dynamic BFS and SSSP algorithms, respectively. The incremental BFS and SSSP are bound to be faster than their decremental counterparts. Incremental BFS and SSSP are performed by choosing the input batch of edges as the initial frontier and iteratively applying the static algorithm to recompute the tree. The decremental variant additionally involves the invalidation of the affected vertices in the tree, the propagation of invalidation through the tree, and the computation of the initial frontier from the unaffected vertices, before the tree is recomputed by the iterative application of the static algorithm.
With the exception of the USAfull and BerkStan graph inputs, the execution times of the repeated application of the static algorithm were, on average, 7.3% and 5.6% lower than the static running times on the original graph, for incremental BFS and incremental SSSP, respectively. For the USAfull and BerkStan graphs, this difference was close to 80% and 71%, respectively. In the case of incremental BFS on USAfull, we observed an 11.53\(\times\) decrease in the average distance after the addition of the first 10K batch, and nearly a 2\(\times\) decrease from the first to the tenth batch. USAfull is fully connected and, hence, no increase in the number of reachable vertices was observed. In the case of BerkStan, while the average distance decreased from 11.7 to 8.58 over our sequence of ten incremental batches of 10K, the number of reachable vertices increased from \(\approx\)460K to \(\approx\)591K. Due to their graph topology, the speedups for incremental BFS and SSSP on USAfull and BerkStan were much lower compared to the other graphs. There was no significant increase in the number of reachable vertices or decrease in the average distance for the other
Figure 7: Dynamic BFS and SSSP on Meerkat
benchmark graphs.
In the case of decremental BFS and SSSP, the number of edges in the dependence tree that are invalidated depends on the average in-degree of a vertex. We observed that for low average-in-degree graphs, the likelihood of tree edges being invalidated was higher than for high in-degree graphs. For example, in the case of decremental BFS for a sequence of ten 10K batches, for the USAfull graph (with an average in-degree of 2), an average of 38.97% of the decremental batch were tree edges, while it was 0.769% of the decremental batch for Orkut (with an average in-degree of 72). It should be noted that the depth of the dependence tree is the BFS distance. A smaller tree depth (BFS distance) and a large average degree favor fewer vertices being invalidated. In our observations for decremental BFS with 10K batches, the distances of an average of 0.23K vertices for Orkut, 0.4K for Wikipedia, 1K for LJournal, 3.94K for BerkStan, and 6K for Rand10M were invalidated after each batch, while it was an average of 9.54M vertices for USAfull. This explains why the USAfull graph performs poorly with our decremental BFS and SSSP algorithms. In the case of the BerkStan graph, we have seen a decrease in the average distance for successive decremental batches, while the other graphs have shown a marginally increasing trend in the average distance for successive decremental batches. The number of reachable vertices was reduced by \(\approx\)2% after ten batches for BerkStan, resulting in higher speedups. Rand10M and Orkut did not show any decrease in reachable vertices, while the decrease was on average 0.047% (up to 0.18% for USAfull) for the other graphs.
### PageRank
For our experimental evaluation of PageRank, we set the damping factor to 0.85 and the error margin to 0.00001. The computation of PageRank (see Algorithm 5) involves traversing the neighbors of each vertex along their incoming edges. Disabling hashing improves the slab occupancy, especially in graphs with a higher average in-degree. For low average in-degree graphs such as USAfull (2), Rand10M (8), and Wiki-talk (2), there is no performance improvement, as disabling hashing has virtually no effect: most vertices, owing to their low in-degree, have single slab lists. However, for large average in-degree graphs such as Orkut (76) and Wikipedia (27), disabling hashing produces a speedup of about \(1.36-1.62\times\) for the static PageRank algorithm.
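A pull-style sketch of one PageRank iteration over incoming edges is shown below; the `Graph` accessors are illustrative stand-ins for the warp-cooperative slab traversal of Algorithm 5.

```cpp
struct Graph {                                // stand-in for Meerkat's SlabGraph
  __device__ int in_degree(int v) const;
  __device__ int in_neighbour(int v, int i) const;
};

// One pull-based PageRank step: vertex v gathers rank along incoming edges.
// d is the damping factor (0.85 in our evaluation). The host loop repeats the
// kernel, swapping pr_old/pr_new, until max_v |pr_new[v]-pr_old[v]| < 0.00001.
__global__ void pagerank_step(Graph G, const float* pr_old, float* pr_new,
                              const int* out_degree, int n, float d) {
  int v = blockIdx.x * blockDim.x + threadIdx.x;
  if (v >= n) return;
  float sum = 0.0f;
  for (int i = 0; i < G.in_degree(v); ++i) {  // traverse the incoming edges
    int u = G.in_neighbour(v, i);
    sum += pr_old[u] / out_degree[u];
  }
  pr_new[v] = (1.0f - d) / n + d * sum;
}
```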
Figure 8: Static / Dynamic PageRank
#### 6.2.1 Comparison of Static implementation with _Hornet_
Figure 8(a) compares the performance of static PageRank on Meerkat with that of Hornet. In six out of the seven graphs (that is, all except Rand10M), Meerkat performs \(1.18-2.49\times\) (on average \(1.74\times\)) faster than Hornet. The PageRank implementations on both Meerkat and Hornet are traversal-based algorithms. Each iteration applies the PageRank computation to all vertices. The number of iterations depends on when convergence is achieved, based on the convergence strategy. The performance improvement in Meerkat can be attributed to our efficient iterators performing coalesced accesses while retrieving adjacent vertices. While Hornet attempts to avoid warp divergence, its traversal mechanism does not perform coalesced memory accesses.
#### 6.2.2 Speedup of Dynamic Implementation in Meerkat
The incremental and decremental algorithms are identical: the same static PageRank algorithm is applied to the entire graph after performing the insertion/deletion of edges, respectively. Figure 9 and Figure 10 show the average running time (in \(\times\)) and the average number of iterations (in \(\circ\)) for small incremental/decremental batches ranging from 1K to 10K, respectively. We observe an increasing trend in the execution time as the batches increase in size; this trend bears a striking resemblance to the trend in the number of iterations. The number of vertices whose PageRank values are invalidated increases with the batch size.
Since all vertices participate in the computation of incremental/decremental PageRank, the running time must depend on the number of vertices. Further, the average execution time per iteration increases with the increase in the number of vertices. USAfull has the highest number of vertices and exhibits the
Figure 9: PageRank (incremental - small batches)
highest running time among all the graphs. Rand10M also has a high running time for similar reasons. Across the incremental (decremental) batches, BerkStan requires \(1.8\times\) (\(1.57\times\)) more iterations than USAfull. BerkStan has a smaller number of vertices but a higher diameter compared to the other graphs. Owing to their small diameters, Orkut, LJournal, Pokec, and Rand10M converge with fewer iterations and show the slowest growth across the 1K to 10K batches.
Figure 8(b) shows the \(s_{10k}^{10}\) incremental/decremental speedups with respect to the static algorithms on Meerkat. The speedups are directly proportional to the reduction in the number of iterations required until convergence is achieved. For batches of 10K edges, we observed that Orkut achieved convergence with \(\approx\)20% and \(\approx\)13% of the iterations required by the static variant, for the incremental and decremental algorithms, respectively. Rand10M, in contrast, converged in \(\approx\)64% of the iterations of the static variant for both the incremental and decremental algorithms, registering the smallest speedups.
### Triangle Counting
Figure 11(a) compares the performance of the static triangle counting algorithm on Meerkat against Hornet. Hornet performs, on average, \(31.12\times\) (up to \(59.34\times\)) faster than Meerkat on our benchmark graphs. The implementation of triangle counting on Hornet pre-processes the input batch, sorting the edges so that the adjacent neighbours of every vertex can be accessed in ascending order. This ordering of edges is beneficial for performing intersections of the adjacencies of the endpoints of an edge. In Hornet, all the neighbours of a vertex are contiguously available within an edge block, whose size is the smallest power of two greater than the number of adjacent neighbours. Hornet divides the edges into multiple bins. In each bin, the edges have the same estimated effort for performing an intersection of the adjacencies
Figure 10: PageRank (decremental - small batches)
of their endpoints. Each bin of edges is processed in a separate kernel invocation so that proper load balancing is achieved among the kernel threads. Hornet uses two methods for performing the intersection of the adjacencies of the endpoints of the edges, both of which depend on the adjacencies being stored in ascending order. In the first method, Hornet chooses the endpoint having the smaller adjacency list of the two; it traverses the vertices in this adjacency list and counts their membership in the adjacency list of the other vertex using binary search. The second method is based on a simultaneous traversal of both adjacency lists. A pointer is associated with each adjacency list; a pointer is advanced if it points to an element smaller than that referred to by the other pointer, and the intersection count is incremented if both pointers refer to the same element. The drawback of forcing an ascending order among the adjacencies of each vertex is that it makes the Hornet graph object unsuitable for triangle counting re-computation after the insertion/deletion of edges.
In contrast, the presence of hashing in Meerkat distributes the adjacent vertices among multiple slabs, and an ordering among the elements cannot be enforced. Due to this lack of edge ordering, the Meerkat framework cannot use the efficient intersection methods that Hornet uses. In Meerkat, for each edge \(u\to v\) (assuming, without loss of generality, that \(u\) has a smaller adjacency than \(v\)), the slabs of \(u\) are first traversed with the help of our iterators. For each vertex \(w\) adjacent to \(u\) retrieved by our iterators, the SearchEdge() device method checks for the existence of \(w\) in the adjacencies of \(v\). This is achieved by hashing the vertex \(w\) to identify the slab list that could potentially hold \(w\), and by traversing that slab list to discover the vertex \(w\). Enabling hashing for the Triangle Counting benchmark improves the performance by \(15.44\times\) on our benchmark graphs.
Figure 11(b) shows the \(s^{10}_{10k}\) speedup of our incremental/decremental algorithms over the static algorithms on Meerkat. Across the benchmarks, superlative speedups are observed since, for each batch, the static algorithm counts the number of triangles by performing an intersection of the adjacencies of both endpoints for every graph edge, while the dynamic algorithm performs the intersection only for the endpoints of the edges in the batch. The speedup observed is very large if the batch size is very small compared to
Figure 11: Static / Dynamic Triangle Counting on Meerkat
the number of edges in the graph. Hence, graphs such as Orkut, LJournal, Rand10M, and Wikipedia enjoy very high speedups compared to the repeated application of the static algorithm.
### Weakly Connected Component (WCC)
We evaluate the performance of the static WCC algorithm on Meerkat against Hornet, followed by the performance of incremental WCC in Meerkat under various optimizations. Decremental WCC on GPUs is, at the time of writing, an unsolved problem.
#### 6.4.1 Static WCC in Meerkat vs. Hornet
Figure 12(a) compares the performance of static WCC on Meerkat against Hornet. Hornet uses a modified BFS-like algorithm for discovering connected components, using a two-level queue, in which the insertion and removal operations are performed on two separate queues. In the first step, the discovery of the largest connected component is attempted: a BFS is performed from the source vertex with the help of the two-level queue, and all the reachable vertices are marked with the same color. In the second step, all the unvisited vertices are incrementally assigned a unique color. In the third step, the edges from these unreachable vertices are queued, and both endpoints are iteratively assigned the smaller of their colors; this iterative process continues until all the endpoints of such edges have the same color. In Meerkat, the static WCC implementation uses the union-find approach for discovering weakly connected components. It performs a single traversal through all the adjacencies of the graph: it uses the Union-Async strategy for the union operation on the adjacent edges discovered, and full path compression for determining the representative elements of the vertices in the find operation.
We observe that while Meerkat performs \(6.08\times\) better on average across all our input graphs, the speedup against Hornet is lower when there are vertices with a very large out-degree, as observed in graphs such as Orkut, LJournal, and Wikipedia. This is because a large out-degree vertex causes many vertices to be enqueued into the BFS frontier queue, thereby improving Hornet's parallelism. However, for networks such as
Figure 12: Weakly Connected Components
USAfull and BerkStan, whose diameters are much higher, the BFS approach in Hornet performs significantly worse compared to Meerkat.
#### 6.4.2 Incremental WCC in Meerkat
Incremental WCC is evaluated with different schemes.
* _Naive_: traverses through all the slab lists as it is ignorant about the location of the new updates. Hence, it is expected to be the least performant implementation. This algorithm is identical to the naive static WCC algorithm on SlabGraph.
* _SlabIterator_: maintains a boolean flag for every vertex. This flag is set to true for a vertex if there are new adjacent vertices in the input batch. All the neighboring adjacencies of such a vertex are traversed. This bears the potential to significantly reduce the need to visit a large number of source vertices, especially when the batch size is small compared to the current graph, and when a source vertex is shared by a significant majority of batch update edges.
* _UpdateIterator_: apart from maintaining a boolean flag to identify source vertices with updated adjacencies, this implementation maintains an update flag for every slab list and the allocator address of the first updated slab. Unlike the _SlabIterator_, the _UpdateIterator_ visits only those slabs storing the new updates (see the sketch after this list).
* _UpdateIterator + Single Bucket_: evaluates the _UpdateIterator_ approach in the absence of hashing. Intuitively, this should ensure that a warp operating on an _UpdateIterator_ sees more updates in a single memory access while traversing a slab list. This method produced the highest speedup with respect to the _Naive_ variant.
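The flag-guided traversal behind the _UpdateIterator_ schemes can be sketched as follows; the types, field names, and slab capacity are illustrative assumptions, not Meerkat's definitions.

```cpp
constexpr int SLAB_CAPACITY = 30;   // illustrative: rest of the 128B holds metadata
constexpr int EMPTY_KEY     = -1;

struct Slab      { int key[SLAB_CAPACITY]; Slab* next; };
struct SlabList  { bool updated; Slab* first_updated_slab; };
struct VertexAdj { bool updated; SlabList* lists; int num_lists; };

// Count the new adjacencies of vertex u, touching only updated slab lists and,
// within each, only the slabs from the first updated one onwards.
__device__ int count_new_edges(const VertexAdj* adj, int u) {
  if (!adj[u].updated) return 0;             // vertex saw no new edges at all
  int n = 0;
  for (int l = 0; l < adj[u].num_lists; ++l) {
    const SlabList& list = adj[u].lists[l];
    if (!list.updated) continue;             // this slab list saw no new edges
    for (const Slab* s = list.first_updated_slab; s != nullptr; s = s->next)
      for (int k = 0; k < SLAB_CAPACITY; ++k)
        if (s->key[k] != EMPTY_KEY) ++n;     // visit only the updated slabs
  }
  return n;
}
```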
Figure 12(b) compares the performance of _UpdateIterator + Single Bucket_ against the _naive_ scheme, while a similar comparison of the _SlabIterator_ and _UpdateIterator_ schemes is shown in Table 6. It must be remembered that the naive scheme cannot distinguish between the updated slabs and those slabs holding adjacent vertices inserted previously; hence, its running time is proportional to the number of edges present in the graph representation. Our optimized processing iterates over only the updated slabs (_UpdateIterator_,
| Method | Dataset | 2K | 4K | 8K |
| --- | --- | --- | --- | --- |
| SlabIterator | LJournal | 40.5\(\times\) | 12.3\(\times\) | 10.9\(\times\) |
| SlabIterator | Rand10M | 149.8\(\times\) | 170.4\(\times\) | 146.9\(\times\) |
| SlabIterator | BerkStan | 9.55\(\times\) | 9.98\(\times\) | 9.03\(\times\) |
| SlabIterator | Wiki-talk | 18.3\(\times\) | 8.62\(\times\) | 7.37\(\times\) |
| SlabIterator | Wikipedia | 23.65\(\times\) | 18.19\(\times\) | 18.07\(\times\) |
| SlabIterator | Orkut | 82.3\(\times\) | 62.9\(\times\) | 20.5\(\times\) |
| SlabIterator | USAfull | 33.9\(\times\) | 34.3\(\times\) | 34.0\(\times\) |
| UpdateIterator | LJournal | 34.6\(\times\) | 8.36\(\times\) | 7.58\(\times\) |
| UpdateIterator | Rand10M | 149.0\(\times\) | 146.8\(\times\) | 144.7\(\times\) |
| UpdateIterator | BerkStan | 7.71\(\times\) | 7.82\(\times\) | 7.34\(\times\) |
| UpdateIterator | Wiki-talk | 15.8\(\times\) | 7.27\(\times\) | 6.37\(\times\) |
| UpdateIterator | Wikipedia | 19.56\(\times\) | 15.34\(\times\) | 15.08\(\times\) |
| UpdateIterator | Orkut | 71.0\(\times\) | 52.9\(\times\) | 17.0\(\times\) |
| UpdateIterator | USAfull | 33.9\(\times\) | 34.0\(\times\) | 33.6\(\times\) |

Table 6: Incremental WCC: Speedup over the naïve scheme
_UpdateIterator + Single Bucket_) or slab lists (_SlabIterator_); therefore, the running time is proportional to the number of slabs/slab lists holding new updates, respectively. Hence, the speedup decreases with increasing batch size.
The use of an _update_ flag per vertex contributes significantly to eliminating the traversal of the adjacencies of the source vertices that do not have updated slab lists. Almost identical performance is observed between the _UpdateIterator_ and the _UpdateIterator + Single Bucket_ schemes for low-degree graphs such as USAfull. If the average degree of a vertex is less than a slab's capacity, only one slab list (one head slab) would be allocated per vertex, and most updates would thus fit the head slab comfortably; the small residual overhead can be attributed to checking the slab list's _update_ flag. In high-degree graphs such as social networks (namely, Orkut, Wikipedia, and Wiki-talk), the _UpdateIterator_ over a single bucket skips past the previously inserted vertices, with the updates restricted contiguously to a single slab list, resulting in a marginal increase in performance. The use of the _UpdateIterator_ with the vertex flag and multiple slab lists shows lower performance than the use of a single slab list for high-degree graphs that follow the _power-law_ distribution (Orkut, Wikipedia, and Wiki-talk): in the presence of multiple slab lists, the _UpdateIterator_ must sequentially probe whether the _update_ flag is set for every slab list of the source vertex. This overhead is avoided with a single slab list by subsuming the function of the slab list's update flag within that of the source vertex.
## 7 Conclusion and Future Work
We presented a new framework, Meerkat, for dynamic graph algorithms on GPUs. It builds upon and significantly enhances the hash-based SlabHash data structure. Meerkat offers a memory-efficient alternative, proposes new iterators, and optimizes their processing to improve both the execution time and the memory requirement. These enhancements allow dynamic graph algorithms, involving both incremental and decremental updates, to be implemented efficiently on GPUs. We illustrated the effectiveness of the framework using fundamental graph algorithms (BFS, SSSP, PR, TC, and WCC) and fundamental graph operations (insert, delete, and query). As part of future work, we would like to implement more complex graph algorithms using our framework, and to explore the feasibility of approximations to further reduce the memory requirement of Meerkat.
|
2306.10392 | GlyphNet: Homoglyph domains dataset and detection using attention-based
Convolutional Neural Networks | Cyber attacks deceive machines into believing something that does not exist
in the first place. However, there are some to which even humans fall prey. One
such famous attack that attackers have used over the years to exploit the
vulnerability of vision is known to be a Homoglyph attack. It employs a primary
yet effective mechanism to create illegitimate domains that are hard to
differentiate from legit ones. Moreover, as the difference is pretty
indistinguishable for a user to notice, they cannot stop themselves from
clicking on these homoglyph domain names. In many cases, that results in either
information theft or malware attack on their systems. Existing approaches use
simple, string-based comparison techniques applied in primary language-based
tasks. Although they are impactful to some extent, they usually fail because
they are not robust to different types of homoglyphs and are computationally
not feasible because of their time requirement proportional to the string
length. Similarly, neural network-based approaches are employed to determine
real domain strings from fake ones. Nevertheless, the problem with both methods
is that they require paired sequences of real and fake domain strings to work
with, which is often not the case in the real world, as the attacker only sends
the illegitimate or homoglyph domain to the vulnerable user. Therefore,
existing approaches are not suitable for practical scenarios in the real world.
In our work, we created GlyphNet, an image dataset that contains 4M domains,
both real and homoglyphs. Additionally, we introduce a baseline method for a
homoglyph attack detection system using an attention-based convolutional Neural
Network. We show that our model can reach state-of-the-art accuracy in
detecting homoglyph attacks with a 0.93 AUC on our dataset. | Akshat Gupta, Laxman Singh Tomar, Ridhima Garg | 2023-06-17T17:16:53Z | http://arxiv.org/abs/2306.10392v1 | GlyphNet: Homoglyph domains dataset and detection using attention-based Convolutional Neural Networks
###### Abstract
Cyber attacks deceive machines into believing something that does not exist in the first place. However, there are some to which even humans fall prey. One such famous attack that attackers have used over the years to exploit the vulnerability of vision is known to be a Homoglyph attack. It employs a primary yet effective mechanism to create illegitimate domains that are hard to differentiate from legit ones. Moreover, as the difference is pretty indistinguishable for a user to notice, they cannot stop themselves from clicking on these homoglyph domain names. In many cases, that results in either information theft or malware attack on their systems. Existing approaches use simple, string-based comparison techniques applied in primary language-based tasks. Although they are impactful to some extent, they usually fail because they are not robust to different types of homoglyphs and are computationally not feasible because of their time requirement proportional to the string's length. Similarly, neural network-based approaches are employed to determine real domain strings from fake ones. Nevertheless, the problem with both methods is that they require paired sequences of real and fake domain strings to work with, which is often not the case in the real world, as the attacker only sends the illegitimate or homoglyph domain to the vulnerable user. Therefore, existing approaches are not suitable for practical scenarios in the real world. In our work, we created GlyphNet, an image dataset that contains 4M domains, both real and homoglyphs. Additionally, we introduce a baseline method for a homoglyph attack detection system using an attention-based Convolutional Neural Network. We show that our model can reach state-of-the-art accuracy in detecting homoglyph attacks with a 0.93 AUC on our dataset.
**Keywords: Homoglyph Attacks, Convolutional Neural Networks, Cyber Security, Phishing**
## Introduction
In cyber security, attackers employ different attacks to infiltrate our systems and networks, with the objective varying from stealing crucial information to inflicting system damage. One such deceptive attack is the homoglyph attack [17], which involves an attacker trying to fool humans and computer systems by using characters and symbols that may appear visually similar to characters used in real domain and process names but are different. For example, a typical homoglyph attack may involve changing "d" to "cl", "o" to "0", and "l" to "1".
Some of the above substitutions can be difficult for the naked eye to detect, as shown in Figure 1. This means that users are easily susceptible to clicking on homoglyph links, more so when navigating from one website to another. The problems arising from such an attack are of two types: a) deceiving humans into believing that an illegitimate domain name is real, resulting in users using fake webpages as if they were the real ones; b) creating fake academic documents and papers by replacing the real strings with homoglyphs to deceive plagiarism detection tools such as Grammarly.com.
Both types of problems are hard to detect and hence require robust methods to identify an attack before it causes any information breach. Previous approaches mainly used comparative algorithms such as edit distance to distinguish homoglyph attacks from legitimate strings [10]. Any domain name string that returned an edit distance beyond an acceptable threshold was considered a homoglyph. Edit distance covers simple operations like insertion, deletion, transposition, swapping, and substitution; because it treats a visually deceptive substitution no differently from an obvious one, a slightly modified illegitimate domain name can bypass such a check as easily as a real one. A slightly better version, called Visual Edit Distance [14], was later proposed; it defines an edit distance based on the visual similarity of
Figure 1: Example of a real domain and its homoglyphs
the two domain name strings. However, these methods were more relevant in academia and had negligible prevalence in the real world. A homoglyph attack differs from a phishing attack because domain names in the former are hardly distinguishable but can be apparent in the latter.
We have taken the famous poem "The Road Not Taken" by Robert Frost to demonstrate this concept. In Figure 2, we have taken the poem text and run Grammarly's plagiarism detector tool on it. It reports 100% plagiarism, which is correct; however, when we passed the homoglyphed version of the same text, it reported the text to be 100% unique, as shown in Figure 3. This proves that even today's state-of-the-art systems cannot effectively deal with texts comprising homoglyphs.
Recently, Microsoft obtained a court order to remove many fraudulent "homoglyph" domains used to conduct fraud and pose as Office 365 users (Page 2021). Following a customer complaint about a business email compromise attack, Microsoft conducted an investigation and discovered that the unidentified criminal organization responsible for the attack had also created 17 other malicious domains, which were combined with the stolen customer credentials to gain unauthorized access to Office 365 accounts and monitor the customers' contacts.
Microsoft stated that the cybercriminals have caused and continue to cause irreparable injury to Microsoft, its customers, and the general public. The complaint also stated that the cybercriminals have illegally accessed customer accounts, monitored customer email traffic, gathered information on pending financial transactions, and criminally impersonated [12] customers.
According to studies, this attack hit \(71\%\) of organizations in 2021, and people in sixty-two countries were the subject of a massive cyberattack last year.
In this research, we aim to create a data set that can help expand research on homoglyph attacks. We propose to apply an attention-based Convolutional Neural Network (CNN) to detect homoglyphs without the need to provide paired data. Additionally, our model achieves a significant performance boost compared to other approaches due to its architectural design. Our method can be applied directly as an API or web service for an organization to check a domain or process name before accessing it. For evaluation, we compared our performance with other baselines and found that our model outperforms them. Moreover, our approach also addresses the problem of unpaired data setting, which is often the case in the real world.
The major contributions of our research are as follows:
1. Created a benchmark dataset of 4 million real and homoglyph domain images based on known homoglyph patterns and URLs. It is generated via strings from single- and dual-character noise sampled using a Gaussian distribution over a homoglyph character pool.
2. A method that uses an image dataset for detecting homoglyphs involving an attention-based convolutional neural network trained in a supervised fashion achieves a better AUC score than the other existing baselines.
The paper's organization starts by introducing the problem faced by existing approaches to detect homoglyph-based phishing attacks in both academia and the real world. In Related Work, we discuss the existing approaches which propound the idea of solving this problem with either string matching or Deep Learning based methods like Siamese Neural Networks and GANs. We have explained their major pitfalls in terms of generalizing capabilities and feasibility. In the Dataset section, a comprehensive description is provided of the generation of the proposed images dataset. It follows a brief description of our attention-based CNN baseline implementation. The Experimentation section describes dataset splitting, metrics used, and other settings. Later, in the Results section, we examine the results and scores obtained after the experiments conducted in the last section. Both data and baseline implementation results are validated and explained with the help of an elegant table within the same section. The following section, Discussion, presents experiments we tried that did not work. Finally, the Conclusion Section summarizes the observations and contributions.
## Related Work
The work by [10] used a Siamese Neural Network to detect homoglyphs using a paired dataset. This dataset included pairs of strings; one was a real domain name, and the other was a homoglyph domain name. In their work, they converted each pair of strings into binary images that were later fed to a Siamese Neural Network [14].
Figure 3: Homoglyph text on Grammarly plagiarism detector
Figure 2: Real text on Grammarly plagiarism detector
The Siamese neural network (Koch, Zemel, and Salakhutdinov 2015) uses two identical convolutional neural networks (LeCun, Bengio et al. 1995) to learn the visual difference between a pair of images. Siamese networks have been applied to domains such as healthcare and finance, and have recently gained popularity in cyber security.
Though their work showed significant improvement over previous baselines, it suffered from two major pitfalls:
1. In online security systems, it is impossible to provide paired data, without which these systems will not work.
2. It cannot be used in academia due to the inability to find a paired word for each word present in a scientific article.
Therefore, although this approach performs well, it cannot be employed in real-world systems.
The traditional solutions to prevent homoglyph attacks were inspired by genomics (Lu, Mohammed, and Wang 2019), and proposed that since homoglyph domains are in string format, they should be compared with legitimate ones to detect whether they are real or not. Edit Distance (Ristad and Yianilos 1998) is the measure of the minimum number of operations required to transform one sequence (a domain or process name string, in our case) into another; if the value exceeds an acceptable threshold, the string is predicted to be a homoglyph. This looks effective at first glance, but falls short on closer inspection: in cases like "google.com" and "[email protected]", edit distance returns only 1, which does not look threatening even though the latter is a homoglyph domain name. Furthermore, a paired sequence of strings is required to make comparisons, which would not be available for a homoglyph of a new domain name. Finally, this approach yielded poor results in the real world.
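For concreteness, the standard Levenshtein computation is sketched below in C++; with it, edit_distance("google.com", "[email protected]") evaluates to 1, the same cost as a harmless one-character typo, which illustrates why a fixed threshold separates homoglyphs from legitimate domains so poorly.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Classic Levenshtein edit distance via dynamic programming: d[i][j] is the
// distance between the first i characters of a and the first j characters of b.
int edit_distance(const std::string& a, const std::string& b) {
  std::vector<std::vector<int>> d(a.size() + 1, std::vector<int>(b.size() + 1));
  for (size_t i = 0; i <= a.size(); ++i) d[i][0] = static_cast<int>(i);
  for (size_t j = 0; j <= b.size(); ++j) d[0][j] = static_cast<int>(j);
  for (size_t i = 1; i <= a.size(); ++i)
    for (size_t j = 1; j <= b.size(); ++j)
      d[i][j] = std::min({d[i - 1][j] + 1,                        // deletion
                          d[i][j - 1] + 1,                        // insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1])});  // substitution
  return d[a.size()][b.size()];
}
```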
Phishing attacks (Hong 2012) should not be confused with homoglyph attacks. Phishing is an attack in which the attacker sends homoglyph/false/fake links that appear to be coming from a trusted vendor. It leads to information compromise (Helms, Ettkin, and Morris 2000), data breaches (Cheng, Liu, and Yao 2017), and financial fraud (Rewink 2018). The difference between phishing and homoglyphs is that the former uses tricks such as long, bad, and misspelled domain names and URLs (Ma et al. 2009) to fool people, while the latter takes advantage of the inability to differentiate visually similar but different domain name strings. Thus, better solutions are required for homoglyph detection.
### Siamese Neural Networks
The Siamese neural network architecture was proposed to detect homoglyphs using a paired dataset. This dataset included pairs of strings; one was a real domain name, and the other was a homoglyph domain name. Each instance was a tuple containing a real domain string, a homoglyph domain string, and a score denoting whether the second element is a valid homoglyph of the first. In their work, each pair of strings was converted into binary images that were later fed to a Siamese Neural Network. However, we observed a significant difference while reproducing the results on our dataset.
### PhishGANs
Approaches such as Siamese Neural Networks suffered severely in terms of performance due to the lack of data, as they only had close to \(91k\) real domain images. As a remedial solution, comprehensive data was required to train the models well. Recently, Lee Joon Sern et al. proposed PhishGANs to generate synthetic data (Sern, David, and Hao 2020). They discussed creating a generative adversarial network (Goodfellow et al. 2014) that aimed to create images similar to real domain names in order to augment existing datasets. PhishGANs, being a GAN, involved a generator and a discriminator, both trained in an adversarial fashion such that the generator learns to produce images similar to those of a real domain which the discriminator cannot detect. Later, these images were fed to a different network for binary classification aimed at distinguishing real domain names from homoglyphs. They used the UNet (Ronneberger, Fischer, and Brox 2015) architecture as the generator with a custom loss function called the dot product loss. PhishGANs were trained similarly to how Pix2Pix (Isola et al. 2017) is trained. Later, for classification purposes, a new architecture called the homoglyph identifier (HI) was defined, using a CNN (LeCun, Bengio et al. 1995) as an encoder with a triplet loss function (Hoffer and Ailon 2015), taking as input
Figure 4: Siamese neural network architecture(Woodbridge et al., 2018)
Figure 5: PhishGAN architecture(Sern, David, and Hao, 2020)
chor domain (go0gle.com) and the negative domain (apple.com). On some popular domains, such as youtube.com and facebook.com, HI achieved an accuracy of roughly \(0.81\) while testing their homoglyphs. On an unseen domain, HI achieved an accuracy of \(0.76\) while feeding it back again in PhishGANs(Sern, David, and Hao 2020) and generating its homoglyphs, and later training on them, which helped detect its homoglyphs using \(0.86\) accuracy. Although the idea of generating synthetic data using GANs(Goodfellow et al. 2014) looks promising and intriguing but is not motivating when it comes to real-world usage. GANs(Goodfellow et al. 2014), in general, is one of the trickiest architectures in Deep Learning(LeCun, Bengio, and Hinton 2015) since their advent and are often found to have issues while training in the real world, which is not the case in the constrained environment of academia. It is common to encounter issues like problems in convergence, generator oscillating between generating specific examples in the domain, and multiple inputs resulting in generating the same output. Also, the performance increase was not drastic enough to compel us for its usage.
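For illustration, a minimal sketch of the triplet-loss objective described above; the encoder layout, embedding size, and margin are our assumptions and not the exact configuration of (Sern, David, and Hao, 2020):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder(input_shape=(150, 150, 1), emb_dim=128):
    """CNN encoder mapping a rendered domain image to an embedding."""
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(emb_dim),
    ])

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor (go0gle.com) toward the positive (google.com),
    push it away from the negative (apple.com)."""
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))
```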
## Dataset
The work by Woodbridge et al. (2018) proposed a custom paired dataset comprising \(91k\) real domains and \(900k\) homoglyphs, where each real domain is used to generate its respective homoglyphs. Each point in this dataset is a three-element tuple of domain, homoglyph, and score; a score of \(1.0\) indicates a valid homoglyph of the real domain. The main real-world limitation for research on homoglyph-based attacks is the lack of publicly available datasets.
**Proposed dataset: GlyphNet**
We have proposed a dataset consisting of real and homoglyph domains. To generate homoglyph domains, real domains are needed; we obtained them from the Domains Project (Turkynewych, 2020), one of the largest collections of publicly available active domains. The entire repository comprises 500M domains; we restricted our work to 2M domains due to hardware constraints.
**Homoglyph Creation Algorithm**
Homoglyph generation is a delicate task: one needs enough randomness for the result to look organic, while keeping the substitutions subtle enough to fool the target. Publicly available tools like dnstwist (Ulikowski, 2015) replace every character in the real input domain with its respective glyphs; this largely produces poor homoglyphs because it relies on paired data, which is impractical for this purpose. We therefore created a novel algorithm that generates homoglyph domains with both randomness and visual closeness to the source domain. To achieve this, we sample homoglyph noise characters from the glyph pool using Gaussian sampling (Boor, Overmars, and Van Der Stappen, 1999). We used 1M real domains to generate \(2M\) homoglyphs with a single glyph character and, to introduce diversity in our dataset, reran this algorithm on the remaining 1M real domains to generate homoglyph domains with two glyph characters. In total, we have \(4M\) real and homoglyph domains.
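A minimal sketch of the generation step; the glyph pool shown is a tiny illustrative subset, and the exact position and glyph sampling details are assumptions:

```python
import random

# Tiny illustrative glyph pool; the real pool draws on Unicode confusables.
GLYPHS = {"a": ["à", "á", "α"], "o": ["0", "ο", "ö"],
          "l": ["1", "ł"], "e": ["3", "é"]}

def make_homoglyph(domain: str, n_glyphs: int = 1) -> str:
    """Swap n_glyphs characters for look-alikes drawn by Gaussian sampling."""
    name, _, tld = domain.rpartition(".")
    chars = list(name)
    candidates = [i for i, ch in enumerate(chars) if ch in GLYPHS]
    for i in random.sample(candidates, min(n_glyphs, len(candidates))):
        pool = GLYPHS[chars[i]]
        # Gaussian sampling over the pool, clamped to a valid index.
        j = min(len(pool) - 1, abs(round(random.gauss(0, 0.5 * len(pool)))))
        chars[i] = pool[j]
    return "".join(chars) + "." + tld

print(make_homoglyph("google.com"))     # e.g. 'g0ogle.com'
print(make_homoglyph("google.com", 2))  # e.g. 'göogl3.com'
```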
**Image Generation**
Homoglyph attacks exploit the weakness of human vision in differentiating real from homoglyph domain names, so, from a visual perspective, we are interested in learning the visual characteristics of both. To do so, we render images from the real and homoglyph strings generated by our algorithm, using the Arial typeface at font size \(28\), drawn as white text from the middle left of a black background; the image size is \(150\times 150\).
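A sketch of the rendering step using Pillow; the font file path and the small horizontal offset are assumptions:

```python
from PIL import Image, ImageDraw, ImageFont

def render_domain(text: str, size=(150, 150), font_path="arial.ttf"):
    """Render white text on a black background, anchored middle-left."""
    img = Image.new("L", size, color=0)      # grayscale, black background
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, 28)
    draw.text((4, size[1] // 2), text, fill=255, font=font, anchor="lm")
    return img

render_domain("go0gle.com").save("go0gle.png")
```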
## Methodology
This section presents our approach to building an end-to-end homoglyph detection system. We build an attention-based (Bahdanau, Cho, and Bengio, 2014; Vaswani et al., 2017) convolutional neural network (LeCun, Bengio et al., 1995) that exploits the visual dissimilarity between real and homoglyph domain names. The architecture of our model is shown in Figure 7 and Figure 8.
The rendered images are used as input to the CNN to learn the desired visual features. The model consists of four Conv2D layers that learn visual information such as edges, curves, and strokes; each convolutional layer is paired with a max-pooling layer that performs dimensionality reduction on the learned features. The model is developed in Keras (Chollet et al., 2015). Each convolution block is followed by a convolutional block attention module (CBAM), as described in the following.
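A minimal sketch of this backbone (filter counts are assumptions; the CBAM block inserted after each convolution is sketched further below and omitted here to keep the snippet self-contained):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_backbone(input_shape=(150, 150, 1)):
    """Four Conv2D blocks, each paired with max-pooling; a CBAM block
    (next sketch) would follow each convolution in the full model."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256):      # filter counts are assumptions
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)        # dimensionality reduction
    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # real vs. homoglyph
    return keras.Model(inputs, outputs)
```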
Figure 6: Rendered images from the dataset; label \(0\): homoglyph domain, label \(1\): real domain
Attention mechanisms boost representational power by focusing on essential features and suppressing unneeded ones. We use CBAM, a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, the module sequentially infers attention maps along the channel and spatial dimensions, and then multiplies the attention maps with the intermediate feature map to achieve adaptive feature refinement. The overall attention process is summarized as follows:
\[\begin{array}{l}F^{\prime}=M_{c}(F)\otimes F,\\ F^{\prime\prime}=M_{s}(F^{\prime})\otimes F^{\prime},\end{array}\]
where:
1. \(F\in\mathcal{R}^{C\times H\times W}\) is the intermediate feature map given as input; \(C\) is the number of channels, and \(H\) and \(W\) are the height and width of \(F\), respectively;
2. \(M_{c}\in\mathcal{R}^{C\times 1\times 1}\) is the 1D channel attention map that CBAM infers first;
3. \(M_{s}\in\mathcal{R}^{1\times H\times W}\) is the 2D spatial attention map inferred next;
4. \(\otimes\) denotes element-wise multiplication.
The ReLU activation function is used to introduce non-linearity.
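A sketch of a CBAM block implementing the two equations above; the reduction ratio and the \(7\times 7\) spatial kernel are the standard choices of the CBAM paper, assumed here:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def cbam_block(x, ratio=8):
    """CBAM: channel attention M_c followed by spatial attention M_s."""
    c = x.shape[-1]
    # Channel attention: shared MLP over average- and max-pooled descriptors.
    mlp = keras.Sequential([layers.Dense(c // ratio, activation="relu"),
                            layers.Dense(c)])
    avg = mlp(layers.GlobalAveragePooling2D()(x))
    mx = mlp(layers.GlobalMaxPooling2D()(x))
    m_c = layers.Reshape((1, 1, c))(
        layers.Activation("sigmoid")(layers.Add()([avg, mx])))
    x = layers.Multiply()([x, m_c])            # F' = M_c(F) (x) F
    # Spatial attention: 7x7 conv over channel-wise average and max maps.
    avg_s = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_s = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    m_s = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_s, max_s]))
    return layers.Multiply()([x, m_s])         # F'' = M_s(F') (x) F'
```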
## Experimentation
### Dataset and Metrics
We split our dataset into three parts (train, validation, and test) with a ratio of \(70:20:10\), which amounts to \(2.8M\), \(0.8M\), and \(0.4M\) images, respectively. Each image is of size \(150\times 150\).
| Dataset Name | Real | Homoglyph | Total |
| --- | --- | --- | --- |
| Domain and Process Strings (Woodbridge et al., 2018) | \(90k\) | \(900k\) | \(990k\) |
| Similar and Dissimilar Pairs (Majumder et al., 2020) | \(2257\) | \(2257\) | \(4514\) |
| **GlyphNet (Ours)** | \(2000k\) | \(2000k\) | \(4000k\) |

Table 1: Dataset comparison
Figure 8: Zoom in view of conv-attention module
Figure 7: Our neural network architecture
We use accuracy to measure classification performance. Since accuracy can be misleading in binary classification tasks, especially on unbalanced datasets, we additionally report precision, recall, and F1 score, even though our dataset is balanced. We also use the AUC score to compare our solution with other works.
### Experimental Settings
For training, we use binary cross-entropy as the loss function, optimized with RMSProp at a learning rate of \(10^{-4}\); the network is trained for \(30\) epochs with early stopping and a batch size of \(256\). We evaluate the training behaviour of our model through accuracy-vs-epochs and loss-vs-epochs plots.
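A sketch of this configuration, reusing `build_backbone` from the Methodology sketch; `x_train`, `y_train`, `x_val`, `y_val` stand for the rendered images and labels, and the early-stopping patience is an assumption:

```python
from tensorflow import keras

model = build_backbone()   # backbone sketched in the Methodology section
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy", keras.metrics.Precision(), keras.metrics.Recall()],
)
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                           restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=30, batch_size=256, callbacks=[early_stop])
```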
## Results
We evaluated our model on two unpaired datasets of domain names. We take an input string from the dataset created in the previous section, convert it into an image, and feed it to the model to produce a binary label. The results for the domain names are tabulated in Table 2. Out of the \(400k\) test images, our model correctly categorized \(372k\), resulting in \(0.93\) accuracy, and achieved an F1-score of \(0.93\), \(13\) points higher than previous models. Our model also outperforms the other baselines and comparable works on the remaining metrics, including precision, recall, and AUC. Moreover, the other models performed below par on our dataset compared with the datasets proposed in their own works, underscoring our dataset's variation, difficulty, and importance.
Our dataset, code, and models are publicly available under the MIT License and can be accessed from our project's GitHub repository1
Footnote 1: [https://github.com/Akshat4112/Glyphnet](https://github.com/Akshat4112/Glyphnet)
## Discussion
We now discuss some noteworthy observations, along with experiments that did not work and possible explanations for them.
### Using only Grayscale Images
During the image-rendering phase, where we generated images from the dataset containing real and homoglyph domains, we experimented with generating colored images instead of grayscale ones, using \((73,109,137)\) as the background color and \((255,255,0)\) as the color of the text. However, the network trained on colored images consistently underperformed the one trained on grayscale images. One possible reason is that grayscale spans black and white, two extremes, and therefore preserves the contrast between the pixels at the periphery of the letters and the background pixels; the chosen colors, although appearing distinctly different to us, failed to preserve this contrast after resizing operations.

We also performed data augmentation on our data and trained the network on the augmented set, but this lowered accuracy. Data augmentation is useful when distinctive image features are expected in the real world but underrepresented in the actual dataset. Consider a cats-vs-dogs example: datasets typically show cats and dogs in a limited set of poses, so a model may fail on real images where the animal turns its head or sits in a different posture, making distinctive features (whiskers and pointy ears for cats, tongues for dogs) hard to locate without large amounts of data covering these variations; augmentation mimics such variability by creating these different types of images. In our case, however, augmentation produces flipped characters and rotated images in which accent and tilde signs over letters point in directions that never occur in real-world strings. Data augmentation was therefore counterproductive for our use case.
During the image-generation phase we also experimented with different image sizes: \(128\times 128\), \(150\times 150\), \(224\times 224\), \(256\times 256\), and \(512\times 512\). We observed the best results at \(256\times 256\); smaller sizes degraded performance progressively, while larger sizes brought no significant improvement and increased the training time of the model. Hence, we use the \(256\times 256\) image size.
### Building Model without Transfer Learning
We train a base network on a base dataset and task, and then reuse the learned features, transferring them to a second target network trained on the target dataset. This tends to work when the features are general, i.e., suitable for both base and target tasks, rather than specific to the base task. We performed experiments with transfer learning (Pan and Yang, 2009) by incorporating networks such as VGG16 (Simonyan and Zisserman, 2014), ResNet18 (He et al., 2016), ResNet34, ResNet50, Wide ResNet-101-2, ResNeXt-50-32x4d, and ResNeXt-101-32x8d, all trained on the ImageNet dataset (Deng et al., 2009). Our experiments did not achieve good accuracy with these architectures, whether pre-trained or trained from scratch.
| Architecture | Accuracy | Precision | Recall | F1-score | AUC |
| --- | --- | --- | --- | --- | --- |
| Siamese CNN (Woodbridge et al., 2018) | \(0.79\) | \(0.78\) | \(0.71\) | \(0.74\) | \(0.78\) |
| Ensemble CNN (Majumder et al., 2020) | \(0.83\) | \(0.82\) | \(0.79\) | \(0.80\) | \(0.83\) |
| PhishGAN (Sern, David, and Hao, 2020) | \(0.71\) | \(0.74\) | \(0.65\) | \(0.69\) | \(0.71\) |
| **Attention CNN (Ours)** | \(0.93\) | \(0.93\) | \(0.93\) | \(0.93\) | \(0.93\) |

Table 2: Model performance comparison on our dataset
There are two possible reasons:
1) Large number of hidden layers: these architectures are deep, with \(16\) to more than \(100\) hidden layers. The deeper the network, the more it aggregates learned features into high-level ones. That works well for images of real-world entities, but our images are merely rendered strings: going deeper makes the network lose the subtle cues, such as tildes and apostrophes on parts of strings, from which it could learn to differentiate real from homoglyph strings.
2) Pre-training on a dataset from a different domain: these networks were pre-trained on the ImageNet dataset, which contains images of real-world entities and nothing resembling our rendered strings. Hence, using weights learned from such images, rather than from our domain problem, did not help. We obtained accuracies of only \(63\%\) to \(67\%\) with the above architectures.
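For reference, a sketch of one such transfer-learning setup; the classification head is an assumption, and since VGG16 expects 3-channel inputs, grayscale images would have to be stacked along the channel axis:

```python
from tensorflow import keras

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(150, 150, 3))
base.trainable = False                 # reuse ImageNet features as-is
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy",
              metrics=["accuracy"])
```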
## Conclusion
In this work, we created a first-of-its-kind large-scale homoglyph phishing image dataset comprising 4M images of real and homoglyph domains. We then presented a baseline that learns features with an attention-based convolutional neural network on this dataset to differentiate real domain names from homoglyph ones and thus counter homoglyph attacks. Our dataset and approach are robust: the model generalizes to unseen homoglyphs, whereas other approaches depend on paired data for every single inference, and it consequently outperforms existing approaches. We believe this work is significant, provides an important benchmark to propel research in this area, and that its applications can serve as a safeguard against phishing attacks in the real world.
|
2304.06389 | Kinematical higher-twist corrections in $γ^* \to M_1 M_2 γ$:
Neutral meson production | We carry out the calculation of kinematical higher-twist corrections to the
cross section of $\gamma^* \to M_1 M_2 \gamma$ up to twist 4, where $M_i$ is a
scalar or pseudoscalar neutral meson. The three independent helicity amplitudes
are presented in terms of the twist-2 generalized distribution amplitudes
(GDAs), which are important non-perturbative quantities for understanding the
3D structure of hadrons. Since this process can be measured by BESIII in $e^+
e^-$ collisions, we perform the numerical estimate of the kinematical
higher-twist corrections by using the kinematics of BESIII. We adopt the $\pi
\pi$ GDA extracted from Belle measurements and the asymptotic $\pi \pi$ GDA to
study the size of the kinematical corrections in the case of pion meson pair,
and a model $\eta \eta$ GDA is used to see the impact of target mass
corrections $\mathcal O(m^2/s)$ for $\gamma^* \to \eta \eta \gamma$. Our
results show that the kinematical higher-twist corrections account for $\sim
20\%$ of the cross sections at BESIII on the average, and it is necessary to
include them if one tries to extract GDAs from experimental measurements
precisely. We also comment on the case of $\pi^0 \eta$ production, which is
important for the search of hybrid mesons. | Bernard Pire, Qin-Tao Song | 2023-04-13T10:43:40Z | http://arxiv.org/abs/2304.06389v2 | # Kinematical higher-twist corrections in \(\gamma^{*}\to M_{1}M_{2}\gamma\)
###### Abstract
We carry out the calculation of kinematical higher-twist corrections to the cross section of \(\gamma^{*}\to M_{1}M_{2}\gamma\) up to twist 4, where \(M_{\rm i}\) is a scalar or pseudoscalar neutral meson. The three independent helicity amplitudes are presented in terms of the twist-2 generalized distribution amplitudes (GDAs), which are important non-perturbative quantities for understanding the 3D structure of hadrons. Since this process can be measured by BESIII in \(e^{+}e^{-}\) collisions, we perform the numerical estimate of the kinematical higher-twist corrections by using the kinematics of BESIII. We adopt the \(\pi\pi\) GDA extracted from Belle measurements and the asymptotic \(\pi\pi\) GDA to study the size of the kinematical corrections in the case of a pion meson pair, and a model \(\eta\eta\) GDA is used to see the impact of target mass corrections \({\cal O}(m^{2}/s)\) for \(\gamma^{*}\to\eta\eta\gamma\). Our results show that the kinematical higher-twist corrections account for \(\sim 20\%\) of the cross sections at BESIII on average, and it is necessary to include them if one tries to extract GDAs from experimental measurements precisely. We also comment on the case of \(\pi^{0}\eta\) production, which is important for the search for hybrid mesons.
## I Introduction
Generalized distribution amplitudes (GDAs) [1; 2; 3; 4; 5] are important non-perturbative functions which reveal the 3D partonic structure of hadrons, and they correspond to the amplitudes of the soft processes \(q\bar{q}\to h\bar{h}\) or \(gg\to h\bar{h}\). GDAs were first investigated in \(\gamma^{*}\gamma\to h\bar{h}\) with large photon virtuality and small invariant mass of the hadron pair, so as to satisfy QCD factorization. This process is known as the \(s\)-\(t\) crossed channel of deeply virtual Compton scattering (DVCS), from which generalized parton distributions (GPDs) [6; 7; 8; 9] are probed. Recently, measurements of \(\gamma^{*}\gamma\to h\bar{h}\) were released for neutral pion pair [10] and neutral kaon pair [11] production by the Belle collaboration at KEK, and the \(\pi\pi\) quark GDA was extracted from the cross section of \(\gamma^{*}\gamma\to\pi^{0}\pi^{0}\)[12]. In addition to \(\gamma^{*}\gamma\to h\bar{h}\), GDAs can also be accessed in the crossed reaction [13]:
\[\gamma^{*}(q_{1})\to h(p_{1})\bar{h}(p_{2})\gamma(q_{2})\,, \tag{1}\]
which may be studied in the electron-positron annihilation process
\[e^{-}(k_{1})e^{+}(k_{2})\to h(p_{1})\bar{h}(p_{2})\gamma(q_{2})\,, \tag{2}\]
where the large scale \(Q^{2}=q_{1}^{2}\) is now timelike; a first access to this process was released by BaBar [14] in the charged meson channel case, and future results should be obtained at BESIII and Belle (Belle II).
There exists a basic difference between the neutral (say \(\pi^{0}\pi^{0}\)) production channel and the charged one (say \(\pi^{+}\pi^{-}\)), due to the charge conjugation property of the \(\pi\pi\) state. Since the \(\pi^{+}\pi^{-}\) pair can be produced both with \(C=+\) and \(C=-\) charge conjugation, the QCD amplitude (1) can interfere with the QED process, known as Initial State Radiation (ISR) :
\[e^{-}(k_{1})+e^{+}(k_{2})\to\gamma^{*}(q_{1}^{\prime})+\gamma(q_{2})\ \ ;\ \ \gamma^{*}(q_{1}^{\prime})\to h(p_{1})\bar{h}(p_{2})\,, \tag{3}\]
where the \(h(p_{1})\bar{h}(p_{2})\) pair is produced in a \(C=-\) state. The amplitude of the ISR process (3) does not depend on GDAs, and is readily calculated with the help of the measured \(\pi\) meson timelike electromagnetic form factor. The ISR process turns out to dominate1 the QCD process in most kinematics, which renders inefficient the extraction of GDAs from integrated \(\pi^{+}\pi^{-}\) cross-sections, but allows us to extract the QCD contribution - and hence the GDAs - at the amplitude level through cleverly defined asymmetries, taking advantage of the different \(C\) parities of the meson
pair selected by the two processes. This is quite reminiscent of the usual procedure in DVCS or TCS measurements, where the interference of the QCD process with the Bethe-Heitler contribution populates interesting asymmetries. We shall address these questions in a forthcoming study and restrict ourselves here to the neutral pseudoscalar meson pair case, namely the \(\pi^{0}\pi^{0}\), \(\eta\eta\) and \(\pi^{0}\eta\) channels, where the process (3) does not contribute.
On the one hand, GDAs are important inputs for the three-body decays of \(B\) mesons, which are used to study the Cabibbo-Kobayashi-Maskawa (CKM) matrix [15; 16; 17; 18]. On the other hand, GDAs and GPDs are key objects to investigate the matrix elements of the energy-momentum tensor (EMT) for hadrons, which are expressed in terms of the EMT form factors. In principle, one cannot measure the EMT form factors of hadrons directly in experiment, since the gravitational interactions between hadrons and gravitons are far too tiny to probe. However, GDAs and GPDs can be accessed via electromagnetic interactions; as a consequence, their study is quite valuable. Many important physical quantities of hadrons can be obtained through the study of the EMT form factors, e.g., the mass, pressure, and shear force distributions of hadrons [24; 25; 26; 27; 28; 29; 30; 31; 32]. Let us also note that the production of two different mesons, for example \(\gamma^{*}\to\gamma\pi\eta\) where the \(\pi\eta\) GDA is accessed, can also be used to search for the hybrid meson (\(J^{PC}=1^{-+}\)) [33] and to investigate the shear viscosity of quarks in hadronic matter [34].
GPDs and GDAs can currently be accessed at many experimental facilities, but in a quite limited range of the hard scale \(Q^{2}\), which is for instance the virtuality of the incoming photon. Compared with the leading-twist cross sections, the higher-twist corrections are thus not negligible at the energy scales of present and near-future experimental measurements. In order to extract GPDs and GDAs precisely, one needs to include the higher-twist contributions to the cross sections. However, higher-twist GPDs and GDAs are then required to describe the higher-twist corrections, and this makes the analysis difficult when one tries to extract GPDs and GDAs from experimental measurements. Recently, a separation of kinematical and dynamical contributions in the operator product of two electromagnetic currents \(T\{j_{\mu}^{\rm em}(z_{1}x)j_{\nu}^{\rm em}(z_{2}x)\}\) was proven in Refs. [35; 36; 37; 38], and these operator results can be applied to off-forward hard reactions such as \(\gamma^{*}h\to\gamma h\), \(\gamma h\to\gamma^{*}h\), \(\gamma^{*}\gamma\to h_{1}h_{2}\) and \(\gamma^{*}\to h_{1}h_{2}\gamma\), where GPDs and GDAs can be accessed. If one includes the kinematical higher-twist contributions in the leading-twist cross sections, then only the leading-twist GPDs and GDAs are involved. This does not prevent genuine higher-twist contributions from being potentially important; progress in their study is indeed much needed. The kinematical higher-twist corrections can be considered as a generalization of the target mass corrections [39] applied to deep inelastic scattering (DIS), and such corrections were already included in Ref. [40], where parton distribution functions were extracted. However, the kinematical higher-twist corrections are more complicated in off-forward hard reactions due to the higher-twist operators that reduce to total derivatives of the leading-twist ones; these operators do not contribute in DIS, since their forward matrix elements vanish.
The kinematical higher-twist corrections were given up to twist-4 accuracy for the DVCS amplitude with a (pseudo)scalar target [41] and a spin-1/2 target [42; 43]. The theoretical results were applied to the recent DVCS measurements by the JLAB Hall A collaboration [44]. The authors of Refs. [45; 46] also estimated the kinematical higher-twist corrections for \(\gamma^{*}\gamma\to M\bar{M}\) with a (pseudo)scalar meson pair. All these theoretical studies suggest that the kinematical higher-twist corrections are not negligible in realistic experiments; besides, experimental measurements of DVCS also indicate that the kinematical corrections are sizeable in the cross section and have to be taken into account [47; 48]. In this work, we calculate the kinematical higher-twist corrections in \(\gamma^{*}\to M_{1}M_{2}\gamma\), a process that can be measured at BESIII in the future. The kinematics of BESIII measurements of this process will be similar to the Belle (Belle II) measurements of \(\gamma^{*}\gamma\to M_{1}M_{2}\), whose cross sections were released recently [10; 11]. In this case, the GDAs can be extracted by combining \(\gamma^{*}\to M_{1}M_{2}\gamma\) and \(\gamma^{*}\gamma\to M_{1}M_{2}\). Moreover, one can study the universality of GDAs by comparing the two processes, taking into account that the virtual photon is timelike in the former and spacelike in the latter.
In Sec. II, we discuss the kinematics of \(\gamma^{*}\to M\bar{M}\gamma\), and the cross section is presented in terms of helicity amplitudes. We carry out a complete calculation of the kinematical higher-twist corrections to the helicity amplitudes up to twist 4 in Sec. III, and numerical estimates of the kinematical higher-twist corrections are also presented. Our results are summarized in Sec. IV.
## II Kinematics and helicity amplitudes of \(\gamma^{*}\to M\bar{M}\gamma\)
If the center-of-mass energy \(\sqrt{s}\) is large enough to satisfy QCD factorization in the process \(e^{-}e^{+}\to\gamma^{*}\to M\bar{M}\gamma\), the amplitude can be factorized into a perturbative subprocess \(\gamma^{*}\to q\bar{q}\gamma\) and a two-meson GDA which describes the amplitude of \(q\bar{q}\to M\bar{M}\)[13]. In Fig. 1, we show the polar angle \(\theta\) and the azimuthal angle \(\varphi\) in the center-of-mass frame of the meson pair; for convenience, we choose a coordinate system with the z axis along the direction of the photons,
so that the momenta of the mesons lie in the x-z plane. The polar angle \(\theta\) can be expressed as
\[\cos\theta=\frac{q_{1}\cdot(p_{2}-p_{1})}{\beta_{0}\,(q_{1}\cdot q_{2})},\qquad \beta_{0}=\sqrt{1-\frac{4m^{2}}{\hat{s}}}, \tag{4}\]
where \(m\) is the meson mass and \(\hat{s}=W^{2}=(q_{1}-q_{2})^{2}=(p_{1}+p_{2})^{2}\). Similarly, the azimuthal angle \(\varphi\) is also given in terms of Lorentz invariants,
\[\sin\varphi=\frac{4\epsilon_{\alpha\beta\gamma\delta}q_{1}^{\alpha}q_{2}^{ \beta}p_{1}^{\gamma}k_{1}^{\delta}}{\beta_{0}\sin\theta\sqrt{us\hat{s}(\hat{s}- u-s)}} \tag{5}\]
with \(\epsilon_{0123}=1\), \(s=(k_{1}+k_{2})^{2}\) and \(u=(k_{1}-q_{2})^{2}\). Two lightlike vectors \(n\) and \(\tilde{n}\) are chosen with the help of the momenta of the timelike virtual photon \(q_{1}\) and the real photon \(q_{2}\),
\[\tilde{n}=q_{1}-(1+\tau)q_{2},\qquad n=q_{2}, \tag{6}\]
where \(\tau=\hat{s}/(s-\hat{s})\). The momentum \(\Delta=p_{2}-p_{1}\) can be written as \(\Delta=\zeta_{0}(\tilde{n}-\tau n)+\Delta_{T}\), and \(\Delta_{T}\) is the transverse component. \(\zeta_{0}\) is a parameter which is defined as
\[\zeta_{0}=\frac{(p_{2}-p_{1})\cdot n}{(p_{2}+p_{1})\cdot n}, \tag{7}\]
and it is related to the polar angle \(\theta\) as \(\zeta_{0}=\beta_{0}\cos\theta\). We can obtain \(\Delta_{T}^{2}=4m^{2}-(1-\zeta_{0}^{2})\hat{s}\) from the on-shell conditions.
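As a short cross-check of the last relation: using \(n^{2}=\tilde{n}^{2}=0\), \(q_{1}\cdot q_{2}=(s-\hat{s})/2\) and the orthogonality of \(\Delta_{T}\) to both \(n\) and \(\tilde{n}\), one finds

\[\Delta^{2}=(p_{2}-p_{1})^{2}=4m^{2}-\hat{s},\qquad(\tilde{n}-\tau n)^{2}=-2\tau\,q_{1}\cdot q_{2}=-\hat{s},\]

so that

\[\Delta^{2}=\zeta_{0}^{2}(\tilde{n}-\tau n)^{2}+\Delta_{T}^{2}\;\Rightarrow\;\Delta_{T}^{2}=\Delta^{2}+\zeta_{0}^{2}\hat{s}=4m^{2}-(1-\zeta_{0}^{2})\hat{s}.\]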
To describe the process \(\gamma^{*}\to M\bar{M}\gamma\), one needs to define the amplitude
\[A_{\mu\nu}=i\int d^{4}x\,e^{-ir\cdot x}\langle\bar{M}(p_{2})M(p_{1})|\,T\{j_{ \mu}^{\rm em}(z_{1}x)j_{\nu}^{\rm em}(z_{2}x)\}\,|0\rangle, \tag{8}\]
where \(z_{1}\) and \(z_{2}\) are real constants with the constraint \(z_{1}-z_{2}=1\), and \(r=z_{1}q_{1}-z_{2}q_{2}\) is used. This amplitude can be further written in terms of helicity amplitudes by considering the electromagnetic gauge invariance [41]
\[A^{\mu\nu}=-A^{(0)}\;g_{\perp}^{\mu\nu}+A^{(1)}\;(\tilde{n}^{\mu}-(1+\tau)n^{ \mu})\,\frac{\Delta_{\alpha}g_{\perp}^{\alpha\nu}}{\sqrt{s}}+\frac{1}{2}\,A^{ (2)}\,\Delta_{\alpha}\Delta_{\beta}(g_{\perp}^{\alpha\mu}g_{\perp}^{\beta\nu} -\epsilon_{\perp}^{\alpha\mu}\epsilon_{\perp}^{\beta\nu})+A^{(3)\mu}\,n^{\nu}, \tag{9}\]
and the transverse tensors \(g_{\perp}^{\mu\nu}\) and \(\epsilon_{\perp}^{\mu\nu}\) are defined as
\[g_{\perp}^{\mu\nu}=g^{\mu\nu}-\frac{n^{\mu}\tilde{n}^{\nu}+n^{\nu}\tilde{n}^ {\mu}}{n\cdot\tilde{n}},\qquad\epsilon_{\perp}^{\mu\nu}=\epsilon^{\mu\nu \alpha\beta}\,\frac{\tilde{n}_{\alpha}n_{\beta}}{n\cdot\tilde{n}}. \tag{10}\]
The longitudinal polarization vector \(\epsilon_{0}\) exists in addition to the transverse ones \(\epsilon_{\pm}\) for the virtual photon. In the center-of-mass frame of the meson pair as shown in Fig. 1, the polarization vectors read
\[\epsilon_{0}^{\mu}=\frac{1}{\sqrt{s}}(|q_{1}^{3}|,0,0,q_{1}^{0}),\quad\epsilon _{\pm}^{\mu}=\frac{1}{\sqrt{2}}(0,\mp 1,-i,0), \tag{11}\]
Figure 1: Kinematics of \(e^{-}(k_{1})e^{+}(k_{2})\to\gamma^{*}(q_{1})\to M(p_{1})\bar{M}(p_{2})\gamma(q_{2})\) in the center-of-mass frame of the meson pair; the direction of the photons is chosen along the z axis.
and the transverse polarization vectors \(\tilde{\epsilon}_{\pm}\) of the real photon are the same as the ones of the virtual photon. Then, the three independent helicity amplitudes are given by
\[A_{++}=A_{--}=A^{(0)},\qquad A_{0\pm}=-A^{(1)}(\Delta\cdot\epsilon _{\mp}),\] \[A_{\pm\mp}=-A^{(2)}(\Delta\cdot\epsilon_{\pm})^{2}. \tag{12}\]
where the notation \(A_{ij}=\epsilon_{i}^{\mu}\tilde{\epsilon}_{j}^{*\nu}A_{\mu\nu}\) is used, and only \(A_{++}\) receives a twist-2 contribution at leading order in \(\alpha_{s}\). In Ref. [13], \(A_{++}\) was given in terms of the \(\pi\pi\) GDA at twist 2 for the process \(e^{-}e^{+}\to\gamma^{*}\to\pi\pi\gamma\), and the twist-2 \(\pi\pi\) GDA is defined as [2; 3]
\[\langle\bar{M}(p_{2})M(p_{1})|\,\bar{q}(z_{1}n)\not{n}q(z_{2}n)\,|0\rangle=2P\cdot n\int dz\,e^{2i[zz_{1}+(1-z)z_{2}]P\cdot n}\,\Phi_{q}(z,\zeta_{0},\hat{s}), \tag{13}\]
where \(z\) is the momentum fraction of the quark, \(P\) is the average momentum of the meson pair \(P=(p_{1}+p_{2})/2\), and the real constants \(z_{1}\) and \(z_{2}\) do not have to satisfy \(z_{1}-z_{2}=1\). Note that the GDAs depend on a renormalization scale \(\mu^{2}\) which one usually takes as \(\mu^{2}=s\).
The differential cross section can be expressed in terms of the helicity amplitudes for \(e^{-}e^{+}\to\gamma^{*}\to M\bar{M}\gamma\),
\[\frac{d\sigma}{d\hat{s}\,du\,d(\cos\theta)\,d\varphi}= \frac{\alpha_{\rm em}^{3}\beta_{0}}{16\pi s^{3}}\,\frac{1}{1+ \epsilon}\,\Big{[}|A_{++}|^{2}+|A_{-+}|^{2}+2\epsilon\,|A_{0+}|^{2}-2{\rm sgn} (\tau)\sqrt{\epsilon(1-\epsilon)}\] \[\times{\rm Re}(A_{++}^{*}A_{0+}-A_{-+}^{*}A_{0+})\cos\varphi+2 \epsilon\,{\rm Re}(A_{++}^{*}A_{-+})\cos(2\varphi)\Big{]}, \tag{14}\]
where \(M\bar{M}\) is the pseudoscalar meson pair with even charge conjugation and the parameter \(\epsilon\) is defined as
\[\epsilon=\frac{y-1}{1-y+\frac{y^{2}}{2}},\qquad y=\frac{q_{1}\cdot q_{2}}{k_{ 1}\cdot q_{2}}. \tag{15}\]
\({\rm sgn}(\tau)=|\tau|/\tau\) is the sign function with \(\tau=\hat{s}-s-2u\).
## III Results
### Theoretical amplitudes in terms of GDAs
Recently, Braun _et al._ have proven that the kinematical contributions can be separated from the dynamical ones in the time-ordered product of two electromagnetic currents \(i\,T\{j_{\mu}^{\rm em}(z_{1}x)j_{\nu}^{\rm em}(z_{2}x)\}\)[35; 36; 37]. By adding the kinematical higher-twist contributions to the twist-2 cross sections, one can improve the description of reactions involving two photons without any knowledge of the genuine higher-twist distributions. This is very helpful when one intends to extract the twist-2 distributions from measurements of the cross sections, since taking genuine higher-twist distributions into account would imply more parameters in the analysis. At twist-4 accuracy, the kinematical contributions in \(i\,T\{j_{\mu}^{\rm em}(z_{1}x)j_{\nu}^{\rm em}(z_{2}x)\}\) can be written as [35; 36; 37]
\[T_{\mu\nu}=\frac{-1}{\pi^{2}x^{4}z_{12}^{3}}\left\{x^{\alpha}\left[S_{\mu \alpha\nu\beta}\mathbb{V}^{\beta}-i\epsilon_{\mu\alpha\nu\beta}\mathbb{W}^{ \beta}\right]+x^{2}\left[(x_{\mu}\partial_{\nu}+x_{\nu}\partial_{\mu}) \mathbb{X}+(x_{\mu}\partial_{\nu}-x_{\nu}\partial_{\mu})\mathbb{Y}\right] \right\}, \tag{16}\]
where the notation \(z_{12}=z_{1}-z_{2}\) is used, and the tensor \(S^{\mu\alpha\nu\beta}\) is defined by
\[S^{\mu\alpha\nu\beta}=g^{\mu\alpha}g^{\nu\beta}-g^{\mu\nu}g^{\alpha\beta}+g^ {\mu\beta}g^{\nu\alpha}. \tag{17}\]
\(\mathbb{V}_{\mu}\) and \(\mathbb{W}_{\mu}\) contain all twists starting from twist 2 to twist 4, whereas \(\mathbb{X}\) and \(\mathbb{Y}\) are purely twist 4 operators, the detailed expressions of them can be found in Appendix A of our previous work [45].
In this work we shall use Eq. (16) to calculate the kinematical higher-twist contributions in the reaction \(\gamma^{*}\to M\bar{M}\gamma\), where \(M\bar{M}\) is a scalar meson pair with even charge conjugation. The spinor formalism [49; 50] is used for \(T_{\mu\nu}\) in the calculation, as it helps us figure out the twist of the matrix elements of the operators more easily. Even though the final amplitudes are presented in terms of GDAs, the double distributions [51] are used to calculate the helicity amplitudes. These techniques are explained in Appendix A, and the helicity amplitudes are written as
\[A^{(0)}=\chi\left\{\left(1+\frac{\hat{s}}{2s}\right)\int_{0}^{1}dz\,\frac{\Phi( z,\eta,\hat{s})}{1-z}+\frac{\hat{s}}{s}\int_{0}^{1}dz\,\frac{\Phi(z,\eta,\hat{s})}{z}\, \ln(1-z)\right.\]
\[\beta_{0} \rightarrow \beta_{0}=\sqrt{1-\frac{2(m_{1}^{2}+m_{2}^{2})}{\hat{s}}}+\frac{(m_ {1}^{2}-m_{2}^{2})^{2}}{\hat{s}^{2}},\] \[\Delta_{T}^{2} \rightarrow \Delta_{T}^{2}=2(m_{1}^{2}+m_{2}^{2})-(1-\zeta_{0}^{2})\hat{s},\] \[\zeta_{0} \rightarrow \zeta_{0}=\beta_{0}\cos\theta+\frac{(m_{2}^{2}-m_{1}^{2})}{\hat{ s}}, \tag{21}\]
where \(m_{1}\) and \(m_{2}\) denote the masses of \(M_{1}\) and \(M_{2}\), respectively2.
Footnote 2: As for \(\gamma^{*}\gamma\rightarrow\pi\eta\), one can also use the theoretical expressions of Ref. [45] together with the first two replacements in Eq. (21), and the third one is slightly modified as \(\zeta_{0}\rightarrow\zeta_{0}=-\beta_{0}\cos\theta+\frac{(m_{2}^{2}-m_{1}^{2 })}{\hat{s}}\).
There are a few candidates for the isovector hybrid mesons, for example \(\pi_{1}(1400)\)[52; 53], \(\pi_{1}(1600)\)[54; 55; 56; 57] and \(\pi_{1}(2015)\)[58], however, their existence is still controversial (see [59] and [60] for a recent review), and further confirmation is necessary. In the near future, these candidates can be investigated in \(\gamma^{*}\to\pi\eta^{(\prime)}\gamma\) and \(\gamma^{*}\gamma\to\pi\eta^{(\prime)}\), which are accessible at BESIII and Belle (Belle II). Meanwhile, BESIII observed a resonance called \(\eta_{1}(1855)\) from the \(P\)-wave analysis of \(\eta\eta^{\prime}\) in the decay of \(J/\Psi\to\eta\eta^{\prime}\gamma\) very recently, which is a candidate of isoscalar hybrid mesons (\(I^{G}(J^{PC})=(0^{+})1^{-+}\)) [61; 62]. It will be promising to search for this resonance in \(\gamma^{*}\to\eta\eta^{\prime}\gamma\), since one just needs to replace \(J/\Psi\) with a timelike photon.
The asymptotic GDAs are slightly modified by the additional \(P\)-wave term in the production of two different scalar mesons \(M_{1}\) and \(M_{2}\)[33],
\[\Phi_{q}(z,\cos\theta,\hat{s})=30\,z(1-z)(2z-1)\left[\tilde{B}_{10}(\hat{s})+ \tilde{B}_{11}(\hat{s})P_{1}(\cos\theta)+\tilde{B}_{12}(\hat{s})P_{2}(\cos \theta)\right], \tag{22}\]
and the second term denotes the \(P\)-wave GDA, which is related to the production of exotic hybrid mesons. We have \(\Phi_{u}(z,\cos\theta,\hat{s})=\Phi_{d}(z,\cos\theta,\hat{s})\) and \(\Phi_{u}(z,\cos\theta,\hat{s})=-\Phi_{d}(z,\cos\theta,\hat{s})\) for the total isospin \(I=0\) and \(I=1\) of the meson pairs, respectively. The \(M_{1}M_{2}\) GDAs can be also used to study the matrix element of the EMT,
\[\left\langle M_{2}(p_{2})M_{1}(p_{1})\right|T_{q}^{\mu\nu}\left|0\right\rangle \sim E_{q}(\hat{s})P^{\mu}\Delta^{\nu}, \tag{23}\]
where \(E_{q}(\hat{s})\) is a new EMT form factor related to the shear viscosity; its sum over quarks and gluons should vanish as a consequence of EMT conservation. However, \(E_{q}(\hat{s})\) will exist for a single flavor \(q\) provided that a \(P\)-wave GDA exists [34]. Thus, if one observes candidates for the hybrid mesons in \(\gamma^{*}\to\pi\eta^{(\prime)}\gamma\) and \(\gamma^{*}\gamma\to\pi\eta^{(\prime)}\), the existence of \(E_{q}(\hat{s})\) will be established experimentally.
### Numerical estimates of the kinematical higher-twist contributions
In principle, the process \(\gamma^{*}\to M_{1}M_{2}\gamma\) can be measured by BESIII and Belle (Belle II) in \(e^{+}e^{-}\) collisions. The center-of-mass energy is \(\sqrt{s}=3-5\) GeV at BESIII, while it is \(\sqrt{s}=8-10\) GeV at Belle (Belle II).
Figure 2: Differential cross section of \(e^{-}e^{+}\to\pi^{0}\pi^{0}\gamma\) as a function of the invariant mass of the pion pair \(W=\sqrt{\hat{s}}\), using the \(\pi\pi\) GDA extracted from Belle measurements [12]. The dashed lines are the twist-2 cross sections, and the solid lines include the kinematical higher-twist contributions.
It should be much easier to measure this process at BESIII due to the larger cross section, as can be seen from Eq. (14). As a consequence, we shall use the kinematics of BESIII in the numerical estimate of the kinematical higher-twist contributions, and the differential cross section is obtained from Eq. (14) by integrating over \(\varphi\),
\[\frac{d\sigma}{du\,dW^{2}\,d(\cos\theta)}= \frac{\alpha_{\rm em}^{3}\beta_{0}}{8s^{3}}\,\frac{1}{1+\epsilon }\,\Big{[}|A_{++}|^{2}+|A_{-+}|^{2}+2\epsilon\,|A_{0+}|^{2}\Big{]}, \tag{24}\]
where the helicity amplitudes are given by Eq. (18) including the kinematical higher-twist contributions up to twist 4.
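For orientation, a minimal numerical sketch of Eq. (24); the helicity amplitudes are taken here as externally supplied complex numbers (computed from Eq. (18)) rather than evaluated from the GDA:

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036   # fine-structure constant

def dsigma(A_pp, A_mp, A_0p, s, shat, m, eps):
    """phi-integrated differential cross section of Eq. (24)."""
    beta0 = np.sqrt(1.0 - 4.0 * m**2 / shat)
    prefactor = ALPHA_EM**3 * beta0 / (8.0 * s**3) / (1.0 + eps)
    return prefactor * (abs(A_pp)**2 + abs(A_mp)**2 + 2.0 * eps * abs(A_0p)**2)
```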
We first calculate the cross section of \(e^{-}e^{+}\to\gamma^{*}\to\pi^{0}\pi^{0}\gamma\) with the \(\pi\pi\) GDA extracted from Belle measurements [12]. In Fig. 2, the twist-2 cross sections are depicted as dashed lines, and the solid lines include the kinematical higher-twist contributions. The kinematics is set according to the BESIII experiment as \(s=12\) GeV\({}^{2}\) and \(W\in(0.5,2)\) GeV. The colors of the lines (black, orange, red, blue) represent different values of \(\cos\theta\) (0.2, 0.4, 0.6, 0.8), and \(u\) is chosen as \(u=-3\) GeV\({}^{2}\) and \(u=-6\) GeV\({}^{2}\). We can clearly see that the kinematical higher-twist corrections are always positive in the cross section, unlike in the case of \(e^{-}\gamma\to e^{-}\pi^{0}\pi^{0}\), where the corrections can be positive or negative [45]. In the region \(W>1\) GeV, the kinematical higher-twist corrections turn out to be important, and it is thus crucial to include them in order to reliably extract GDAs from the cross section of \(e^{-}e^{+}\to\gamma^{*}\to\pi^{0}\pi^{0}\gamma\). The extracted GDAs then give access to the timelike pion EMT form factors, and to the spacelike ones obtained from the timelike ones by dispersion relations, which require reliable information at \(W>1\) GeV to make the integrals convergent.
The ratio \(d\sigma(2+3+4)/d\sigma(2)\) is also shown in Fig. 3, where \(d\sigma(i)\) (\(i=2,3,4\)) denotes the twist-\(i\) contribution to the cross section. The colors of the lines indicate different values of \(\cos\theta\) as in Fig. 2. We can see that the kinematical higher-twist contributions have a significant impact on the cross section when \(W>1\) GeV. The ratios change only slightly from \(u=-3\) GeV\({}^{2}\) to \(u=-6\) GeV\({}^{2}\); this is because they depend on \(u\) only through the parameter \(\epsilon\), so that only the contribution from the amplitude \(A_{0+}\) is affected as one changes \(u\). The peaks around \(W\sim 1.1\) GeV and \(W\sim 1.5\) GeV with \(\cos\theta=0.8\) in Fig. 3 arise because the twist-2 cross sections are quite tiny in this region, as indicated by Fig. 2; however, the extracted GDA used in this estimate may not be accurate enough due to the large uncertainties of the Belle measurements, and these peaks in the ratio may thus not reflect real physics.
For comparison, we also present our results in Fig. 4 when we employ the asymptotic pion GDA in the analysis of \(e^{-}e^{+}\to\gamma^{*}\to\pi^{0}\pi^{0}\gamma\). In Ref. [2], the asymptotic GDA was given when the energy scale \(s\to\infty\),
\[\Phi(z,\cos\theta,\hat{s})=20\,z(1-z)(2z-1)R_{\pi}\left[\frac{-3+\beta_{0}^{2} }{2}\,e^{i\delta_{0}}+\beta_{0}^{2}e^{i\delta_{2}}P_{2}(\cos\theta)\right], \tag{25}\]
where \(R_{\pi}=0.5\) is the momentum fraction carried by quarks in the pion meson, \(\delta_{0}\) is the \(\pi\pi\) elastic scattering phase shift for the S wave, and \(\delta_{2}\) is the one for the D wave [63; 64; 65]. The asymptotic pion GDA is indeed very different from the one extracted from Belle measurements; for example, there is no contribution of the resonance \(f_{2}\) in the asymptotic GDA. However, the main purpose of this work is not to estimate cross sections accurately, but to see whether the kinematical corrections are sizeable. In Fig. 4, \(u\) is chosen as \(u=\)-3 GeV\({}^{2}\) and \(u=\)-6 GeV\({}^{2}\) together with 0.5 GeV \(\leq W\leq 2.1\) GeV and \(s=12\) GeV\({}^{2}\). The dashed lines represent the twist-2 cross sections, and the solid ones include the kinematical
higher-twist contributions. Black lines denote \(\cos\theta=0.2\) and orange lines correspond to \(\cos\theta=0.4\), while \(\cos\theta=0.6\) and \(\cos\theta=0.8\) are depicted in red and blue, respectively. We can clearly see that the kinematical higher-twist corrections become more and more important as \(W=\sqrt{\hat{s}}\) increases, as expected since the corrections are proportional to \(\hat{s}/s\). As in the case of the GDA extracted from Belle measurements, it is thus necessary to include the kinematical higher-twist corrections to describe the cross section in the region \(W>1\) GeV. Both GDAs indicate a similar magnitude of the cross section for \(e^{-}e^{+}\to\gamma^{*}\to\pi^{0}\pi^{0}\gamma\).
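A minimal numerical sketch of the asymptotic GDA of Eq. (25); the phase shifts \(\delta_{0}\) and \(\delta_{2}\) must be supplied from the analyses of [63; 64; 65] and are defaulted to zero here purely for illustration:

```python
import numpy as np

def gda_asymptotic(z, cos_theta, shat, m=0.135, R_pi=0.5,
                   delta0=0.0, delta2=0.0):
    """Asymptotic pi-pi GDA of Eq. (25)."""
    beta0_sq = 1.0 - 4.0 * m**2 / shat
    P2 = 0.5 * (3.0 * cos_theta**2 - 1.0)        # Legendre polynomial P_2
    s_wave = 0.5 * (-3.0 + beta0_sq) * np.exp(1j * delta0)
    d_wave = beta0_sq * np.exp(1j * delta2) * P2
    return 20.0 * z * (1.0 - z) * (2.0 * z - 1.0) * R_pi * (s_wave + d_wave)
```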
In Fig. 5, the ratio of \(d\sigma(2+3+4)/d\sigma(2)\) is also presented so as to see the proportion of the kinematical higher-twist corrections in the cross section clearly. \(u\) is set as \(u=\)-3 GeV\({}^{2}\) and \(u=\)-6 GeV\({}^{2}\) for the left panel and right panel, respectively, and the colors of lines indicate the values of \(\cos\theta\) as in Fig. 4. The ratios increase rapidly as \(W\) goes
up, and the kinematical higher-twist corrections account for more than 40% of the cross section around \(W\sim 2\) GeV, which proves that they need to be included in any reliable GDA extraction from the cross section.
There are two types of corrections, \(m^{2}/s\) and \(\hat{s}/s\), among the kinematical corrections, and only the latter contributes to the cross section of \(e^{-}e^{+}\to\gamma^{*}\to\pi^{0}\pi^{0}\gamma\), due to the small pion mass in comparison with the value of \(s\). In order to see the impact of the target mass correction \(m^{2}/s\), we now consider the production of a pair of slightly heavier mesons, namely \(\eta\eta\). However, very little is known about their GDAs at the current stage, and we thus estimate the kinematical higher-twist corrections for \(e^{-}e^{+}\to\gamma^{*}\to\eta\eta\gamma\) by using a simple model GDA identical to the asymptotic \(\pi\pi\) GDA except that the \(\eta\) mass is used. The center-of-mass energy of \(e^{-}e^{+}\) is again chosen as \(s=12\) GeV\({}^{2}\). In Fig. 6, the cross sections are shown in the range \(1.2\) GeV \(\leq W\leq 2.1\) GeV. The colors of the lines denote different values of \(\cos\theta\) as indicated on the different panels of the figure. The dashed lines represent the twist-2 cross sections, and the solid ones include the kinematical higher-twist contributions. The gaps between the dashed and solid lines increase with \(W\), as expected, so one cannot neglect the kinematical higher-twist contributions in the cross section; these contributions are always positive. We present the ratios \(d\sigma(2+3+4)/d\sigma(2)\) in Fig. 7. The kinematical higher-twist contributions account for less than 40% of the cross section around \(W\sim 2\) GeV. Compared with the results in Fig. 5, the ratios decrease if one replaces the mass of the \(\pi\) meson by that of the \(\eta\) meson, keeping the asymptotic \(\pi\pi\) GDA as a model for the \(\eta\eta\) GDA; indeed, the target mass corrections are negative and thus diminish the positive corrections of order \(\hat{s}/s\) in the cross section.
We do not plot the kinematical higher-twist contributions for the \(\pi\eta\) production case, since they depend strongly on the unknown \(\pi\eta\) GDAs. One can estimate that they lie somewhere between the relative contributions of the \(\pi\pi\) and \(\eta\eta\) cases displayed in Fig. 5 and Fig. 7.
## IV Summary
GDAs can be studied in both \(\gamma^{*}\gamma\to M_{1}M_{2}\) and \(\gamma^{*}\to M_{1}M_{2}\gamma\). The former process has been measured by Belle for \(\pi\pi\)[10] and \(KK\)[11] with large uncertainties; in the near future we can expect more precise measurements from Belle II due to the much higher luminosity. It will be more advantageous to measure the latter process at BESIII, since at Belle (Belle II) its cross section would be suppressed by the larger center-of-mass energy of the electron-positron pair. In
this case, the measurements of \(\gamma^{*}\gamma\to M_{1}M_{2}\) at Belle and Belle II can be cross-checked against those of \(\gamma^{*}\to M_{1}M_{2}\gamma\) at BESIII due to the similar kinematics, and the GDAs can be extracted by combining the measurements of the two processes. Besides, since GDAs are probed by a spacelike photon in \(\gamma^{*}\gamma\to M_{1}M_{2}\) and by a timelike photon in \(\gamma^{*}\to M_{1}M_{2}\gamma\), we can also check the universality of GDAs.
In this work we calculated the kinematical higher-twist corrections for \(\gamma^{*}\to M_{1}M_{2}\gamma\) up to twist 4, and the three helicity amplitudes are expressed in terms of the twist-2 GDA. We computed the cross section with and without the kinematical higher-twist contributions in terms of the leading-twist GDAs, and adopted two types of GDAs to estimate the kinematical higher-twist contributions for \(\gamma^{*}\to\pi^{0}\pi^{0}\gamma\) numerically. In the calculation, the center-of-mass energy of the electron-positron pair is chosen as \(s=12\) GeV\({}^{2}\), which is typical for BESIII. All the numerical results indicate that the kinematical higher-twist corrections have a significant impact on the cross section of \(\gamma^{*}\to M_{1}M_{2}\gamma\), as in the case of \(\gamma^{*}\gamma\to M_{1}M_{2}\)[45; 46]. However, the corrections are always positive in the cross section of the former, in contrast with the latter process, where the kinematical higher-twist corrections can go both ways. A model \(\eta\eta\) GDA is used to see the impact of the target mass corrections of \({\cal O}(m^{2}/s)\); the kinematical higher-twist corrections account for about 20% of the cross section in the region 1.2 GeV \(\leq W\leq\) 2.1 GeV on average, which is not negligible. As a consequence, it is important to use an accurate description of the cross section, including the kinematical contributions, when one tries to extract GDAs from experimental measurements. The extracted GDAs can then be used to study the EMT form factors of hadrons, which are important physical quantities for investigating the mass, pressure, and shear force distributions of hadrons.
The present study was performed at lowest order in the strong coupling, but it would be interesting to include higher order corrections which are known - at leading twist - to be very sensitive to the timelike vs spacelike nature of the probe [66].
###### Acknowledgements.
We acknowledge useful discussions with Cedric Lorce, Wen-Cheng Yan and Ya-Teng Zhang. Qin-Tao Song was supported by the National Natural Science Foundation of China under Grant Number 12005191.
## Appendix A Helicity amplitudes in terms of DDs
The double distributions (DDs) of a scalar meson are defined by [51]
\[\left\langle\bar{M}(p_{2})M(p_{1})\right|\bar{q}(z_{1}n)\!\not\!nq(z_{2}n)\left| 0\right\rangle=\int d\beta\,d\alpha\left[f_{q}(\beta,\alpha)\,\Delta\cdot n-g _{q}(\beta,\alpha)\,2P\cdot n\right]e^{-il_{z_{1}z_{2}}\cdot n}, \tag{10}\]
where the support region of \(f_{q}\) and \(g_{q}\) is given by the rhombus \(|\alpha|+|\beta|\leq 1\), and the momentum \(l_{z_{1}z_{2}}\) is written as
\[l_{z_{1}z_{2}}=(z_{2}-z_{1})\left[\beta\,\frac{\Delta}{2}-(\alpha+1)P\right]- 2z_{1}P. \tag{11}\]
If one combines Eqs. (13) with (11), the GDA can be expressed in terms of DDs,
\[\Phi_{q}(z,\zeta_{0},s)=2\int d\beta\,d\alpha\,\delta(y+\alpha-\beta\zeta_{0}) \left[f_{q}(\beta,\alpha)\,\zeta_{0}-g_{q}(\beta,\alpha)\right], \tag{13}\]
where \(y=2z-1\). Assuming that the DDs vanish at the boundaries, Eq. (11) can be expressed as [45]
\[\langle\bar{M}(p_{2})M(p_{1})|\,\bar{q}(z_{1}n)\not{n}q(z_{2}n)\,|0\rangle= \frac{2i}{z_{12}}\int d\beta\,d\alpha\,\phi_{q}(\beta,\alpha)\,e^{-il_{z_{1}z _{2}}\cdot n}, \tag{14}\]
where the notation \(z_{12}=z_{1}-z_{2}\) is used, and \(\phi_{q}(\beta,\alpha)\) is defined by
\[\phi_{q}(\beta,\alpha)=\partial_{\beta}f_{q}(\beta,\alpha)+\partial_{\alpha}g _{q}(\beta,\alpha). \tag{15}\]
Due to even charge conjugation of the meson pair, we can have the symmetry \(\phi_{q}(\beta,\alpha)=\phi_{q}(\beta,-\alpha)=\phi_{q}(-\beta,-\alpha)\), which is used to simplify the calculation of the amplitudes.
The leading-twist operator \(\mathcal{O}_{++}^{t=2}(z_{1},z_{2})\) appears in the kinematical contributions of Eq. (16), where the separation \(x\) is not necessarily lightlike. However, GDAs and DDs are defined by the matrix element of \(O_{++}(z_{1}n,z_{2}n)\) with the lightlike separation \(n\) as shown in Eq. (11),
\[O_{++}(z_{1}n,z_{2}n)=\sum_{q}e_{q}^{2}\,\bar{q}(z_{1}n)\not{n}q(z_{2}n). \tag{16}\]
The matrix element of \(\mathcal{O}_{++}^{t=2}(z_{1},z_{2})\) is related to the one of \(O_{++}(z_{1}n,z_{2}n)\) by using the leading-twist projector \(\Pi(x,n)\)[35; 36; 37],
\[\langle\bar{M}(p_{2})M(p_{1})|\,\mathcal{O}_{++}^{t=2}(z_{1},z_{2})\,|0\rangle =\Pi(x,n)\langle\bar{M}(p_{2})M(p_{1})|\,O_{++}(z_{1}n,z_{2}n)\,|0\rangle. \tag{17}\]
If one combines Eqs. (17) and (14), the matrix element of \(\mathcal{O}_{++}^{t=2}(z_{1},z_{2})\) can be obtained [45],
\[\langle\bar{M}(p_{2})M(p_{1})|\,\mathcal{O}_{++}^{t=2}(z_{1},z_{2})\,|0\rangle =\chi\,\frac{2i}{z_{12}}\int d\beta\,d\alpha\,\phi(\beta,\alpha) \left[e^{-il_{z_{1}z_{2}}\cdot x}+\frac{x^{2}l_{z_{1}z_{2}}^{2}}{4}\int_{0}^{1} dv\,v\,e^{-ivl_{z_{1}z_{2}}\cdot x}\right], \tag{18}\]
where \(\phi=\phi_{u}+\phi_{d}\) and \(\chi=5e^{2}/18\) for an isosinglet meson pair. Furthermore, the matrix elements of \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) can be given by
\[\langle\bar{M}(p_{2})M(p_{1})|\,\mathcal{O}_{1}(z_{1},z_{2})\,|0\rangle =-\chi\,\frac{2i}{z_{12}}\,\hat{s}\int d\beta\,d\alpha\,\phi(\beta,\alpha)\,e^{-il_{z_{1}z_{2}}\cdot x},\] \[\langle\bar{M}(p_{2})M(p_{1})|\,\mathcal{O}_{2}(z_{1},z_{2})\,|0\rangle =\chi\,\frac{2i}{z_{12}}\int d\beta\,d\alpha\,\phi(\beta,\alpha) \left[2P\cdot l_{z_{1}z_{2}}\,e^{-il_{z_{1}z_{2}}\cdot x}+iP\cdot x\,l_{z_{1} z_{2}}^{2}\int_{0}^{1}dv\,v\,e^{-ivl_{z_{1}z_{2}}\cdot x}\right], \tag{19}\]
where
\[\mathcal{O}_{1}(z_{1},z_{2}) =\left[i\mathbf{P}^{\mu},\,\left[i\mathbf{P}_{\mu},\,\mathcal{O} _{++}^{t=2}(z_{1},z_{2})\right]\right],\] \[\mathcal{O}_{2}(z_{1},z_{2}) =\left[i\mathbf{P}^{\mu},\,\frac{\partial}{\partial x^{\mu}} \mathcal{O}_{++}^{t=2}(z_{1},z_{2})\right]. \tag{20}\]
The helicity amplitudes are expressed in terms of matrix elements of operators, which are shown in Eqs. (18) and (19). One obtains after a lengthy calculation
\[A_{0-} =-2\chi\,\frac{\Delta\cdot\epsilon_{+}}{\sqrt{s}}\int d\beta\,d \alpha\,\phi(\beta,\alpha)\,\beta\,\frac{\ln(F)}{F-1},\] \[A_{+-} =\chi\,\frac{(\Delta\cdot\epsilon_{+})^{2}}{2n\cdot\tilde{n}}\int d \beta\,d\alpha\,\phi(\beta,\alpha)\,\beta^{2}\,\partial_{F}\left[\frac{1-2F}{ F-1}\,\ln(F)\right],\] \[A_{++} =\chi\int d\beta\,d\alpha\,\phi(\beta,\alpha)\left\{2\ln(F)-\left[ \frac{\hat{s}}{n\cdot\tilde{n}}\,(F-\alpha)+\frac{\beta^{2}\Delta_{T}^{2}}{4n \cdot\tilde{n}}\,\partial_{F}\right]\frac{1}{F-1}\left[\frac{\ln(F)}{2}-\text{Li }_{2}(1)+\text{Li}_{2}(F)\right]\right\}, \tag{21}\]
where \(F\) is defined as
\[F(\alpha,\beta)=\frac{\alpha-\beta\zeta_{0}+1}{2}. \tag{22}\]
The helicity amplitudes can be presented in terms of the GDA using [45]
\[\frac{\partial\Phi_{q}(z,\zeta_{0},s)}{\partial z}=4\int d\beta\,d\alpha\,\delta( (2z-1)+\alpha-\beta\zeta_{0})\,\phi_{q}(\beta,\alpha). \tag{13}\]
|
2310.02070 | Controlled Quasi-Latitudinal Solutions for ultra-fast Spin-Torque
Precessional Magnetization Switching | The aim of the paper is to present a novel class of time-dependent controls
to realize ultra-fast magnetization switching in nanomagnets driven by
spin-torques produced by spin-polarized electric currents. Magnetization
dynamics in such systems is governed by the Landau-Lifshitz-Slonczewski
equation which describes the precessional motion of (dimensionless)
magnetization vector on the unit-sphere. The relevant case of nanoparticles
with uniaxial anisotropy having in-plane easy and intermediate axes and
out-of-plane hard axis is considered. By exploiting the characteristic
smallness of damping and spin-torque intensity, the aforementioned controls are
constructed via suitable perturbative tools in a way to realise approximate
\emph{latitudinal solutions} (i.e. motions on a sphere in which the
out-of-plane magnetization component stays constant) with the effect to fast
``switch'' the system from one stationary state to another. The possibility to
keep a (``small'') bounded value of the out-of-plane coordinate throughout this
process of ``transfer'', turns out to be advantageous in the applications as it
sensibly reduces the post-switching relaxation oscillations that may cause the
failure of switching in real samples. Further relevant quantitative results on
the behaviour of the solutions during the pre- and post-switching stages
(termed ``expulsion'' and ``attraction'', respectively), are given as a
byproduct. A selection of validating numerical experiments is presented
alongside the corresponding theoretical results. | Alessandro Fortunati, Massimiliano d'Aquino, Claudio Serpico | 2023-10-03T14:13:02Z | http://arxiv.org/abs/2310.02070v1 | Controlled Quasi-Latitudinal Solutions for ultra-fast Spin-Torque Precessional Magnetization Switching
###### Abstract
The aim of the paper is to present a novel class of time-dependent controls to realize ultra-fast magnetization switching in nanomagnets driven by spin-torques produced by spin-polarized electric currents. Magnetization dynamics in such systems is governed by the Landau-Lifshitz-Slonczewski equation, which describes the precessional motion of the (dimensionless) magnetization vector on the unit sphere. The relevant case of nanoparticles with uniaxial anisotropy having in-plane easy and intermediate axes and an out-of-plane hard axis is considered. By exploiting the characteristic smallness of damping and spin-torque intensity, the aforementioned controls are constructed via suitable perturbative tools so as to realise approximate _latitudinal solutions_ (i.e. motions on a sphere in which the out-of-plane magnetization component stays constant), whose effect is to rapidly "switch" the system from one stationary state to another. The possibility of keeping a ("small") bounded value of the out-of-plane coordinate throughout this process of "transfer" turns out to be advantageous in applications, as it sensibly reduces the post-switching relaxation oscillations that may cause the failure of switching in real samples. Further relevant quantitative results on the behaviour of the solutions during the pre- and post-switching stages (termed "expulsion" and "attraction", respectively) are given as a byproduct. A selection of validating numerical experiments is presented alongside the corresponding theoretical results.
_Keywords:_ Magnetisation Dynamics, Landau-Lifshitz-Slonczewski equation, Spintronics, Perturbation Theory, Qualitative Methods.
_2010 MSC._ Primary: 78A25, 34D10, 34H05. Secondary: 37C50, 37C75.
## 1 Introduction
The efficient and high-speed manipulation of magnetic nanoelements holds immense significance in the framework of magnetization dynamics, particularly in the context of magnetic storage nanodevices and spintronics. Over the past decade, an extensive body of research has been dedicated to investigating ultra-fast magnetization switching within spintronic devices [DPG\({}^{+}\)20], which hold promise as potential candidates for advancing Magnetic Random Access Memory (MRAM) technology [ADS16, BSH\({}^{+}\)17].
In the pursuit of achieving rapid magnetization switching, researchers have turned their attention to the utilization of electric current pulses via the spin-transfer torque (STT) effect
in nanoscale devices like MRAM cells [1]. Nevertheless, it is worth noting that ballistic spin-torque magnetization switching engendered by pulsed injected currents encounters a challenge reminiscent of the one faced in precessional switching via transverse external magnetic fields [14].
Indeed, both of these switching methodologies demand meticulous synchronization of the excitation pulse timing to ensure effective switching occurs at the precise moment [1]. This critical timing requirement is essential for avoiding unsuccessful switching and ensuring reliable performance. Once the excitation pulse is appropriately timed, the magnetization state transitions from a high-energy to a low-energy configuration, ultimately reaching an equilibrium magnetization state [1]. Intriguingly, the underlying relaxation mechanism driving this transition is inherently stochastic, even in scenarios where thermal fluctuations are relegated to a secondary role [11].
This inherent stochasticity can be attributed to the intricate interplay between the extreme sensitivity of the system to initial conditions and its multistable nature [1]. These factors collectively bestow upon the system a probabilistic dimension, contributing to the inherent randomness in the relaxation process. This intrinsic unpredictability presents both challenges and opportunities for controlling and optimizing spin torque-induced magnetization switching in nanoscale devices.
In this respect, there is great interest in developing strategies to achieve fast, reliable and energy-efficient spin-torque switching. It has been shown that the reliability of spin-torque ballistic switching in in-plane ferromagnetic nanodots can be strongly improved by controlling the quasi-randomness by using appropriate bias fields [1]. On the other hand, optimization of current pulse design targeted at minimization of the energy cost has been recently proposed [13].
In this paper, we propose a full analytical treatment for precessional spin-torque switching of in-plane magnetized nanomagnets that relies on the idea of forcing magnetization to evolve on controlled quasi-latitudinal (CQL) trajectories associated with lower values of the ferromagnet's free energy, which drive magnetization very close to the target reversed state. This is achieved by exciting magnetization precession using injected current pulses of suitable shape. This switching scheme minimizes ringing phenomena that are the main source for failure of the switching process due to back-hopping into the original magnetization state [1, 2]. By developing suitable perturbation theory, the rigorous conditions for the realization of CQL magnetization
Figure 1: Sketch of a MTJ used as STT-MRAM cell.
switching as well as the analytical expression for the optimal current pulse shape are derived as functions of the system's physical parameters and summarised in dedicated Lemmata.
The paper is organized as follows. In section 2, a simple schematic of the magnetic nanosystem studied as the archetype of in-plane STT-MRAM cells is introduced and the equation governing the magnetization dynamics driven by the spin-transfer torque is recalled along with the relevant parameters of the system. Then, after some preliminaries in sec. 3, the main result of the paper is stated in sec. 4 as Theorem 4.1. The development of this result requires the definition and the detailed study of three stages of the spin-torque precessional switching, termed expulsion, transfer and attraction, which is carried out in sections 5-7. Finally, a numerical validation of the proposed approach, demonstrating its effectiveness, is presented.
## 2 Magnetization dynamics driven by Spin-Transfer Torque
We consider the magnetic nanosystem sketched in fig.1, representing a magnetic tunnel junction (MTJ) sandwiched between two electrodes and subject to a current of intensity \(i\) flowing through the MTJ in the \(z\) direction. A typical MTJ structure [1] is composed of two ferromagnetic layers separated by a non-magnetic (NM) (insulating) layer. The magnetization of one layer, termed fixed (lower layer labelled as FM in fig.1), is artificially pinned to a given orientation and acts as a polarizer for electron spins flowing through it. The second ferromagnetic layer, termed free (upper FM layer in fig.1), is where magnetization dynamics can take place driven by the external actions (injected current and applied magnetic field). The MTJ acts as a single bit-cell of a MRAM cell array where the bit state \(0,1\) is coded into the mutual orientation of the free and fixed layers, namely parallel (P) or anti-parallel (AP) magnetization, respectively. The switching of the bit is triggered by the spin-transfer (STT) torque created by the electric current produced when both the bit and word lines are simultaneously addressed [1, 2], which is able to eventually switch the magnetization in the free layer of the MTJ.
The mechanism governing the spin-torque switching can be conveniently explained and quantified under the assumption of spatially-uniform magnetization and negligible thermal fluctuations. In this situation, magnetization dynamics is described by the Landau-Lifshitz-Slonczewski (LLS) equation (written in dimensionless form)[15, 16]:
\[\frac{d\mathbf{m}}{dt}=-\mathbf{m}\times\mathbf{h}_{\rm eff}-\alpha\mathbf{m}\times(\mathbf{m} \times\mathbf{h}_{\rm eff})+\beta\mathbf{m}\times(\mathbf{m}\times\mathbf{e}_{p})\quad, \tag{1}\]
where \(\mathbf{m}\) is the magnetization unit-vector, \(\mathbf{h}_{\rm eff}\) is the effective magnetic field normalized by the saturation magnetization \(M_{s}\), \(\alpha\) is the Gilbert damping, \(\beta\) is the normalized current which measures the strength of the spin-transfer torque, \(\mathbf{e}_{p}\) is the fixed layer unit-vector acting as polarizer for the traversing electric current, and time is measured in units of \((\gamma M_{s})^{-1}\) (\(\gamma\) is the absolute value of the gyromagnetic ratio). The function \(\beta(t)\) is proportional to the injected current \(i(t)\) through the relationship \(\beta(t)=b_{p}i(t)/(SJ_{p})\) where \(b_{p}\) is a model-dependent parameter in the order of unity, \(S\) is the device cross sectional area, and \(J_{p}=\mu_{0}M_{s}^{2}|e|d/\hbar\) is a characteristic current density (\(\mu_{0}\) is the vacuum permeability, \(e\) is the electron charge, \(d\) is the thickness of the free layer, and \(\hbar\) is the reduced Planck constant).
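To fix ideas about the physical scales entering this normalization, the conversion factors can be evaluated numerically. The following minimal Python sketch uses assumed, permalloy-like material values (\(M_{s}\), \(d\), \(\gamma\)); these numbers are illustrative and are not taken from the paper.

```python
import math

# Assumed, illustrative material parameters (permalloy-like), not from the paper
Ms = 8.0e5                    # saturation magnetization M_s [A/m]
d = 3.0e-9                    # free-layer thickness d [m]
mu0 = 4.0 * math.pi * 1e-7    # vacuum permeability [T m/A]
e, hbar = 1.602e-19, 1.055e-34

# Characteristic current density J_p = mu_0 M_s^2 |e| d / hbar
Jp = mu0 * Ms**2 * e * d / hbar
print(f"J_p ~ {Jp:.2e} A/m^2")   # of order 1e12 A/m^2, the usual STT scale

# Time unit (gamma M_s)^{-1}, with gyromagnetic ratio gamma ~ 2.21e5 m/(A s)
gamma = 2.21e5
print(f"one dimensionless time unit ~ {1e12 / (gamma * Ms):.2f} ps")
```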
The effective field is expressed as:
\[\mathbf{h}_{\rm eff}=-\frac{\partial g_{L}}{\partial\mathbf{m}}=-D_{1}m_{1}\mathbf{e}_{1} -D_{2}m_{2}\mathbf{e}_{2}-D_{3}m_{3}\mathbf{e}_{3}+\mathbf{h}_{a}\quad, \tag{2}\]
where \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) are the cartesian unit-vectors along the coordinate axes, \(g_{L}\) is the magnetic free
energy of the particle
\[g_{L}(\mathbf{m},\mathbf{h}_{a})=\frac{1}{2}D_{1}m_{1}^{2}+\frac{1}{2}D_{2}m_{2}^{2}+ \frac{1}{2}D_{3}m_{3}^{2}-\mathbf{h}_{a}\cdot\mathbf{m}\, \tag{3}\]
\(\mathbf{h}_{a}\) is the external applied magnetic field and \(D_{1},D_{2},D_{3}\) are effective demagnetizing factors taking into account shape and magneto-crystalline anisotropy. In the sequel, we assume
\[0<D_{1}<D_{2}<D_{3}, \tag{4}\]
meaning that \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) refer to the easy, intermediate, hard axes, respectively.
In the absence of injected current \(\beta=0\) and small enough bias applied field \(\mathbf{h}_{a}\), magnetization lies in one of the stable equilibria aligned with the easy axis. We assume that this equilibrium is \(s^{-}\) such that \(m_{1}\approx-1\). Under this assumption, it is apparent from eq.(1) that, in order to have the maximum torque on magnetization when the current is turned on, the polarizer \(\mathbf{e}_{p}\) must be orthogonal to the easy axis. In this paper we assume \(\mathbf{e}_{p}=\mathbf{e}_{3}\) for reasons that will be clear in the sequel.
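As a concrete illustration of (1)-(2), the dynamics can be integrated numerically. The sketch below is a minimal example (assuming `numpy`/`scipy` are available; all parameter values are illustrative choices respecting the ordering (4) and the smallness of \(\alpha\) and \(\beta\), not values from the paper) and also monitors the conservation of \(|\boldsymbol{m}|\), anticipating (7).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters with 0 < D1 < D2 < D3, cf. (4)
D = np.array([0.02, 0.05, 1.0])      # effective demagnetizing factors
alpha, beta = 0.01, 0.002            # damping and normalized current, both small
h_a = np.array([0.0, -0.005, 0.0])   # small in-plane bias field along e_2
e_p = np.array([0.0, 0.0, 1.0])      # polarizer along the hard axis, e_p = e_3

def h_eff(m):
    """Effective field, eq. (2): anisotropy plus applied field."""
    return -D * m + h_a

def lls(t, m):
    """Right-hand side of the LLS equation (1)."""
    h = h_eff(m)
    precession = -np.cross(m, h)
    damping = -alpha * np.cross(m, np.cross(m, h))
    spin_torque = beta * np.cross(m, np.cross(m, e_p))
    return precession + damping + spin_torque

m0 = np.array([-0.99, -0.15, 0.05])
m0 /= np.linalg.norm(m0)             # start near the s^- equilibrium
sol = solve_ivp(lls, (0.0, 500.0), m0, rtol=1e-10, atol=1e-12)

# |m| = 1 is a first integral: the drift should stay at the tolerance level
print("max | |m|^2 - 1 | =", np.abs((sol.y**2).sum(axis=0) - 1.0).max())
```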
## 3 Preliminaries and set-up
Let us consider the following system of ODEs
\[\begin{cases}\dot{u}_{1}=h_{2}u_{3}+D_{3,2}u_{2}u_{3}+u_{1}u_{3}\beta(t)+\alpha u_{1}\left(D_{3,1}u_{3}^{2}+D_{2,1}u_{2}^{2}-h_{2}u_{2}\right)\\ \dot{u}_{2}=-D_{3,1}u_{1}u_{3}+u_{2}u_{3}\beta(t)+\alpha\left(D_{3,2}u_{2}u_{3}^{2}-D_{2,1}u_{2}u_{1}^{2}+h_{2}u_{1}^{2}+h_{2}u_{3}^{2}\right)\\ \dot{u}_{3}=-h_{2}u_{1}+D_{2,1}u_{1}u_{2}-\left(u_{1}^{2}+u_{2}^{2}\right)\beta(t)-\alpha u_{3}\left(D_{3,2}u_{2}^{2}+D_{3,1}u_{1}^{2}+h_{2}u_{2}\right)\end{cases}. \tag{5}\]
The latter is immediately obtained from (1) by setting \(\mathbf{u}:=\mathbf{m}\) and \(\mathbf{h}_{a}=:(0,h_{2},0)\). Given \(D_{i}\) satisfying (4), we have defined, for all \(1\leq j<i\leq 3\),
\[D_{i,j}:=D_{i}-D_{j}.\]
Clearly, these quantities are such that
\[D_{3,1}>D_{3,2},\qquad D_{3,2}+D_{2,1}-D_{3,1}=0. \tag{6}\]
The function \(\beta(t):[0,+\infty)\to\mathbb{R}\) (_injected current_) plays here the role of "control".
As is well known (see, e.g. [10]), the function "squared distance from the origin"
\[\Psi(\mathbf{u}):=\sum_{j=1}^{3}u_{j}^{2} \tag{7}\]
is conserved along the solutions of (5), namely \(\Psi(\mathbf{u})=1\), meaning that the motion occurs on the unit-sphere.
A switching process (see [10] for a comprehensive treatment) can be briefly described as the possibility to "move" the state of the system (5) from (a neighbourhood of) the equilibrium \(\mathbf{s}^{-}\) to \(\mathbf{s}^{+}\), where \(\mathbf{s}^{\pm}:=(\pm\sqrt{1-(h_{2}/D_{2,1})^{2}},h_{2}/D_{2,1},0)\) when \(\beta(t)=0\) (straightforward check) and this can be achieved by applying a suitable injected current \(\beta(t)\). We shall say that the "switching" has taken place, once the system has reached a state which is attracted by \(\mathbf{s}^{+}\) without the need to supply any further control via \(\beta(t)\). For reasons of topological nature, such a transition cannot exist if the current is switched off all the time.
As already mentioned in the foreword, several approaches can be found in the literature. Possibly the most paradigmatic one is the ballistic one, which consists in applying a current \(\beta(t)=\text{const.}>0\) for a prescribed period of time \(t\in[0,T_{e}]\), then switching it off. The time \(T_{e}\) is meant to be determined.
However, as already stressed, due to the highly involved phase space structure of the system at hand, pointed out by specialised analyses (see, e.g., [4]), a successful switching is more properly understood as a probabilistic feature of the described technique. In other terms, without any supporting result of a quantitative nature, this class of switching approaches has experimental validity only.
In a different spirit, this paper aims to provide a _deterministic_ argument to successfully complete a switching process under mild assumptions. This has been possible by recognising some intrinsic perturbative features of the system at hand, and exploiting them via perturbation methods borrowed from the Hamiltonian world. As a main achievement, this has led to the realisation of the mentioned CQL solutions, via a suitably constructed controlling current \(\beta(t)\). As the name suggests, these solutions are characterised by a quasi-constant (in the perturbative sense) \(u_{3}(t)\) throughout the motion, until a neighbourhood of the target equilibrium \(\boldsymbol{s}^{+}\) is reached.
Interestingly enough, "small" values of \(|u_{3}(t)|\) turn out to provide a very adequate option, since the solution is "rapidly" attracted by the equilibrium (characterised by \(u_{3}=0\)) right after the current is switched off. This constitutes another difference from the ballistic approach, where the required amount of initial injected current leads to "higher" values of \(|u_{3}(t)|\), which then require a "long time" to return to zero, as an effect of the "small" dissipation \(\alpha\).
## 4 Perturbative setting and main result
The starting point consists in noticing that a class of realistic models exhibits different scales amongst the involved parameters. More specifically, whilst \(D_{3,2},D_{3,1}\) are "of order one", denoted with \(O(1)\), the remaining \(h_{2},D_{2,1},\alpha\) are sensibly smaller, namely "of order \(\lambda\)", where \(\lambda\) is typically \(\sim 10^{-2}\). We choose \(\beta(t)=O(\lambda)\) as is usually done, since the spin torque compensates the damping and is of the same order of magnitude. The described feature of the parameters at hand naturally leads to a perturbative formulation for (5). In particular, by defining \(h_{2}=:\lambda\tilde{h}_{2}\), with \(\tilde{h}_{2}=O(1)\) and similarly for all the other mentioned \(O(\lambda)\) quantities, the system (5) is immediately cast into the following form
\[\begin{cases}\dot{u}_{1}=D_{3,2}u_{2}u_{3}+\lambda\left[\tilde{h}_{2}u_{3}+u_{ 1}u_{3}\tilde{\beta}(t)+\tilde{\alpha}u_{1}D_{3,1}u_{3}^{2}\right]+\tilde{ \alpha}\lambda^{2}u_{1}u_{2}\left[\tilde{D}_{2,1}u_{2}-\tilde{h}_{2}\right]\\ \dot{u}_{2}=-D_{3,1}u_{1}u_{3}+\lambda u_{2}u_{3}\left[\tilde{\beta}(t)+ \tilde{\alpha}D_{3,2}u_{3}\right]+\tilde{\alpha}\lambda^{2}\left[\tilde{h}_{2 }\left(u_{1}^{2}+u_{3}^{2}\right)-\tilde{D}_{2,1}u_{1}^{2}u_{2}\right]\\ \dot{u}_{3}=\lambda\left[-\tilde{h}_{2}u_{1}+u_{1}u_{2}\tilde{D}_{2,1}-\left( u_{1}^{2}+u_{2}^{2}\right)\tilde{\beta}(t)-\tilde{\alpha}u_{3}\left(D_{3,2}u_{2}^{2}+D _{3,1}u_{1}^{2}\right)\right]-\tilde{\alpha}\tilde{h}_{2}\lambda^{2}u_{2}u_{3} \end{cases} \tag{8}\]
or, in a more compact notation,
\[\dot{\boldsymbol{u}}=\boldsymbol{A}(u_{3})\boldsymbol{u}+\lambda\boldsymbol{ \mathcal{N}}(\boldsymbol{u},\lambda;\tilde{\beta}(t)), \tag{9}\]
where
\[\boldsymbol{A}(\xi):=\begin{pmatrix}0&D_{3,2}\xi&0\\ -D_{3,1}\xi&0&0\\ 0&0&0\end{pmatrix}\]
and the definition of \(\boldsymbol{\mathcal{N}}\) is obvious.
It is clear that at the order zero in \(\lambda\), the variable \(u_{3}(t)\) is constant for all \(t\). As a consequence,
the dynamics in the variables \((u_{1},u_{2})\) is described by the equation of a harmonic oscillator with (constant) frequency determined once and for all by the initial condition \(u_{3}(0)\). We shall restrict ourselves to the choice \(u_{3}(0)=-K\), with \(K\in\mathbb{R}^{+}\). We remark that the latter will be achieved in the full process via the "expulsion" stage, which will be described later on.
Hence, by defining
\[\omega:=K\sqrt{D_{3,2}D_{3,1}} \tag{10}\]
the energy of such an oscillator is given by \(E=(\omega^{2}/2)\left[u_{1}^{2}+(D_{3,2}/D_{3,1})u_{2}^{2}\right]\), whose level sets form a family of ellipses in the plane \((u_{1},u_{2})\) parameterised by \(E\). A motion starting at \((u_{1}(0),u_{2}(0))=(-1,0)\) (i.e. \(\lambda\)-close to \((s_{1}^{-},s_{2}^{-})\)) has energy \(\hat{E}=\omega^{2}/2\). As a consequence, after a semi-period
\[T_{tr}^{+}:=\pi/\omega, \tag{11}\]
it will evolve in \((u_{1}(T_{tr}^{+}),u_{2}(T_{tr}^{+}))=(1,0)\), which is \(\lambda\)-close to \((s_{1}^{+},s_{2}^{+})\). This is, essentially, the key mechanism of transfer we will rely on.
Note that this represents an archetypal _latitudinal_ solution, as (at least at the zero-th order in \(\lambda\)) it takes place along the same latitudinal line of the unit sphere, determined once and for all by the choice of \(K\).
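A minimal numerical sketch of this zero-th order picture, with assumed, illustrative values of \(D_{3,1}\), \(D_{3,2}\) and \(K\):

```python
import numpy as np

# Assumed illustrative values: D_{3,1}, D_{3,2} of order one, target latitude K
D31, D32, K = 0.98, 0.95, 0.1

omega = K * np.sqrt(D32 * D31)   # oscillator frequency, eq. (10)
T_half = np.pi / omega           # semi-period T_tr^+, eq. (11)

# Zero-th order latitudinal motion starting at (u1, u2) = (-1, 0), u3 = -K
t = np.linspace(0.0, T_half, 201)
u1 = -np.cos(omega * t)
u2 = -np.sqrt(D31 / D32) * np.sin(omega * t)  # ellipse of the family E = omega^2/2
print(u1[-1], u2[-1])   # approximately (1, 0): the state has been transferred
```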
However, should \(K\) be "small" (for reasons that will be clarified later on), the transfer process would take a "long" time, and then the \(O(\lambda)\) contributions would become significant. This means that, in order to preserve the "transfer" feature of the solutions, we need to find a way to "contain" the variations of \(\boldsymbol{u}\) from the unperturbed solution.
At this point, we are about to summarise the argument used to construct the control that is apt to realise the desired switching process.
Let us now consider, for this purpose, the auxiliary system in a renamed set of variables, say \(v_{j}(t)\), which is obtained from (8) when \(\tilde{\beta}(t)\) is replaced by some function of \(\boldsymbol{v}\), namely \(\hat{\beta}(\boldsymbol{v})\), and subject to the same initial conditions. Hence, the key step to obtain a latitudinal solution consists in choosing \(\hat{\beta}\) in such a way that the r.h.s. of the third equation is zero. In fact, as a consequence of this choice, we have \(v_{3}(t)=-K\) for all \(t\).
Hence, in order to select a class of latitudinal solutions we can choose
\[\hat{\beta}_{lat}(\boldsymbol{v}):=(v_{1}^{2}+v_{2}^{2})^{-1}\left[v_{1}v_{2} \tilde{D}_{2,1}+\tilde{\alpha}K\left(D_{3,2}v_{2}^{2}+D_{3,1}v_{1}^{2}\right) +\tilde{\alpha}\tilde{h}_{2}\lambda v_{2}K-\tilde{h}_{2}v_{1}\right], \tag{12}\]
this leads to
\[\dot{\boldsymbol{v}}=\boldsymbol{A}(v_{3})\boldsymbol{v}+\lambda\boldsymbol{ \mathcal{N}}(\boldsymbol{v},\lambda;\hat{\beta}_{lat}(\boldsymbol{v}))=: \boldsymbol{A}(-K)\boldsymbol{v}+\lambda\boldsymbol{\mathcal{F}}(\boldsymbol {v};\boldsymbol{v})+\lambda^{2}\boldsymbol{\mathcal{R}}(\boldsymbol{v}; \boldsymbol{v}). \tag{13}\]
In other terms, \(\lambda\boldsymbol{\mathcal{F}}(\boldsymbol{v};\boldsymbol{v})\) denotes the first order terms (in \(\lambda\)) of the field \(\boldsymbol{\mathcal{N}}(\boldsymbol{v},\cdot;\cdot)\) in which \(\hat{\beta}\) has been substituted as a function of \(\boldsymbol{v}\). It might be useful, in order to clarify the notation, to denote as \(\lambda\boldsymbol{\mathcal{F}}(\boldsymbol{u};\boldsymbol{v})\) the first order terms arising in (9) following the substitution \(\tilde{\beta}\leftarrow\hat{\beta}_{lat}(\boldsymbol{v})\). The meaning of \(\boldsymbol{\mathcal{R}}\) is obvious as a consequence. Their explicit expression is slightly more involved and is given in Appendix A; however, as for \(\boldsymbol{\mathcal{F}}(\boldsymbol{v};\boldsymbol{v})\) and \(\boldsymbol{\mathcal{R}}(\boldsymbol{v};\boldsymbol{v})\), we have
\[\begin{split}\mathcal{F}_{1}(\boldsymbol{v};\boldsymbol{v})& =-\rho Kv_{2}\left(\tilde{h}_{2}v_{2}+\tilde{D}_{2,1}v_{1}^{2}\right)\\ \mathcal{F}_{2}(\boldsymbol{v};\boldsymbol{v})&=\rho Kv _{1}\left(\tilde{h}_{2}v_{2}-\tilde{D}_{2,1}v_{2}^{2}\right)\\ \mathcal{R}_{1}(\boldsymbol{v};\boldsymbol{v})&=\rho \tilde{\alpha}v_{1}v_{2}\left[K^{2}\tilde{D}_{2,1}v_{2}+v_{1}^{2}(v_{2}\tilde{D }_{2,1}-\tilde{h}_{2})-\tilde{h}_{2}(K^{2}+v_{2}^{2})+\tilde{D}_{2,1}v_{2}^{ 3}\right]\\ \mathcal{R}_{2}(\boldsymbol{v};\boldsymbol{v})&=\rho \tilde{\alpha}v_{1}^{2}\left[-K^{2}\tilde{D}_{2,1}v_{2}-v_{1}^{3}(v_{2}\tilde{D }_{2,1}-\tilde{h}_{2})+\tilde{h}_{2}(K^{2}+v_{2}^{2})-\tilde{D}_{2,1}v_{2}^{ 3}\right]\end{split} \tag{14}\]
where, by virtue of (7), we have set \(v_{1}^{2}+v_{2}^{2}=1-K^{2}=:1/\rho\). Clearly, by construction
\[\mathcal{F}_{3}(\boldsymbol{v};\boldsymbol{v})=\mathcal{R}_{3}(\boldsymbol{v}; \boldsymbol{v})=0. \tag{15}\]
Hence (13) is, by all means, a two dimensional system of ODEs.
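To make the mechanism concrete, the following sketch (assumed, illustrative parameter values; `numpy`/`scipy` assumed available) integrates the unscaled system (5) with the current chosen, at each instant, as the unscaled counterpart of (12) evaluated along the current state. This closed-loop variant is purely an illustration device (the control constructed below in (16) is open-loop, evaluated along an approximate solution); by construction it keeps \(u_{3}\) exactly at \(-K\), up to integrator error.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters; the lambda-sized quantities are ~1e-2
D1, D2, D3 = 0.02, 0.05, 1.0
D21, D31, D32 = D2 - D1, D3 - D1, D3 - D2
alpha, h2, K = 0.01, -0.01, 0.1     # h2 = -D21*Omega, with Omega > 0

def beta_lat(u):
    """Unscaled counterpart of eq. (12): chosen so that the r.h.s. of the
    u3-equation of (5) vanishes identically along u3 = -K."""
    u1, u2, _ = u
    num = (-h2 * u1 + D21 * u1 * u2
           + alpha * K * (D32 * u2**2 + D31 * u1**2 + h2 * u2))
    return num / (u1**2 + u2**2)

def controlled_lls(t, u):
    """System (5) with the feedback current beta = beta_lat(u)."""
    b = beta_lat(u)
    u1, u2, u3 = u
    du1 = (h2 * u3 + D32 * u2 * u3 + u1 * u3 * b
           + alpha * u1 * (D31 * u3**2 + D21 * u2**2 - h2 * u2))
    du2 = (-D31 * u1 * u3 + u2 * u3 * b
           + alpha * (D32 * u2 * u3**2 - D21 * u2 * u1**2
                      + h2 * (u1**2 + u3**2)))
    du3 = (-h2 * u1 + D21 * u1 * u2 - (u1**2 + u2**2) * b
           - alpha * u3 * (D32 * u2**2 + D31 * u1**2 + h2 * u2))
    return [du1, du2, du3]

u0 = [-np.sqrt(1 - K**2), 0.0, -K]    # start on the latitude u3 = -K
sol = solve_ivp(controlled_lls, (0.0, 60.0), u0, rtol=1e-10, atol=1e-12)
print("max |u3 + K| =", np.abs(sol.y[2] + K).max())   # stays ~0 (latitudinal)
```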
Similarly to what has been observed for (8), (13) is clearly integrable at the zero-th order in \(\lambda\).
Let us imagine for a moment that we are able to compute a solution \(v_{1,2}(t)\) to (13). The construction above implies that, by choosing the particular _control_
\[\tilde{\beta}(t):=\hat{\beta}_{lat}(v_{1}(t),v_{2}(t),-K), \tag{16}\]
and substituting it in (8), one has \(\boldsymbol{u}(t)=(v_{1}(t),v_{2}(t),-K)\), provided that (8) and (13) are subject to the same initial conditions. In other terms, (exact) solutions to (13) yield latitudinal solutions (in the sense defined above) to (8) via (16). Hence, it is reasonable to expect that "approximate" solutions to (13) could provide a "quasi-constant" behaviour for the third variable \(u_{3}(t)\), in a sense to be made precise. We shall refer to these as _controlled quasi-latitudinal_ (CQL) solutions to (8). Under this setting, the main result reads as follows:
**Theorem 4.1**.: _There exist \(\lambda_{0},r^{-},T_{e},T_{tr}>0\), a non-trivial set of parameters and a control_
\[\tilde{\beta}(t):=\begin{cases}\tilde{\beta}_{e}&t\in[-T_{e},0)\\ \tilde{\beta}_{tr}(t)&t\in[0,T_{tr}]\\ 0&t\in(T_{tr},+\infty)\end{cases} \tag{17}\]
_such that, for all \(\lambda\in(0,\lambda_{0}]\), the flow of the system (8) controlled via (17) and denoted with \(\tilde{\Phi}^{t}\), for any \(r\leq r^{-}\) and initial condition \(\boldsymbol{u}_{0}:=\boldsymbol{u}(-T_{e})\in\mathfrak{B}_{r}(\boldsymbol{s}^{-})\), satisfies_
\[\lim_{t\to+\infty}\tilde{\Phi}^{t}(\boldsymbol{u}_{0})\in\mathfrak{B}_{fr}( \boldsymbol{s}^{+})\text{,} \tag{18}\]
_for some \(f>0\). In particular, the system realises a CQL solution in the time interval \([0,T_{tr}]\)._
The rest of the paper is devoted to the proof of Thm. 4.1. This will be achieved in three steps. The first one, called "expulsion", consists in choosing a suitable neighbourhood of the point \(\boldsymbol{s}^{-}\) and injecting a constant current of magnitude \(\tilde{\beta}_{e}\) for a certain time \(T_{e}\). It is shown how this "pushes" the third variable \(u_{3}(t)\) in a region in which it is strictly negative, a key requirement in order to obtain the class of switching motions we are interested in, even at the zero-th order, as anticipated in sec. 4.
A second stage, referred to as "transfer", represents the very heart of the argument: it is here that the CQL solutions are constructed via a suitable controlling current, with the property of "delivering" the solutions starting in the vicinity of the point reached at the end of the first step into a neighbourhood of the target point \(\boldsymbol{s}^{+}\).
The last step, called "attraction", consists in showing that \(\boldsymbol{s}^{+}\) is an attractive point when the current is switched off, attracting in this way any point arriving in its vicinity. Although this is a very well known property, quantitative information is needed for a precise formulation of the result, and this is exactly the aim of this third stage. It is immediate to realise that being able to carry out the three described steps implies a successful realisation of the switching process. The whole argument clearly exploits the group property of the flow of a dynamical system.
From a technical viewpoint, the "core" of the proof (consisting in the "transfer" stage), relies on the possibility to construct a suitable control for the system at hand, by approximating the non-linear flow via Hamiltonian perturbative tools. This is possible via the well known possibility to
interpret any system of ODEs as a Hamiltonian system in a suitably extended phase space, see e.g. [1]. However, such a simple observation has been profitably used in several cases, due to the potential of a full all-orders generalisation of the perturbative setting. See, for instance, [13], [14]. This has the potential to increase the threshold \(\lambda_{0}\), expanding in this way the class of systems which can be dealt with via this approach.
The remaining stages, "expulsion" and "attraction", are carried out by using established tools from the theory of ODEs that will be specified later.
## 5 Expulsion
Let us consider the system (9) and set \(\tilde{h}_{2}=:-\tilde{D}_{2,1}\Omega\), with \(\Omega>0\). In this setting, in particular, one has \(\mathbf{s}^{\pm}\equiv(\pm\gamma,-\Omega,0)\), where \(\gamma:=\sqrt{1-\Omega^{2}}\). Let us now perform a translation-rescaling which brings \(\mathbf{s}^{-}\) to the origin of the new system of coordinates
\[\lambda\mathbf{\xi}:=\mathbf{u}-\mathbf{s}^{-}. \tag{19}\]
Throughout this section we shall consider the autonomous control \(\lambda\tilde{\beta}(t)=:\lambda\tilde{\beta}_{e}(t)\equiv\beta_{e}=\mathrm{const.}\) and the new time \(\tau:=T_{e}+t\), for any \(T_{e}>0\). Consequently, \(\Phi_{e}^{\tau}\) will denote the phase flow of (9) with respect to the new time. Hence, by (19), (9) reads as
\[\frac{d}{d\tau}\mathbf{\xi}=\mathbf{\mathcal{L}}\mathbf{\xi}+\mathbf{f}+\lambda\mathbf{\mathcal{ V}}(\mathbf{\xi};\lambda,\beta_{e}), \tag{20}\]
where,
\[\mathbf{\mathcal{L}}:=\begin{pmatrix}0&0&\bar{a}\\ 0&0&\bar{b}\\ 2\beta_{e}\gamma&2\beta_{e}\Omega&0\end{pmatrix},\qquad\mathbf{f}=(0,0,-\beta_{e }/\lambda)^{\top},\]
\[\bar{a}:=-D_{3,2}\Omega-\beta_{e}\gamma,\qquad\bar{b}:=D_{3,2}\gamma-\beta_{e }\Omega, \tag{21}\]
and \(\mathbf{\mathcal{V}}(\mathbf{\xi};\lambda,\beta_{e})\) is defined as a consequence. Let us remark that we have used the relation \(D_{3,1}=D_{3,2}-\lambda\tilde{D}_{2,1}\), see (6), in order to simplify the structure of \(\mathbf{\mathcal{L}}\). As a result, the linear term \(-\tilde{D}_{2,1}\xi_{3}\) is included in \(\mathbf{\mathcal{V}}(\mathbf{\xi};\lambda,\beta_{e})\). In particular, it is evident that for sufficiently small \(\lambda\) the dominant behaviour is "expulsive" i.e. it drives \(\xi_{3}\) towards negative values.
As it can be easily checked, by defining
\[\mathbf{S}:=\begin{pmatrix}1&1&1\\ \bar{b}/\bar{a}&\bar{b}/\bar{a}&-\gamma/\Omega\\ \sqrt{2}i\beta_{e}/\bar{a}&-\sqrt{2}i\beta_{e}/\bar{a}&0\end{pmatrix}\]
one has \(\mathbf{S}^{-1}\mathbf{\mathcal{L}}\mathbf{S}=\mathbf{\Gamma}\equiv\text{diag}(-\sqrt{2}i\beta_{e},\sqrt{2}i\beta_{e},0)\). Hence, the solution to (20) reads as
\[\mathbf{\xi}(\tau)=e^{\mathbf{\mathcal{L}}\tau}\mathbf{\xi}(0)+\int_{0}^{\tau}e^{\mathbf{\mathcal{L}}(\tau-s)}\mathbf{f}ds+\lambda\int_{0}^{\tau}e^{\mathbf{\mathcal{L}}(\tau-s)}\mathbf{\mathcal{V}}(\mathbf{\xi}(s);\lambda,\beta_{e})ds, \tag{22}\]
where, by noticing that \(\bar{a}\gamma+\bar{b}\Omega=-\beta_{e}\), one has
\[e^{\mathbf{\mathcal{L}}\tau}=\frac{1}{\beta_{e}}\begin{pmatrix}-\bar{b}\Omega-\bar{a}\gamma\cos(\sqrt{2}\tau\beta_{e})&\bar{a}\Omega-\bar{a}\Omega\cos(\sqrt{2}\tau\beta_{e})&(\bar{a}/\sqrt{2})\sin(\sqrt{2}\tau\beta_{e})\\ \bar{b}\gamma-\bar{b}\gamma\cos(\sqrt{2}\tau\beta_{e})&-\bar{b}\Omega\cos(\sqrt{2}\tau\beta_{e})-\bar{a}\gamma&(\bar{b}/\sqrt{2})\sin(\sqrt{2}\tau\beta_{e})\\ \sqrt{2}\beta_{e}\gamma\sin(\sqrt{2}\tau\beta_{e})&\sqrt{2}\beta_{e}\Omega\sin(\sqrt{2}\tau\beta_{e})&\beta_{e}\cos(\sqrt{2}\tau\beta_{e})\end{pmatrix}.\]
Furthermore, the norm of the latter is uniformly bounded in time, i.e.
\[||e^{\mathbf{\mathcal{L}}\tau}||\leq\mathcal{M}_{e}, \tag{23}\]
for some suitable \(\mathcal{M}_{e}=\mathcal{M}_{e}(\beta_{e},D_{3,2},\gamma,\Omega)\geq 1\) and all \(\tau\in\mathbb{R}\).
It is now easy to realise that, if the \(O(\lambda)\) term is disregarded in (22), the remaining integral can be computed immediately, and this gives rise to a function, namely \(\mathbf{\xi}^{\prime}(\tau)\), which approximates the "full" \(\mathbf{\xi}(\tau)\) for "small" \(\lambda\) (under suitable assumptions). Such a function reads as
\[\mathbf{\xi}^{\prime}(\tau)=e^{\mathbf{\mathcal{L}}\tau}\mathbf{\xi}^{\prime}(0)+(2\beta_{e}\lambda)^{-1}\left(\bar{a}(\cos(\sqrt{2}\tau\beta_{e})-1),\bar{b}(\cos(\sqrt{2}\tau\beta_{e})-1),-\sqrt{2}\beta_{e}\sin(\sqrt{2}\tau\beta_{e})\right)^{\top}. \tag{24}\]
As the state of the system is supposed to be initially in a neighbourhood of the origin, it is natural to set \(\mathbf{\xi}^{\prime}(0)=\mathbf{0}\). Hence, in the original set of variables, the third variable evolves as \(u_{3}(\tau)\sim-2^{-1/2}\sin(\sqrt{2}\tau\beta_{e})\) for initial data "close to" \(\mathbf{s}^{-}\) and "sufficiently small" \(\lambda\). Once a target value \(-K\) for \(u_{3}\) has been set, with
\[0<\sqrt{2}K\leq 1, \tag{25}\]
the expulsion stage is defined as the evolution of the system for \(\tau\in[0,T_{e}]\), where
\[T_{e}:=(\sqrt{2}\beta_{e})^{-1}\arcsin(\sqrt{2}K). \tag{26}\]
From (24) one immediately gets
\[\mathbf{\xi}^{\prime}(T_{e})=(2\beta_{e}\lambda)^{-1}(\bar{a}(-1+\sqrt{1-2K^{2}}), \bar{b}(-1+\sqrt{1-2K^{2}}),-2K\beta_{e}). \tag{27}\]
**Remark 5.1**.: _For realistic values of the parameters (for instance, \(\beta_{e},K^{2}=O(\lambda)\)), the quantities \(\lambda\xi^{\prime}_{1,2}(T_{e})\) are \(O(1)\), i.e. the point reached after the expulsion stage is "far" from \(\mathbf{s}^{-}\). For this reason, approximation formula (27) will play a key role in the main proof. Furthermore, the latter has some interest in itself in ballistic switching processes, in which "long" expulsions are typically considered._
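A small sketch evaluating (26) and the approximate end-point (27), with assumed, illustrative parameter values in the spirit of Remark 5.1 (\(\beta_{e},K^{2}=O(\lambda)\)):

```python
import numpy as np

# Assumed illustrative values (lambda-sized scale ~1e-2)
D32, beta_e, lam = 0.95, 0.02, 0.01
Omega = 0.2
gamma = np.sqrt(1 - Omega**2)
K = 0.1                                  # target value, 0 < sqrt(2) K <= 1, cf. (25)

a_bar = -D32 * Omega - beta_e * gamma    # eq. (21)
b_bar = D32 * gamma - beta_e * Omega

T_e = np.arcsin(np.sqrt(2) * K) / (np.sqrt(2) * beta_e)   # eq. (26)

# Approximate end-point of the expulsion stage, eq. (27)
xi_Te = np.array([a_bar * (np.sqrt(1 - 2 * K**2) - 1),
                  b_bar * (np.sqrt(1 - 2 * K**2) - 1),
                  -2 * K * beta_e]) / (2 * beta_e * lam)

u_Te = lam * xi_Te + np.array([-gamma, -Omega, 0.0])      # back to u via (19)
print("T_e =", T_e)
print("u(T_e) ~", u_Te)   # its third component equals -K by construction
```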
The approximation features of (27), numerically validated for an example in fig. 2, are stated in a quantitative form in the next
**Lemma 5.1**.: _Let us set a target value \(K\) satisfying (25) and \(\beta_{e}>0\) arbitrarily chosen. Then choose \(T_{e}\) as in (26). Let us now define_
\[r^{*}:=(2\beta_{e}\lambda)^{-1}\left[4K^{2}+(D_{3,2}^{2}+\beta_{e}^{2})(\sqrt {1-2K^{2}}-1)^{2}\right]^{\frac{1}{2}} \tag{28}\]
_and let be \(\rho_{e}>0\) arbitrarily chosen. Now set_
\[\lambda\mathcal{M}_{1}:=\max_{\mathbf{\xi}\in\mathfrak{B}_{r^{*}}(\mathbf{0})}|\mathbf{ \mathcal{V}}(\mathbf{\xi};\lambda,\beta_{e})|,\qquad\lambda\mathcal{M}_{2}:=\max _{\mathbf{\xi}\in\mathfrak{B}_{r^{*}+\rho_{e}}(\mathbf{0})}||\mathbf{D}\mathbf{\mathcal{V}}( \mathbf{\xi};\lambda,\beta_{e})||_{\infty} \tag{29}\]
_see Appendix B for the explicit expression of \(\mathbf{D}\mathbf{\mathcal{V}}(\mathbf{\xi};\lambda,\beta_{e})\)._
_Then for all \(\lambda\in(0,\lambda_{e}]\), with_
\[\lambda_{e}:=(T_{e}\mathcal{M}_{e})^{-1}\min\{(4\mathcal{M}_{1})^{-1}\rho_{e},\mathcal{M}_{2}^{-1}\log 2\}\text{,} \tag{30}\]
_the following property holds_
\[\Phi_{e}^{T_{e}}(\mathfrak{B}_{(4\mathcal{M}_{e})^{-1}\rho_{e}\lambda}(\mathbf{s}^{-}))\subseteq\mathfrak{B}_{\rho_{e}\lambda}(\mathbf{u}_{c}(T_{e}))\text{,} \tag{31}\]
_where \(\mathbf{u}_{c}(\tau):=\lambda\mathbf{\xi}^{\prime}(\tau)+\mathbf{s}^{-}\). In other terms, solutions starting within a sphere of radius \(r^{-}:=(4\mathcal{M}_{e})^{-1}\rho_{e}\lambda\) around \(\mathbf{s}^{-}\) remain contained in a sphere of radius \(\rho_{e}\lambda\) around the (known) point \(\lambda\mathbf{\xi}^{\prime}(T_{e})+\mathbf{s}^{-}\)._
**Remark 5.2**.: _As it can be easily noticed from (28), \(r^{*}|_{K=0}=0\) and it grows monotonically with \(K\). The values of \(\mathcal{M}_{1,2}\) will increase accordingly. This implies that the allowed threshold for \(\lambda_{e}\) worsens as the target \(K\) gets bigger._
Proof.: The proof relies on the fact that the solutions \(\boldsymbol{\xi}(\tau)\), starting from a suitable neighbourhood of the origin, do not escape the sphere of radius \(r^{*}+\rho_{e}\) centred at the origin, in such a way that the bounds (29) are justified. For this purpose, let us firstly check that \(|\boldsymbol{\xi}^{\prime}(T_{e})|\leq r^{*}\). This is immediate from (27), definition (28) (noticing that \(\bar{a}^{2}+\bar{b}^{2}=D_{3,2}^{2}+\beta_{e}^{2}\)) and finally observing that \(|\boldsymbol{\xi}^{\prime}(\tau)|\) is monotonically increasing for all \(\tau\in[0,T_{e}]\).
Let us now define \(\boldsymbol{\delta}(\tau):=\boldsymbol{\xi}(\tau)-\boldsymbol{\xi}^{\prime}(\tau)\). By substituting in (22) we get
\[\boldsymbol{\delta}(\tau)=e^{\boldsymbol{\mathcal{L}}\tau}\boldsymbol{\delta}(0)+\lambda\int_{0}^{\tau}e^{\boldsymbol{\mathcal{L}}(\tau-s)}\boldsymbol{\mathcal{V}}(\boldsymbol{\xi}^{\prime}+\boldsymbol{\delta};\lambda,\beta_{e})ds, \tag{32}\]
where \(\boldsymbol{\delta}(0)\equiv\boldsymbol{\xi}(0)\). By taking the absolute values of (32), then using the bound \(|\boldsymbol{\mathcal{V}}(\boldsymbol{\xi}^{\prime}+\boldsymbol{\delta}; \lambda,\beta_{e})|\leq|\boldsymbol{\mathcal{V}}(\boldsymbol{\xi}^{\prime}; \lambda,\beta_{e})|+\mathcal{M}_{2}|\boldsymbol{\delta}|\), assumptions (29), and finally the classical Gronwall lemma, one obtains
\[|\boldsymbol{\delta}(\tau)|\leq\mathcal{M}_{e}(|\boldsymbol{\delta}(0)|+ \lambda\mathcal{M}_{1}\tau)\exp(\lambda\mathcal{M}_{e}\mathcal{M}_{2}\tau).\]
Hence, for all \(\lambda\in(0,\lambda_{e}]\) and all \(\tau\in(0,T_{e}]\) it is sufficient to choose \(|\boldsymbol{\delta}(0)|\leq(4\mathcal{M}_{e})^{-1}\rho_{e}\) as suggested by the l.h.s. of (31), in order to get \(|\boldsymbol{\delta}(\tau)|\leq\rho_{e}\). This proves the r.h.s. of (31).
## 6 Transfer
Let us firstly introduce the following notation. Given any vector \(\boldsymbol{x}\in\mathbb{R}^{3}\) we shall denote with \(\boldsymbol{x}_{r}:=(x_{1},x_{2})\). Vice-versa, we shall denote with \(w_{1,2}\) the first two components of either \(\boldsymbol{w}_{r}\) or \(\boldsymbol{w}\).
Figure 2: In panel (a) the trajectories \(\boldsymbol{\xi}(\tau)\) (continuous line) and \(\boldsymbol{\xi}^{\prime}(\tau)\) (dashed line) are reported in the stereographic coordinates defined via \(w_{1,2}:=u_{1,2}/(1+u_{3})\) and transformation (19). Panel (b) shows the behaviour of the error \(|\boldsymbol{\delta}(t)|\) for \(t\in[-T_{e},0]\) with \(T_{e}=2.3372\) and three different values of \(\lambda\): \(\lambda_{(1)}=0.006\), \(\lambda_{(2)}=0.004\) and \(\lambda_{(3)}=0.001\). The corresponding values for \(\tilde{\beta}_{e}\) are \(0.0018\), \(0.0012\) and \(0.003\), respectively. The remaining parameters used are specified in Appendix C.
It is immediate to notice from (14) and (15) that \(\mathbf{\mathcal{F}}(\mathbf{v};\mathbf{v})\equiv(\mathbf{\mathcal{F}}_{r}(\mathbf{v}_{r};\mathbf{v}_{r}),0)\) and similarly for \(\mathbf{\mathcal{R}}\). Hence, it is meaningful to consider the following system
\[\dot{\mathbf{w}}_{r}=\mathbf{L}\mathbf{w}_{r}+\lambda\mathbf{\mathcal{F}}_{r}(\mathbf{w}_{r},\mathbf{w} _{r}) \tag{33}\]
where
\[\mathbf{L}:=\begin{pmatrix}0&-\sigma\omega\\ \omega/\sigma&0\end{pmatrix},\qquad\sigma:=\sqrt{D_{3,2}/D_{3,1}}, \tag{34}\]
and \(\omega\) has been defined in (10). Clearly, (33) is nothing but the first order truncation of (13), written in the reduced set of variables \(\mathbf{w}_{r}\). Moreover, it is immediate to check that
\[\mathcal{G}_{tr}:=\sigma^{-1}w_{1}^{2}+\sigma w_{2}^{2}, \tag{35}\]
is a first integral for (33) if \(\lambda=0\).
With the aim of constructing CQL solutions, we ask ourselves whether the system can be solved, for instance, by means of perturbative tools. The answer is affirmative, as stated in the following
**Proposition 6.1**.: _It is possible to construct a function \(\mathbf{w}_{r}^{[\leq 1]}(t)\) satisfying (33) up to \(O(\lambda)\)._
**Remark 6.1**.: _It is important to avoid any ambiguity about the meaning of the previous statement. Solving (33) up to \(O(\lambda)\) does not mean solving it exactly. In fact one has \(\dot{\mathbf{w}}_{r}^{[\leq 1]}-\mathbf{L}\mathbf{w}_{r}^{[\leq 1]}-\lambda\mathbf{\mathcal{F}}_{r} (\mathbf{w}_{r}^{[\leq 1]},\mathbf{w}_{r}^{[\leq 1]})=O(\lambda^{2})\). This is typical of perturbative arguments, which are known to generate a remainder as a consequence of the expansions involved._
Proof.: Let us firstly cast the linear part of (33) into a diagonal form. It is immediate to check that the required transformation is given by
\[\mathbf{w}_{r}=\mathbf{C}\mathbf{x},\qquad\mathbf{C}:=\begin{pmatrix}\sigma&\sigma\\ -i&i\end{pmatrix}. \tag{36}\]
In fact, \(\mathbf{C}^{-1}\mathbf{L}\mathbf{C}=\mathrm{diag}(i\omega,-i\omega)=:\mathbf{\Lambda}\). We can now cast system (33) in the new set of variables
\[\dot{\mathbf{x}}=\mathbf{\Lambda}\mathbf{x}+\lambda\mathbf{C}^{-1}\mathbf{\mathcal{F}}_{r}(\mathbf{C} \mathbf{x}), \tag{37}\]
into a Hamiltonian form via a phase space extension. More precisely, by denoting with \(y_{j}\) the momenta canonically conjugated to \(x_{j}\), one has that (37) is given by (part of) the canonical equations of
\[H(\mathbf{y},\mathbf{x})=H_{0}(\mathbf{y},\mathbf{x})+\lambda H_{1}(\mathbf{y},\mathbf{x}),\]
where
\[\begin{split} H_{0}&:=i\omega(x_{1}y_{1}-x_{2}y_{2}) \\ H_{1}&:=K\rho y_{1}\left[\frac{\tilde{h}_{2}}{2} \left(\sigma+\frac{1}{\sigma}\right)x_{1}^{2}+\frac{\tilde{h}_{2}}{2}\left( \frac{1}{\sigma}-\sigma\right)x_{2}^{2}-\frac{\tilde{h}_{2}}{\sigma}x_{1}x_{2 }-i\tilde{D}_{2,1}\sigma x_{1}x_{2}^{2}+i\tilde{D}_{2,1}\sigma x_{1}^{3} \right]\\ &\quad+K\rho y_{2}\left[\frac{\tilde{h}_{2}}{2}\left(\sigma+ \frac{1}{\sigma}\right)x_{2}^{2}+\frac{\tilde{h}_{2}}{2}\left(\frac{1}{\sigma }-\sigma\right)x_{1}^{2}-\frac{\tilde{h}_{2}}{\sigma}x_{1}x_{2}+i\tilde{D}_ {2,1}\sigma x_{1}^{2}x_{2}-i\tilde{D}_{2,1}\sigma x_{2}^{3}\right]\end{split}. \tag{38}\]
As is common in perturbation theory, we ask whether it is possible to find a canonical, \(\lambda\)-close to the identity transformation of variables \((\mathbf{x},\mathbf{y})=\mathcal{T}(\mathbf{X},\mathbf{Y})\) apt to "remove" the contribution of \(H_{1}\), i.e. such that
\[H\circ\mathcal{T}=H_{0}(\mathbf{Y},\mathbf{X})+O(\lambda^{2}). \tag{39}\]
For this purpose, by invoking the well known Gröbner Exchange Theorem, see e.g. [10], such a transformation will be determined by requiring that
\[\exp(\mathcal{L}_{\lambda\chi})H=H_{0}+O(\lambda^{2}),\]
i.e. the well known first order homological equation
\[\{\chi,H_{0}\}=H_{1}, \tag{40}\]
where the generating function \(\chi\) will be sought of the form
\[\chi(\boldsymbol{y},\boldsymbol{x})=\boldsymbol{y}\cdot\boldsymbol{\mathcal{ C}}(\boldsymbol{x}). \tag{41}\]
This choice is suggested by a general property when dealing with Hamiltonians obtained via the above described phase space extension, see [13] for a proof. In this particular case, we will look for \(\mathcal{C}_{j}\) as non-homogeneous polynomials of the form
\[\mathcal{C}_{j}(\boldsymbol{x}):=\sum_{|\boldsymbol{\nu}|=2,3}c^{(j)}_{ \boldsymbol{\nu}}\boldsymbol{x}^{\boldsymbol{\nu}}, \tag{42}\]
where \(\boldsymbol{\nu}\in\mathbb{N}^{2}\), \(|\nu|:=\nu_{1}+\nu_{2}\), \(\boldsymbol{x}^{\boldsymbol{\nu}}:=x_{1}^{\nu_{1}}x_{2}^{\nu_{2}}\) and \(c^{(j)}_{\boldsymbol{\nu}}\) are complex-valued unknown coefficients to be determined. By using (42) in (41), then substituting in (40) one finds
\[\begin{split}-c^{(1)}_{0,2}&=c^{(2)}_{2,0}=i\tilde{h}_{2}K\rho(\sigma^{2}-1)/(6\sigma\omega)\\ -c^{(1)}_{2,0}&=c^{(2)}_{0,2}=i\tilde{h}_{2}K\rho(\sigma^{2}+1)/(2\sigma\omega)\\ -c^{(1)}_{1,1}&=c^{(2)}_{1,1}=i\tilde{h}_{2}K\rho/(\sigma\omega)\end{split} \tag{43}\]
and
\[\begin{split} c^{(1)}_{3,0}&=c^{(2)}_{0,3}=c^{(1)} _{1,2}=c^{(1)}_{2,1}=\tilde{D}_{2,1}K\rho\sigma/(2\omega)\\ c^{(1)}_{3,0}&=c^{(2)}_{0,3}=0\end{split} \tag{44}\]
The clear symmetry relations amongst these coefficients are related to the well known property \(x_{1}=\bar{x}_{2}\) (here \(\bar{z}\) denotes the complex conjugate of \(z\in\mathbb{C}\)), which is typical of the coordinate maps (36). In conclusion, if \(O(\lambda^{2})\) terms are disregarded, the normalising transformation reads as
\[\boldsymbol{x}=\boldsymbol{\mathcal{N}}_{[\leq 1]}(\boldsymbol{X}):= \boldsymbol{X}+\lambda\boldsymbol{\mathcal{C}}(\boldsymbol{X}). \tag{45}\]
If the above mentioned Gröbner Theorem is not used, checking that (45) satisfies (39) is just a matter of patience.
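That patience can be delegated to a computer algebra system. The sketch below (assuming `sympy` is available) builds \(\chi\) as in (41)-(42), imposes the homological equation (40) and solves the resulting linear system for the coefficients, reproducing (43)-(44); the resonant coefficients \(c^{(1)}_{2,1}\) and \(c^{(2)}_{1,2}\) never enter the equations and can be set to zero.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
w, K, rho, h2, D21, sig = sp.symbols('omega K rho h2 D21 sigma', positive=True)

H0 = sp.I * w * (x1 * y1 - x2 * y2)
# H1 from (38)
P1 = K*rho*(h2/2*(sig + 1/sig)*x1**2 + h2/2*(1/sig - sig)*x2**2
            - h2/sig*x1*x2 - sp.I*D21*sig*x1*x2**2 + sp.I*D21*sig*x1**3)
P2 = K*rho*(h2/2*(sig + 1/sig)*x2**2 + h2/2*(1/sig - sig)*x1**2
            - h2/sig*x1*x2 + sp.I*D21*sig*x1**2*x2 - sp.I*D21*sig*x2**3)
H1 = y1 * P1 + y2 * P2

# chi = y . C(x), with C_j a polynomial of degrees 2 and 3, cf. (41)-(42)
monos = [x1**a * x2**b for a in range(4) for b in range(4) if 2 <= a + b <= 3]
c1 = sp.symbols(f'c1_0:{len(monos)}')
c2 = sp.symbols(f'c2_0:{len(monos)}')
chi = (y1 * sum(c*m for c, m in zip(c1, monos))
       + y2 * sum(c*m for c, m in zip(c2, monos)))

def pb(f, g):
    """Canonical Poisson bracket in the extended phase space."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in [(x1, y1), (x2, y2)])

# Homological equation (40): match the coefficients of all monomials
eqs = sp.Poly(sp.expand(pb(chi, H0) - H1), x1, x2, y1, y2).coeffs()
unknowns = [s for s in c1 + c2 if any(e.has(s) for e in eqs)]
sol = sp.solve(eqs, unknowns, dict=True)[0]
print(sol)   # reproduces (43)-(44); absent (resonant) coefficients stay free
```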
**Remark 6.2**.: _Interestingly enough, the variables \(\boldsymbol{y}\) and \(\boldsymbol{Y}\) do not appear in (45), which consistently becomes a normalising transformation in the pair \((\boldsymbol{x},\boldsymbol{X})\) alone for the original system (37)._
It is a simple consequence of the construction above that if \(\boldsymbol{X}(t)\) is a solution to the normalised system (up to \(O(\lambda)\)), then \(\boldsymbol{x}(t):=\boldsymbol{\mathcal{N}}_{[\leq 1]}(\boldsymbol{X}(t))\) is a solution to the original system (up to \(O(\lambda)\)). On the other hand, the integration of the normalised system is immediate
\[X_{1,2}(t)=(X_{a}\pm iX_{b})e^{\pm it\omega},\]
where
\[X_{a}:=\Re X_{1}(0),\qquad X_{b}:=\Im X_{1}(0), \tag{46}\]
(recall that \(X_{1}=\bar{X}_{2}\)). Hence, by using (36), one gets
\[w_{1}(t) =2\sigma(X_{a}\cos(t\omega)-X_{b}\sin(t\omega))+\lambda\omega^{-1}K \rho\left\{\tilde{D}_{2,1}\sigma^{2}(X_{a}^{2}+X_{b}^{2})[X_{a}\cos(t\omega)-X_{ b}\sin(t\omega)]\right. \tag{47}\] \[\left.+(2/3)\tilde{h}_{2}(\sigma^{2}+2)[2X_{a}X_{b}\cos(2t\omega)- (X_{b}^{2}-X_{a}^{2})\sin(2t\omega)]\right.\] \[\left.-\tilde{D}_{2,1}\sigma^{2}[X_{a}(3X_{b}^{2}-X_{a}^{2})\cos( 3t\omega)-X_{b}(X_{b}^{2}-3X_{a}^{2})\sin(3t\omega)]\right\}\] \[w_{2}(t) =2(X_{b}\cos(t\omega)+X_{a}\sin(t\omega))+\lambda\omega^{-1}K \rho\left\{-\tilde{D}_{2,1}\sigma(X_{a}^{2}+X_{b}^{2})[X_{b}\cos(t\omega)+X_{ a}\sin(t\omega)]\right.\] \[+(2/3)\tilde{h}_{2}(2\sigma^{2}+1)[(X_{b}^{2}-X_{a}^{2})\cos(2t \omega)+2X_{a}X_{b}\sin(2t\omega)]\] \[\left.-\tilde{D}_{2,1}\sigma[X_{b}(X_{b}^{2}-3X_{a}^{2})\cos(3t \omega)+X_{a}(3X_{b}^{2}-X_{a}^{2})\sin(3t\omega)]-2\tilde{h}_{2}(X_{a}^{2}+X_ {b}^{2})/\sigma\right\}\]
Clearly, the functions above provide the required approximated solution to a given Cauchy problem for (33), once the quantities \(X_{a,b}\) have been determined from the initial condition \(\mathbf{w}_{r}(0)\) (which will be specified later on). More precisely, by (36) and (45) one has
\[\mathbf{X}(0)=\mathbf{\mathcal{N}}_{[\leq 1]}^{-1}(\mathbf{C}^{-1}\mathbf{w}_{r}(0)), \tag{48}\]
where we recall that, if \(O(\lambda^{2})\) terms are disregarded, one simply has \(\mathbf{\mathcal{N}}_{[\leq 1]}^{-1}(\mathbf{x})=\mathbf{x}-\lambda\mathbf{\mathcal{C}}(\mathbf{x})\). The values of \(X_{a,b}\) are finally determined by recalling (46). These quantities determine the required \(\mathbf{w}_{r}^{[\leq 1]}(t)\) via (47).
**Remark 6.3**.: _Clearly, the normalisation order could be increased by generalising the above described procedure, for instance by using the Lie Series or Lie Transform methods, see e.g. [10]. The Hamiltonian formulation plays a key role from this point of view._
_As is well known, the computations get dramatically more involved as the normalisation order is increased._
Let us now notice that at the zero-th order in \(\lambda\), (48) yields
\[(X_{a},X_{b})=2^{-1}(\sigma^{-1}w_{1}(0),w_{2}(0)),\]
hence, from (47), one gets
\[\mathbf{w}_{r}^{[0]}(t)=(w_{1}(0)\cos(\omega t)-w_{2}(0)\sigma\sin(\omega t),w_{1} (0)\sigma^{-1}\sin(\omega t)+w_{2}(0)\cos(\omega t)).\]
As \(w_{1}(0)\) will be assumed to be negative and bounded away from zero, the first component can be written as \(\mathcal{A}_{m}\cos(\omega t+\phi)\), where \(\phi:=\arctan(\sigma w_{2}(0)/w_{1}(0))\) and
\[\mathcal{A}_{m}:=w_{1}(0)\sqrt{1+(\sigma w_{2}(0)/w_{1}(0))^{2}}<0. \tag{49}\]
Hence, we shall define \(T_{tr}\) as the time necessary for \(w_{r,1}^{[0]}(t)\) to reach \(-\mathcal{A}_{m}-K^{2}\). This yields
\[T_{tr}:=\omega^{-1}\left[\arccos(-1-K^{2}/\mathcal{A}_{m})-\arctan(\sigma w_ {2}(0)/w_{1}(0))\right]. \tag{50}\]
It is immediate to realise that this quantity is correctly defined as we shall only consider cases in which \(K^{2}\ll|w_{1}(0)|\leq|\mathcal{A}_{m}|\). Moreover, recalling (11), it is easy to check that
\[T_{tr}\leq T_{tr}^{+}. \tag{51}\]
In fact, \(T_{tr}^{+}\) is attained in the (unrealistic, as \(D_{2,1}>0\)) case \(\mathbf{w}_{r}(0)=(-1,0)\), i.e. \(\gamma=1\). However, as is evident from (19) and (22), the expulsion stage acts by increasing \(w_{2}(t)\) (and hence decreasing \(w_{1}(t)\)), and this clearly reduces \(T_{tr}\).
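For assumed, illustrative values, the transfer time (50) can be evaluated directly:

```python
import numpy as np

# Assumed illustrative values in the transfer-stage notation
D31, D32, K = 0.98, 0.95, 0.1
sigma = np.sqrt(D32 / D31)
omega = K * np.sqrt(D32 * D31)       # eq. (10)

w0 = np.array([-0.97, 0.05])         # w_r(0), with w1(0) < 0 bounded away from 0

A_m = w0[0] * np.sqrt(1 + (sigma * w0[1] / w0[0])**2)   # eq. (49), negative
phi = np.arctan(sigma * w0[1] / w0[0])

# eq. (50): first time w_1^{[0]}(t) = A_m cos(omega t + phi) reaches -A_m - K^2
T_tr = (np.arccos(-1 - K**2 / A_m) - phi) / omega
print("T_tr =", T_tr, "<= T_tr^+ =", np.pi / omega)     # cf. (51)
```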
Once the control has been constructed, it is important to obtain an estimate of the measure of the initial conditions around \(\mathbf{w}(0)\equiv(\mathbf{w}_{r}(0),-K)\) which are "safely transported" into a neighbourhood of the target point \(\mathbf{w}(T_{tr})\). In order to achieve this, we need to bound the difference between the solutions of two different systems. The first one is given by (33) (i.e. the \(O(\lambda)\) truncation of (13)), whose solution is known by Prop. 6.1. By setting \(\mathbf{w}:=(\mathbf{w}_{r}^{[\leq 1]},-K)\) and disregarding \(O(\lambda^{2})\), this reads as
\[\dot{\mathbf{w}}=:\mathbf{A}(-K)\mathbf{w}+\lambda\mathbf{\mathcal{F}}(\mathbf{w};\mathbf{w}), \tag{52}\]
being subject to the prefixed initial condition \(\mathbf{w}(0)\). The second system is the full, controlled one, i.e. obtained from (8) by setting \(\tilde{\beta}(t):=\hat{\beta}_{lat}(\mathbf{w}^{[\leq 1]}(t))\). Its \(O(\lambda)\) truncation can be written as
\[\dot{\mathbf{u}}=\mathbf{A}(u_{3})\mathbf{u}+\lambda\mathbf{\mathcal{F}}(\mathbf{u};\mathbf{w}), \tag{53}\]
with an "uncertain" initial condition around \(\mathbf{w}(0)\). Let us denote by \(\Phi_{tr}^{t}\) the flow of the controlled system (53). An example of the constructed CQL solutions and the corresponding control is shown in fig. 3.
**Lemma 6.1** (\(O(\lambda)\) cut-off).: _Let us define the following quantities_
\[\mathcal{M}_{tr}:=1+\max_{|\mathbf{u}|\leq 2}||\mathbf{D}_{\mathbf{u}}\boldsymbol{\mathcal{F}}(\mathbf{u};\mathbf{w})||_{\infty},\qquad\mathcal{K}_{\mathbf{w}}:=1+\max_{t\in[0,T_{tr}]}|\mathbf{w}_{r}^{[\leq 1]}(t)|.\]
_Then, for all \(\lambda\in(0,\lambda_{tr}]\), with_
\[\lambda_{tr}:=(4\pi^{2}\mathcal{K}_{\mathbf{w}}\mathcal{M}_{tr})^{-1}\omega^{2}, \tag{54}\]
_and all \(\rho_{tr}^{+}>0\), there exists \(\rho_{tr}^{-}=\rho_{tr}^{-}(\rho_{tr}^{+})\), such that the following property holds_
\[\Phi_{tr}^{T_{tr}}(\mathfrak{B}_{\lambda\rho_{tr}^{-}}(\mathbf{w}(0)))\subset\mathfrak{B}_{\lambda\rho_{tr}^{+}}(\mathbf{w}(T_{tr})). \tag{55}\]
**Remark 6.4**.: _The \(O(\lambda^{2})\) terms in (53) can be bounded by using the Lie series theory, and this requires some extra work. However, it is possible to check that the information carried by them can be thought of as "negligible". More precisely, by facing slightly more cumbersome estimates, one could repeat the proof below by including these \(O(\lambda^{2})\) contributions, and verify that those terms would imply an additional \(O(\lambda)\) term in (54). Hence, the above statement still holds for the "non cut-off" problem at a price of a possible (but "small") further restriction of the threshold \(\lambda_{tr}\)._
Proof.: This proof has some similarities with the one of Lem. 5.1. Let us now define \(\mathbf{\eta}:=\mathbf{u}-\mathbf{w}\), hence, by (52) and (53) one gets
\[\dot{\mathbf{\eta}}=\mathbf{A}(-K)\mathbf{\eta}+\mathbf{A}(\eta_{3})(\mathbf{w}+\mathbf{\eta})+\lambda [\mathbf{\mathcal{F}}(\mathbf{w}+\mathbf{\eta};\mathbf{w})-\mathbf{\mathcal{F}}(\mathbf{w};\mathbf{w})]. \tag{56}\]
It is evident from the latter and the structure of \(\mathbf{A}\) that \(\dot{\eta}_{3}\) has an \(O(\lambda)\) magnitude, hence it is appropriate to consider the new variable \(\eta_{3}=:\lambda\zeta_{3}\). By substituting it in (56), one gets that the \(O(1)\) evolution of \(\eta_{1,2}\) is given by the linear system \(\dot{\mathbf{\eta}}_{r}=\mathbf{L}\mathbf{\eta}_{r}\), where \(\mathbf{\eta}_{r}:=(\eta_{1},\eta_{2})\).
Let us now introduce \(d_{r}^{\pm},d_{3}^{\pm}\in(0,1]\), which will play the role of real parameters to be determined, then suppose for a moment that the following bound holds
\[|\zeta_{3}|\leq d_{3}^{+}. \tag{57}\]
By multiplying both sides of the first two components of equation (56) by the "integrating factor" \(\exp(-\mathbf{L}t)\), integrating and finally considering the absolute values, one obtains
\[|\mathbf{\eta}_{r}(t)| \leq|\mathbf{\eta}_{r}(0)|+\lambda\int_{0}^{t}\left\{|\mathbf{A}(\zeta_{3})(\mathbf{w}+\mathbf{\eta})|+\mathcal{M}_{tr}|\mathbf{\eta}|\right\}ds\] \[\leq|\mathbf{\eta}_{r}(0)|+\lambda d_{3}^{+}\mathcal{K}_{\mathbf{w}}t+\lambda(d_{3}^{+}+\mathcal{M}_{tr})\int_{0}^{t}|\mathbf{\eta}_{r}(s)|ds\]
where we have used that \(|\exp(\mathbf{L}\cdot)\mathbf{z}|=|\mathbf{z}|\), for all \(\mathbf{z}\in\mathbb{R}^{2}\), as \(\mathbf{L}\) has purely imaginary eigenvalues only, and the bound
\[|\mathcal{F}_{i}(\mathbf{w}+\mathbf{\eta};\mathbf{w})-\mathcal{F}_{i}(\mathbf{w};\mathbf{w})|\leq |\mathbf{\mathcal{F}}(\mathbf{w}+\mathbf{\eta};\mathbf{w})-\mathbf{\mathcal{F}}(\mathbf{w};\mathbf{w})| \leq\mathcal{M}_{tr}|\mathbf{\eta}|. \tag{58}\]
Hence, by using the Gronwall lemma,
\[|\mathbf{\eta}_{r}(t)|\leq\left[|\mathbf{\eta}_{r}(0)|+\lambda d_{3}^{+}\mathcal{K}_ {\mathbf{w}}t\right]e^{\lambda(d_{3}^{+}+\mathcal{M}_{tr})t}. \tag{59}\]
Recalling that \(T_{tr}\leq\pi/\omega\) by (51), by using assumption (54) one gets \(\lambda(1+\mathcal{M}_{tr})t\leq 2\lambda\mathcal{M}_{tr}t\leq\log(2)\) for all \(t\leq T_{tr}\) as \(\mathcal{M}_{tr}\geq 1\). Hence, in order to obtain
\[|\mathbf{\eta}_{r}(t)|\leq\lambda d_{r}^{+}, \tag{60}\]
it is sufficient to require \(|\mathbf{\eta}_{r}(0)|\leq\lambda d_{r}^{-}\), and then
\[d_{r}^{-}\leq d_{r}^{+}/4,\qquad d_{3}^{+}\leq\omega d_{r}^{+}/(\pi\mathcal{K }_{\mathbf{w}}). \tag{61}\]
Let us now consider the third equation of (56), i.e. \(\dot{\zeta}_{3}=\mathcal{F}_{3}(\mathbf{w}+\mathbf{\eta};\mathbf{w})-\mathcal{F}_{3}(\mathbf{w};\mathbf{w})\). The latter and bound (58) yield \(|\zeta_{3}(t)|\leq|\zeta_{3}(0)|+\int_{0}^{t}\mathcal{M}_{tr}\left(|\mathbf{\eta}_{r}(s)|+\lambda|\zeta_{3}(s)|\right)ds\). Hence, (60) yields
\[|\zeta_{3}(t)|\leq|\zeta_{3}(0)|+\lambda\mathcal{M}_{tr}d_{r}^{+}t+\lambda \mathcal{M}_{tr}\int_{0}^{t}|\zeta_{3}(s)|ds.\]
By using the Gronwall lemma once again and assumption (54), which implies, _a fortiori_, \(\exp(\lambda\mathcal{M}_{tr}t)<2\) for all \(t\in[0,T_{tr}]\), we obtain
\[|\zeta_{3}(t)|\leq 2\left[|\zeta_{3}(0)|+\lambda\mathcal{M}_{tr}d_{r}^{+}\pi/ \omega\right].\]
Hence, by setting \(|\zeta_{3}(0)|\leq d_{3}^{-}\), a sufficient condition for the property (57) to hold true, is
\[d_{3}^{-}\leq d_{3}^{+}/4,\qquad\lambda d_{r}^{+}\leq\omega d_{3}^{+}/(4\pi \mathcal{M}_{tr}). \tag{62}\]
It is easy to realise that the second conditions appearing in (61) and in (62) hold in the region of the plane \((d_{r}^{+},d_{3}^{+})\) given by \(d_{3}^{+}\in[4\omega^{-1}\pi\mathcal{M}_{tr}\lambda d_{r}^{+},\pi^{-1}\mathcal{K}_{\mathbf{w}}^{-1}\omega d_{r}^{+}]\), for all \(d_{r}^{+}\in[0,1]\). Such a region is correctly defined by virtue of (54). Hence, for any \(d_{r}^{+}\), we choose the midpoint of the above mentioned interval, that is,
\[d_{3}^{+}:=\Theta d_{r}^{+}, \tag{63}\]
where \(\Theta:=2^{-1}(\omega^{-1}\lambda 4\pi\mathcal{M}_{tr}+\pi^{-1}\mathcal{K}_{\mathbf{w}}^{-1}\omega)\). Note that \(\Theta<1\) (as, in particular, \((2\pi)^{-1}\mathcal{K}_{\mathbf{w}}^{-1}\omega<1\) by construction). This implies \(d_{3}^{+}<d_{r}^{+}\). Hence, as \(|\mathbf{\eta}(0)|\geq\max\{|\mathbf{\eta}_{r}(0)|,|\eta_{3}(0)|\}\), the first conditions of (61) and (62) hold if one requires
\[\lambda(d_{3}^{+}/4)\geq|\mathbf{\eta}(0)|. \tag{64}\]
On the other hand, bounds (57) and (60) imply, for all \(t\leq T_{tr}\),
\[|\boldsymbol{\eta}(t)|\leq|\boldsymbol{\eta}_{r}(t)|+\lambda|\zeta_{3}(t)|\leq \lambda\left[1+\Theta\right]d_{r}^{+}. \tag{65}\]
In conclusion, given \(\rho_{tr}^{+}>0\), (65) and (64) imply that, by defining
\[\rho_{tr}^{-}:=[4(1+\Theta)]^{-1}\Theta\rho_{tr}^{+}, \tag{66}\]
and choosing \(|\boldsymbol{\eta}(0)|\leq\lambda\rho_{tr}^{-}\) as in (55), the proof is complete.
Figure 3: A case study to show the quasi-latitudinal features of the constructed solutions during the “transfer” stage (the “expulsion” stage is not reported for the sake of clarity). In panel (a) the behaviour of \(u_{3}(t)\) is shown for the values \(\lambda_{(1)}=0.0055\), \(\lambda_{(2)}=0.005\) and \(\lambda_{(3)}=0.001\). The remaining panels show the case with \(\lambda=\lambda_{(3)}\). Panel (b) shows the current realising the control, whilst panels (c) and (d) represent the trajectory of the system controlled with such a current in the space \(\boldsymbol{u}\) and stereographic projection \(\boldsymbol{w}\), respectively. In particular, \(\boldsymbol{w}^{\pm}\) represent the stereographic projection of \(\boldsymbol{s}^{\pm}\). The remaining parameters are reported in Appendix C.
## 7 Attraction
As anticipated, the aim of this section is to study the dynamics in a neighbourhood of the equilibrium \(\mathbf{s}^{+}\) when the current is switched off, i.e. \(\beta=0\), and show that \(\mathbf{s}^{+}\) is an attractive point for the system, providing in addition an estimate for the basin of attraction. For this purpose, we shall proceed in a non-perturbative fashion and consider system (5). This is motivated by the fact that the attractive behaviour is characterised by \(\alpha\), whilst \(\lambda\) does not play a particularly relevant role in this case. As usual, we shall start by considering the standard translation
\[\mathbf{u}=\mathbf{U}+\mathbf{s}^{+}, \tag{67}\]
and setting \(\beta=0\), \(h_{2}=:-D_{2,1}\Omega\) with \(\Omega>0\), and \(\gamma:=\sqrt{1-\Omega^{2}}\), in such a way that (5) reads as
\[\dot{\mathbf{U}}=\mathbf{\mathcal{G}}_{1}(\mathbf{U})+\alpha\mathbf{\mathcal{G}}_{2}(\mathbf{U}), \tag{68}\]
where \(\mathbf{\mathcal{G}}_{1}(\mathbf{U}):=\left(D_{3,2}U_{3}(U_{2}-\Omega)-D_{2,1}\Omega U_{3},-D_{3,1}U_{3}(\gamma+U_{1}),D_{2,1}U_{2}(\gamma+U_{1})\right)^{\top}\) and \(\mathbf{\mathcal{G}}_{2}\) is defined as a consequence. We shall denote with \(\Phi^{t}_{a}\) the corresponding phase flow. Moreover, let us define \(\tilde{\Psi}(\mathbf{U}):=\Psi(\mathbf{u})|_{\mathbf{u}=\mathbf{U}+\mathbf{s}^{+}}\).
In this setting, we can state the following
**Lemma 7.1**.: _Define the following quadratic function_
\[\mathfrak{W}(U_{2},U_{3}):=2^{-1}(D_{2,1}U_{2}^{2}+D_{3,1}U_{3}^{2}) \tag{69}\]
_and suppose_
\[3\Omega^{2}\geq 2D_{2,1}/D_{3,1},\qquad 16\sqrt{D_{2,1}/D_{3,1}}\leq\gamma. \tag{70}\]
_Then, for all \(\alpha>0\), and all \(\mathcal{E}:=1+\delta_{a}\), with_
\[|\delta_{a}|\leq(\gamma/4)^{2}, \tag{71}\]
_all the solutions of (68) starting in the set_
\[\mathcal{B}_{\mathcal{E}}:=\{\mathbf{U}\in\mathbb{R}^{3}:|U_{1}|\leq\gamma/4, \quad\mathfrak{W}(U_{2},U_{3})\leq(D_{2,1}\gamma)^{2}/(32D_{3,1}\Omega^{2}) \}\cap\{\tilde{\Psi}(\mathbf{U})=\mathcal{E}\},\]
_satisfy the following property_
\[\mathbf{U}_{\infty}(\delta_{a}):=\lim_{t\to+\infty}\Phi^{t}_{a}\left(\mathcal{B} _{\mathcal{E}}\right)=(-\gamma+\sqrt{\gamma^{2}+\delta_{a}},0,0) \tag{72}\]
_and_
\[|\mathbf{U}_{\infty}(\delta_{a})|\leq|\delta_{a}|/(2\gamma). \tag{73}\]
_In particular, those trajectories with \(\mathcal{E}=1\) are asymptotic to \(\mathbf{U}=\mathbf{0}\) (and hence \(\mathbf{u}(t)\to\mathbf{s}^{+}\)). Furthermore, the projection of \(\mathcal{B}_{\mathcal{E}}\) on the plane \((U_{2},U_{3})\) contains the disk centred at the origin of radius_
\[r_{sm}:=(4|\Omega|D_{3,1})^{-1}\gamma D_{2,1}. \tag{74}\]
**Remark 7.1**.: _The importance of the last part of the statement lies in the fact that, by construction, the CQL solution which realises the transfer ends up at a point located in the vicinity of \(\mathbf{U}=(0,0,-K)\). This will be made precise later (see, for instance, (79)). However, it is important to stress for the moment that this is necessary, and we shall choose \(2K\leq r_{sm}\) later on, in order to ensure that such a point is suitably attracted once the current is switched off._
A numerical validation of Lem. 7.1 is reported in fig. 4.
Proof.: The proof relies on the theory of Lyapunov functions and related tools for the estimation of basins of attraction, see e.g. [10] for a comprehensive description. The proof will be carried out in the variables \(U_{2},U_{3}\), since the behaviour of the first variable follows from the conservation law \(\Psi(\mathbf{u})=\mathrm{const.}\), see (7).
Let us start by noticing that (69) satisfies \(\mathfrak{W}(0,0)=0\) and \(\mathfrak{W}(U_{2},U_{3})>0\) for all \((U_{2},U_{3})\neq\mathbf{0}\). Furthermore, it is immediate to check that its derivative along the solutions of the "undamped" (68), i.e. with \(\alpha=0\), satisfies
\[\dot{\mathfrak{W}}|_{\alpha=0}:=\nabla_{(U_{2},U_{3})}\mathfrak{W}\cdot\mathbf{ \mathcal{G}}_{1}(\mathbf{U})=0,\]
hence \(\mathfrak{W}\) is a (non-strict) Lyapunov function for the undamped (partial) system, showing that it is stable. However, our aim is to show that the damped system satisfies \(\dot{\mathfrak{W}}<0\) on a suitable set, proving in this way the (stronger) asymptotic stability.
As \(D_{2,1}\ll D_{3,1}\) by assumption, the region we are considering to prove the negative definiteness of \(\dot{\mathfrak{W}}\) will be conveniently chosen as non-isotropic. More precisely, we shall proceed by introducing in \(\dot{\mathfrak{W}}\) the following variables transformation
\[\mathbf{U}=(\mu x,\varepsilon(\gamma D_{2,1})^{-1}\cos\theta,\varepsilon D_{3,1}^ {-1}\sin\theta),\]
with \((x,\theta)\in[-1,1]\times[0,2\pi]\) and \(\varepsilon,\mu>0\) to be determined. The latter gives
\[\begin{split}\dot{\tilde{\mathfrak{W}}}&=-\alpha \varepsilon^{2}\left[1+2\mu x(\sin^{2}\theta+\gamma^{-1}\cos^{2}\theta)+(\mu x )^{2}(\sin^{2}\theta+\gamma^{-2}\cos^{2}\theta)+\right.\\ &\left.-2\Omega\varepsilon\gamma^{-1}(D_{2,1}^{-1}-D_{3,1}^{-1}) \cos\theta\sin^{2}\theta+\varepsilon^{2}\gamma^{-2}(D_{2,1}D_{3,1})^{-2}(D_{ 3,1}-D_{2,1})^{2}\sin^{2}\theta\cos^{2}\theta\right],\end{split} \tag{75}\]
where \(\tilde{\mathfrak{W}}\) stands for \(\mathfrak{W}\) in the new set of variables. It is evident that for sufficiently small \(\varepsilon,\mu\) and all \(\alpha>0\), the latter is strictly negative. Our goal is now to find a bound for the thresholds \(\varepsilon_{0},\mu_{0}\) in such a way that this property persists for all \(\varepsilon\leq\varepsilon_{0}\) and \(\mu\leq\mu_{0}\).
Let us firstly deal with \(O(\varepsilon^{2})\) coefficients. For this purpose, it is immediate to check that
\[1+2\mu x(\sin^{2}\theta+\gamma^{-1}\cos^{2}\theta)+(\mu x)^{2}(\sin^{2}\theta+ \gamma^{-2}\cos^{2}\theta)\geq 1-2\mu/\gamma.\]
Hence the latter will be, say, greater than \(1/2\), for all \(\mu\leq\mu_{0}:=\gamma/4\).
As for the terms of \(O(\varepsilon^{3})\), by recalling (6), one has
\[(D_{2,1}^{-1}-D_{3,1}^{-1})\cos\theta\sin^{2}\theta\leq D_{2,1}^{-1}.\]
Finally, \(O(\varepsilon^{4})\) contributions are clearly strictly positive. In conclusion one has
\[\dot{\tilde{\mathfrak{W}}}<\alpha\varepsilon^{2}\left[2\Omega\varepsilon/(D_ {2,1}\gamma)-1/2\right].\]
The latter implies that the derivative of \(\tilde{\mathfrak{W}}\) along the solutions will be strictly negative by choosing
\[\varepsilon_{0}:=D_{2,1}\gamma/(4\Omega).\]
This implies that, in the original set of variables \(\mathbf{U}\), the elliptic cylindrical region on which \(\dot{\mathfrak{W}}\) is strictly negative, is given by
\[\mathcal{C}_{el}:=\{\mathbf{U}\in\mathbb{R}^{3}:|U_{1}|\leq\gamma/4,\quad U_{2}^{ 2}+(U_{3}D_{3,1}/(D_{2,1}\gamma))^{2}=(4\Omega)^{-2}\}.\]
In order to obtain the required estimate for the basin of attraction, we need to find the largest value \(\mathfrak{W}^{*}\) for which the level curve \(\mathfrak{W}(U_{2},U_{3})=\mathfrak{W}^{*}\) is entirely contained in the projection of \(\mathcal{C}_{el}\) on the \((U_{2},U_{3})\) plane. This is a simple problem of constrained optimisation in which \(\mathfrak{W}\) has to be minimised on the ellipse \(U_{2}^{2}+(U_{3}D_{3,1}/(D_{2,1}\gamma))^{2}=(4\Omega)^{-2}\). It is easy to check that the required minimum is attained at
\[(U_{2}^{*},U_{3}^{*}):=(0,\pm D_{2,1}\gamma/(4D_{3,1}\Omega)),\]
implying \(\mathfrak{W}^{*}:=\mathfrak{W}(U_{2}^{*},U_{3}^{*})\equiv(D_{2,1}\gamma)^{2}/(32D_{3,1}\Omega^{2})\). It is now necessary to check that \(|U_{1}(t)|\leq\gamma/4\) for all \(t\), ensuring that \(\boldsymbol{U}(t)\in\mathcal{C}_{el}\) and then \(\dot{\mathfrak{W}}<0\) for all \(t\).
For this purpose, let us observe that the level \(\tilde{\Psi}(\boldsymbol{U})=\mathcal{E}\) is equivalent to
\[(U_{1}+\gamma)^{2}+(U_{2}-\Omega)^{2}+U_{3}^{2}=1+\delta_{a}\,.\]
The latter defines a \(\delta_{a}\)-family of surfaces in which \(U_{1}\) can be written as a graph (surface) over the other two variables in a suitable neighbourhood of the origin, as follows
\[U_{1}=U_{1}(U_{2},U_{3};\delta_{a})\equiv-\gamma+\sqrt{1+\delta_{a}-(U_{2}- \Omega)^{2}-U_{3}^{2}}, \tag{76}\]
and such that \(U_{1}(0,0;0)=0\). The sign has been chosen according to the fact that we are describing the portion of the sphere located near \(\boldsymbol{s}^{+}\). The task consists of showing that, under the required condition for \(\delta_{a}\), any point \((U_{2},U_{3})\in\mathcal{B}_{\mathcal{E}}\) yields \(|U_{1}|\leq\gamma/4\). Given the structure of (76), it will be sufficient to show \(U_{1}\geq-\gamma/4\), the condition \(U_{1}\leq\gamma/4\) being a consequence. From (76), one has \(U_{1}\geq-\gamma+\sqrt{-|\delta_{a}|+\gamma^{2}-A_{a}}\), where \(A_{a}:=2\Omega|U_{2}|+U_{2}^{2}+U_{3}^{2}\). It is immediate to verify that, by choosing \(\delta_{a}\) as in (71), the desired condition is obtained if
\[A_{a}\leq(3/8)\gamma^{2}. \tag{77}\]
On the other hand, in \(\mathcal{B}_{\mathcal{E}}\) one has
\[|U_{2}|\leq[\gamma/(4\Omega)]q_{a},\qquad U_{2}^{2}+U_{3}^{2}\leq[\gamma/(4 \Omega)]^{2}q_{a}^{2},\]
where \(q_{a}:=\sqrt{D_{2,1}/D_{3,1}}\). This implies that condition (77) holds if
\[4q_{a}+2^{-1}\Omega^{-2}q_{a}^{2}\gamma\leq\gamma.\]
A sufficient condition for the latter to hold is that, for instance, \(4q_{a}\leq\gamma/4\) and \(2^{-1}\Omega^{-2}q_{a}^{2}\leq 3/4\). They are equivalent to the first and the second of (70), respectively. Equation (72) follows directly from (76) whilst bound (73) is immediate from (72) and a Taylor estimate.
As for the last statement it is sufficient to observe that the ellipse resulting from the projection of \(\mathcal{B}_{\mathcal{E}}\) clearly contains the disk whose radius is not larger than the minor semi-axis of the former, which is given by the r.h.s. of (74).
#### Proof of Thm. 4.1
Proof.: Let us consider the class of expulsion targets and currents such that \(\beta_{e},K^{2}=O(\lambda)\). In order to use Lemmata 5.1, 6.1 and 7.1 we are going to make the assumptions required by these results. First of all, given \(D_{2,1}=O(\lambda)\) and \(D_{3,1}=O(1)\), let us choose \(\Omega\) within the limitation prescribed by the first of (70); this determines a range for \(h_{2}=-D_{2,1}\Omega\). It is easy to realise that \(\Omega=O(\sqrt{\lambda})\). As a consequence, \(\gamma\sim 1-O(\lambda)\), hence we can assume without loss of generality,
that the last of (70) is satisfied as well. There is a certain amount of freedom in the choice of \(D_{3,2}\), provided that (6) holds. We anticipate that the value of \(\delta_{a}\) will be chosen, but only at the very end of the proof, according to (71), hence we shall proceed by supposing that such a condition is satisfied.
Once the possibility to use Lem. 7.1 has been guaranteed, let us examine the setting of the remaining Lemmata. As for Lem. 5.1, the values of \(r^{*},\mathcal{M}_{1,2}\) are uniquely determined; this yields \(\lambda_{e}\). The same holds for the constants \(\mathcal{M}_{tr},\mathcal{K}_{\boldsymbol{w}}\) and the threshold \(\lambda_{tr}\) of Lem. 6.1. In order to proceed with the proof we shall set \(\lambda_{0}:=\min\{\lambda_{e},\lambda_{tr}\}\), although practical applications of Lem. 6.1 may require a further restriction of such a threshold, as already discussed in Rem. 6.4.
Figure 4: The field attractivity feature within the basin \(\mathcal{B}_{\mathcal{E}}\) when the injected current \(\beta(t)\) is switched off. The (black) ellipse in the first three panels represents the set \(\mathfrak{W}(U_{2},U_{3})=(D_{2,1}\gamma)^{2}/(32D_{3,1}\Omega^{2})\) (recall that \(\boldsymbol{U}\), defined via (67), represents a coordinate system in which the equilibrium \(\boldsymbol{s}^{+}\) corresponds to the origin). Panels (a) and (b) show a trajectory starting on the boundary of \(\mathcal{B}_{\mathcal{E}}\) for \(\delta_{a}=0\) and \(\delta_{a}=0.1\), respectively. In panel (c) the projection on the \((U_{2},U_{3})\) plane of the latter trajectory is depicted. The last panel shows, as expected, the decreasing behaviour of the Lyapunov function \(\mathfrak{W}\) along the same trajectory. The full set of parameters is reported in Appendix C.
Let us now use the mentioned Lemmata to construct a suitable neighbourhood of the equilibrium \(\mathbf{s}^{-}\) which is transported in the vicinity of \(\mathbf{s}^{+}\) and then attracted by the point close to it on the corresponding energy level. The proof will be complete once (18) is validated and a precise value for \(r^{-}\) is computed.
For this purpose let us start by observing that, from (27) and (19), at the end of the expulsion stage started in \(\mathbf{u}_{c}(0)\equiv\mathbf{s}^{-}\), we get
\[\mathbf{u}_{c}(T_{e})=(-\gamma+\lambda\kappa\bar{a},-\Omega-\lambda\kappa\bar{b},- K),\]
with \(\kappa:=(2\beta_{e})^{-1}(-1+\sqrt{1-2K^{2}})\sim K^{2}/(2\beta_{e})=O(1)\) by assumption and \(T_{e}\) given by (26). Note that \(T_{e}\sim K/\beta_{e}=O(\lambda^{-1/2})\). Furthermore, \(\Phi(\mathbf{u}_{c}(T_{e}))=1+O(\lambda)\).
Let us now set \(\mathbf{w}_{r}(0):=\mathbf{u}_{c}(T_{e})\) in such a way to start the transfer stage. This determines the value of \(T_{r}\) via (50). By construction we have \(w_{1}^{[0]}(T_{tr})=-\mathcal{A}_{m}-K^{2}=\gamma+O(\lambda)=:\hat{\gamma}\) and \(w_{3}^{[0]}(T_{tr})=-K\). In order to determine \(w_{2}^{[0]}(T_{tr})\), one can use the conservation law (35), which reads as
\[\sigma^{-1}(\lambda\kappa\bar{a}-\gamma)^{2}+\sigma(\Omega+\lambda\kappa\bar{b })^{2}=\sigma^{-1}\hat{\gamma}^{2}+\sigma(w_{2}^{[0]}(T_{r}))^{2}.\]
The latter gives, recalling (21) and expanding in \(\lambda\),
\[w_{2}^{[0]}(T_{r})=-\Omega+O(\lambda). \tag{78}\]
The sign in the r.h.s. of the latter has been chosen by using the fact that before and on the first occurrence of \(w_{1}^{[0]}=-\mathcal{A}_{m}-K^{2}\), the trajectory lies in the \(w_{2}^{[0]}\)-negative half-plane. Hence, by recalling (47), we have
\[\mathbf{w}(T_{e})=(\gamma,-\Omega,-K)+O(\lambda). \tag{79}\]
Once more, the usual conservation law is easily checked as \(\Phi(\mathbf{w}(T_{e}))=1+O(\lambda)\).
Now we need to make sure that (79) is actually contained in the basin of attraction \(\mathcal{B}_{\mathcal{E}}\), in such a way that the attractivity property stated in Lem. 7.1 can be used. For this purpose, we recall the very last statement of Lem. 7.1 and set \(r_{sm}\geq(5/4)K\) (i.e. slightly bigger than \(K\) itself). Hence, (74) yields
\[\Omega^{2}\leq(5KD_{3,1})^{-2}(\gamma D_{2,1})^{2},\]
yielding, by recalling that \(\gamma^{2}=1-\Omega^{2}\), \(\Omega^{2}\leq[1+25(D_{3,1}/D_{2,1})^{2}K^{2}]^{-1}\).
The latter, if compared with the first of (70) (which provides a lower bound for \(\Omega^{2}\)), makes sense provided that \(50K^{2}D_{3,1}\leq 3\gamma^{2}D_{2,1}\). One could choose, for instance,
\[K=\bar{K}:=(\gamma/4)\sqrt{D_{2,1}/D_{3,1}},\]
then computing \(\Omega^{2}\) as the mid-point given by the two conditions
\[\Omega^{2}:=2^{-1}[(2/3)(D_{2,1}/D_{3,1})+\gamma^{2}/(25\bar{K}^{2})(D_{2,1}/D_{3,1})^{2}]=(49/75)(D_{2,1}/D_{3,1}),\]
this determines \(h_{2}\).
Hence, any suitable manifold \(\mathcal{E}=1+\delta_{a}\) contained in the cylinder \(\{\sqrt{U_{2}^{2}+U_{3}^{2}}\leq(5/4)K,\ |U_{1}|\leq\gamma/4\}\) is contained in \(\mathcal{B}_{\mathcal{E}}\). This implies that the ball \(\mathfrak{B}_{K/8}(\mathbf{w}(T_{e}))\) is contained in \(\mathcal{B}_{\mathcal{E}}\) as well. Hence, we can set \(\rho_{tr}^{+}:=K/(8\lambda)\) and \(\lambda\rho_{tr}^{-}:=[32(1+\Theta)]^{-1}\Theta K\) by (66), then use Lem. 6.1. The latter allows us to conclude that the set \(\mathfrak{B}_{\lambda\rho_{tr}^{-}}(\mathbf{w}(T_{e}))\) will evolve inside \(\mathcal{B}_{\mathcal{E}}\) under the action of the controlled phase flow \(\Phi_{tr}^{t}\).
As \(\omega=O(\lambda^{-1/2})\) because of the choice for \(K\), we have \(\Theta=O(\sqrt{\lambda})\), hence \(\lambda\rho_{tr}^{-}=O(\lambda)\). It is
now sufficient to recall Lem. 5.1 and set \(\rho_{e}:=\rho_{tr}^{-}\) to determine the value of \(r^{-}\), which is easily found via (30) as
\[r^{-}=[128\mathcal{M}_{e}(1+\Theta)]^{-1}\Theta K.\]
Similarly, we notice that \(r^{-}=O(\lambda)\).
As (73) holds for the family of points starting in \(\mathcal{B}_{\mathcal{E}}\) and parameterised by \(\delta_{a}\) in the admissible range (71), this property will be true, _a fortiori_, for the points of the set
\[\mathcal{U}_{a}:=\Phi_{tr}^{T_{tr}}\left(\Phi_{e}^{T_{e}}(\mathfrak{B}_{r^{-}} (\boldsymbol{s}^{-}))\right)\subset\mathcal{B}_{\mathcal{E}}.\]
Hence, by defining
\[\boldsymbol{u}_{\infty}:=\lim_{t\to+\infty}\Phi_{a}^{t}\left(\bigcup_{ \mathcal{E}=1+\delta_{a}}\mathcal{B}_{\mathcal{E}}\right),\]
its distance from \(\boldsymbol{s}^{+}\) is readily bounded by \(\delta_{a}\) via (73) as follows
\[|\boldsymbol{u}_{\infty}(\delta_{a})-\boldsymbol{s}^{+}|=|\boldsymbol{U}_{ \infty}(\delta_{a})|\leq\delta_{a}/(2\gamma).\]
On the other hand, by the conservation of \(\Psi(\boldsymbol{u})\), an upper bound for \(\delta_{a}\) is found by evaluating the maximum of \(|\Psi(\boldsymbol{u})-1|\) for all \(\boldsymbol{u}:=\boldsymbol{s}^{-}+\boldsymbol{u}^{\prime}\in\mathfrak{B}_{r^ {-}}(\boldsymbol{s}^{-})\) i.e. for all \(\boldsymbol{u}^{\prime}\in\mathfrak{B}_{r^{-}}(\boldsymbol{0})\). This is straightforward, as \(\Psi(\boldsymbol{s}^{-}+\boldsymbol{u}^{\prime})\leq 1+r^{-}\) by the triangle inequality, hence \(|\delta_{a}|\leq r^{-}\). This proves (18) with \(f:=(2\gamma)^{-1}\). The proof of Thm. 4.1 is now complete.
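For concreteness, the following minimal numerical sketch evaluates the constants fixed in the proof above, assuming the parameter values of Appendix C and our reading \(D_{i,j}:=D_{i}-D_{j}\) of the notation; the variable names are ours and the snippet is illustrative, not part of the original construction.

```python
import numpy as np

# Evaluating the constants chosen in the proof of Thm. 4.1 (illustrative).
lam = 0.002                               # lambda as in fig. 5
D1, D3 = 0.0411, 0.8527                   # Appendix C values
D2 = D1 + 6.51 * lam                      # perturbative setting
D21, D31 = D2 - D1, D3 - D1               # assumed reading D_{i,j} = D_i - D_j

lower = (2.0 / 3.0) * (D21 / D31)         # lower bound for Omega^2 from (70)
upper = (16.0 / 25.0) * (D21 / D31)       # leading order of the r_sm-based bound
Omega = np.sqrt(0.5 * (lower + upper))    # mid-point choice
gamma = np.sqrt(1.0 - Omega**2)

K = (gamma / 4.0) * np.sqrt(D21 / D31)    # choice K = bar{K}
h2 = -D21 * Omega                         # this determines h_2
r_sm = gamma * D21 / (4.0 * Omega * D31)  # radius in (74)

print(f"Omega = {Omega:.5f}  (indeed O(sqrt(lambda)))")
print(f"gamma = {gamma:.5f}, K = {K:.5f}, h2 = {h2:.3e}, r_sm = {r_sm:.3e}")
```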
## 8 Validation tests and conclusions
The aim of this section is to provide examples of full switching processes realised according to Thm. 4.1. To summarise, after an initial "expulsion" stage, the evolving point is "transferred"
Figure 5: Example of a full switching process. Panel (a) shows the (piece-wise) trajectory in the space \(\boldsymbol{u}\), as a sequence of an “expulsion” (black), a “transfer” (red) and finally the “attractive” stage (blue). The initial condition has been chosen as \(\boldsymbol{u}(-T_{e})=\boldsymbol{s}^{-}+\lambda(-0.1,0.05,0)\), with \(T_{e}=1.0262\). The first \(\sim 133\) time units of the constructed controlling current are reported in panel (b). The small black segment for \(t\in[-T_{e},0]\) (emphasised by the arrow), represents the current injected during the expulsion stage. We have set \(\lambda=0.002\). The full set of parameters is reported in Appendix C.
via a CQL solution in the estimated basin of attraction of the target equilibrium so that it can be "attracted" by it. An example of a full switching is reported in fig. 5.
As anticipated in sec. 3, the approach proposed here is conceptually different from the well established ballistic switching procedure, see, e.g. [dPS\({}^{+}\)15a]. From the qualitative viewpoint, its stages could be compared to our "expulsion" and "attraction" respectively, see fig. 6, panel (a). However, as is evident from the latter, in order to successfully complete the switching, the operator needs to trust a basin of attraction which is considerably larger than the one rigorously computed in this work. Clearly, the one stated in Lem. 7.1 is nothing but a sufficient condition, but it suggests that, if not addressed with further (and highly specialised) tools and ad-hoc arguments, the attractivity of larger sets retains no more than a probabilistic validity. On the contrary, the proposed CQL strategy has a genuinely deterministic character, provided that \(\lambda\) is sufficiently small (according to the bounds described in the proof) and the set of initial conditions is chosen in \(\mathfrak{B}_{r^{-}}(\mathbf{s}^{-})\).
In addition, as anticipated in sec. 3, our method only requires a minimal amount of initial injected current during the "expulsion" stage, as the solution is subsequently "guided" towards \(\mathcal{B}_{\mathcal{E}}\) via the constructed control, along a CQL trajectory. As a result, this remarkably reduces the "energy" carried by \(u_{3}\) and hence the amplitude of the oscillations around \(\mathbf{s}^{+}\) when the current is switched off. See fig. 6 for a depiction of this phenomenon in a comparison with the ballistic approach.
As a further check of the robustness of this method and its deterministic character, the outcomes of a stress test are proposed in fig. 7, in which some errors with respect to the proposed control are simulated. More precisely, either a time dilatation-contraction coefficient for the expulsion time, \(T_{e}\gets jT_{e}\) with \(j=1\pm 0.02\), or for the transfer control, i.e. \(\beta_{tr}(t)\leftarrow\tilde{\beta}_{tr}(jt)\), is considered.
In conclusion, the present work proposes an analytical formulation apt to realise a fully deterministic switching mechanism. The argument relies on the concept of CQL solution, a highly non-local object at the heart of the procedure, constructed via a perturbative approach which exploits intrinsic, technologically relevant features of the system. The mentioned CQL solutions are realised by the explicit determination of a time dependent control, determined by using well established tools of Hamiltonian perturbation theory. Interestingly, the latter possess the potential to increase the range of validity of the method to an even larger class of systems, should a higher order analysis be considered. The numerical experiments proposed offer either a validation or a visual interpretation of the main statement of Thm. 4.1 and the related Lemmata.
## Appendix A
Explicit expressions of \(\mathbf{\mathcal{F}}(\mathbf{u};\mathbf{v})\) and \(\mathbf{\mathcal{R}}(\mathbf{u};\mathbf{v})\):
\[\mathcal{F}_{1} =-u_{1}u_{3}(-v_{2}^{2}\tilde{\alpha}D_{3,2}K-v_{1}^{2}\tilde{ \alpha}D_{3,1}K+v_{1}\tilde{h}_{2}-v_{1}v_{2}\tilde{D}_{2,1})\rho+u_{3}\tilde{ h}_{2}+u_{1}u_{3}^{2}\tilde{\alpha}D_{3,1}\] \[\mathcal{F}_{2} =u_{2}u_{3}^{2}\tilde{\alpha}D_{3,2}-u_{2}u_{3}(-v_{2}^{2}\tilde{ \alpha}D_{3,2}K-v_{1}^{2}\tilde{\alpha}D_{3,1}K+v_{1}\tilde{h}_{2}-v_{1}v_{2} \tilde{D}_{2,1})\rho\] \[\mathcal{F}_{3} =(u_{2}^{2}+u_{1}^{2})(-v_{2}^{2}\tilde{\alpha}D_{3,2}K-v_{1}^{2} \tilde{\alpha}D_{3,1}K+v_{1}\tilde{h}_{2}-v_{1}v_{2}\tilde{D}_{2,1})\rho-u_{1} \tilde{h}_{2}\] \[\quad+\tilde{\alpha}(-u_{2}^{2}u_{3}D_{3,2}-u_{1}^{2}u_{3}D_{3,1} )+u_{1}u_{2}\tilde{D}_{2,1}\] \[\mathcal{R}_{1} =u_{1}v_{2}u_{3}\tilde{\alpha}\tilde{h}_{2}K\rho+\tilde{\alpha}( u_{1}u_{2}^{2}\tilde{D}_{2,1}-u_{1}u_{2}\tilde{h}_{2})\] \[\mathcal{R}_{2} =u_{2}v_{2}u_{3}\tilde{\alpha}\tilde{h}_{2}K\rho+\tilde{\alpha}( u_{3}^{2}\tilde{h}_{2}+u_{1}^{2}\tilde{h}_{2}-u_{1}^{2}u_{2}\tilde{D}_{2,1})\] \[\mathcal{R}_{3} =-(u_{2}^{2}+u_{1}^{2})v_{2}\tilde{\alpha}\tilde{h}_{2}K\rho-u_{2} u_{3}\tilde{\alpha}\tilde{h}_{2}\]
## Appendix B
Explicit expression of the entries of \(\{\mathbf{DV}(\mathbf{\xi},\tilde{\beta})\}\). Let us denote them with \(\lambda b_{i,j}\). We have
\[b_{1,1} =\xi_{3}\tilde{\beta}\lambda+\tilde{\alpha}(\tilde{D}_{2,1}\xi_{2}\Omega+D_{3,1}\xi_{3}^{2})\lambda^{2}+\tilde{\alpha}\tilde{D}_{2,1}\xi_{2}^{2}\lambda^{3}\] \[b_{1,2} =D_{3,2}\xi_{3}-\tilde{\alpha}\tilde{D}_{2,1}\Omega\tilde{h}_{2}\lambda+\tilde{\alpha}\tilde{D}_{2,1}(\xi_{1}\Omega-2\xi_{2}\tilde{h}_{2})\lambda^{2}+2\tilde{\alpha}\tilde{D}_{2,1}\xi_{1}\xi_{2}\lambda^{3}\] \[b_{1,3} =(-\tilde{\beta}\tilde{h}_{2}+\tilde{D}_{2,1}\Omega+D_{3,2}\xi_{2})+(\xi_{1}\tilde{\beta}-2\tilde{\alpha}D_{3,1}\xi_{3}\tilde{h}_{2})\lambda+2\tilde{\alpha}D_{3,1}\xi_{1}\xi_{3}\lambda^{2}\] \[b_{2,1} =-D_{3,1}\xi_{3}+2\tilde{\alpha}\tilde{D}_{2,1}\xi_{2}\tilde{h}_{2}\lambda^{2}-2\tilde{\alpha}\tilde{D}_{2,1}\xi_{1}\xi_{2}\lambda^{3}\] \[b_{2,2} =(\xi_{3}\tilde{\beta}-\tilde{\alpha}\tilde{D}_{2,1}\gamma^{2})\lambda+\tilde{\alpha}(2\tilde{D}_{2,1}\xi_{1}\tilde{h}_{2}+D_{3,2}\xi_{3}^{2})\lambda^{2}-\tilde{\alpha}\tilde{D}_{2,1}\xi_{1}^{2}\lambda^{3}\] \[b_{2,3} =\tilde{\beta}\Omega-D_{3,1}\xi_{1}+(2\tilde{\alpha}D_{3,2}\xi_{3}\Omega+\xi_{2}\tilde{\beta})\lambda+2\tilde{\alpha}\xi_{3}(\tilde{D}_{2,1}\Omega+D_{3,2}\xi_{2})\lambda^{2}\] \[b_{3,1} =2\tilde{\beta}\tilde{h}_{2}(2\tilde{\alpha}D_{3,1}\xi_{3}\tilde{h}_{2}-2\xi_{1}\tilde{\beta}+\tilde{D}_{2,1}\xi_{2})\lambda-2\tilde{\alpha}D_{3,1}\xi_{1}\xi_{3}\lambda^{2}\] \[b_{3,2} =-(\tilde{D}_{2,1}\tilde{h}_{2}+2\tilde{\beta}\Omega)+(-2\tilde{\alpha}D_{3,2}\xi_{3}\Omega-2\xi_{2}\tilde{\beta}+\tilde{D}_{2,1}\xi_{1})\lambda-\tilde{\alpha}\xi_{3}(\tilde{D}_{2,1}\Omega+2D_{3,2}\xi_{2})\lambda^{2}\] \[b_{3,3} =-\tilde{\alpha}(D_{3,1}\tilde{h}_{2}^{2}+D_{3,2}\Omega^{2})+\tilde{\alpha}(2D_{3,1}\xi_{1}\tilde{h}_{2}+\tilde{D}_{2,1}\Omega^{2}-2D_{3,2}\xi_{2}\Omega)\lambda\] \[-\tilde{\alpha}(\tilde{D}_{2,1}\xi_{2}\Omega+D_{3,2}\xi_{2}^{2}+D_{3,1}\xi_{1}^{2})\lambda^{2}\]
Figure 6: Comparison between the ballistic switching (a) and the one obtained by using CQL solutions (b). The initial condition has been set as \(\mathbf{u}(-T_{e})=\mathbf{s}^{-}+(\lambda,0,0)\) in both cases. Below, ((c) and (d), respectively), the corresponding behaviour of the first coordinate \(u_{1}(t)\). A considerably small residual oscillation of the mentioned coordinate can be observed in the CQL case. The remaining parameters used are specified in Appendix C.
## Appendix C
In this section the numerical values used in the experiments are reported. All the experiments have the following common features
* \(D_{1}=0.0411\), \(D_{3}=0.8527\) (see, for instance, [14]),
* \(D_{2}:=D_{1}+6.51\lambda\) (perturbative setting),
* \(\gamma=\sqrt{1-\Omega^{2}}\).
The remaining parameters have been chosen as follows
Figure 7: Switching trajectories via the CQL approach with a simulated error in the control: panels (a) and (b) show the case of a time contraction/dilatation of the computed “expulsion” time, more precisely \(T_{e}\gets 0.98T_{e}\) and \(T_{e}\gets 1.02T_{e}\), respectively. Panels (c) and (d) report an analogous test during the “transfer” stage instead. Hence, the time variable in the control \(\tilde{\beta}_{tr}(t)\) is replaced with \(t\gets 0.98t\) and \(t\gets 1.02t\), respectively. Taking into account the “considerable” duration of the transfer process (it has an order of \(10^{2}\) time units) the latter should be regarded as a pretty “hard” test, and evident trajectory modifications such as those shown in (c) and (d) are reasonably expected. Full set of parameters in Appendix C.
### Acknowledgements
This work has been supported by the Italian Ministry of University and Research, PRIN2020 funding program, grant number 2020PY8KTC.
The numerical simulations and the corresponding plots have been performed with GNU Octave [1], whilst [17] has been used for the algebraic manipulations.
|
2307.06801 | The perturbation in Einstein-Gauss-Bonnet gravity II: the quasi-normal
modes of the tensor-type of the Kaluza-Klein black hole | In Einstein-Gauss-Bonnet gravity, we study the quasi-normal modes (QNMs) of
the tensor perturbation for the so-called Maeda-Dadhich black hole which
locally has a topology $\mathcal{M}^n \simeq M^4 \times \mathcal{K}^{n-4}$. Our
discussion is based on the tensor perturbation equation derived
in~\cite{Cao:2021sty}, where the Kodama-Ishibashi gauge invariant formalism for
Einstein gravity theory has been generalized to the Einstein-Gauss-Bonnet
gravity theory. With the help of characteristic tensors for the constant
curvature space $\mathcal{K}^{n-4}$, we investigate the effect of extra
dimensions and obtain the scalar equation in four dimensional spacetime, which
is quite different from the Klein-Gordon equation. Using the asymptotic
iteration method and the numerical integration method with the Kumaresan-Tufts
frequency extraction method, we numerically calculate the QNM frequencies. In
our setups, characteristic frequencies depend on six distinct factors. They are
the spacetime dimension $n$, the Gauss-Bonnet coupling constant $\alpha$, the
black hole mass parameter $\mu$, the black hole charge parameter $q$, and two
``quantum numbers'' $l$, $\gamma$. Without loss of generality, the impact of
each parameter on the characteristic frequencies is investigated while fixing
the other five parameters. Interestingly, the dimension of the compactification part
has no significant impact on the lifetime of QNMs. | Li-Ming Cao, Liang-Bi Wu, Yaqi Zhao, Yu-Sen Zhou | 2023-07-13T15:12:38Z | http://arxiv.org/abs/2307.06801v2 | The perturbation in Einstein-Gauss-Bonnet gravity II: the quasi-normal modes of the tensor-type of the Kaluza-Klein black hole
###### Abstract
In Einstein-Gauss-Bonnet gravity, we study the quasi-normal modes (QNMs) of the tensor perturbation for the so-called Maeda-Dadhich black hole which locally has a topology \(\mathcal{M}^{n}\simeq M^{4}\times\mathcal{K}^{n-4}\). Our discussion is based on the tensor perturbation equation derived in [1], where the Kodama-Ishibashi gauge invariant formalism for Einstein gravity theory has been generalized to the Einstein-Gauss-Bonnet gravity theory. With the help of characteristic tensors for the constant curvature space \(\mathcal{K}^{n-4}\), we investigate the effect of extra dimensions and obtain the scalar equation in four dimensional spacetime, which is quite different from the Klein-Gordon equation. Using the asymptotic iteration method and the numerical integration method with the Kumaresan-Tufts frequency extraction method, we numerically calculate the QNM frequencies. In our setups, characteristic frequencies depend on six distinct factors. They are the spacetime dimension \(n\), the Gauss-Bonnet coupling constant \(\alpha\), the black hole mass parameter \(\mu\), the black hole charge parameter \(q\), and two "quantum numbers" \(l\), \(\gamma\). Without loss of generality, the impact of each parameter on the characteristic frequencies is investigated while fixing the other five parameters. Interestingly, the dimension of the compactification part has no significant impact on the lifetime of QNMs.
+
Footnote †: preprint: ICTS-USTC/PCFT-23-22
## I Introduction
Lovelock theories are the most general diffeomorphism covariant theories involving only a metric tensor with second order equations of motion [2]. In four dimensions and for generic values of the coupling constants, the theory reduces to General Relativity with a cosmological constant [3], where the corresponding equations of motion are the Einstein equations. Einstein-Gauss-Bonnet (EGB) gravity is the lowest order Lovelock theory, whose Lagrangian contains only the linear and quadratic terms of the spacetime curvature. String theory predicts quantum corrections to General Relativity, with the Gauss-Bonnet term being the first and dominating correction among others. EGB gravity is the simplest model for illustrating the distinctions between general Lovelock gravity theory and Einstein gravity theory in higher dimensions.
To explore the properties of EGB gravity, physicists have derived several black hole solutions. Black holes in high dimensional spacetime have garnered significant interest for two primary reasons: they arise naturally in the context of string theory and are also present in extra-dimensional brane-world scenarios [4]. A class of static vacuum solutions in the EGB gravity was first obtained by Boulware, Deser, and Wheeler [5; 6]. This work was later extended to include a cosmological constant by Cai [7]. The topology of these solutions is locally \(\mathcal{M}^{n}\simeq M^{2}\times\mathcal{K}^{n-2}\). The Maeda-Dadhich black hole, a kind of Kaluza-Klein (KK) black hole, is also an exact vacuum solution of EGB gravity with a cosmological constant which bears a specific relation to the Gauss-Bonnet coupling constant [8]. This spacetime is locally the product of a usual \(4\)-dimensional manifold with an \((n-4)\)-dimensional space of constant negative curvature, i.e., its topology is locally \(\mathcal{M}^{n}\simeq M^{4}\times\mathcal{K}^{n-4}\). Another remarkable feature of this solution is that the Gauss-Bonnet term acts like a Maxwell source for large \(r\) while at the other end it regularizes the metric and weakens the central singularity [8]. Based on the same ideas, a class of black hole solutions has been obtained in \(n\)-dimensional Lovelock gravity theory [9]. The topology of these solutions is locally \(\mathcal{M}^{n}\simeq M^{m}\times\mathcal{K}^{n-m}\), where \(\mathcal{K}^{n-m}\) is a space of negative constant curvature.
Perturbing black holes provides valuable insights into their properties, but gauge dependence can be an issue. To address this, one approach is to use physically preferred gauges, while another one is to use gauge-invariant variables such as the Kodama-Ishibashi gauge invariant variables [10]. These variables allow for the derivation of master equations with tensor, vector, and scalar components [10; 11; 12; 13]. Using the Kodama-Ishibashi gauge invariant variables, a generalized master equation for tensor-type perturbations has been derived in EGB gravity [1].
When a black hole undergoes perturbations, it experiences damped oscillations given by a superposition of characteristic modes. The incoming boundary condition at the horizon and the outgoing boundary condition at spatial infinity result in a dissipative system with characteristic modes referred to as quasi-normal modes (QNMs). Their frequencies \(\omega\), denoted as quasi-normal frequencies, are the group of discrete complex eigenvalues of the perturbation equations of the black hole solution, a set of homogeneous second order differential equations, and can therefore reveal useful information about the corresponding spacetime geometry.
QNMs have been a subject of interest for several decades, ever since their initial proposal by Regge and Wheeler in their analysis of the stability of Schwarzschild black holes [14]. These modes are significant from both theoretical and observational standpoints. From the observational aspect, binary black hole mergers are a major source of gravitational waves. The waves emitted during the ringdown stage can be expressed as a superposition of quasi-normal modes of a perturbed Kerr black hole [15; 16; 17]. With the development of observational technology, it is now possible to use QNM measurements from gravitational wave observations to test General Relativity, examine the validity of the "no-hair" theorem [18; 19; 20], and constrain modified gravitational theories. As such, the analysis of QNMs has become an important topic in gravitational wave research.
From the theoretical perspective, QNMs are a topic of significant interest. Research into QNMs can provide insight into potential violations of strong cosmic censorship [21]. Additionally, QNMs are related to the quantization of black hole area [22; 23; 24]. Furthermore, it is anticipated that the signatures of extra dimensions may be discerned from the QNMs of black holes. For instance, a recent study investigated the numerical evolution of massive Kaluza-Klein modes of a scalar field in a thick brane [25]. This study found that there are scalar KK resonant particles with long lifespans on the brane, suggesting that these resonances could potentially serve as candidates for dark matter. Another study examined the quasi-normal modes of a thick brane in order to detect sounds from extra dimensions [26]. Given these findings, our goal is to investigate the impact of extra dimensions on QNMs within the framework of EGB gravity theory.
High precision measurements also require accurately calculated QNMs. So far, various high precision methods have been developed to calculate the frequencies of QNMs, such as the Wentzel-Kramers-Brillouin (WKB) approximation [27; 28; 29; 30], the numerical integration method [31; 32; 33; 34], the continued fractions method (CFM) [35; 36; 37; 38; 39; 40], the asymptotic iteration method (AIM) [41; 42; 43] and so on. One can consult some nice reviews [16; 44; 45] for more information about QNMs.
The aim of this paper is to discuss the characteristic mode frequencies of the tensor perturbation equation for the Maeda-Dadhich black hole, obtained through the Kodama-Ishibashi formalism for general warped product spacetimes, with the asymptotic iteration method and the numerical integration method. After expressing the effect of extra dimensions with the help of the characteristic tensors of \(\mathcal{K}^{n-4}\), we recast the perturbation equation derived in [1] into a scalar field equation in four dimensional spacetime, which is quite different from the Klein-Gordon equation. Roughly speaking, the correction effect stems from the Gauss-Bonnet coupling constant \(\alpha\) and the eigenvalues of the characteristic tensors of the extra dimensional part. The coefficient of the second order covariant derivatives is no longer the spacetime metric; it is modified by the Einstein tensor of the four dimensional spacetime. In addition, there exists a term proportional to the scalar field in this equation. However, unlike the mass term of a massive scalar field [38; 39], this term depends on the radial coordinate \(r\). Having the scalar equation, we get the Schrodinger-like equation by separating the angular part as usual. Then, we calculate the QNMs by two different numerical methods, namely, the asymptotic iteration method and the numerical integration method. We then provide the characteristic frequencies under different parameter choices, and study how these parameters affect the QNMs. It is found that the dimension of the compactification part has no significant impact on the lifetime of QNMs.
The paper is organized as follows. In Sec.II, we present a brief review of the Kaluza-Klein black hole proposed in [8]. The master equation for the tensor-type perturbation in the Einstein-Gauss-Bonnet gravity theory is displayed in Sec.III; the Schrodinger-like equation with the corresponding effective potential is also shown in the same section. In Sec.IV, the asymptotic iteration method is introduced to obtain the QNMs. In Sec.V, the evolution of a scalar field is analysed. With the numerical results from Sec.V, we use the KT method to extract characteristic frequencies in Sec.VI. In Sec.VII, a large amount of data is displayed, showing how the characteristic frequencies change with the parameters. Sec.VIII is devoted to conclusions and discussion.
## II Kaluza-Klein black hole
In this section, we will have a brief review on the Kaluza-Klein black hole proposed by Maeda and Dadhich [8]. The action for \(n\geq 5\) in the \(n\)-dimensional spacetime with a metric \(g_{MN}\) is given by
\[S=\int\mathrm{d}^{n}x\sqrt{-g}\left[\frac{1}{2\kappa_{n}^{2}}\left(R-2\Lambda +\alpha L_{\text{GB}}\right)\right]+S_{\text{matter}}\,, \tag{1}\]
where \(\kappa_{n}\) is the coupling constant of gravity which depends on the dimension of spacetime, and \(R\) and \(\Lambda\) are the \(n\)-dimensional Ricci scalar and the cosmological constant, respectively. \(S_{\text{matter}}\) stands for the matter fields. The Gauss-Bonnet term is given by
\[L_{\text{GB}}=R^{2}-4R_{MN}R^{MN}+R_{MNPQ}R^{MNPQ}\,, \tag{2}\]
where the capital letters \(\{M,N,P,Q,\cdots\}\) are the indices for the \(n\) dimensional spacetime. The symbol \(\alpha\) is the coupling constant of the Gauss-Bonnet term; it is identified with the inverse string tension and is positive definite. The equation of motion of this theory is given by
\[G_{MN}+\alpha H_{MN}+\Lambda g_{MN}=\kappa_{n}^{2}T_{MN}\,, \tag{3}\]
where
\[G_{MN}=R_{MN}-\frac{1}{2}g_{MN}R\,, \tag{4}\]
and
\[H_{MN}=2\left[RR_{MN}-2R_{ML}R^{L}_{N}-2R^{KL}R_{MKNL}+R_{M}^{\;KLP}R_{NKLP} \right]-\frac{1}{2}g_{MN}L_{\text{GB}}\,. \tag{5}\]
We consider the \(n\)-dimensional spacetime locally homeomorphic to \(M^{4}\times\mathcal{K}^{n-4}\) with the metric \(g_{MN}=\text{diag}(g_{ab},r_{0}^{2}\gamma_{ij})\), where \(a,b=0,\cdots,3\); \(i,j=4,\cdots,n-1\). Here \(g_{ab}\) is an arbitrary Lorentz metric on \(M^{4}\), \(r_{0}\) is a constant given by
\[r_{0}^{2}=-2K\alpha(n-4)(n-5)\,, \tag{6}\]
and \(\gamma_{ij}\) is the unit metric on the \((n-4)\)-dimensional space of constant curvature \(\mathcal{K}^{n-4}\) with a sectional curvature \(K=-1\).
We will seek a vacuum static solution with the metric on \(M^{4}\) reading as
\[g_{ab}\mathrm{d}x^{a}\mathrm{d}x^{b}=-f(r)\mathrm{d}t^{2}+\frac{1}{f(r)} \mathrm{d}r^{2}+r^{2}\mathrm{d}\Sigma^{2}_{2(k)}\,, \tag{7}\]
where \(\mathrm{d}\Sigma^{2}_{2(k)}\) is the unit metric on two dimensional constant curvature space \(\Sigma_{2(k)}\) with \(k=\pm 1,0\). The governing equation is a single scalar equation on \(M^{4}\), which is given by
\[\frac{1}{n-4}\,{}^{4}\!R+\frac{\alpha}{2}\,{}^{4}\!L_{\text{GB}}+\frac{2n-11}{\alpha(n-4)^{2}(n-5)}=0\,, \tag{8}\]
where \({}^{4}\!R\) and \({}^{4}\!L_{\text{GB}}\) are defined in the Lorentz manifold \((M^{4},g_{ab})\). After some calculation, Eq.(8) yields the general solution for the function \(f(r)\):
\[f(r)=k+\frac{r^{2}}{2(n-4)\alpha}\Bigg{\{}1\mp\Big{[}1-\frac{2n-11}{3(n-5)}+ \frac{4(n-4)^{2}\alpha^{3/2}\mu}{r^{3}}-\frac{4(n-4)^{2}\alpha^{2}q}{r^{4}} \Big{]}^{1/2}\Bigg{\}}\,, \tag{9}\]
where \(\mu\) and \(q\) are arbitrary dimensionless constants; \(\mu\) refers to the mass of the central object and \(q\) is the charge-like parameter. Probably, due to the topology of the spacetime, i.e., \(\mathcal{M}^{n}\simeq M^{4}\times\mathcal{K}^{n-4}\) with constant curvature \(K=-1\), the charge parameter \(q\) automatically appears as a constant of integration. It should be noted that this kind of charge corresponds to the so-called "Weyl charge" defined by the integration of the Weyl tensor projected onto the brane [46]. A detailed explanation of the meaning of this charge can be found in [8] and references therein.
To make \(f(r)\) meaningful, the dimension of spacetime must be set as \(n\geq 6\). There are two branches of the solution indicated by a sign in front of the square root in Eq.(9), which we call the minus and the plus branch [8]. We will focus on the case with \(k=1\) in the following sections. Since the expression in the radical of the metric function should be non-negative, the parameter \(\mu\) and \(q\) should meet the following condition
\[1-\frac{2n-11}{3(n-5)}+\frac{4(n-4)^{2}\alpha^{3/2}\mu}{r^{3}}-\frac{4(n-4)^{2 }\alpha^{2}q}{r^{4}}\geq 0\,. \tag{10}\]
A sufficient condition is that \(\mu\geq 0\) and \(q\leq 0\), and then \(r\in(0,+\infty)\). Notice that only for the minus branch may the metric function \(f(r)\) have zero points, so we choose this branch for our study, where a black hole can appear, i.e.,
\[f(r)=1+\frac{r^{2}}{2(n-4)\alpha}\Bigg{\{}1-\Big{[}1-\frac{2n-11}{3(n-5)}+ \frac{4(n-4)^{2}\alpha^{3/2}\mu}{r^{3}}-\frac{4(n-4)^{2}\alpha^{2}q}{r^{4}} \Big{]}^{1/2}\Bigg{\}}\,. \tag{11}\]
The function \(f(r)\) is expanded for \(r\to+\infty\) as
\[f(r) = 1+\frac{r^{2}}{2(n-4)\alpha}\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}} \Big{]} \tag{12}\] \[-\frac{\alpha^{1/2}\mu\sqrt{3(n-4)(n-5)}}{r}+\frac{\alpha q\sqrt{ 3(n-4)(n-5)}}{r^{2}}+\Theta\Big{(}\frac{1}{r^{3}}\Big{)}\,.\]
This is the same as the Reissner-Nordstrom-anti-de Sitter (RNAdS) spacetime for \(k=1\) in spite of the absence of the Maxwell field.
Since the design of the algorithm for the QNMs involves the number of zero points of the metric function \(f(r)\), we now determine this number. Under the condition \(\mu\geq 0\) and \(q\leq 0\), \(f(r)=0\) is equivalent to \(h(r)=0\), where
\[h(r)=\frac{2n-11}{12(n-5)(n-4)^{2}\alpha^{2}}r^{4}+\frac{r^{2}}{(n-4)\alpha}- \frac{\mu}{\alpha^{1/2}}r+(q+1)\,. \tag{13}\]
The derivative of \(h(r)\) is
\[h^{\prime}(r)=\frac{2n-11}{3(n-5)(n-4)^{2}\alpha^{2}}r^{3}+\frac{2}{(n-4) \alpha}r-\frac{\mu}{\alpha^{1/2}}\,. \tag{14}\]
It is easy to find that \(h^{\prime}(r)\) has only one zero point in \((0,+\infty)\). Therefore, \(f(r)\) has at most two zero points. The ranges of the parameters \(\mu\) and \(q\) are selected as \(\mu\geq 0\) and \(q\leq 0\) for simplicity. Additionally, we can easily see that when \(q<-1\), \(f(r)\) has one and only one zero point, i.e., only the event horizon exists. In later calculations, we determine numerically whether there are one or two zero points. The event horizon is denoted as \(r_{+}\) and the inner horizon is denoted as \(r_{-}\) if it exists.
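As an illustration of this horizon counting, here is a minimal Python sketch that locates the zero points of \(h(r)\) through its single critical point; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Locating the horizons of the minus branch (11) via the zeros of h(r) in (13).
n, alpha, mu, q = 7, 1.0, 2.0, -0.1       # assumed values with mu >= 0, q <= 0

def h(r):
    return ((2*n - 11) * r**4 / (12.0 * (n - 5) * (n - 4)**2 * alpha**2)
            + r**2 / ((n - 4) * alpha) - mu * r / np.sqrt(alpha) + (q + 1.0))

def dh(r):
    return ((2*n - 11) * r**3 / (3.0 * (n - 5) * (n - 4)**2 * alpha**2)
            + 2.0 * r / ((n - 4) * alpha) - mu / np.sqrt(alpha))

r_c = brentq(dh, 1e-8, 1e3)               # h'(r) has exactly one zero in (0, inf)
roots = []
if h(1e-8) * h(r_c) < 0:
    roots.append(brentq(h, 1e-8, r_c))    # inner horizon r_-
if h(r_c) * h(1e3) < 0:
    roots.append(brentq(h, r_c, 1e3))     # event horizon r_+
print("horizon radii:", [round(x, 6) for x in roots])
```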
## III The master equation of the tensor perturbation
We consider an \(n=4+(n-4)\) dimensional spacetime \((\mathcal{M}^{n},g_{MN})\), which is locally a direct product manifold with the metric
\[g_{MN}\mathrm{d}x^{M}\mathrm{d}x^{N}=g_{ab}(y)\mathrm{d}y^{a} \mathrm{d}y^{b}+r^{2}(y)\gamma_{ij}(z)\mathrm{d}z^{i}\mathrm{d}z^{j}\,, \tag{15}\]
where the coordinates are \(x^{M}=\{y^{1},\cdots,y^{4};z^{1},\cdots,z^{(n-4)}\}\). In the following discussion, the Riemannian manifold \((\mathcal{K}^{n-4},\gamma_{ij})\) is assumed to be a maximally symmetric space of constant curvature. The metric compatible covariant derivatives associated with \(g_{ab}\) and \(\gamma_{ij}\) are denoted by \(D_{a}\) and \(\hat{D}_{i}\), respectively. \(K\) is the sectional curvature of this space, and for this Kaluza-Klein black hole we have \(K=-1\).
Under the linear perturbation of the metric \(g_{MN}\to g_{MN}+h_{MN}\), the linear perturbation equations for Eq.(3) can be obtained. The tensor perturbation equation is obtained by setting
\[h_{ab}=0\,,\quad h_{ai}=0\,,\] \[\delta T_{ab}=0\,,\quad\delta T_{ai}=0\,. \tag{16}\]
After some calculation, the master equation of tensor perturbation (The computing method can be found in [1; 12].) in vacuum can be written as [1]
\[(P^{ab}D_{a}D_{b}+P^{mn}\hat{D}_{m}\hat{D}_{n}+P^{a}D_{a}+V)\Big{(}\frac{h_{ ij}}{r^{2}}\Big{)}=0\,, \tag{17}\]
where
\[P^{ab}=g^{ab}+2(n-6)\alpha\left\{2\frac{D^{a}D^{b}r}{r}+\left[(n-7 )\frac{K-(Dr)^{2}}{r^{2}}-2\frac{{}^{4}\Box r}{r}\right]g^{ab}\right\}-4 \alpha\cdot{}^{4}\!G^{ab}\,, \tag{18}\]
\[P^{mn}=\left\{1+2\alpha\left[{}^{4}\!R-\frac{2(n-7){}^{4}\Box r}{r}+(n-7)(n-8 )\frac{K-(Dr)^{2}}{r^{2}}\right]\right\}\frac{\gamma^{mn}}{r^{2}}\equiv \frac{Q}{r^{2}}\gamma^{mn}\,, \tag{19}\]
\[P^{a} = (n-4)\frac{D^{a}r}{r}+2(n-6)\alpha\Bigg{\{}4\frac{D^{a}D^{b}r}{r }+\left[{}^{4}\!R-2(n-5)\frac{{}^{4}\Box r}{r}\right. \tag{20}\] \[\left.+(n-6)(n-7)\frac{K-(Dr)^{2}}{r^{2}}\right]\!g^{ab}\Bigg{\}} \frac{D_{b}r}{r}-8\alpha\cdot{}^{4}\!G^{ab}\frac{D_{b}r}{r}\,,\]
and
\[V = {}^{4}\!R-2(n-5)\frac{{}^{4}\Box r}{r}+\frac{(n-4)(n-7)K}{r^{2}}-\frac{(n-5)(n-6)(Dr)^{2}}{r^{2}}-2\Lambda \tag{3.7}\] \[+\alpha\Bigg\{{}^{4}\!L_{\text{GB}}+8(n-5)\cdot{}^{4}\!G^{ab}\frac{D_{a}D_{b}r}{r}-4(n-5)(n-6)\frac{(D^{a}D^{b}r)(D_{a}D_{b}r)}{r^{2}}\] \[+4(n-5)(n-6)\left(\frac{{}^{4}\Box r}{r}\right)^{2}+2(n-4)(n-7)\frac{K\cdot{}^{4}\!R}{r^{2}}-2(n-5)(n-6)\frac{(Dr)^{2}\cdot{}^{4}\!R}{r^{2}}\] \[-4(n-4)(n-7)^{2}\frac{K\cdot{}^{4}\Box r}{r^{3}}+4(n-5)(n-6)(n-7)\frac{(Dr)^{2}\cdot{}^{4}\Box r}{r^{3}}\] \[-2(n-4)(n-7)^{2}(n-8)\frac{K\cdot(Dr)^{2}}{r^{4}}+(n-7)(n-8)[(n-4)^{2}-3(n-4)-2]\frac{K^{2}}{r^{4}}\] \[+(n-5)(n-6)(n-7)(n-8)\left[\frac{(Dr)^{2}}{r^{2}}\right]^{2}\Bigg\}\,,\]
where \({}^{4}\Box=g^{ab}D_{a}D_{b}\) is the d'Alembertian in \((M^{4},g_{ab})\). We can apply the separation of variables [11],
\[h_{ij}(y,z^{1},\cdots,z^{n-4})=r^{2}\Phi(y)\bar{h}_{ij}(z^{1},\cdots,z^{n-4})\,, \tag{3.8}\]
where \(\bar{h}_{ij}\) is the characteristic tensor of \(\mathcal{K}^{n-4}\) and satisfies,
\[\hat{D}^{k}\hat{D}_{k}\bar{h}_{ij}=\gamma\bar{h}_{ij}\,,\quad\hat{D}^{i}\bar{ h}_{ij}=0\,,\quad\gamma^{ij}\bar{h}_{ij}=0\,. \tag{3.9}\]
Then, one obtains a four dimensional wave equation of \(\Phi\) on the manifold \(M^{4}\) as follows (One can find the details in Appendix.A.)
\[\Big{[}\frac{4n-22}{(n-4)(n-5)}g^{ab}-4\alpha\cdot{}^{4}\!G^{ab}\Big{]}D_{a}D _{b}\Phi+\Big{[}\frac{2+\gamma}{(n-4)(n-5)}{}^{4}\!R+\frac{3(n-6)(2+\gamma)}{ \alpha(n-4)^{2}(n-5)^{2}}\Big{]}\Phi=0\,. \tag{3.10}\]
Comparing with the standard Klein-Gordon equation \({}^{4}\Box\Phi=0\) in \((M^{4},g_{ab})\), it can be found that the coefficient of the second derivatives of \(\Phi\) in Eq.(3.10) is no longer just the metric: it is modified by a term related to the four dimensional Einstein tensor \({}^{4}\!G^{ab}\).
Solutions to equations (3.9) are worked out in [47] for \(K=1\), where it is shown that the spectrum of eigenvalues is \(\gamma=-L(L+n-5)+2\,,L=2,3,4,\cdots\). However, as for our case \(K=-1\), there is a subtlety in the value of \(\gamma\) which may be a difficult mathematical problem. To avoid mathematical hardship, as a matter of convenience, \(\gamma\in\mathbb{R}\) is assumed. It can be seen that QNMs can be obtained for a given \(\gamma\). Upon examination of Equation (3.9), it can be observed that a total of \(n-3\) constraints are imposed on \(\bar{h}_{ij}\). The degrees of freedom for \(\bar{h}_{ij}\) are \((n-4)(n-3)/2\). In the specific case where \(n=6\), the number of constraints imposed on \(\bar{h}_{ij}\) is equal to its degrees of freedom. As a result, \(\bar{h}_{ij}\) possesses no degrees of freedom for propagation. Since the issue mentioned above is excluded when \(n\geq 7\), we will focus on the cases where \(n\geq 7\) for the remainder of this paper.
Now, Eq.(3.10) is an equation about \(\Phi\) on \(M^{4}\) with a Lorentz metric
\[g_{ab}{\rm d}x^{a}{\rm d}x^{b}=-f(r){\rm d}t^{2}+\frac{1}{f(r)}{\rm d}r^{2}+r^ {2}({\rm d}\theta^{2}+\sin^{2}\theta{\rm d}\phi^{2})\,, \tag{3.11}\]
where we have chosen the metric function (2.9) with \(k=1\). Separating the variables as
\[\Phi(t,r,\theta,\phi)=e^{-i\omega t}R(r)Y(\theta,\phi)\,, \tag{3.12}\]
where \(Y(\theta,\phi)\) is the spherical harmonics, we get the radial equation of \(R(r)\) as follows,
\[R^{\prime\prime}+B(r)R^{\prime}+C(r)R=0\,, \tag{3.13}\]
where the functions \(B(r)\) and \(C(r)\) are
\[B(r) = \Big{[}\frac{4n-22}{(n-4)(n-5)}\Big{(}f^{\prime}+\frac{2f}{r}\Big{) }-4\alpha\frac{-f^{\prime}+3ff^{\prime}+r(f^{\prime})^{2}+rff^{\prime\prime}} {r^{2}}\Big{]} \tag{3.14}\] \[\times\Big{[}\frac{4n-22}{(n-4)(n-5)}f-\frac{4\alpha f(-1+f+rf^{ \prime})}{r^{2}}\Big{]}^{-1}\,,\]
and
\[C(r) = \frac{\omega^{2}}{f^{2}}+\Bigg{\{}-\frac{l(l+1)}{r^{2}}\Big{[}\frac{4 n-22}{(n-4)(n-5)}-\frac{2\alpha(2f^{\prime}+rf^{\prime\prime})}{r}\Big{]} \tag{3.15}\] \[+\Big{[}\frac{2+\gamma}{(n-4)(n-5)}{}^{4}\!R+\frac{3(n-6)(2+ \gamma)}{\alpha(n-4)^{2}(n-5)^{2}}\Big{]}\Bigg{\}}\Big{[}\frac{4n-22}{(n-4)(n- 5)}f-\frac{4\alpha f(-1+f+rf^{\prime})}{r^{2}}\Big{]}^{-1}\,.\]
Here, the " \(\prime\) " denotes the derivative with respect to \(r\).
Now, our task is to bring this equation to the more familiar form of a one-dimensional Schrodinger-like equation, for which we need to remove the friction term in the equation above. There are two transformations that we can consider: a change of variable for the radial coordinate \(r_{\star}=r_{\star}(r)\) and a rescaling of \(R\), so that
\[{\rm d}r_{\star}=z(r){\rm d}r\,,\quad R=S(r)\varphi\,, \tag{3.16}\]
for given functions \(S\) and \(z\), and \(\varphi\) is the new radial function now [48]. Performing the transformations (3.16), one gets
\[z^{2}S\frac{{\rm d}^{2}\varphi}{{\rm d}r_{\star}^{2}}+\Big{(}z^{ \prime}S+2zS^{\prime}+BzS\Big{)}\frac{{\rm d}\varphi}{{\rm d}r_{\star}}+\Big{(} CS+BS^{\prime}+S^{\prime\prime}\Big{)}\varphi=0\,. \tag{3.17}\]
In order to remove the term \({\rm d}\varphi/{\rm d}r_{\star}\), it is found that \(S\) and \(z\) must satisfy
\[zS^{2}=\exp\Big{(}-\int B(r){\rm d}r\Big{)}\,. \tag{3.18}\]
Therefore, Eq.(3.17) becomes
\[\frac{{\rm d}^{2}\varphi}{{\rm d}r_{\star}^{2}}+\frac{CS+BS^{\prime}+S^{\prime \prime}}{z^{2}S}\varphi=0\,. \tag{3.19}\]
We have the freedom to fix one of these functions, the other one will then be determined by the relation (3.18). We make the following choice of tortoise coordinate:
\[z(r)=\frac{1}{f(r)}\,. \tag{3.20}\]
It is found that \(r_{\star}\) is finite as \(r\to+\infty\), and \(r_{\star}\to-\infty\) as \(r\to r_{+}\). This is similar to the AdS case. The function \(S\) satisfies
\[S^{2} = \frac{1}{z}\exp\Big{(}-\int B{\rm d}r\Big{)}\,,\] \[(S^{2})^{\prime} = \frac{-Bz-z^{\prime}}{z^{2}}\exp\Big{(}-\int B{\rm d}r\Big{)}\,,\] \[(S^{2})^{\prime\prime} = \Big{[}-\frac{z^{\prime\prime}}{z^{2}}+\frac{2Bz^{\prime}}{z^{2}} +\frac{2(z^{\prime})^{2}}{z^{3}}-\frac{B^{\prime}}{z}+\frac{B^{2}}{z}\Big{]} \exp\Big{(}-\int B{\rm d}r\Big{)}\,. \tag{3.21}\]
Finally, the standard Schrodinger-like equation is obtained
\[\Big{[}\frac{{\rm d}^{2}}{{\rm d}r_{\star}^{2}}+(\omega^{2}-V_{ \rm eff})\Big{]}\varphi=0\,, \tag{3.22}\]
where the effective potential \(V_{\rm eff}\) is
\[V_{\rm eff} = \omega^{2}-\frac{CS+BS^{\prime}+S^{\prime\prime}}{z^{2}S} \tag{3.23}\] \[= \omega^{2}-f^{2}C+\frac{(f^{\prime})^{2}}{4}-\frac{ff^{\prime \prime}}{2}+\frac{f^{2}B^{\prime}}{2}+\frac{f^{2}B^{2}}{4}\,.\]
It should be noted that \(V_{\rm eff}\) above is independent of \(\omega\) since there is a term \(\omega^{2}/f^{2}\) in \(C\).
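For readers who wish to inspect \(V_{\text{eff}}\) directly, the following minimal sketch assembles \(B(r)\), \(C(r)\) and the potential (3.23) symbolically. The expression used for \({}^{4}\!R\) is the standard Ricci scalar of the metric (3.11), and the parameter values are illustrative assumptions.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n, ell, gam = 7, 2, sp.Integer(0)              # assumed dimension and "quantum numbers"
alpha, mu, q = sp.Integer(1), sp.Integer(2), -sp.Rational(1, 10)

f = 1 + r**2 / (2*(n - 4)*alpha) * (1 - sp.sqrt(
        1 - sp.Rational(2*n - 11, 3*(n - 5))
        + 4*(n - 4)**2 * alpha**sp.Rational(3, 2) * mu / r**3
        - 4*(n - 4)**2 * alpha**2 * q / r**4))
fp, fpp = sp.diff(f, r), sp.diff(f, r, 2)

R4 = -fpp - 4*fp/r + 2*(1 - f)/r**2            # Ricci scalar of metric (3.11)
P = sp.Rational(4*n - 22, (n - 4)*(n - 5))*f - 4*alpha*f*(-1 + f + r*fp)/r**2

B = (sp.Rational(4*n - 22, (n - 4)*(n - 5))*(fp + 2*f/r)
     - 4*alpha*(-fp + 3*f*fp + r*fp**2 + r*f*fpp)/r**2) / P
# C0 is C minus omega^2/f^2, so that V_eff below carries no omega dependence
C0 = (-ell*(ell + 1)/r**2 * (sp.Rational(4*n - 22, (n - 4)*(n - 5))
                             - 2*alpha*(2*fp + r*fpp)/r)
      + (gam + 2)*R4/((n - 4)*(n - 5))
      + 3*(n - 6)*(gam + 2)/(alpha*(n - 4)**2*(n - 5)**2)) / P

Veff = -f**2*C0 + fp**2/4 - f*fpp/2 + f**2*sp.diff(B, r)/2 + f**2*B**2/4
print(sp.lambdify(r, Veff, 'numpy')(5.0))      # sample value outside the horizon
```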
We need to examine the behavior of the effective potential \(V_{\rm eff}\) before calculating QNMs. There are two things we must
accomplish. First, we should check whether there exists any \(r=r_{V}\in(r_{+},+\infty)\) such that \(V_{\text{eff}}\) is divergent, i.e.,
\[\lim_{r\to r_{V}}V_{\text{eff}}=\infty\,. \tag{3.24}\]
In other words, we are supposed to pay attention to whether there exists \(r=r_{V}\in(r_{+},+\infty)\) such that
\[\frac{4n-22}{(n-4)(n-5)}-\frac{4\alpha(-1+f+rf^{\prime})}{r^{2}}=0\,, \tag{3.25}\]
which is equivalent to
\[\frac{2n-11}{n-5}r^{8}+12\alpha^{3/2}\mu(n-4)(2n-11)r^{5}-12\alpha ^{2}q(n-4)(n-6)r^{4}\] \[-144\alpha^{7/2}\mu q(n-5)^{2}(n-4)^{2}r+48\alpha^{4}(n-5)^{2}(n- 4)^{2}q^{2}=0\,. \tag{3.26}\]
However, each term in the above polynomial is nonnegative provided that \(\mu\geq 0\), \(q\leq 0\) and \(n\geq 6\); since the \(r^{8}\) term is strictly positive for \(r>0\), there is no \(r_{V}\) for which Eq.(3.24) holds. Hence, the effective potential \(V_{\text{eff}}\) is always regular on \((r_{+},+\infty)\). Second, we should acquaint ourselves with the asymptotic behavior of \(V_{\text{eff}}\) as \(r\to+\infty\) and \(r\to r_{+}\). When \(r\to+\infty\), we have
\[V_{\text{eff}} = \Bigg{\{}\frac{(\gamma+2)\Big{(}2\sqrt{3}\sqrt{\frac{n-4}{n-5}}n -3n-10\sqrt{3}\sqrt{\frac{n-4}{n-5}}+12\Big{)}\Big{(}\sqrt{3}\sqrt{\frac{n-4} {n-5}}-3\Big{)}}{12\alpha^{2}(n-5)(n-4)^{2}\Big{(}\sqrt{3}\sqrt{\frac{n-4}{n-5 }}-n-5\sqrt{3}\sqrt{\frac{n-4}{n-5}}+4\Big{)}}+\frac{\Big{(}\sqrt{3}\sqrt{ \frac{n-4}{n-5}}-3\Big{)}^{2}}{18\alpha^{2}(n-4)^{2}}\Bigg{\}}r^{2}+\Theta(1)\] \[= \Big{(}3-\sqrt{3}\sqrt{\frac{n-4}{n-5}}\Big{)}\Bigg{\{}36\alpha^ {2}(n-5)(n-4)^{2}\Big{[}\sqrt{3}\sqrt{\frac{n-4}{n-5}}(n-5)-(n-4)\Big{]} \Bigg{\}}^{-1}\times\] \[\Bigg{\{}\Big{[}9(n-4)-6\sqrt{3}(n-5)\sqrt{\frac{n-4}{n-5}}\Big{]} \gamma+2\sqrt{3}\sqrt{\frac{n-4}{n-5}}(n-5)(4n-25)-6(n-4)(2n-13)\Bigg{\}}r^{2 }+\Theta(1)\] \[\equiv V_{0}(\alpha,n,\gamma)r^{2}+\Theta(1)\,.\]
The stability requirement demands that \(V_{\text{eff}}\) tends towards positive infinity, i.e., \(V_{0}(\alpha,n,\gamma)>0\)[49]. One can find the range of \(\gamma\) in terms of the dimension of spacetime \(n\) in Tab.1. An important property of the effective potential \(V_{\text{eff}}\) is that
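A quick numerical check of the sign of \(V_{0}(\alpha,n,\gamma)\) reproduces the ranges of Tab.1. One can verify from the expansion above that the prefactor multiplying the square bracket is positive for \(n\geq 7\), so the sign of \(V_{0}\) is that of the bracket; the minimal sketch below exploits this.

```python
import numpy as np

# Reproducing the gamma-ranges of Tab. 1 from the sign of V0(alpha, n, gamma);
# alpha only sets a positive overall scale and drops out of the sign.
def bracket_coefficients(n):
    s = np.sqrt((n - 4) / (n - 5))
    a = 9 * (n - 4) - 6 * np.sqrt(3) * (n - 5) * s                    # gamma coefficient
    b = 2 * np.sqrt(3) * s * (n - 5) * (4*n - 25) - 6 * (n - 4) * (2*n - 13)
    return a, b

for n in (7, 8, 9, 12):
    a, b = bracket_coefficients(n)
    if abs(a) < 1e-9:                     # n = 8: the gamma coefficient vanishes
        print(f"n = {n}: V0 > 0 for all gamma" if b > 0 else f"n = {n}: V0 < 0")
    else:
        print(f"n = {n}: V0 > 0 requires gamma {'>' if a > 0 else '<'} {-b/a:.4f}")
```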
\[V_{\text{eff}}(r_{+})=0\,. \tag{3.28}\]
One can check it by a direct calculation (see Appendix.B).
## IV Asymptotic iteration method
In this section, we will use the asymptotic iteration method (AIM) to solve the QNMs of the tensor perturbation in Kaluza-Klein black hole for EGB gravity. At the beginning, we provide a brief review on the asymptotic iteration method. Consider a second order homogeneous linear differential equation for the function \(\chi(x)\),
\[\chi^{\prime\prime}(x)=\lambda_{0}(x)\chi^{\prime}(x)+s_{0}(x)\chi(x)\,, \tag{4.1}\]
where \(\lambda_{0}(x)\neq 0\). Differentiating Eq.(4.1) with respect to \(x\), one finds
\[\chi^{\prime\prime\prime}(x)=\lambda_{1}(x)\chi^{\prime}(x)+s_{1}(x)\chi(x)\,, \tag{4.2}\]
\begin{table}
\begin{tabular}{c c} the dimension of spacetime \(n\) & the range of \(\gamma\) \\ \hline \(n=7\) & \(\gamma>-\frac{2\sqrt{3}\sqrt{\frac{n-4}{n-5}}(n-5)(4n-25)-6(n-4)(2n-13)}{9(n-4)-6\sqrt{3}(n-5)\sqrt{\frac{n-4}{n-5}}}\) \\ \(n=8\) & \(\mathbb{R}\) \\ \(n\geq 9\) & \(\gamma<-\frac{2\sqrt{3}\sqrt{\frac{n-4}{n-5}}(n-5)(4n-25)-6(n-4)(2n-13)}{9(n-4)-6\sqrt{3}(n-5)\sqrt{\frac{n-4}{n-5}}}\) \\ \end{tabular}
\end{table}
Table 1: The range of \(\gamma\) in terms of the dimension of spacetime \(n\)
where
\[\lambda_{1}(x) = \lambda_{0}^{\prime}+s_{0}+\lambda_{0}^{2}\,,\] \[s_{1}(x) = s_{0}^{\prime}+s_{0}\lambda_{0}\,. \tag{4.3}\]
Iteratively, the \((n-1)\)-th and \(n\)-th differentiations of Eq.(4.1) give
\[\chi^{(n+1)} = \lambda_{n-1}(x)\chi^{\prime}(x)+s_{n-1}\chi(x)\,,\] \[\chi^{(n+2)} = \lambda_{n}(x)\chi^{\prime}(x)+s_{n}(x)\chi(x)\,, \tag{4.4}\]
where
\[\lambda_{n}(x) = \lambda_{n-1}^{\prime}+s_{n-1}+\lambda_{0}\lambda_{n-1}\,,\] \[s_{n}(x) = s_{n-1}^{\prime}+s_{0}\lambda_{n-1}\,. \tag{4.5}\]
The so-called "quantization condition" is given by
\[s_{n}(x)\lambda_{n-1}(x)-s_{n-1}(x)\lambda_{n}(x)=0\,. \tag{4.6}\]
It is noted that at each iteration one must take the derivative of the \(s\) and \(\lambda\) terms of the previous iteration [50]. This "deficiency" might bring difficulties for numerical calculations. An improved version of the AIM which bypasses the need to take derivatives at each step is proposed in [41; 42]. This greatly improves both the accuracy and speed of the method. The functions \(\lambda_{n}\) and \(s_{n}\) are expanded in a Taylor series around the point \(\xi_{0}\) at which the AIM is performed, which means that
\[\lambda_{n}(\xi)=\sum_{j=0}^{\infty}c_{n}^{j}(\xi-\xi_{0})^{j}\,, \tag{4.7}\] \[s_{n}(\xi)=\sum_{j=0}^{\infty}d_{n}^{j}(\xi-\xi_{0})^{j}\,, \tag{4.8}\]
where \(c_{n}^{j}\) and \(d_{n}^{j}\) are the \(j\)-th Taylor coefficients of \(\lambda_{n}(\xi)\) and \(s_{n}(\xi)\) respectively. Substituting these expressions, we get a set of recursion relations for the coefficients:
\[c_{n}^{j} = (j+1)c_{n-1}^{j+1}+d_{n-1}^{j}+\sum_{k=0}^{j}c_{0}^{k}c_{n-1}^{j-k}\,,\] \[d_{n}^{j} = (j+1)d_{n-1}^{j+1}+\sum_{k=0}^{j}d_{0}^{k}c_{n-1}^{j-k}\,. \tag{4.9}\]
In terms of these coefficients, the "quantization condition" (4.6) can be expressed as
\[d_{n}^{0}c_{n-1}^{0}-d_{n-1}^{0}c_{n}^{0}=0\,. \tag{4.10}\]
Thus we have reduced the AIM into a set of recursion relations which no longer require derivative operations.
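As a self-contained illustration of the recursion (4.9) and the quantization condition (4.10), the following minimal sketch applies them to the textbook problem \(y^{\prime\prime}=2xy^{\prime}+(1-E)y\) (the harmonic oscillator after factoring out the Gaussian, with exact eigenvalues \(E=2k+1\)), rather than to Eq.(3.22) itself; the expansion point and truncation order are illustrative.

```python
import sympy as sp

# Improved AIM: recursion (4.9) on Taylor coefficients plus condition (4.10),
# tested on y'' = 2x y' + (1 - E) y, i.e. lambda_0 = 2x and s_0 = 1 - E.
E = sp.symbols('E')
x0 = sp.Rational(1, 2)                     # expansion point (assumed)
N = 12                                     # truncation / number of iterations

c = [2*x0, sp.Integer(2)] + [sp.Integer(0)] * (N - 1)   # Taylor coeffs of lambda_0
d = [1 - E] + [sp.Integer(0)] * N                       # Taylor coeffs of s_0
c0, d0 = list(c), list(d)

for _ in range(N):
    c_new = [(j + 1)*c[j + 1] + d[j] + sum(c0[k]*c[j - k] for k in range(j + 1))
             for j in range(N)]
    d_new = [(j + 1)*d[j + 1] + sum(d0[k]*c[j - k] for k in range(j + 1))
             for j in range(N)]
    # quantization condition (4.10): d_n^0 c_{n-1}^0 - d_{n-1}^0 c_n^0 = 0
    delta = sp.expand(d_new[0]*c[0] - d[0]*c_new[0])
    c, d = c_new + [sp.Integer(0)], d_new + [sp.Integer(0)]

roots = sp.Poly(delta, E).nroots()
print(sorted(complex(z).real for z in roots if abs(complex(z).imag) < 1e-8)[:6])
# the stable real roots reproduce the exact eigenvalues 1, 3, 5, ...
```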
Now, we can find the standard AIM form of the Schrodinger-like equation (3.22) with the help of the boundary conditions of the QNMs. The effective potential is zero at the event horizon \(r\to r_{+}\) [see Eq.(3.28)]. The Dirichlet boundary condition is imposed at spatial infinity because of the divergence of the effective potential there, as we can see from Eq.(3.27). Therefore, the boundary conditions are taken so that the asymptotic behavior of the solutions is
\[\varphi(r)\to\begin{cases}e^{-i\omega r_{\star}}&r\to r_{+}\,,\\ 0&r\to+\infty\,,\end{cases} \tag{4.11}\]
which represents an in-going wave at the event horizon and no waves at infinity. According to the theory of second order ordinary differential equation, we know that \(r=r_{+}\) and \(r=+\infty\) both are regular singular points. In order to apply the boundary condition (4.11), we define the following solution (One can see more details in Appendix.C.)
\[\varphi(r)=\Big{(}\frac{r-r_{+}}{r-r_{-}}\Big{)}^{-i\omega/f^{\prime}(r_{+})} \Big{(}\frac{r_{+}-r_{-}}{r-r_{-}}\Big{)}^{\rho}\tilde{\varphi}(r)\,, \tag{4.12}\]
in which the index at infinity is (we have discarded the other index due to the boundary condition of \(\varphi(r)\) at infinity)
\[\rho=\frac{1}{2}\Bigg{\{}1+\sqrt{1+\frac{16(n-4)^{2}\alpha^{2}V_{0}}{\Big{[}1- \sqrt{\frac{n-4}{3(n-5)}}\Big{]}^{2}}}\Bigg{\}}>0\,, \tag{4.13}\]
and \(\tilde{\varphi}(r)\) is a finite and convergent function. Based on the above discussion, a compact coordinate is introduced as follows
\[\xi=\frac{r-r_{+}}{r-r_{-}}\,, \tag{4.14}\]
with \(0\leq\xi<1\). If there exists only the event horizon \(r_{+}\), we will set \(r_{-}=0\) in Eq.(4.14). The regular function \(\tilde{\varphi}(\xi)\) is introduced as
\[\varphi(\xi)=\xi^{-i\omega/f^{\prime}(r_{+})}(1-\xi)^{\rho}\tilde{\varphi}( \xi)\,, \tag{4.15}\]
so that the function \(\varphi(\xi)\) obeys the Dirichlet boundary condition at spatial infinity \((\xi=1)\). Now, we will rewrite the Eq.(3.22) into the differential equation for the regular function \(\tilde{\varphi}(\xi)\) by using Eq.(4.14) and Eq.(4.15). Using the relationship (4.14) between \(\xi\) and \(r\), we can derive the inverse relationship between \(\xi\) and \(r\), which is expressed as
\[r=\frac{r_{+}-\xi r_{-}}{1-\xi}\,. \tag{4.16}\]
After some calculations, the standard AIM form of Eq.(3.22) equipped with the boundary condition (4.11) is found as below
\[\frac{\mathrm{d}^{2}\tilde{\varphi}}{\mathrm{d}\xi^{2}}=\lambda_{0}(\xi) \frac{\mathrm{d}\tilde{\varphi}}{\mathrm{d}\xi}+s_{0}(\xi)\tilde{\varphi}\,, \tag{4.17}\]
where
\[\lambda_{0}(\xi)=-\frac{2\kappa\xi(\rho+1)-i(\xi-1)\omega}{\kappa(\xi-1)\xi} -\frac{g(\xi)(r_{+}-r_{-})}{(\xi-1)^{2}f(\xi)}\,, \tag{4.18}\]
and
\[s_{0}(\xi) = \frac{1}{4\kappa^{2}(\xi-1)^{4}\xi^{2}f(\xi)^{2}}\Bigg{\{}-2 \kappa(\xi-1)\xi f(\xi)g(\xi)(r_{+}-r_{-})\Big{[}2\kappa\xi\rho-i(\xi-1)\omega \Big{]} \tag{4.19}\] \[+(\xi-1)^{2}f^{2}(\xi)\Big{[}-4\kappa^{2}\xi^{2}\rho(\rho+1)+2i \kappa(\xi-1)\omega(2\xi\rho+\xi+1)+(\xi-1)^{2}\omega^{2}\Big{]}\] \[-4\kappa^{2}\xi^{2}(r_{+}-r_{-})^{2}\Big{[}\omega^{2}-V_{\text{ eff}}(\xi)\Big{]}\Bigg{\}}\,,\]
with \(\kappa=f^{\prime}(r_{+})/2\) and
\[g(\xi)\equiv f^{\prime}(r)|_{r=(r_{+}-\xi r_{-})/(1-\xi)}\,. \tag{4.20}\]
These equations are now in the standard form for AIM calculation, and we can use the standard AIM treatment to derive the QNM frequencies.
The QNM frequencies depend on six physical parameters, namely, the spacetime dimension \(n\), the coupling constant \(\alpha\), the black hole mass parameter \(\mu\), the black hole charge parameter \(q\), and the "quantum numbers" \(l\) and \(\gamma\). We will demonstrate how these six parameters influence the QNM frequencies later. Besides, the numerical results also depend on two nonphysical parameters, the iteration order and the expanding position \(\xi_{0}\) in the AIM. Before going into the discussion of the physical parameters, we first discuss these two nonphysical parameters.
In Fig.1, we illustrate how the iteration order influences the numerical result. This figure shows the numerical results for iteration orders ranging from 1st to 50th, with the parameter choices indicated in the figure. The colors of the points correspond to the iteration order, as shown in the legend bar. As the iteration order increases, the physical frequencies, which are the physical roots of Eq. (4.10), repeatedly appear in the results of each iteration order. We expect that the precision of the numerical result will also increase with increasing iteration order.
Similar to the WKB method, we can use the difference between adjacent iteration orders to estimate the precision of the QNM frequencies we obtain [30]. Here, we use the variance of the results from the highest iteration orders as the uncertainty of the QNM frequencies. We demonstrate this estimation in Fig.2. In this plot, we show the mean value and the variance for the \(n=1\) QNM
frequency under the same parameter choice with Fig.1. The blue points are from the highest \(11\) iteration orders, and the orange point is their mean value. The light and deep yellow regions then indicate the \(2\sigma\) and \(1\sigma\) region, where \(\sigma\) is the variance for the real and imaginary parts of the blue points, given by
\[\sigma=\sqrt{<\text{re}^{2}(\omega)>+<\text{im}^{2}(\omega)>-<\text{re}(\omega)> ^{2}-<\text{im}(\omega)>^{2}}, \tag{4.21}\]
where \(<*>\) means the average value of \(*\) for the highest \(11\) iteration orders.
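In code, the estimate (4.21) is a one-line computation on the complex frequencies returned by the highest iteration orders; the sketch below assumes these are collected in an array `freqs` (a placeholder name).

```python
# The uncertainty estimate (4.21) for a set of complex QNM frequencies.
import numpy as np

def aim_sigma(freqs):
    w = np.asarray(freqs, dtype=complex)
    re, im = w.real, w.imag
    var = (re**2).mean() + (im**2).mean() - re.mean()**2 - im.mean()**2
    return np.sqrt(var)

# e.g. for the n = 1 mode: sigma = aim_sigma(results_n1[-11:])
```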
On the other hand, the precision of AIM depends heavily on the expanding position \(\xi_{0}\in(0,1)\). For a good expanding position the result is more accurate, while for a bad one the precision is much worse. This is illustrated by Fig.1 and Fig.3. In Fig.3, we perform the AIM calculation with the same physical parameters as in Fig.1, but expand the equation at \(\xi_{0}=0.5\) instead of \(\xi_{0}=0.4125\). From the left panel, we find that the spot for the \(n=1\) point is much larger, and we cannot read off the characteristic frequency for the overtone number \(n=2\) from the figure. This is further shown in the right panel: the variance of the highest \(11\) iteration orders is more than about \(10\) times larger than that for \(\xi_{0}=0.4125\). These observations suggest that \(\xi_{0}=0.5\) is not a good expanding position.
From the analysis above, we see the importance of choosing a proper expanding position to find the characteristic frequencies. However, although there are suggested expanding positions for the case where the effective potential has a maximum [51], there is so far no universal method to derive the proper expanding position theoretically for a given AdS-like system. To overcome this difficulty, we conduct the following analysis. In our calculation, we choose the expanding position by scanning all possibilities on the interval \((0,1)\) and then selecting the one that minimizes the variance mentioned above. The variances for the \(n=0\), \(n=1\) and \(n=2\) QNM frequencies with respect to different expanding positions are shown in Fig.4. From these plots, we make three observations:
1. For fixed overtone number \(n\), the best expanding position does not change markedly with the iteration order.
Figure 1: The AIM numerical results for iteration orders from the \(1\)st to the \(50\)th. The parameter choice is as shown in the figure. The horizontal axis stands for the real part of the frequencies, while the vertical axis stands for the imaginary part. Points of different colors come from different iteration orders, as indicated by the legend bar on the right.
Figure 2: The variance estimate for QNM frequencies with overtone \(n=1\). The blue points are the results for the \(n=1\) frequencies from the highest \(11\) iteration orders (the \(40\)-th order to \(50\)-th order, here), and the orange point stands for the mean value of them. The light and deep yellow regions indicate \(2\sigma\) and \(1\sigma\) regions, respectively.
2. The QNM frequency with lower \(n\) converges at lower iteration order, and has smaller variance.
3. The best expanding position differs slightly for different overtone numbers \(n\). However, the choice of \(\xi_{0}\) which works best for a higher overtone number \(n\) also works fairly well for the lower ones.
We then demonstrate how the expanding position impacts the calculated QNM frequencies in Fig.5. These figures show the mean value of the QNM frequencies with respect to the expanding position. The horizontal axis stands for the expanding position, while the vertical axis stands for the mean value. The six panels are for \(n=0\), \(n=1\) and \(n=2\) from left to right; the upper panels show the real part, while the bottom panels show the imaginary part. From these figures, we see that near the expanding position with the minimal variance, the mean value of the QNM frequencies does not change greatly with the expanding position, and the results given by different orders coincide with each other quite well. This relationship between the variance and the mean value of the AIM results confirms the variance as a good indicator for choosing expanding positions.
For a given set of physical parameters, we want to find the expanding position that works best for the \(n=0\), \(n=1\) and \(n=2\) QNM frequencies simultaneously. Based on the three observations above, we choose the expanding position with the following two-step method (a code sketch is given below). In the first step, we scan all possible \(\xi_{0}\) from \(0\) to \(1\) at iteration order \(20\) and find the value of \(\xi_{0}\) that minimizes the variance of the overtone number \(n=1\) QNM. This provides a rough estimate of the expanding position and a proper region around it. In the second step, we use a bisection-type search in this region to find a refined expanding position that minimizes the variance of the overtone number \(n=2\) QNM at iteration order \(30\). It should be noted that at iteration order \(20\) the variance is averaged over the neighbouring \(4\) points, and at iteration order \(30\) over the neighbouring \(6\) points. After fixing the expanding position, we use it for the higher-order calculation at iteration order \(50\), where the variance is averaged over the neighbouring \(11\) points.
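A sketch of this two-step selection follows; `variance_at(xi0, order, n)` is a placeholder for a routine returning the neighbour-averaged \(\sigma\) of Eq.(4.21) for overtone \(n\), and the refinement is written as a ternary-search variant of the bisection step, since we are minimizing rather than root-finding.

```python
# Two-step choice of the expanding position xi0 (coarse scan, then refine).
import numpy as np

def choose_xi0(variance_at, num=97, width=0.05, steps=10):
    # step 1: coarse scan at iteration order 20 on the n = 1 variance
    grid = np.linspace(0.01, 0.99, num)
    x0 = grid[int(np.argmin([variance_at(x, 20, 1) for x in grid]))]
    # step 2: shrink an interval around x0 at order 30 on the n = 2 variance
    a, b = max(x0 - width, 0.01), min(x0 + width, 0.99)
    for _ in range(steps):
        m1, m2 = a + (b - a) / 3.0, b - (b - a) / 3.0
        if variance_at(m1, 30, 2) < variance_at(m2, 30, 2):
            b = m2
        else:
            a = m1
    return 0.5 * (a + b)
```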
## V Time-domain analysis
In this section, we consider the numerical evolution of an initial wave packet in order to investigate the contribution of all modes. We rewrite the wavelike equation (3.22) without imposing the stationary ansatz (\(\Psi\sim e^{-i\omega t}\)); i.e., the equation for \(\Psi(t,r)\) is
Figure 4: These figures show how the variances of the QNM frequencies, defined in (4.21), change with the expanding position. The horizontal axis stands for the expanding position, while the vertical axis gives the value of the variance. The three panels are for \(n=0\), \(n=1\) and \(n=2\), respectively.
Figure 3: The AIM calculation result with the parameters indicated in the figure. The physical parameters are unchanged, while the expanding position is chosen to be \(\xi_{0}=0.5\). The left panel shows the numerical results for different orders, and the right panel shows the mean value and variance of the corresponding \(n=1\) QNM frequency.
given by
\[-\frac{\partial^{2}\Psi}{\partial t^{2}}+\frac{\partial^{2}\Psi}{\partial r_{ \star}^{2}}-V_{\text{eff}}(r)\Psi=0\,, \tag{5.1}\]
where the effective potential is expressed as (3.23). The technique for integrating the above wave equation in the time domain was developed by Gundlach, Price, and Pullin [52]. In terms of \(t\) and \(r_{\star}\), we introduce null coordinates \(u=t-r_{\star}\) and \(v=t+r_{\star}\) so that the black hole horizon \(r=r_{+}\) is located at \(u=+\infty\). In these coordinates, Eq.(5.1) is written as
\[-4\frac{\partial^{2}}{\partial u\partial v}\Psi(u,v)=V_{\text{eff}}(r)\Psi(u,v )\,, \tag{5.2}\]
where \(r\) can be determined by inverting the relation \(r_{\star}(r)=(v-u)/2\), because of the monotonicity of the relation between \(r\) and \(r_{\star}\).
The two-dimensional wave equation (5.2) can be integrated numerically, using the finite difference method suggested in Refs.[53; 52; 31]. To be specific, Eq.(5.2) can be discretized as
\[\Psi(N)=\Psi(E)+\Psi(W)-\Psi(S)-h^{2}V_{\text{eff}}\Big{(}\frac{v_{N}+v_{W}-u _{N}-u_{E}}{4}\Big{)}\frac{\Psi(E)+\Psi(W)}{8}+\Theta(h^{4})\,, \tag{5.3}\]
where \(S=(u,v)\), \(W=(u+h,v)\), \(E=(u,v+h)\), \(N=(u+h,v+h)\). While the above-described integration scheme is efficient for asymptotically flat or de Sitter black holes, for asymptotically AdS black holes such as ours its convergence is too slow [16]. An alternative integration scheme, put forward in [32], is given by
\[\Big{[}1+\frac{h^{2}}{16}V_{\text{eff}}(S)\Big{]}\Psi(N)=\Psi(E)+\Psi(W)-\Psi( S)-\frac{h^{2}}{16}\Big{[}V_{\text{eff}}(S)\Psi(S)+V_{\text{eff}}(E)\Psi(E)+V_{ \text{eff}}(W)\Psi(W)\Big{]}+\Theta(h^{4})\,. \tag{5.4}\]
This integration scheme is more stable, and it is the one adopted in this paper; its validity can be verified by a Taylor expansion at the center of the square. Considering that the behavior of the wave function is not sensitive to the choice of initial data, we set \(\Psi(u,v=0)=0\) and use a pulse as the initial perturbation,
\[\Psi(u=0,v)=A\Big{(}\frac{v-v_{1}}{v_{2}-v_{1}}\Big{)}^{4}\Big{(}1-\frac{v-v_ {1}}{v_{2}-v_{1}}\Big{)}^{4} \tag{5.5}\]
if \(v\in[v_{1},v_{2}]\), and \(\Psi(u=0,v)=0\) otherwise. The fourth power is used to ensure that the initial value is smooth at \(v_{1}\) and \(v_{2}\). The symbol \(A\) refers to the initial amplitude of the pulse.
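For concreteness, one update of the stable stencil (5.4) and the pulse (5.5) can be sketched as follows; the default pulse parameters are the ones used later in Fig.8, and the potential values at the stencil points are passed in explicitly.

```python
# One step of the scheme (5.4); psi_* and V_* live at the points S, E, W.
def step(psi_S, psi_E, psi_W, V_S, V_E, V_W, h):
    rhs = (psi_E + psi_W - psi_S
           - h**2 / 16.0 * (V_S * psi_S + V_E * psi_E + V_W * psi_W))
    return rhs / (1.0 + h**2 / 16.0 * V_S)

# The initial pulse (5.5), vanishing outside [v1, v2].
def pulse(v, A=20.0, v1=-100.0, v2=-80.0):
    if v1 <= v <= v2:
        x = (v - v1) / (v2 - v1)
        return A * x**4 * (1.0 - x)**4
    return 0.0
```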
Figure 5: These figures show how the mean values of the QNM frequencies change with the expanding position. The horizontal axis stands for the expanding position, while the vertical axis gives the mean value. The six panels are for \(n=0\), \(n=1\) and \(n=2\), from left to right. The upper panels show the real part, while the bottom panels show the imaginary part.
First, according to the definition of tortoise coordinates, we have
\[r_{\star}(r)=\int_{r_{\epsilon}}^{r}\frac{\mathrm{d}r^{\prime}}{f(r^{\prime})}\,, \tag{5.6}\]
where \(r_{\epsilon}\) is chosen as \(r_{\epsilon}=r_{+}+\epsilon\) such that \(r_{\star}(r_{\epsilon})=0\) and \(\epsilon\) is a positive constant that can be given arbitrarily in principle. Hence, the above integral can be worked out numerically although the primitive function of \(1/f(r)\) cannot be expressed as an elementary function. It is found that when \(r\to+\infty\), \(r_{\star}\) tends to a finite constant denoted as \(r_{\star\text{max}}\) given by
\[r_{\star\text{max}}=\int_{r_{\epsilon}}^{+\infty}\frac{\mathrm{d}r^{\prime}}{f (r^{\prime})}\,, \tag{5.7}\]
which is determined by \(\epsilon\), and \(r_{\star}\to-\infty\) as \(r\to r_{+}\). It is worth noting that our initial condition (5.5) vanishes identically at \(v=2r_{\star\text{max}}\) if \(-v_{\text{max}}<v_{1}<v_{2}<0\) is selected. Now, we build the numerical grid of Fig.6. In Fig.6, the black spots represent the initial grid points, the stars represent the grid points to be calculated, and the cross products represent the forbidden region. Provided that \(\epsilon\) is given, we place \(N_{1}\) grid points on the interval \([0,2r_{\star\text{max}}]\), where \(r_{\star\text{max}}\) is given by Eq.(5.7); then \(h=\mathrm{d}u=\mathrm{d}v=2r_{\star\text{max}}/(N_{1}-1)\). We place \(N_{2}\) grid points on the interval \([-v_{\text{max}},0]\), where \(v_{\text{max}}=(N_{2}-1)h\). For simplicity, \(u_{\text{max}}\) is taken to be \(u_{\text{max}}=(N-1)h=(N_{1}+N_{2}-1)h\).
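The tortoise coordinate (5.6)-(5.7) and its inversion, needed below to evaluate the potential on the grid, can be sketched as follows; here scipy's `quad` and `brentq` stand in for the numerical integration and for MATHEMATICA's **FindRoot**, and the finite cutoff `r_cut` replacing the upper limit \(+\infty\) is an assumption justified by the \(\sim 1/r^{2}\) falloff of \(1/f\).

```python
# r_star(r) from Eq.(5.6) and its numerical inversion; f is the metric
# function and r_eps = r_plus + eps, so that r_star(r_eps) = 0.
from scipy.integrate import quad
from scipy.optimize import brentq

def r_star(r, f, r_eps):
    val, _ = quad(lambda rp: 1.0 / f(rp), r_eps, r, limit=200)
    return val

def r_of_rstar(rs, f, r_eps, r_plus, r_cut=1e8):
    # r_star is monotonic, so bracket between the near-horizon region
    # (where r_star -> -infinity) and the large-r cutoff
    lo = r_plus + 1e-10 * (r_eps - r_plus)
    return brentq(lambda r: r_star(r, f, r_eps) - rs, lo, r_cut)
```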
The evaluation of the potential \(V_{\text{eff}}(r)\) is the most challenging part of the computation, as it introduces additional numerical errors. We use the method proposed in [52; 53] to overcome this. The potential is evaluated at the central radius \(r_{c}\) satisfying
\[r_{\star}(r_{c})=\frac{v_{N}+v_{W}-u_{N}-u_{E}}{4}=\frac{v_{S}-u_{S}}{2}\,. \tag{5.8}\]
From Fig.6, it is easy to see that there are \(2N-1\) points whose \(r\) must be computed in order to obtain the potential \(V_{\text{eff}}(r)\). These points all lie on the line segment between the point \((-v_{\text{max}},u_{\text{max}})\) and the point \((2r_{\star\text{max}},0)\). We number these \(2N-1\) points so that \((-v_{\text{max}},u_{\text{max}})\) is the first one and \((2r_{\star\text{max}},0)\) is the last one (including the centers of the squares). Since \(r_{\star}(r_{\epsilon})=0\), we solve for \(r\) with the built-in function **FindRoot** in MATHEMATICA, using \(r_{\epsilon}\) as the starting point. After evaluating \(r\) along the line segment, we use Eq.(3.23) to derive the value of \(V_{\text{eff}}(r)\) along the line segment and number it in the same order as \(r\). Then, the values at the stars in Fig.6 are established as follows. Define \(\Psi(u_{j},v_{k})\equiv\Psi_{j}^{k}\); since \(\Psi(u,v=0)=0\), we have
\[\Psi(u_{j},v=0)\equiv\Psi(j,1)\equiv\Psi_{j}^{1}=0\,, \tag{5.9}\]
Figure 6: The diagram of the numerical grid; the right-angled trapezoidal domain is the one of interest. The red dots refer to the east (E), the south (S), the west (W) and the north (N) points, respectively. From the boundary condition (4.11), we have \(\Psi=0\) on the sloping waist of the right-angled trapezoid.
for \(j=1,2,\cdots,N-1,N\). From Eq.(5.4), for \(k=2,\cdots,N\), we have
\[\Psi_{j}^{k} = \Big{[}1+\frac{h^{2}}{16}V_{\text{eff}}(k-j+N)\Big{]}^{-1}\Bigg{\{} \Psi_{j-1}^{k}+\Psi_{j}^{k-1}-\Psi_{j-1}^{k-1}-\frac{h^{2}}{16}\Big{[}V_{\text{ eff}}(k-j+N)\Psi_{j-1}^{k-1} \tag{5.10}\] \[+V_{\text{eff}}(k-j+N+1)\Psi_{j-1}^{k}+V_{\text{eff}}(k-j+N-1)\Psi _{j}^{k-1}\Big{]}\Bigg{\}}\,,\quad j=2,\cdots,N\,.\]
For \(k=N+1,\cdots,2N-1\), we have
\[\Psi_{j}^{k} = \Big{[}1+\frac{h^{2}}{16}V_{\text{eff}}(k-j+N)\Big{]}^{-1}\Bigg{\{} \Psi_{j-1}^{k}+\Psi_{j}^{k-1}-\Psi_{j-1}^{k-1}-\frac{h^{2}}{16}\Big{[}V_{ \text{eff}}(k-j+N)\Psi_{j-1}^{k-1} \tag{5.11}\] \[+V_{\text{eff}}(k-j+N+1)\Psi_{j-1}^{k}+V_{\text{eff}}(k-j+N-1) \Psi_{j}^{k-1}\Big{]}\Bigg{\}}\,,\quad j=2+k-N,\cdots,N\,.\]
There is an issue when the term \(V_{\text{eff}}(E)\Psi(E)\) is computed on the grid points of the sloping waist of the right-angled trapezoid. Since \(\Psi=0\) on this sloping waist, one can assign \(V_{\text{eff}}\) any value there without affecting the results. For simplicity, \(V_{\text{eff}}(2N-1)=0\) is adopted in the above numerical scheme. After the integration is completed, the value \(\Psi(u_{\text{max}},v)\) is extracted, where \(u_{\text{max}}\) is the maximum value of \(u\) on the numerical grid.
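Putting Eqs.(5.9)-(5.11) together, the whole trapezoidal grid can be filled as sketched below; the 1-based layout mirrors the indexing of the text, `V[1..2N-1]` holds the potential along the line segment of Fig.6 (with the convention \(V_{\text{eff}}(2N-1)=0\)), and the assumption that the initial pulse data occupy the first \(u\)-row is ours.

```python
# Grid recursion (5.9)-(5.11); init[k] is the pulse (5.5) on the first row.
import numpy as np

def integrate_grid(N, h, V, init):
    psi = np.zeros((N + 1, 2 * N))        # psi[j, k] ~ Psi_j^k, 1-based
    psi[1, 1:] = init[1:2 * N]            # initial data on the j = 1 row
    psi[:, 1] = 0.0                       # Eq.(5.9): Psi(u_j, v = 0) = 0
    c = h**2 / 16.0
    for k in range(2, 2 * N):
        j_lo = 2 if k <= N else 2 + k - N # Eq.(5.10) vs Eq.(5.11)
        for j in range(j_lo, N + 1):
            m = k - j + N                 # potential index at the point S
            num = (psi[j - 1, k] + psi[j, k - 1] - psi[j - 1, k - 1]
                   - c * (V[m] * psi[j - 1, k - 1]
                          + V[m + 1] * psi[j - 1, k]
                          + V[m - 1] * psi[j, k - 1]))
            psi[j, k] = num / (1.0 + c * V[m])
    return psi[N]                         # the extracted signal Psi(u_max, v)
```

Zero-initializing the array automatically enforces \(\Psi=0\) on the sloping waist, since those entries are never overwritten.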
Now, we give an example to demonstrate how to implement the above algorithm and obtain the corresponding waveform under a specific set of parameters. For the metric function \(f(r)\), we choose \(n=10\), \(\alpha=30\), \(\mu=1.5\) and \(q=-0.5\). The inner horizon is \(r_{-}=1.8991\) and the event horizon is \(r_{+}=27.9251\). Choosing \(\epsilon=0.9973\), the value of \(r_{\star\text{max}}\) follows from Eq.(5.7); the resulting waveform of \(\Psi(u_{\text{max}},v)\) is shown in Fig.8.
## VI Extraction of the QNM frequencies: the Prony and Kumaresan-Tufts methods

To extract the characteristic frequencies from the time-domain data, the signal sampled at equally spaced times with interval \(T\) is modeled as a superposition of \(p\) damped sinusoids,

\[x[n]=\sum_{k=1}^{p}h_{k}z_{k}^{n-1}\,, \tag{6.1}\]
where
\[h_{k} = A_{k}e^{i\varphi_{k}}\,, \tag{6.2}\] \[z_{k} = e^{(\alpha_{k}+i\omega_{k})T}\,. \tag{6.3}\]
The complex parameters \(\{h_{k},z_{k}\}\) and the number \(p\) of damped sinusoids are to be determined. For \(1\leq n\leq p\) we can rewrite Eq.(6.1) in matrix form as
\[\left[\begin{array}{cccc}z_{1}^{0}&z_{2}^{0}&\cdots&z_{p}^{0}\\ z_{1}^{1}&z_{2}^{1}&\cdots&z_{p}^{1}\\ \vdots&\vdots&\ddots&\vdots\\ z_{1}^{p-1}&z_{2}^{p-1}&\cdots&z_{p}^{p-1}\end{array}\right]\left[\begin{array} []{c}h_{1}\\ h_{2}\\ \vdots\\ h_{p}\end{array}\right]=\left[\begin{array}{c}x[1]\\ x[2]\\ \vdots\\ x[p]\end{array}\right]\,. \tag{6.4}\]
In essence, Prony's method is a technique that allows for the determination of the \(z_{k}\)'s without requiring nonlinear minimization. Define a polynomial \({\bf A}(z)\) of degree \(p\) which has the \(z_{k}\)'s as its roots:
\[{\bf A}(z)=\prod_{k=1}^{p}(z-z_{k})\equiv\sum_{m=0}^{p}a[m]z^{p-m}\,, \tag{6.5}\]
where \(a[0]=1\). It can be shown that the \(a[k]\)'s are determined from the following matrix equation
\[\left[\begin{array}{cccc}x[p]&x[p-1]&\cdots&x[1]\\ x[p+1]&x[p]&\cdots&x[2]\\ \vdots&\vdots&\ddots&\vdots\\ x[2p-1]&x[2p-2]&\cdots&x[p]\end{array}\right]\left[\begin{array}{c}a[1]\\ a[2]\\ \vdots\\ a[p]\end{array}\right]=-\left[\begin{array}{c}x[p+1]\\ x[p+2]\\ \vdots\\ x[2p]\end{array}\right]\,. \tag{6.6}\]
Then we aim to determine the roots \(z_{k}\) of the polynomial \({\bf A}(z)\) [see Eq.(6.5)]. The damping and frequency are obtained through
\[\alpha_{k} = \log|z_{k}|/T\,,\] \[\omega_{k} = \arctan[{\sf Im}(z_{k})/{\sf Re}(z_{k})]/T\,. \tag{6.7}\]
Finally, the amplitudes \(A_{k}\) and phases \(\varphi_{k}\) are found
\[A_{k} = |h_{k}|\,,\] \[\varphi_{k} = \arctan[{\sf Im}(h_{k})/{\sf Re}(h_{k})]\,. \tag{6.8}\]
Figure 8: On the left panel, the waveform of \(\Psi(u_{\rm max},v)\) is given on a logarithmic scale. The right panel shows the 3D waveform of \(\Psi(u,v)\) on a logarithmic scale. We choose \(n=10\), \(\alpha=30\), \(\mu=1.5\), \(q=-0.5\), \(\epsilon=0.9973\), \(l=6\) and \(\gamma=0\). In this case, \(A=20\), \(v_{1}=-100\) and \(v_{2}=-80\) are selected as the parameters of the initial wave packet.
For most situations, there are more data points than exponential parameters: \(N>2p\). One can then use the so-called "least-squares Prony method" [55] to obtain the \(a[k]\)'s from the data and then determine \(z_{k}\), \(\alpha_{k}\), \(\omega_{k}\), \(A_{k}\) and \(\varphi_{k}\) from Eq.(6.5), Eq.(6.7) and Eq.(6.8).
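A compact Python stand-in for this least-squares step is sketched below; it solves the prediction equations for the \(a[k]\)'s, extracts the roots of \(\mathbf{A}(z)\), converts them through Eq.(6.7), and recovers the \(h_{k}\) from the Vandermonde system (6.4) by least squares.

```python
# Least-squares Prony: x is the sampled signal, p the model order, T the
# sampling interval; returns roots, dampings, frequencies, amplitudes, phases.
import numpy as np

def prony(x, p, T):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    M = np.array([[x[n - m] for m in range(1, p + 1)] for n in range(p, N)])
    a, *_ = np.linalg.lstsq(M, -x[p:], rcond=None)       # prediction coeffs
    z = np.roots(np.concatenate(([1.0 + 0j], a)))        # zeros of A(z)
    alpha = np.log(np.abs(z)) / T                        # Eq.(6.7)
    omega = np.angle(z) / T
    V = np.vander(z, N, increasing=True).T               # z_k^{n-1}, Eq.(6.4)
    h, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, alpha, omega, np.abs(h), np.angle(h)       # Eq.(6.8)
```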
Unfortunately, the results of measurements and numerical simulations inevitably contain noise, which makes the original and least-squares Prony methods no longer applicable. By introducing another characteristic polynomial \(\mathbf{B}(z)\), an improved method, the Kumaresan-Tufts (KT) method, was proposed [54; 55; 56]. The coefficients \(a[k]\) of \(\mathbf{A}(z)\) are solutions of the forward linear prediction equation
\[\sum_{m=0}^{p}a[m]x[n-m]=0\,. \tag{6.9}\]
These same exponential waves can be generated in reverse time by the backward linear predictor
\[\sum_{m=0}^{p}b[m]x[n-p-m]=0\,, \tag{6.10}\]
where \(b[0]=1\). The characteristic polynomial \(\mathbf{B}(z)\) is constructed as
\[\mathbf{B}(z)=\sum_{m=0}^{p}b^{\star}[m]z^{p-m}\,, \tag{6.11}\]
in which the roots are \(z_{k}=e^{-s_{k}^{\star}}\) with \(s_{k}=(\alpha_{k}+i\omega_{k})T\) and here \(\star\) represents the complex conjugation.
Suppose the measured signal contains additional Gaussian white noise. The noise shifts the estimated zeros of the polynomials away from the true ones, and hence causes the real and imaginary parts of the characteristic-frequency estimates to deviate from the true values. By searching for a number of exponential components \(L>p\), where \(p\) is the actual number of exponential waves in the signal and \(L\) is the prediction order of the model, the bias can be significantly reduced in an empirical manner [54; 55; 56]. However, this procedure produces extra zeros due to noise. Fortunately, these can be statistically separated by monitoring the zeros of the polynomials \(\mathbf{A}(z)\) and \(\mathbf{B}(z)\) together with the complex conjugates of the reciprocals of these zeros. Singular value decomposition (SVD) provides the separation. In practice, \(p\) in Eq.(6.9) and Eq.(6.10) is replaced by \(L\). We obtain two linear equations with respect to \(a[k]\) and \(b[k]\), i.e.,
\[\begin{bmatrix}x[L]&x[L-1]&\cdots&x[1]\\ x[L+1]&x[L]&\cdots&x[2]\\ \vdots&\vdots&\ddots&\vdots\\ x[N-1]&x[N-2]&\cdots&x[N-L]\end{bmatrix}\begin{bmatrix}a[1]\\ a[2]\\ \vdots\\ a[L]\end{bmatrix}=-\begin{bmatrix}x[L+1]\\ x[L+2]\\ \vdots\\ x[N]\end{bmatrix} \tag{6.12}\]
and
\[\begin{bmatrix}x[2]&x[3]&\cdots&x[L+1]\\ x[3]&x[4]&\cdots&x[L+2]\\ \vdots&\vdots&\ddots&\vdots\\ x[N-L+1]&x[N-L+2]&\cdots&x[N]\end{bmatrix}\begin{bmatrix}b[1]\\ b[2]\\ \vdots\\ b[L]\end{bmatrix}=-\begin{bmatrix}x[1]\\ x[2]\\ \vdots\\ x[N-L]\end{bmatrix}\,. \tag{6.13}\]
We express \(\mathbf{X}\), the coefficient matrix of Eq.(6.12) or Eq.(6.13), as
\[\mathbf{X}=\mathbf{U}\mathbf{S}\mathbf{V}^{H}\,, \tag{6.14}\]
where \(\mathbf{U}\) is a \((N-L)\times(N-L)\) matrix, \(\mathbf{S}\) is a \((N-L)\times L\) matrix and \(\mathbf{V}\) is a \(L\times L\) matrix, with the superscript \(H\) standing for the Hermitian conjugation. The singular values on the diagonal \((s_{1},\cdots,s_{p},s_{p+1},\cdots,s_{L})\) are arranged in decreasing order. Noise is reduced by considering the reduced-rank approximation
\[\hat{\mathbf{X}}=\mathbf{U}\hat{\mathbf{S}}\mathbf{V}^{H} \tag{6.15}\]
with
\[\hat{\mathbf{S}}=\begin{bmatrix}\hat{\mathbf{S}}_{p}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}_{(N-L)\times L}\,, \tag{6.16}\]
where \(\hat{\mathbf{S}}_{p}\) is the top-left \(p\times p\) block of \(\mathbf{S}\). A better estimate for the coefficients \(a[k]\) and \(b[k]\) is then
\[\hat{\mathbf{a}}=-\hat{\mathbf{X}}^{+}\mathbf{x}\,, \tag{6.17}\]
where \(\hat{\mathbf{X}}^{+}\) is the Moore-Penrose inverse of \(\hat{\mathbf{X}}\) and \(\hat{\mathbf{a}}\) stands for the \(a[k]\)'s or \(b[k]\)'s. This is the basic idea of the Kumaresan-Tufts method [54; 55; 56]. It should be mentioned that the SVD decomposition and the Moore-Penrose inverse are both built-in functions in MATLAB, written **svd** and **pinv**. As for the KT method in practice, we choose \(L=N/3\) in order to minimize the variance [54], where \(N\) is the number of samples.
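A Python sketch of the SVD-cleaned forward predictor, with numpy's `svd` and `pinv` playing the role of the MATLAB built-ins, reads:

```python
# KT step for the forward predictor: Eq.(6.12) with prediction order L > p,
# rank-p truncation (6.15)-(6.16), and Moore-Penrose solution (6.17).
import numpy as np

def kt_forward_coeffs(x, p, L):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    X = np.array([[x[L - 1 + n - m] for m in range(L)] for n in range(N - L)])
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s[p:] = 0.0                                   # keep the p largest values
    X_hat = (U * s) @ Vh                          # Eq.(6.15)
    return -np.linalg.pinv(X_hat) @ x[L:]         # a[1..L], Eq.(6.17)
```

The backward coefficients \(b[k]\) are obtained analogously from Eq.(6.13), and the physical roots are then selected as the conjugate-reciprocal pairs shared by \(\mathbf{A}(z)\) and \(\mathbf{B}(z)\), as described below.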
Now, taking the parameters given in Fig.8 as an example, we use the KT method to extract the quasi-normal frequencies. First, we select an appropriate sampling interval by inspecting the data in the left panel of Fig.8. Of the entire interval of \(v\), we take the portion \([0.53,0.88]\) in proportional terms to extract the characteristic frequencies: \(0.53\) means that we start extracting at \(v_{\text{initial}}=132.8815\) and \(0.88\) means that we end at \(v_{\text{final}}=298.9833\). Here \(v_{\text{initial}}=0.53\times(v_{\text{end}}-v_{\text{start}})+v_{\text{start}}\) and \(v_{\text{final}}=0.88\times(v_{\text{end}}-v_{\text{start}})+v_{\text{start}}\), where \(v_{\text{start}}\) is the time when the numerical simulation starts and \(v_{\text{end}}\) is the time when it ends. It is observed that the ringdown begins at around \(v_{\text{initial}}=132.8815\). For this example, \(N=1200\) is the number of samples and the prediction order is \(L=400\), as mentioned previously. We obtain the \(a[k]\)'s and \(b[k]\)'s from Eq.(6.12) and Eq.(6.13), so the roots of the polynomials \(\mathbf{A}(z)\) and \(\mathbf{B}(z)\) can be found. These roots are irregularly distributed on both sides of the unit circle, as shown in Fig.9. At this stage, the \(a[k]\)'s and \(b[k]\)'s have not yet been modified by the SVD method.
Then, choosing the number of physical exponential waves as \(p=5\) and applying the SVD method, we obtain the new \(a[k]\)'s and \(b[k]\)'s [see Eq.(6.17)] and then the new roots of the polynomials \(\mathbf{A}(z)\) and \(\mathbf{B}(z)\), which are shown in Fig.10. To find the physical frequencies, we also need the complex conjugates of the reciprocals of these roots, which are also shown in Fig.10. After applying the SVD method, almost all of the red asterisks coincide with blue asterisks, except five red asterisks which coincide with blue five-pointed stars. As explained previously, these five points correspond to physical frequencies, while the other points are not physical and are considered to be noise. The theoretical support for this statement is that, for both polynomials \(\mathbf{A}(z)\) and \(\mathbf{B}(z)\), zeros due to the noise tend to stay within the unit circle, whereas the true zeros due to the exponential signal form complex conjugate pairs inside and outside the unit circle. This follows from the fact that the statistics of a stationary random process do not change under time reversal [54].
Following this procedure, we have five physical frequencies in all. Substituting them into Eq.(6.1), the \(h_{k}\) are obtained from the collected samples \(x[i]\). Finally, we obtain the damping and the frequency through Eq.(6.7), where the sampling interval is \(T=0.1305\) in this example. The physical frequencies are displayed in Fig.11.
Furthermore, the amplitudes \(A_{k}\) and the phases \(\varphi_{k}\) are determined from Eq.(6.8). The comparison between the results of the numerical calculation and those of the fit is shown in Fig.12. The fit is good, and we find that the physical frequencies derived in this section are compatible with those from the AIM. This will be explained in more detail in Sec.VII.
Last but not least, determining the beginning of the quasi-normal ringing is somewhat ambiguous since, on the one hand, the quasi-normal stage cannot be defined exactly. On the other hand, higher overtones damp quickly and are exponentially suppressed. As a result, they are difficult to distinguish from numerical errors within the fitting approach, making them challenging to identify [57]. In fact, we are able to calculate only two or, sometimes, three longest-lived frequencies, including the point on the imaginary axis. However, in our experience the KT method finds more characteristic frequencies than the least-squares Prony method in our model.
Figure 9: The red asterisks stand for the roots of polynomial \(\mathbf{A}(z)\). The blue asterisks stand for the roots of polynomial \(\mathbf{B}(z)\).
## VII Numerical Result and Analysis
In this section, we show some typical results. In order to illustrate how a single physical parameter affects the characteristic frequencies, we adopt the benchmark parameters: spacetime dimension \(n=10\), coupling constant \(\alpha=30\), black hole mass parameter \(\mu=1.5\), black hole charge parameter \(q=-0.5\), and "quantum numbers" \(l=6\) and \(\gamma=0\), which are the values used in Sec.IV and Sec.V. It should be noted that there is nothing special about this set of benchmark parameters.
In Fig.13, we show the first three overtones of the characteristic frequencies obtained from AIM, excluding the pure imaginary modes, which we list in the tables (see Tab.2-Tab.7). These six figures describe the relationship between the characteristic frequencies and the six parameters \(n\), \(\mu\), \(q\), \(\alpha\), \(l\) and \(\gamma\), respectively. The horizontal axis represents the real part of the frequency, while the vertical axis represents the imaginary part. The circle, rectangle and triangle markers stand for overtone numbers \(n=0\), \(n=1\) and \(n=2\), respectively.
At first glance, the relationship between the frequencies and the parameters is roughly linear for fixed overtone (the curvature of the curves is low). Since all the imaginary parts of the frequencies we obtain are negative, for convenience, in the following discussion the imaginary part of a frequency refers to the absolute value of the imaginary part unless stated otherwise.
For the dimension of spacetime \(n\), we find that the imaginary part of the frequencies does not depend significantly on the
Figure 11: These five characteristic frequencies are symmetric with respect to the imaginary axis. Their values are \(-0.0873i\), \(\pm 0.2635-0.07208i\) and \(\pm 0.3248-0.1836i\), respectively.
Figure 10: The new roots after the SVD decomposition. The red asterisks stand for the roots of the polynomial \(\mathbf{A}(z)\). The blue asterisks stand for the roots of the polynomial \(\mathbf{B}(z)\). The red five-pointed stars stand for the complex conjugates of the reciprocals of the roots of \(\mathbf{A}(z)\), and the blue five-pointed stars for those of \(\mathbf{B}(z)\). The right panel is an enlargement of the left one near the point \((1,0)\).
dimension of spacetime, while the real part decreases as the dimension increases. So the lifetimes of the QNMs do not depend remarkably on the number of extra dimensions of spacetime. For the black hole mass parameter \(\mu\), we find that both the imaginary part and the real part of the frequencies increase with the mass. Therefore, the lifetime of the QNMs decreases as the mass increases. For the black hole charge parameter \(q\), the imaginary part of the frequencies depends only weakly on the charge: both the imaginary part and the real part increase with the charge \(|q|\), so the lifetime of the QNMs decreases as \(|q|\) increases. For the Gauss-Bonnet coupling constant \(\alpha\), the imaginary part of the frequencies decreases as \(\alpha\) increases, so the lifetime of the QNMs increases with \(\alpha\). Additionally, the Gauss-Bonnet coupling constant has more impact on the imaginary part of the third overtone than on the first and second overtones. The above results illustrate the relationship between the characteristic frequencies and the four physical parameters \(n\), \(\mu\), \(q\) and \(\alpha\), which all appear in the metric function \(f(r)\).
Two further parameters, the "quantum numbers" \(l\) and \(\gamma\), can affect the frequencies. For the parameter \(l\), we find that the imaginary part of the frequencies decreases with increasing \(l\), while the real part increases. A distinctive feature is that the linear relationship between the imaginary part and the real part disappears, especially for \(l\to 0\). The lifetime of the QNMs increases with \(l\). For the other quantum parameter \(\gamma\), both the imaginary part and the real part of the frequencies decrease as \(\gamma\) increases. Interestingly, the slopes of the three lines for the three overtones are almost the same within the errors. In other words, the quantum parameter \(\gamma\) affects each overtone almost equally, while the Gauss-Bonnet coupling constant does not.
As for the pure imaginary modes, we place the first pure imaginary frequency in the right-most column of each table; these modes do not participate in the ordering of the overtones. Here, we point out some interesting findings among the results. For the dimension \(n\), the relationship between the frequency and \(n\) is not monotonic. For the mass \(\mu\), we notice that as the mass increases, the fundamental mode changes from being dominated by a non-imaginary-axis mode to being dominated by an imaginary-axis mode. For the "quantum number" \(l\), we find that when \(l\geq 7\) the pure imaginary frequency disappears, which is also confirmed by the numerical integration method.
Considering the limitations of the numerical integration method, in our analysis the AIM is the main approach to derive the QNM frequencies and the numerical integration method is an auxiliary one. In other words, we use the numerical integration method to check the consistency of the frequencies obtained by the two methods and to calculate the corresponding errors. The relative error formula used in this paper is
\[\delta=\frac{|\omega_{\text{AIM}}-\omega_{\text{NIM}}|}{|\omega_{\text{AIM}}| }\times 100\%\,, \tag{7.1}\]
where AIM refers to the asymptotic iteration method and NIM refers to the numerical integration method. A typical example is the benchmark set: spacetime dimension \(n=10\), coupling constant \(\alpha=30\), black hole mass parameter \(\mu=1.5\), black hole charge parameter \(q=-0.5\), and "quantum numbers" \(l=6\) and \(\gamma=0\). Using Eq.(7.1), we get the relative errors \(\delta_{0}=0.072\%\), \(\delta_{1}=3.6\%\) and \(\delta_{\text{pure}}=1.9\%\), where \(0\) and \(1\) refer to the overtone number and "pure" refers to the pure imaginary mode. The errors are thus within an acceptable range. The corresponding results for the other parameter sets are not shown, but they are all small, especially for the overtone number \(n=0\).
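For reference, the error formula (7.1) amounts to the two-liner below; the complex value used in the usage line is the \(n=0\) benchmark frequency from Tab.2.

```python
# Relative error (7.1) between the AIM and NIM frequencies, in percent.
def rel_err(w_aim, w_nim):
    return abs(w_aim - w_nim) / abs(w_aim) * 100.0

# e.g. rel_err(0.2637 - 0.07208j, w_nim) for the n = 0 benchmark mode
```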
Figure 12: The comparison between the results of the numerical calculation and the results of the fit. The blue curve stands for the numerical results and the orange curve stands for the fitting results.
It has been shown that one can obtain a four-dimensional wave-like equation on the manifold \(M^{4}\) by using the characteristic tensors in the extra dimensions [see Eq.(3.8) and Eq.(3.9)]. In Appendix.D, we study a toy model (the Klein-Gordon equation) to compute the QNM frequencies in the four-dimensional spacetime, so as to compare our model with this test model. The results are shown in Fig.14 with the same parameters, except for the "quantum number" \(\gamma\), which does not appear in the KG model. For the overtone number \(n=1\), the real part of the characteristic frequencies is not monotonic in the mass \(\mu\), which differs from the KK model. Combining \(n=0\) and \(n=2\), we observe that the QNM frequencies show three different behaviors with respect to the mass parameter \(\mu\).
## VIII Conclusions and discussion
In this paper, we investigate the QNMs of tensor perturbation for the Kaluza-Klein black hole in EGB gravity. The topology of the so-called Kaluza-Klein black hole is the product of a usual \(4\)-dimensional spacetime with a negative constant curvature space.
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{2}{c}{\(n=0\)} & \multicolumn{2}{c}{\(n=1\)} & \multicolumn{2}{c}{\(n=2\)} & first pure imaginary \\ \(n\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) \\ \hline
7 & 0.3407 - 0.07223 i & 3.436e-4 & 0.4174 - 0.1707 i & 3.758e-3 & - & - & - 0.06898 i & 5.593e-4 \\
8 & 0.3064 - 0.06988 i & 3.030e-4 & 0.3779 - 0.1675 i & 2.716e-3 & - & - & - 0.08114 i & 4.715e-4 \\
9 & 0.2824 - 0.07091 i & 5.328e-5 & 0.3538 - 0.1715 i & 4.615e-4 & 0.4353 - 0.2786 i & 4.363e-3 & - 0.08474 i & 1.296e-3 \\
10 & 0.2637 - 0.07208 i & 2.256e-5 & 0.3336 - 0.1733 i & 1.937e-4 & 0.4161 - 0.2830 i & 2.627e-3 & - 0.08566 i & 7.629e-4 \\
11 & 0.2485 - 0.07270 i & 2.654e-5 & 0.3167 - 0.1734 i & 2.333e-4 & 0.3962 - 0.2817 i & 2.081e-3 & - 0.08465 i & 1.058e-4 \\
12 & 0.2357 - 0.07293 i & 1.401e-5 & 0.3023 - 0.1726 i & 1.278e-4 & 0.3798 - 0.2812 i & 1.295e-3 & - 0.08328 i & 7.548e-5 \\
13 & 0.2249 - 0.07286 i & 1.997e-5 & 0.2900 - 0.1714 i & 1.818e-4 & 0.3656 - 0.2772 i & 1.563e-3 & - 0.08167 i & 1.167e-5 \\
14 & 0.2154 - 0.07261 i & 1.247e-5 & 0.2792 - 0.1698 i & 1.177e-4 & 0.3530 - 0.2748 i & 1.002e-3 & - 0.08002 i & 8.808e-6 \\
15 & 0.2072 - 0.07223 i & 1.990e-5 & 0.2697 - 0.1681 i & 1.852e-4 & 0.3419 - 0.2704 i & 1.589e-3 & - 0.07838 i & 1.528e-6 \\ \end{tabular}
\end{table}
Table 2: QNM frequencies for the dimension \(n\). The results are calculated with asymptotic iteration method of \(50\) iteration order. The "-" symbol indicates that AIM cannot predict the result with enough precision for the corresponding parameter choice.
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{2}{c}{\(n=0\)} & \multicolumn{2}{c}{\(n=1\)} & \multicolumn{2}{c}{\(n=2\)} & first pure imaginary \\ \(\mu\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) \\ \hline
1.5 & 0.2637 - 0.07208 i & 2.256e-5 & 0.3336 - 0.1733 i & 1.937e-4 & 0.4161 - 0.283 i & 2.627e-3 & - 0.08566 i & 7.629e-4 \\
2. & 0.2675 - 0.08634 i & 5.619e-6 & 0.3444 - 0.2027 i & 4.915e-5 & 0.4331 - 0.3297 i & 4.489e-4 & - 0.09916 i & 2.898e-6 \\
2.5 & 0.2712 - 0.0983 i & 1.978e-6 & 0.3549 - 0.2274 i & 2.017e-5 & 0.4510 - 0.3667 i & 1.806e-4 & - 0.1095 i & 1.451e-7 \\
3. & 0.2748 - 0.1087 i & 1.440e-6 & 0.3647 - 0.2490 i & 1.427e-5 & 0.4676 - 0.3992 i & 1.221e-4 & - 0.1177 i & 1.477e-8 \\
3.5 & 0.2782 - 0.1180 i & 5.341e-7 & 0.3740 - 0.2681 i & 5.295e-6 & 0.4830 - 0.4280 i & 4.907e-5 & - 0.1244 i & 7.514e-9 \\
4. & 0.2815 - 0.1265 i & 6.627e-7 & 0.3827 - 0.2855 i & 6.665e-6 & 0.4973 - 0.4540 i & 6.095e-5 & - 0.1301 i & 1.025e-9 \\
4.5 & 0.2847 - 0.1343 i & 3.598e-7 & 0.3910 - 0.3014 i & 3.943e-6 & 0.5108 - 0.4778 i & 3.637e-5 & - 0.1351 i & 6.371e-10 \\
5. & 0.2877 - 0.1415 i & 2.322e-7 & 0.3989 - 0.3161 i & 2.638e-6 & 0.5236 - 0.4997 i & 2.338e-5 & - 0.1395 i & 3.905e-10 \\
5.5 & 0.2906 - 0.1482 i & 3.925e-7 & 0.4064 - 0.3299 i & 4.425e-6 & 0.5357 - 0.5202 i & 4.033e-5 & - 0.1434 i & 1.511e-10 \\
6. & 0.2934 - 0.1546 i & 2.911e-7 & 0.4136 - 0.3427 i & 3.342e-6 & 0.5473 - 0.5394 i & 2.977e-5 & - 0.1470 i & 1.201e-10 \\
6.5 & 0.2961 - 0.1606 i & 2.283e-7 & 0.4205 - 0.3549 i & 2.612e-6 & 0.5583 - 0.5574 i & 2.277e-5 & - 0.1503 i & 9.093e-11 \\
7. & 0.2988 - 0.1664 i & 1.848e-7 & 0.4272 - 0.3664 i & 2.094e-6 & 0.5689 - 0.5745 i & 1.797e-5 & - 0.1534 i & 6.594e-11 \\
7.5 & 0.3013 - 0.1718 i & 1.528e-7 & 0.4336 - 0.3774 i & 1.714e-6 & 0.5791 - 0.5908 i & 1.459e-5 & - 0.1562 i & 4.540e-11 \\
8. & 0.3038 - 0.1771 i & 1.284e-7 & 0.4398 - 0.3878 i & 1.431e-6 & 0.58
\end{tabular}
\end{table}
Table 3: QNM frequencies for the mass parameter \(\mu\). The results are calculated with asymptotic iteration method of \(50\) iteration order.
The metric function \(f(r)\) of the black hole with \(k=1\) is determined by four parameters: the dimension of spacetime \(n\), the Gauss-Bonnet coupling constant \(\alpha\), the mass parameter \(\mu\) and the charge parameter \(q\). From the asymptotic expansion of the metric function, it is found that its behavior is similar to that of the Reissner-Nordstrom-anti-de Sitter spacetime, even though the Maxwell field is absent.
The establishment of the tensor perturbation for the Kaluza-Klein black hole in EGB gravity is tedious. In our first work in this series, we obtained the Kodama-Ishibashi formalism for the tensor perturbation of the theory, and a generalized master equation was given [1]. The applicability of this master equation is broad: it can be used to calculate perturbations of all tensor types for warped product spacetimes in the EGB gravity theory. Evidently, it is applicable to the spacetime studied in this paper. It should be emphasized that our perturbation equation, or the Schrodinger-like equation, is based on the equations of motion of the EGB theory, and the correction to the coefficient of the second-order covariant derivative, i.e., the term \(\sim\alpha G^{ab}\), arises naturally.
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{2}{c}{\(n=0\)} & \multicolumn{2}{c}{\(n=1\)} & \multicolumn{2}{c}{\(n=2\)} & first pure imaginary \\ \(\alpha\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) \\ \hline
10 & 0.4567 - 0.1248 i & 3.907e-5 & 0.5778 - 0.3001 i & 3.356e-4 & 0.7207 - 0.4901 i & 4.549e-3 & - 0.1484 i & 1.321e-3 \\
20 & 0.3230 - 0.08828 i & 2.763e-5 & 0.4085 - 0.2122 i & 2.373e-4 & 0.5096 - 0.3465 i & 3.217e-3 & - 0.1049 i & 9.343e-4 \\
30 & 0.2637 - 0.07208 i & 2.256e-5 & 0.3336 - 0.1733 i & 1.937e-4 & 0.4161 - 0.2830 i & 2.627e-3 & - 0.08566 i & 7.629e-4 \\
40 & 0.2284 - 0.06242 i & 1.953e-5 & 0.2889 - 0.1501 i & 1.678e-4 & 0.3604 - 0.2450 i & 2.275e-3 & - 0.07419 i & 6.607e-4 \\
50 & 0.2043 - 0.05583 i & 1.747e-5 & 0.2584 - 0.1342 i & 1.501e-4 & 0.3223 - 0.2192 i & 2.034e-3 & - 0.06635 i & 5.909e-4 \\
60 & 0.1865 - 0.05097 i & 1.595e-5 & 0.2359 - 0.1225 i & 1.370e-4 & 0.2942 - 0.2001 i & 1.857e-3 & - 0.06057 i & 5.394e-4 \\
70 & 0.1726 - 0.04719 i & 1.477e-5 & 0.2184 - 0.1134 i & 1.268e-4 & 0.2724 - 0.1852 i & 1.719e-3 & - 0.05608 i & 4.994e-4 \\
80 & 0.1615 - 0.04414 i & 1.381e-5 & 0.2043 - 0.1061 i & 1.186e-4 & 0.2548 - 0.1733 i & 1.608e-3 & - 0.05246 i & 4.672e-4 \\
90 & 0.1522 - 0.04161 i & 1.302e-5 & 0.1926 - 0.1000 i & 1.119e-4 & 0.2402 - 0.1634 i & 1.516e-3 & - 0.04946 i & 4.405e-4 \\
100 & 0.1444 - 0.03948 i & 1.235e-5 & 0.1827 - 0.09491 i & 1.061e-4 & 0.2279 - 0.1550 i & 1.439e-3 & - 0.04692 i & 4.179e-4 \\ \end{tabular}
\end{table}
Table 5: QNM frequencies for the Gauss Bonnet coupling constant \(\alpha\). The results are calculated with asymptotic iteration method of \(50\) iteration order.
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{2}{c}{\(n=0\)} & \multicolumn{2}{c}{\(n=1\)} & \multicolumn{2}{c}{\(n=2\)} & first pure imaginary \\ \(q\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) \\ \hline -0.5 & 0.2637 - 0.07208 i & 2.256e-5 & 0.3336 - 0.1733 i & 1.937e-4 & 0.4161 - 0.2830 i & 2.627e-3 & - 0.08566 i & 7.629e-4 \\ -1.5 & 0.2655 - 0.07258 i & 1.282e-5 & 0.3385 - 0.1744 i & 9.853e-5 & 0.4229 - 0.2854 i & 8.608e-4 & - 0.09169 i & 3.496e-5 \\ -2.5 & 0.2673 - 0.07315 i & 4.219e-6 & 0.3432 - 0.1757 i & 3.423e-5 & 0.4304 - 0.2886 i & 3.392e-4 & - 0.09728 i & 1.314e-5 \\ -3.5 & 0.2689 - 0.07377 i & 5.536e-6 & 0.3478 - 0.1772 i & 4.274e-5 & 0.4379 - 0.2908 i & 3.733e-4 & - 0.1022 i & 1.449e-6 \\ -4.5 & 0.2705 - 0.07444 i & 3.660e-6 & 0.3522 - 0.1789 i & 2.371e-5 & 0.4453 - 0.2931 i & 1.997e-4 & - 0.1067 i & 1.089e-6 \\ -5.5 & 0.2721 - 0.07513 i & 2.591e-6 & 0.3562 - 0.1805 i & 1.673e-5 & 0.4521 - 0.2960 i & 1.263e-4 & - 0.1107 i & 5.847e-7 \\ -6.5 & 0.2735 - 0.07584 i & 2.131e-6 & 0.3601 - 0.1823 i & 1.229e-5 & 0.4584 - 0.2987 i & 8.281e-5 & - 0.1143 i & 4.157e-7 \\ -7.5 & 0.2749 - 0.07656 i & 1.768e-6 & 0.3638 - 0.1840 i & 9.130e-6 & 0.4645 - 0.3015 i & 5.755e-5 & - 0.1177 i & 3.843e-7 \\ -8.5 & 0.2763 - 0.07728 i & 3.659e-6 & 0.3673 - 0.1858 i & 1.998e-5 & 0.4702 - 0.3043 i & 1.293e-4 & - 0.1208 i & 1.040e-7 \\ -9.5 & 0.2776 - 0.07801 i & 3.190e-6 & 0.3707 - 0.1875 i & 1.724e-5 & 0.4757 - 0.3069 i & 1.081e-4 & - 0.1237 i & 7.619e-8 \\ -10.5 & 0.2788 - 0.07874 i & 2.887e-6 & 0.3739 - 0.1893 i & 1.612e-5 & 0.4810 - 0.3097 i & 9.016e-5 & - 0.1264 i & 5.360e-8 \\ -11.5 & 0.2800 - 0.07946 i & 1.136e-6 & 0.3770 - 0.1910 i & 6.094e-6 & 0.4860 - 0.3125 i & 2.472e-5 & - 0.1290 i & 1.778e-7 \\ -12.5 & 0.2812 - 0.0801
\end{tabular}
\end{table}
Table 4: QNM frequencies for the charge parameter \(q\). The results are calculated with asymptotic iteration method of \(50\) iteration order.
Different from a simple test-field equation, this perturbation equation can reveal the dynamics of the underlying gravitational theory beyond the given spacetime geometry. To illustrate this difference, we give a toy model, i.e., a Klein-Gordon test field on the four-dimensional spacetime, in Appendix.D.
Using the asymptotic iteration method (AIM) and the numerical integration method, we calculated the QNMs for different parameters. As for the AIM, considering that there is limited literature on how to select expansion points, we propose a method for selecting the expansion point \(\xi_{0}\) in Sec.IV. The first three overtones of the characteristic frequencies obtained from AIM, including the first pure imaginary modes, are shown in Sec.VII. The corresponding monotonicities are also described for the parameters \(n\),
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{2}{c}{\(n=0\)} & \multicolumn{2}{c}{\(n=1\)} & \multicolumn{2}{c}{\(n=2\)} & \multicolumn{2}{c}{first pure imaginary} \\ \(l\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 6: QNM frequencies for the "quantum number" \(l\). The results are calculated with asymptotic iteration method of \(50\) iteration order. The "-" symbol indicates that AIM cannot predict the result with enough precision for the corresponding parameter choice.
\begin{table}
\begin{tabular}{c c c c c c c c c c} & \multicolumn{2}{c}{\(n=0\)} & \multicolumn{2}{c}{\(n=1\)} & \multicolumn{2}{c}{\(n=2\)} & \multicolumn{2}{c}{first pure imaginary} \\ \(\gamma\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) & \(\omega\) & \(\sqrt{\sigma_{re}^{2}+\sigma_{im}^{2}}\) \\ \hline
0 & 0.2637 - 0.07207 i & 2.841e-5 & 0.3336 - 0.1733 i & 2.460e-4 & 0.4155 - 0.2821 i & 2.514e-3 & - 0.08552 i & 5.121e-4 \\
1 & 0.2622 - 0.06978 i & 3.295e-5 & 0.3317 - 0.1703 i & 2.749e-4 & 0.4137 - 0.2790 i & 2.685e-3 & - 0.08446 i & 5.226e-4 \\
2 & 0.2606 - 0.06742 i & 3.875e-5 & 0.3298 - 0.1672 i & 3.117e-4 & 0.4116 - 0.2758 i & 2.862e-3 & - 0.08342 i & 5.333e-4 \\
3 & 0.2590 - 0.06500 i & 4.122e-5 & 0.3278 - 0.1640 i & 3.203e-4 & 0.4092 - 0.2732 i & 2.844e-3 & - 0.08250 i & 6.506e-4 \\
4 & 0.2573 - 0.06248 i & 4.470e-5 & 0.3256 - 0.1606 i & 3.343e-4 & 0.4059 - 0.2704 i & 2.878e-3 & - 0.08162 i & 7.885e-4 \\
5 & 0.2556 - 0.05985 i & 5.582e-5 &
\end{tabular}
\end{table}
Table 7: QNM frequencies for the "quantum number" \(\gamma\). The results are calculated with asymptotic iteration method of \(50\) iteration order.
\(\mu\), \(q\), \(\alpha\), \(l\) and \(\gamma\). Furthermore, to demonstrate the accuracy of the characteristic frequencies, we provide the formula for the relative error between the results of the two methods. The conclusion is that the errors are relatively small.
Technically speaking, due to limitations in computational accuracy, it is difficult for us to find the QNM frequencies with \(n\geq 3\). The continued fraction method, or Leaver method, is of great benefit for obtaining more overtones of the QNMs [35; 36; 37; 38; 39; 40]. However, the metric function \(f(r)\) is not a rational function, so a direct application of this method yields an infinite recurrence relation. Fortunately, Rezzolla and Zhidenko used continued fraction expansions of the metric function to overcome this [58]. Recently, Konoplya _et al._ developed a general procedure allowing one to apply the Leaver method to metrics which are not expressed in terms of rational functions [59]. We therefore plan to carry out such calculations to obtain more overtones in future work.
Another point worth noting is that, by the use of the gauge-invariant variables proposed by Kodama and Ishibashi [10], the most general perturbation equations of General Relativity in an \((m+n)\)-dimensional spacetime with a warped product metric have been obtained in [12]. In the EGB gravity theory, it is worthwhile to use similar methods to obtain the master equations of the scalar and vector types for the \((m+n)\)-dimensional spacetime. With these equations, we could calculate the QNMs for different black hole backgrounds. In the framework of EGB gravity in \((m+n)\)-dimensional spacetime, computing the perturbation equations and the QNMs are both non-trivial tasks, which will be considered in the future.
## Acknowledgement
We are grateful to Yi-Fu Cai for helpful discussions. This work was supported in part by the National Natural Science Foundation of China under grants No.12075232, No.12247103 and No.11961131007. This work is also supported by the Fundamental Research Funds for the Central Universities under Grants No.WK2030000036 and No.WK2030000044. Part of the numerics was performed on the computer clusters _LINDA_ & _JUDY_ in the particle cosmology group at USTC.
## Appendix A The four dimensional scalar equation
In this Appendix, we show how the four-dimensional scalar equation (3.10) is derived from the master equation of the tensor perturbation (3.3). Substituting Eq.(3.8) into Eq.(3.3), we have
\[(P^{ab}D_{a}D_{b}+P^{mn}\hat{D}_{m}\hat{D}_{n}+P^{a}D_{a}+V)\Big{[}\Phi(y)\bar {h}_{ij}\Big{]}=0\,.\] (A1)
With the help of Eq.(3.9), a scalar equation on the four-dimensional manifold \(M^{4}\),
\[\Big{(}P^{ab}D_{a}D_{b}+P^{a}D_{a}+V+\frac{Q\gamma}{r^{2}}\Big{)}\Phi(y)=0\,,\] (A2)
Figure 13: QNM frequencies of overtone numbers \(n=0,1,2\) for different physical parameters. The six panels are for the spacetime dimension \(n\), the mass \(\mu\), the charge \(q\), the Gauss-Bonnet coupling constant \(\alpha\), and the "quantum numbers" \(l\) and \(\gamma\). Results with overtone number \(n=0,1,2\) are marked by circle, rectangle and triangle, respectively.
is obtained. Note that at \(r=r_{0}\) the terms with derivatives with respect to \(r\) all vanish. Therefore, from Eqs.(3.4)-(3.7), we have
\[P^{ab}=\frac{4n-22}{(n-4)(n-5)}g^{ab}-4\alpha\cdot{}^{4}\!G^{ab}\,,\] (A3) \[Q=\frac{6(n-6)}{(n-4)(n-5)}+2\alpha\cdot{}^{4}\!R\,,\] (A4) \[P^{a}=0\,,\] (A5)
and
\[V=\frac{2}{(n-4)(n-5)}{}^{4}\!R+\frac{6(n-6)}{\alpha(n-4)^{2}(n-5)^{2}}\,,\] (A6)
where we have used Eq.(2.6) and Eq.(2.8) with a relation between \(\alpha\) and \(\Lambda\), i.e.,
\[\alpha\Lambda=-\frac{n^{2}-5n-2}{8(n-4)(n-5)}\,.\] (A7)
It should be noted that since \(r_{0}^{2}>0\) and \(\alpha>0\), we have \(K=-1\), and \(\Lambda<0\). Hence, Eq.(A2) becomes
\[\Big{[}\frac{4n-22}{(n-4)(n-5)}g^{ab}-4\alpha\cdot{}^{4}\!G^{ab}\Big{]}D_{a}D_ {b}\Phi+\Big{[}\frac{2+\gamma}{(n-4)(n-5)}{}^{4}\!R+\frac{3(n-6)(2+\gamma)}{ \alpha(n-4)^{2}(n-5)^{2}}\Big{]}\Phi=0\,.\] (A8)
## Appendix B The proof of \(V_{\textrm{eff}}(r_{+})=0\)
In this Appendix, we will give a proof of \(V_{\textrm{eff}}(r_{+})=0\). First, we have the function \(B(r)\) given by Eq.(3.14). As \(r\to r_{+}\), the function \(f\to 0\). On the one hand,
\[\lim_{r\to r_{+}}\frac{f^{2}(r)B^{2}(r)}{4}\] (B1) \[= \lim_{r\to r_{+}}\frac{f^{2}}{4}\Big{[}\frac{4n-22}{(n-4)(n-5)} \Big{(}f^{\prime}+\frac{2f}{r}\Big{)}-4\alpha\frac{-f^{\prime}+3ff^{\prime}+r( f^{\prime})^{2}+rff^{\prime\prime}}{r^{2}}\Big{]}^{2}\times\] \[\Big{[}\frac{4n-22}{(n-4)(n-5)}f-\frac{4\alpha f(-1+f+rf^{\prime })}{r^{2}}\Big{]}^{-2}\] \[= \frac{1}{4}\lim_{r\to r_{+}}\Big{[}\frac{4n-22}{(n-4)(n-5)}f^{ \prime}-4\alpha\frac{-f^{\prime}+r(f^{\prime})^{2}}{r^{2}}\Big{]}^{2}\Big{[} \frac{4n-22}{(n-4)(n-5)}-\frac{4\alpha(-1+rf^{\prime})}{r^{2}}\Big{]}^{-2}\] \[= \frac{(f^{\prime}(r_{+}))^{2}}{4}\,.\]
On the other hand, the derivative of \(B(r)\) is
\[B^{\prime}(r) = \frac{1}{f^{2}\Big{[}\frac{4n-22}{(n-5)(n-4)}-\frac{4\alpha(rf^{ \prime}+f-1)}{r^{2}}\Big{]}^{2}}\Bigg{\{}-\Big{[}\frac{(4n-22)f^{\prime}}{(n-5 )(n-4)}+\frac{8\alpha f(rf^{\prime}+f-1)}{r^{3}}-\frac{4\alpha(rf^{\prime}+f- 1)f^{\prime}}{r^{2}}\] (B2) \[-\frac{4\alpha f(rf^{\prime\prime}+2f^{\prime})}{r^{2}}\Big{]} \times\Big{[}\frac{(4n-22)\Big{(}f^{\prime}+\frac{2f}{r}\Big{)}}{(n-5)(n-4) }-\frac{4\alpha(rff^{\prime\prime}+r(f^{\prime})^{2}+(3f-1)f^{\prime})}{r^{2}}\Big{]}\] \[+f\Big{[}\frac{4n-22}{(n-5)(n-4)}-\frac{4\alpha(rf^{\prime}+f-1)} {r^{2}}\Big{]}\times\Big{[}\frac{(4n-22)\Big{(}f^{\prime\prime}+\frac{2f^{ \prime}}{r}-\frac{2f}{r^{2}}\Big{)}}{(n-5)(n-4)}+\frac{8\alpha(rf^{\prime \prime}+r(f^{\prime})^{2}+(3f-1)f^{\prime})}{r^{3}}\] \[-\frac{4\alpha(rff^{(3)}+(4f-1)f^{\prime\prime}+4(f^{\prime})^{2} +3rf^{\prime}f^{\prime\prime})}{r^{2}}\Big{]}\Bigg{\}}\,.\]
Therefore, we have
\[\lim_{r\to r_{+}}\frac{f^{2}(r)B^{\prime}(r)}{2} \tag{B3}\] \[= \lim_{r\to r_{+}}\frac{1}{2}\Big{[}\frac{4n-22}{(n-5)(n-4)}-\frac{4\alpha(rf^{\prime}+f-1)}{r^{2}}\Big{]}^{-2}\Bigg{\{}-\Big{[}\frac{(4n-22)f^{\prime}}{(n-5)(n-4)}+\frac{8\alpha f(rf^{\prime}+f-1)}{r^{3}}-\frac{4\alpha(rf^{\prime}+f-1)f^{\prime}}{r^{2}}\] \[-\frac{4\alpha f(rf^{\prime\prime}+2f^{\prime})}{r^{2}}\Big{]}\times\Big{[}\frac{(4n-22)\Big{(}f^{\prime}+\frac{2f}{r}\Big{)}}{(n-5)(n-4)}-\frac{4\alpha(rf^{\prime\prime}+r(f^{\prime})^{2}+(3f-1)f^{\prime})}{r^{2}}\Big{]}\] \[+f\Big{[}\frac{4n-22}{(n-5)(n-4)}-\frac{4\alpha(rf^{\prime}+f-1)}{r^{2}}\Big{]}\times\Big{[}\frac{(4n-22)\Big{(}f^{\prime\prime}+\frac{2f^{\prime}}{r}-\frac{2f}{r^{2}}\Big{)}}{(n-5)(n-4)}+\frac{8\alpha(rf^{\prime\prime}+r(f^{\prime})^{2}+(3f-1)f^{\prime})}{r^{3}}\] \[-\frac{4\alpha(rff^{(3)}+(4f-1)f^{\prime\prime}+4(f^{\prime})^{2}+3rf^{\prime}f^{\prime\prime})}{r^{2}}\Big{]}\Bigg{\}}\] \[= \lim_{r\to r_{+}}\frac{1}{2}\Big{[}\frac{4n-22}{(n-5)(n-4)}-\frac{4\alpha(rf^{\prime}-1)}{r^{2}}\Big{]}^{-2}\Bigg{\{}-\Big{[}\frac{(4n-22)f^{\prime}}{(n-5)(n-4)}-\frac{4\alpha(rf^{\prime}-1)f^{\prime}}{r^{2}}\Big{]}\Big{[}\frac{(4n-22)f^{\prime}}{(n-5)(n-4)}-\frac{4\alpha(r(f^{\prime})^{2}-f^{\prime})}{r^{2}}\Big{]}\Bigg{\}}\] \[= -\frac{(f^{\prime}(r_{+}))^{2}}{2}\,.\]
From Eq.(B1) and Eq.(B3), as \(r\to r_{+}\), the limit of \(V_{\text{eff}}\) is
\[V_{\text{eff}}(r_{+}) = \lim_{r\to r_{+}}\Big{[}\omega^{2}-f^{2}C+\frac{(f^{\prime})^{2}}{4}-\frac{ff^{\prime\prime}}{2}+\frac{f^{2}B^{\prime}}{2}+\frac{f^{2}B^{2}}{4}\Big{]} \tag{B4}\] \[= \frac{(f^{\prime}(r_{+}))^{2}}{4}-\frac{(f^{\prime}(r_{+}))^{2}}{2}+\frac{(f^{\prime}(r_{+}))^{2}}{4}=0\,.\]
## Appendix C The asymptotic behavior of \(\varphi\)
In this Appendix, we examine the asymptotic behavior of \(\varphi\) in order to apply the boundary condition (4.11). In terms of \(r\), using \(\mathrm{d}r_{\star}=\mathrm{d}r/f\), Eq.(3.22) becomes
\[\varphi^{\prime\prime}+p(r)\varphi^{\prime}+q(r)\varphi=0\,, \tag{C1}\]
where
\[p(r)=\frac{f^{\prime}(r)}{f(r)}\,,\quad q(r)=\frac{\omega^{2}-V_{\text{eff}}(r)}{f^{2}(r)}\,. \tag{C2}\]
For the sake of finding the asymptotic behavior of \(\varphi\), there is a useful theorem [60]:
_The necessary and sufficient condition for Eq.(C1) to have two regular solutions in the neighborhood \(0<|r-r_{0}|<\delta\) of its singular point \(r_{0}\) is that the functions_
\[(r-r_{0})p(r)\,,\quad(r-r_{0})^{2}q(r) \tag{C3}\]
_are both analytic in \(|r-r_{0}|<\delta\)._
A singular point satisfying the theorem is called a regular singular point; otherwise, it is called an irregular singular point. Since we are considering the non-degenerate case, i.e., \(f^{\prime}(r_{+})\neq 0\), it is easy to find that
\[(r-r_{+})p(r)=(r-r_{+})\frac{f^{\prime}(r)}{f(r)}\quad\text{and}\quad(r-r_{+})^{2}q(r)=(r-r_{+})^{2}\frac{\omega^{2}-V_{\text{eff}}(r)}{f^{2}(r)} \tag{C4}\]
are both analytic in \(|r-r_{+}|<\delta\). Hence, \(r=r_{+}\) is a regular singular point. The index equation is given by
\[\rho(\rho-1)+a_{0}\rho+b_{0}=0\,, \tag{C5}\]
where
\[a_{0}=\lim_{r\to r_{+}}(r-r_{+})p(r)=\lim_{r\to r_{+}}(r-r_{+})\frac{f^{\prime}(r)}{f(r)}=1\,, \tag{C6}\]
\[b_{0}=\lim_{r\to r_{+}}(r-r_{+})^{2}q(r)=\lim_{r\to r_{+}}(r-r_{+})^{2}\frac{\omega^{2}-V_{\text{eff}}(r)}{f^{2}(r)}=\frac{\omega^{2}}{(f^{\prime}(r_{+}))^{2}}\,. \tag{C7}\]
The index equation (C5) becomes
\[\rho^{2}+\frac{\omega^{2}}{(f^{\prime}(r_{+}))^{2}}=0\,. \tag{C8}\]
Therefore, we have \(\rho=\pm i\omega/f^{\prime}(r_{+})\). Considering the boundary condition, we have the asymptotic behavior of \(\varphi\) at \(r\to r_{+}\) reading as
\[\varphi\sim\Big{(}\frac{r-r_{+}}{r-r_{-}}\Big{)}^{-i\omega/f^{\prime}(r_{+})}\,. \tag{C9}\]
As for the behavior at \(r\to+\infty\), define \(t=1/r\); then Eq.(C1) becomes
\[\frac{\mathrm{d}^{2}\varphi}{\mathrm{d}t^{2}}+\Big{[}\frac{2}{t}-\frac{1}{t^{2}}p\Big{(}\frac{1}{t}\Big{)}\Big{]}\frac{\mathrm{d}\varphi}{\mathrm{d}t}+\frac{1}{t^{4}}q\Big{(}\frac{1}{t}\Big{)}\varphi(t)=0\,. \tag{C10}\]
Since we have
\[\lim_{t\to 0}\frac{1}{t}p\Big{(}\frac{1}{t}\Big{)}=\lim_{r\to+\infty}\frac{rf^{\prime}(r)}{f(r)}=\lim_{r\to+\infty}\frac{r\cdot\frac{2r}{2(n-4)\alpha}\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}}\Big{]}}{\frac{r^{2}}{2(n-4)\alpha}\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}}\Big{]}}=2\,, \tag{C11}\]
and
\[\lim_{t\to 0}\frac{1}{t^{2}}q\Big{(}\frac{1}{t}\Big{)}=\lim_{r\to+\infty}\frac{r^{2}(\omega^{2}-V_{\text{eff}}(r))}{f^{2}(r)}=\lim_{r\to+\infty}\frac{-r^{2}\cdot V_{0}r^{2}}{\Big{\{}\frac{r^{2}}{2(n-4)\alpha}\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}}\Big{]}\Big{\}}^{2}}=-\frac{4(n-4)^{2}\alpha^{2}V_{0}}{\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}}\Big{]}^{2}}\,, \tag{C12}\]
as a result, \(r=\infty\) is a regular singular point. Now, \(a_{0}\) and \(b_{0}\) in the index equation (C5) are given by
\[a_{0}=\lim_{t\to 0}t\Big{[}\frac{2}{t}-\frac{1}{t^{2}}p\Big{(}\frac{1}{t}\Big{)}\Big{]}=2-\lim_{t\to 0}\frac{1}{t}p\Big{(}\frac{1}{t}\Big{)}=0\,, \tag{C13}\]
and
\[b_{0}=\lim_{t\to 0}t^{2}\Big{[}\frac{1}{t^{4}}q\Big{(}\frac{1}{t}\Big{)}\Big{]}=\lim_{t\to 0}\frac{1}{t^{2}}q\Big{(}\frac{1}{t}\Big{)}=-\frac{4(n-4)^{2}\alpha^{2}V_{0}}{\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}}\Big{]}^{2}}\,. \tag{C14}\]
The condition \(V_{0}>0\) is required in our paper, so we obtain \(b_{0}<0\). There are two different roots of the index equation \(\rho^{2}-\rho+b_{0}=0\) with \(b_{0}<0\). The roots are
\[\rho_{1}=\frac{1+\sqrt{1-4b_{0}}}{2}>0\,, \tag{C15}\]
and
\[\rho_{2}=\frac{1-\sqrt{1-4b_{0}}}{2}<0\,. \tag{C16}\]
The boundary condition is \(\varphi(t)\to 0\) as \(t\to 0\); therefore, \(\rho=\rho_{1}>0\) is required. For convenience, we define
\[\rho\equiv\rho_{1}=\frac{1}{2}\Bigg{\{}1+\sqrt{1+\frac{16(n-4)^{2}\alpha^{2}V_{0}}{\Big{[}1-\sqrt{\frac{n-4}{3(n-5)}}\Big{]}^{2}}}\Bigg{\}}\,. \tag{C17}\]
In order to apply the boundary condition (4.11), we define the following solution
\[\varphi(r)=\Big{(}\frac{r-r_{+}}{r-r_{-}}\Big{)}^{-i\omega/f^{\prime}(r_{+})}\Big{(}\frac{r_{+}-r_{-}}{r-r_{-}}\Big{)}^{\rho}\tilde{\varphi}(r)\,. \tag{C18}\]
## Appendix D The toy model: the Klein-Gordon equation
In this Appendix, we use the Klein-Gordon equation as a toy model to compute the QNM frequencies in the four dimensional spacetime \((M^{4},g_{ab})\). In this case, the effective potential has a simple form
\[V_{\text{eff}}(r)=f(r)\Big{[}\frac{l(l+1)}{r^{2}}+\frac{f^{\prime}(r)}{r}\Big{]}\,, \tag{109}\]
where the metric is given by Eq.(2.11). After considering the boundary conditions of QNMs, we define the following solution
\[\varphi(r)=\Big{(}\frac{r-r_{+}}{r-r_{-}}\Big{)}^{-i\omega/f^{\prime}(r_{+})} \Big{(}\frac{r_{+}-r_{-}}{r-r_{-}}\Big{)}^{2}\tilde{\varphi}(r)\,. \tag{110}\]
Substituting the above expression into the AIM algorithm, we get the QNM frequencies shown in Fig.14.
|
2303.05627 | Strong uniform convergence rates of the linear wavelet estimator of a
multivariate copula density | In this paper, we investigate the almost sure convergence, in supremum norm,
of the rank-based linear wavelet estimator for a multivariate copula density.
Based on empirical process tools, we prove a uniform limit law for the
deviation, from its expectation, of an oracle estimator (obtained for known
margins), from which we derive the exact convergence rate of the rank-based
linear estimator. This rate reveals to be optimal in a minimax sense over Besov
balls for the supremum norm loss, whenever the resolution level is suitably
chosen. | Cheikh Tidiane Seck, Salha Mamane | 2023-03-10T00:09:58Z | http://arxiv.org/abs/2303.05627v1 | # Strong uniform convergence rates of the linear wavelet estimator of a multivariate copula density
Cheikh Tidiane Seck, Salha Mamane
_Departement de mathematiques, Universite Alioune Diop, Bambey, Senegal_
_School of Statistics and Actuarial Science, University of the Witwatersrand, Johannesburg, South-Africa_
**Abstract**
In this paper, we investigate the almost sure convergence, in supremum norm, of the rank-based linear wavelet estimator for the multivariate copula density. Based on empirical process tools, we prove a uniform limit law for the deviation, from its expectation, of an oracle estimator (obtained for known margins), from which we derive the exact convergence rate of the rank-based linear estimator. This rate turns out to be optimal in a minimax sense over Besov balls for the supremum norm loss, whenever the resolution level is suitably chosen.
**Keywords :** Copula density, Nonparametric estimation, Wavelet methods, Almost sure uniform convergence rates.
**Mathematics Subject Classification (2010)**: 62G07, 62G20
## 1 Introduction
A copula is a multivariate distribution function \(C\) defined on \([0,1]^{d},d\geq 2\), with uniform margins. Unlike the linear correlation coefficient, it gives a full characterization of the dependence between random variables, be it linear or nonlinear. Given a vector \((X_{1},\ldots,X_{d})\) of continuous random variables with marginal distribution functions \(F_{1},\cdots,F_{d}\), the copula \(C\) may be defined as the joint cumulative distribution function of the random vector \((F_{1}(X_{1}),\ldots,F_{d}(X_{d}))\). If it exists, the copula density is defined as the derivative, \(c\), of the copula distribution function \(C\) with respect to the Lebesgue measure :
\[c(u_{1},\ldots,u_{d})=\frac{\partial^{d}}{\partial u_{1},\ldots\partial u_{d}}C(u_{1},\ldots,u_{d}),\ \ \forall\ (u_{1},\ldots,u_{d})\in(0,1)^{d}.\]
Nonparametric estimation of copula densities is an active research domain that has been investigated by many authors. For instance, [12] and [8] used convolution kernel methods to construct consistent estimators for the copula density, while [21] employed techniques
based on Bernstein polynomials. A drawback of kernel methods is the existence of boundary effects, due to the compact support of the copula function. To overcome this difficulty, several approaches have been proposed. For example, [12] used a mirror-reflection technique, while [2] employed a local linear kernel procedure. In the same vein, [20] proposed improved copula kernel estimators in order to mitigate the boundary bias. Recently, [10] introduced kernel-type estimators for the copula density, based on a probit transformation method that can take care of the boundary effects.
In this paper, we deal more neatly with the boundary bias problem by using wavelet methods, which are very convenient to describe features of functions at the edges and corners of the unit cube, because of their good localization properties. Indeed, wavelet bases automatically handle the boundary effects by locally adapting to the properties of the curve being estimated. The use of wavelet methods in density and regression estimation problems is surveyed in [15], where approximation properties of wavelets are discussed at length. For more details on wavelet theory we refer to [19], [5], [18] and [23] and references therein.
Wavelet methods have already been used in nonparametric copula density estimation. For instance, [11] dealt with a rank-based linear wavelet estimator of the bivariate copula density and established, under certain conditions, its optimality in the minimax sense on Besov balls for the \(L_{2}\)-norm loss, as well as on Hölder balls for the pointwise-norm loss. [1] extended these results to the nonlinear thresholded estimators of multivariate copula densities. These nonlinear estimates are near optimal (up to a logarithmic factor) for the \(L_{2}\)-norm loss, and have the advantage of being adaptive to the regularity of the copula density function. In the same spirit, [9] established an upper bound on \(L_{p}\)-losses, \(2\leq p<\infty\), for linear wavelet-based estimators of the bivariate copula density, when the latter is bounded.
Our goal in this paper is to establish the exact almost sure convergence rate, in supremum norm loss, for the rank-based linear wavelet estimator of the multivariate copula density. Our methodology is largely inspired by [14], who established almost sure convergence rates, in supremum norm loss, for the linear wavelet estimator of a univariate density function on \(\mathbb{R}\). Here, we want to extend this result to multivariate copula densities on \((0,1)^{d}\). We prove that if the copula density \(c\) is regular enough (i.e., \(c\) belongs to a Besov space of regularity \(t\), corresponding to the Hölder space of order \(t\)) and the resolution level, say \(j_{n}\), satisfies \(2^{j_{n}}\simeq(n/\log n)^{1/(2t+d)}\), then the rank-based linear wavelet estimator achieves the optimal minimax rate, for supremum norm loss, over Besov balls.
The rest of the paper is organized as follows. In Section 2, we recall some facts on wavelet theory and define the rank-based linear wavelet estimator of the multivariate copula density as in [1]. Section 3 presents the main theoretical results along with some comments. In appendix A, we recall some useful facts on empirical process theory. Appendix B contains the proof of the uniform limit law given in Proposition 3.1.
## 2 Wavelet theory and Estimation procedure
Let \(\phi\) be a father wavelet and \(\psi\) its associated mother wavelet, which are both assumed compactly supported. [4] proposed orthonormal wavelet bases for \(L_{2}([0,1])\), the space of all measurable and square integrable functions on \([0,1]\). Precisely, for all fixed \(j_{0}\in\mathbb{N}\), the family \(\{\phi_{j_{0},l}:l=1,\ldots,2^{j_{0}}\}\bigcup\{\psi_{j,l}:j\geq j_{0},l=1,\ldots,2^{j}\}\) is an orthonormal basis for \(L_{2}([0,1])\), where \(\phi_{j,l}(u)=2^{j/2}\phi(2^{j}u-l)\) and \(\psi_{j,l}(u)=2^{j/2}\psi(2^{j}u-l),\ \forall j,l\in\mathbb{Z}\), \(u\in[0,1]\). Using the tensorial product, one can construct a multivariate wavelet basis for \(L_{2}([0,1]^{d}),d\geq 2\). For \(\textbf{k}=(k_{1},\ldots,k_{d})\in\mathbb{Z}^{d}\), define the following functions of \(\textbf{u}=(u_{1},\ldots,u_{d})\in[0,1]^{d}\):
\[\phi_{j_{0},\textbf{k}}(u_{1}\ldots,u_{d})=\prod_{m=1}^{d}\phi_{j_ {0},k_{m}}(u_{m}),\] \[\psi^{\epsilon}_{j,\textbf{k}}(u_{1},\ldots,u_{d})=\prod_{m=1}^{d }\phi_{j,k_{m}}^{1-\epsilon_{m}}(u_{m})\psi^{\epsilon_{m}}_{j,k_{m}}(u_{m}),\]
where \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{d})\in\mathcal{S}_{d}=\{0,1\}^{d} \setminus\{(0,\ldots,0)\}\). Then the family \(\{\phi_{j_{0},\textbf{k}},\psi^{\epsilon}_{j,\textbf{h}}:j\geq j_{0},\textbf{ k}\in\{1,\ldots,2^{j_{0}}\}^{d},\textbf{h}\in\{1,\ldots,2^{j}\}^{d},\epsilon\in \mathcal{S}_{d}\}\) is an orthonormal basis for \(L_{2}([0,1]^{d})\), for any fixed \(j_{0}\in\mathbb{N}\). Thus, assuming that the copula density \(c\) belongs to \(L_{2}([0,1]^{d})\), we have the following representation :
\[c(\textbf{u})=\sum_{\textbf{k}\in\{1,\ldots,2^{j_{0}}\}^{d}}\alpha_{j_{0}, \textbf{k}}\phi_{j_{0},\textbf{k}}(\textbf{u})+\sum_{j\geq j_{0}}\sum_{ \textbf{k}\in\{1,\ldots,2^{j}\}^{d}}\sum_{\epsilon\in\mathcal{S}_{d}}\beta^{ \epsilon}_{j,\textbf{k}}\psi^{\epsilon}_{j,\textbf{k}}(\textbf{u}), \tag{1}\]
for all \(\textbf{u}\in[0,1]^{d},\) where the scaling coefficients \(\alpha_{j_{0},\textbf{k}}\) and wavelet coefficients \(\beta^{\epsilon}_{j,\textbf{k}}\) are respectively defined as
\[\alpha_{j_{0},\textbf{k}}=\int_{[0,1]^{d}}c(\textbf{u})\phi_{j_{0},\textbf{k }}(\textbf{u})d\textbf{u}\quad\text{ and }\quad\beta^{\epsilon}_{j,\textbf{k}}=\int_{[0,1]^{d}}c(\textbf{u})\psi^{ \epsilon}_{j,\textbf{k}}(\textbf{u})d\textbf{u}.\]
Now, let \((\textbf{X}_{1},\cdots,\textbf{X}_{n})\) be an independent and identically distributed (i.i.d.) sample of the random vector \(\textbf{X}=(X_{1},\ldots,X_{d})\), with continuous marginal distribution functions \(F_{1},\ldots,F_{d}\), and where \(\textbf{X}_{i}=(X_{i1},\ldots,X_{id}),i=1,\ldots,n\). The distribution function of the random vector \(\textbf{U}_{i}=(F_{1}(X_{i1}),\ldots,F_{d}(X_{id}))\) is the copula \(C\) and its density, if it exists, is \(c\). Denoting the expectation operator by \(\mathbb{E}\), the coefficients \(\alpha_{j_{0},\textbf{k}}\) and \(\beta^{\epsilon}_{j,\textbf{k}}\) can be rewritten as follows:
\[\alpha_{j_{0},\textbf{k}}=\mathbb{E}[\phi_{j_{0},\textbf{k}}(\textbf{U}_{i})],\hskip 28.452756pt\beta^{\epsilon}_{j,\textbf{k}}=\mathbb{E}[\psi^{\epsilon}_{ j,\textbf{k}}(\textbf{U}_{i})].\]
If the margins \(F_{1},\ldots,F_{d}\) were known, natural estimators for \(\alpha_{j_{0},\textbf{k}}\) and \(\beta^{\epsilon}_{j,\textbf{k}}\) would be given by,
\[\tilde{\alpha}_{j_{0},\textbf{k}}=\frac{1}{n}\sum_{i=1}^{n}\phi_{j_{0}, \textbf{k}}(\textbf{U}_{i})\ \,\ \ \tilde{\beta}^{\epsilon}_{j,\textbf{k}}=\frac{1}{n}\sum_{i=1}^{n}\psi^{\epsilon} _{j,\textbf{k}}(\textbf{U}_{i}). \tag{2}\]
But, usually, the marginal distribution functions \(F_{1},\ldots,F_{d}\) are unknown; and it is customary to replace them by their empirical counterparts \(F_{1n},\ldots,F_{dn}\) (or rescaled versions
thereof), with
\[F_{jn}(x_{j})=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(X_{ij}\leq x_{j}),\quad j=1,\ldots,d,\quad x_{j}\in\mathbb{R},\]
where \(\mathbb{I}(\cdot)\) denotes the indicator function. Then, putting \(\hat{\mathbf{U}}_{i}=(\hat{U}_{i1},\ldots,\hat{U}_{id})\), where \(\hat{U}_{ij}=F_{jn}(X_{ij}),j=1,\ldots,d\) ; \(i=1,\ldots,n\), the modified empirical coefficients are
\[\hat{\alpha}_{j_{0},\mathbf{k}}=\frac{1}{n}\sum_{i=1}^{n}\phi_{j_{0},\mathbf{k}}(\hat{\mathbf{U}}_{i})\ \,\ \ \hat{\beta}_{j,\mathbf{k}}^{\epsilon}=\frac{1}{n}\sum_{i=1}^{n}\psi_{j,\mathbf{k}}^{\epsilon}(\hat{\mathbf{U}}_{i}). \tag{3}\]
Now, choosing a suitable resolution level \(j_{n}\geq j_{0}\) and considering the orthogonal projection of \(c\) onto the sub-space \(V_{j_{n}}\) of the underlying multiresolution analysis on \(L_{2}([0,1]^{d})\), we obtain the rank-based linear wavelet estimator of \(c\) :
\[\hat{c}_{j_{n}}(\mathbf{u})=\sum_{\mathbf{k}\in\{1,\ldots,2^{jn}\}^{d}}\hat{ \alpha}_{j_{n},\mathbf{k}}\phi_{j_{n},\mathbf{k}}(\mathbf{u}),\qquad\mathbf{ u}\in(0,1)^{d}. \tag{4}\]
As remarked in [11], the estimator \(\hat{c}_{j_{n}}\) is not necessarily a density: it can take negative values on parts of its domain and may fail to integrate to one. In practice, some truncations and normalizations are necessary for its use.
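One simple correction (a sketch of ours; [11] does not prescribe a specific procedure) is to clip the estimate at zero on a regular grid and rescale it so that it integrates to one:

```python
import numpy as np

def normalize_density(est_on_grid, cell_volume):
    """Clip negative values and rescale so the estimate integrates to one.

    est_on_grid : estimator values on a regular grid of (0,1)^d,
    cell_volume : volume of one grid cell, e.g. (1/m)**d for m points per axis.
    """
    dens = np.clip(est_on_grid, 0.0, None)
    total = dens.sum() * cell_volume   # Riemann-sum approximation of the integral
    return dens / total if total > 0 else dens
```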
To obtain the exact rate of convergence of the linear estimator \(\hat{c}_{j_{n}}\), our methodology of proof follows the empirical process approach developed in [7] (see also [14], [13]). In fact, we can rewrite \(\hat{c}_{j_{n}}\) in terms of the empirical measure. For \(j_{n}\) fixed, define the following kernel functions:
\[\widetilde{K}(x,y)=\sum_{l=1}^{2^{j_{n}}}\phi(x-l)\phi(y-l),\qquad(x,y)\in\mathbb{R}^{2}, \tag{5}\]
\[\widetilde{K}_{j_{n}}(x,y)=2^{j_{n}}\widetilde{K}(2^{j_{n}}x,2^{j_{n}}y),\qquad(x,y)\in\mathbb{R}^{2}.\]
For \(\mathbf{x}=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d},\ \mathbf{y}=(y_{1},\ldots,y_{d})\in\mathbb{R}^{d}\), set:
\[\mathbf{K}(\mathbf{x},\mathbf{y})=\prod_{m=1}^{d}\widetilde{K}(x_{m},y_{m}), \tag{6}\]
\[\mathbf{K}_{j_{n}}(\mathbf{x},\mathbf{y})=\prod_{m=1}^{d}\widetilde{K}_{j_{n} }(x_{m},y_{m}). \tag{7}\]
Then, the linear wavelet estimator \(\hat{c}_{j_{n}}\) can be transformed into
\[\hat{c}_{j_{n}}(\mathbf{u})=\frac{1}{n}\sum_{i=1}^{n}\mathbf{K}_{j_{n}}(\hat{ \mathbf{U}}_{i},\mathbf{u})=\frac{2^{dj_{n}}}{n}\sum_{i=1}^{n}\mathbf{K}(2^{j _{n}}\hat{\mathbf{U}}_{i},2^{j_{n}}\mathbf{u}). \tag{8}\]
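For concreteness, here is a minimal Python sketch of \(\hat{c}_{j_{n}}\) for the Haar father wavelet \(\phi=1_{[0,1)}\) (see Remark 1 below), in which case \(\mathbf{K}(2^{j_{n}}\hat{\mathbf{U}}_{i},2^{j_{n}}\mathbf{u})\) is simply the indicator that \(\hat{\mathbf{U}}_{i}\) and \(\mathbf{u}\) lie in the same dyadic cube of side \(2^{-j_{n}}\), so the estimator reduces to a histogram of the pseudo-observations. Function names and the toy Gaussian sample are ours.

```python
import numpy as np

def pseudo_observations(x):
    """Rank-transform each column of the (n, d) sample into (0, 1)."""
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return ranks / (n + 1)            # rescaled empirical margins

def haar_linear_estimator(x, u, j):
    """Haar linear wavelet copula density estimate at the points u.

    With phi = 1_[0,1), hat{c}_j(u) = 2**(d*j)/n * #{i : hat{U}_i falls in
    the same dyadic cube of side 2**(-j) as u}.
    """
    n, d = x.shape
    cells_data = np.floor(pseudo_observations(x) * 2**j).astype(int)
    cells_eval = np.floor(np.asarray(u, dtype=float) * 2**j).astype(int)
    out = np.empty(len(cells_eval))
    for idx, cell in enumerate(cells_eval):
        out[idx] = 2**(d * j) * np.all(cells_data == cell, axis=1).sum() / n
    return out

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
print(haar_linear_estimator(z, [[0.25, 0.25], [0.75, 0.25]], j=3))
```

In the Haar case the estimate is already a genuine density (nonnegative and integrating to one); for smoother wavelets the truncation and normalization step mentioned above may be needed.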
## 3 Asymptotic behaviour of the estimator
Let us introduce an auxiliary estimator \(\tilde{c}_{j_{n}}\) corresponding to the case where the marginal distribution functions \(F_{1},\ldots,F_{d}\) are known. In this situation, \((U_{i1},\ldots,U_{id})=(F_{1}(X_{i1}),\ldots,F_{d}(X_{id})),\ i=1,\ldots,n\), are direct observations of the copula \(C\), and \(\tilde{c}_{j_{n}}\) may be defined as
\[\tilde{c}_{j_{n}}(\textbf{u})=\sum_{\textbf{k}\in\{1,\ldots,2^{jn}\}^{d}} \tilde{\alpha}_{j_{n},\textbf{k}}\phi_{j_{n},\textbf{k}}(\textbf{u}), \tag{9}\]
where
\[\tilde{\alpha}_{j_{n},\textbf{k}}=\frac{1}{n}\sum_{i=1}^{n}\phi_{j_{n}, \textbf{k}}(F_{1}(X_{i1}),\ldots,F_{d}(X_{id})) \tag{10}\]
is an unbiased estimator of \(\alpha_{j_{n},\textbf{k}}\).
For all \(\textbf{u}\in(0,1)^{d}\), we can decompose the estimation error \(\hat{c}_{j_{n}}-c\) as
\[\hat{c}_{j_{n}}(\textbf{u})-c(\textbf{u}) = [\hat{c}_{j_{n}}(\textbf{u})-\tilde{c}_{j_{n}}(\textbf{u})]+[ \tilde{c}_{j_{n}}(\textbf{u})-\mathbb{E}\tilde{c}_{j_{n}}(\textbf{u})]+[ \mathbb{E}\tilde{c}_{j_{n}}(\textbf{u})-c(\textbf{u})] \tag{11}\] \[=: R_{n}(\textbf{u})+D_{n}(\textbf{u})+B_{n}(\textbf{u}).\]
To obtain the almost sure convergence rate of \(\hat{c}_{j_{n}}\) uniformly in \(\textbf{u}\in(0,1)^{d}\), we have to investigate the limiting behavior of each of the three above terms. We need the following hypotheses in the sequel :
1. The father wavelet \(\phi\in L^{2}(\mathbb{R})\) is bounded, compactly supported and admits a bounded derivative \(\phi^{\prime}\).
2. There exists a bounded and compactly supported function \(\Phi:\mathbb{R}\rightarrow\mathbb{R}_{+}\) such that \(|\widetilde{K}(x,y)|\leq\Phi(x-y)\) and the function \(\theta_{\phi}(x)=\sum_{k=1}^{2^{j_{n}}}|\phi(x-k)|\) is bounded.
3. The kernel \(\widetilde{K}\) satisfies : for all \(y\in\mathbb{R}\), \(\int_{-\infty}^{\infty}\widetilde{K}(x,y)dx=1.\)
4. As \(n\rightarrow\infty,\) the sequence \((j_{n})_{n\geq 0}\) satisfies : \[j_{n}\nearrow\infty,\qquad\frac{n}{j_{n}2^{(d+1)j_{n}}}\rightarrow\infty, \qquad\frac{j_{n}}{\log\log n}\rightarrow\infty.\]
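As a quick numerical sanity check (our own sketch, not part of the paper), one can verify that the resolution choice \(2^{j_{n}}\simeq(n/\log n)^{1/(2t+d)}\) used later in Remark 3 is compatible with (H.4) as soon as \(t>1/2\): both ratios below grow with \(n\).

```python
import numpy as np

# (H.4) check for 2**j_n = (n/log n)**(1/(2t+d)); t and d are illustrative values.
t, d = 2.0, 2.0
for n in [10**3, 10**5, 10**7, 10**9]:
    j = np.log2((n / np.log(n)) ** (1.0 / (2 * t + d)))
    print(f"n={n:.0e}  n/(j*2^((d+1)j)) = {n / (j * 2**((d + 1) * j)):12.1f}"
          f"  j/loglog(n) = {j / np.log(np.log(n)):.2f}")
```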
_Remark. 1_: _Hypotheses (H.1), (H.2) and (H.3) are usual conditions that are satisfied by many wavelet bases, for example the Daubechies wavelets and the Haar wavelet \(\phi(u)=1_{[0,1]}(u)\). The conditions in Hypothesis (H.4) are analogous to some conditions imposed on the bandwidth parameter in convolution-kernel estimation methods._
The following proposition gives the asymptotic behavior of the second term \(D_{n}(\textbf{u})\), corresponding to the deviation of the auxiliary estimator \(\tilde{c}_{j_{n}}\) from its expectation. In the sequel, we denote \(I=(0,1)\), \(\|c\|_{\infty}=\sup_{\textbf{u}\in I^{d}}|c(\textbf{u})|\) and, for any real function \(\varphi\) defined on \(\mathbb{R}^{d},d\geq 1\), \(\|\varphi\|_{\infty}=\sup_{\textbf{x}\in\mathbb{R}^{d}}|\varphi(\textbf{x})|\).
**Proposition 3.1**: _Suppose that the father wavelet \(\phi\) is uniformly continuous, with compact support \([0,B]\), where \(B\) is a positive integer. Further, assume that the copula density \(c\) is continuous and bounded on \(I^{d}\) and that Hypotheses (H.1), (H.2), (H.3) and (H.4) hold. Then, almost surely (a.s.),_
\[\lim_{n\to\infty}r_{n}\sup_{\boldsymbol{u}\in I^{d}}\frac{|\tilde{c}_{j_{n}}( \boldsymbol{u})-\mathbb{E}\tilde{c}_{j_{n}}(\boldsymbol{u})|}{\sqrt{\int_{ \mathbb{R}^{d}}\boldsymbol{K}^{2}(\boldsymbol{x},2^{j_{n}}\boldsymbol{u})d \boldsymbol{x}}}=\sqrt{\|c\|_{\infty}}, \tag{12}\]
_with_
\[r_{n}=\sqrt{\frac{n}{(2d\log 2)j_{n}2^{dj_{n}}}}. \tag{13}\]
**Proof :** It is largely inspired by [14] and is postponed to Appendix B. It will consist in establishing a lower bound and an upper bound for the limit in (12), a methodology borrowed from [7] (see also [13]).
_Remark. 2_: Proposition 3.1 gives the exact almost sure convergence rate, in supremum norm, of the deviation \(D_{n}(\boldsymbol{u})\) to zero, which is of order \(O(\sqrt{j_{n}2^{dj_{n}}/n})\). In fact, by Hypotheses (H.1), (H.2) and (H.3), the quantity \(\int_{\mathbb{R}^{d}}\boldsymbol{K}^{2}(\boldsymbol{x},2^{j_{n}}\boldsymbol{u})d\boldsymbol{x}\) can be bounded above and below: there exist two positive constants \(D_{1}\) and \(D_{2}\), independent of \(\boldsymbol{u}\) and \(n\), such that:
\[D_{1}\leq\int_{\mathbb{R}^{d}}\boldsymbol{K}^{2}(\boldsymbol{x},2^{j_{n}} \boldsymbol{u})d\boldsymbol{x}\leq D_{2}. \tag{14}\]
This readily implies
\[\sup_{\boldsymbol{u}\in I^{d}}|D_{n}(\boldsymbol{u})|=O\left(\sqrt{\frac{j_{n}2^{dj_{n}}}{n}}\right),\quad a.s. \tag{15}\]
The following theorem constitutes our principal result. We need some notation before stating it. Let \(N\) be a positive integer and \(t=N+\alpha,0<\alpha\leq 1\). For any bounded real function \(f\) defined on \(I^{d}\) and possessing derivatives up to order \(N\), set
\[\|f\|_{t,\infty,\infty}=\|f\|_{\infty}+\sum_{k=0}^{N}\sup_{u\neq v,u,v\in I^{ d}}\frac{|f^{(k)}(u)-f^{(k)}(v)|}{|u-v|^{\alpha}}. \tag{16}\]
We say that \(f\) belongs to the Besov space of regularity \(t\), \(B^{t}_{\infty,\infty}\), if and only if \(\|f\|_{t,\infty,\infty}<\infty\).
The following condition is also needed for the proof :
_Condition \(1(N):\)_ the father wavelet \(\phi\) is compactly supported and admits weak derivatives up to order \(N,N\in\mathbb{N}\), that are all in \(\mathcal{L}^{p}\) for some \(1\leq p\leq\infty\).
**Theorem 3.1**: _Suppose that the assumptions of Proposition 3.1 are fulfilled. If, moreover, \(c\) belongs to \(B^{t}_{\infty,\infty}\) and \(\phi\) satisfies Condition \(1(N)\), with \(0<t<N+1\), then, as \(n\to\infty\), we have_
\[\sup_{\boldsymbol{u}\in I^{d}}|\hat{c}_{j_{n}}(\boldsymbol{u})-c(\boldsymbol{u})|=O\left(\sqrt{\frac{j_{n}2^{dj_{n}}}{n}}+2^{-j_{n}t}\right)+o(1),\quad a.s. \tag{17}\]
**Proof.** In view of decomposition (11), it suffices to handle the first and the last term. The behavior of the second term is given by the previous Proposition 3.1. Let us begin with the first term \(R_{n}(\textbf{u})\). We have for \(\textbf{k}=(k_{1},\ldots,k_{d})\in\mathbb{Z}^{d}\)
\[\hat{\alpha}_{j_{n},\textbf{k}}-\tilde{\alpha}_{j_{n},\textbf{k}} = \frac{1}{n}\sum_{i=1}^{n}\left[\phi_{j_{n},\textbf{k}}(F_{1n}(X_{i 1}),\ldots,F_{dn}(X_{id}))-\phi_{j_{n},\textbf{k}}(F_{1}(X_{i1}),\ldots,F_{d}( X_{id}))\right]\] \[=: \frac{1}{n}\sum_{i=1}^{n}\xi_{\textbf{k}}(X_{i1},\ldots,X_{id}),\]
where we set
\[\xi_{\textbf{k}}(X_{i1},\ldots,X_{id})=\phi_{j_{n},\textbf{k}}(F_{1n}(X_{i1}),\ldots,F_{dn}(X_{id}))-\phi_{j_{n},\textbf{k}}(F_{1}(X_{i1}),\ldots,F_{d}(X_{ id})). \tag{18}\]
For \(d=2\), [11] observes that, with \(k=(k_{1},k_{2})\),
\[\xi_{k}(X_{i1},X_{i2})=\xi_{k_{1}}(X_{i1})\xi_{k_{2}}(X_{i2})+\xi_{k_{1}}(X_{ i1})\phi_{j_{n}k_{2}}(F_{2}(X_{i2}))+\xi_{k_{2}}(X_{i2})\phi_{j_{n}k_{1}}(F_{1}(X_ {i1})), \tag{19}\]
where \(\xi_{k_{m}}(X_{im})=\phi_{j_{n}k_{m}}(F_{mn}(X_{im}))-\phi_{j_{n}k_{m}}(F_{m}( X_{im}))\), for \(m=1,2\).
Iterating (19), we obtain, for all fixed \(d\geq 2\), that
\[\xi_{\textbf{k}}(X_{i1},\ldots,X_{id})=\sum_{q=0}^{d-1}\sum_{\epsilon_{1}+ \ldots+\epsilon_{d}=q}\prod_{m=1}^{d}\left[\phi_{j_{n}k_{m}}(F_{m}(X_{im})) \right]^{\epsilon_{m}}\left[\xi_{k_{m}}(X_{im})\right]^{1-\epsilon_{m}}, \tag{20}\]
where \((\epsilon_{1},\ldots,\epsilon_{d})\in\{0,1\}^{d}\). Recall that \(\phi_{jl}(u)=2^{j/2}\phi(2^{j}u-l),\forall j,l\in\mathbb{Z}\). By using the differentiability of \(\phi\) (by hypothesis), we can write, for all \(m=1,\ldots,d\),
\[\xi_{k_{m}}(X_{im}) = 2^{\frac{j_{n}}{2}}\phi(2^{j_{n}}F_{mn}(X_{im})-k_{m})-2^{\frac{j_{n}}{2}}\phi(2^{j_{n}}F_{m}(X_{im})-k_{m})\] \[= 2^{\frac{3}{2}j_{n}}\left[F_{mn}(X_{im})-F_{m}(X_{im})\right]\phi^{\prime}(\zeta_{im}),\]
where \(\zeta_{im}\) lies between \(F_{mn}(X_{im})\) and \(F_{m}(X_{im})\). Now, combining Chung's (1949) law of the iterated logarithm (LIL) with the boundedness of \(\phi\) and \(\phi^{\prime}\), we obtain, for all \(m=1,\ldots,d\)
\[|\xi_{k_{m}}(X_{im})|\leq 2^{\frac{3}{2}j_{n}}\times\sqrt{\frac{\log\log n}{2n}} \|\phi^{\prime}\|_{\infty},\quad a.s. \tag{21}\]
Thus, for \(d=2\), the expression in (20) can be bounded above ; that is
\[|\xi_{k}(X_{i1},X_{i2})|\leq 2^{3j_{n}}\left(\frac{\log\log n}{2n}\right)\|\phi^{\prime}\|_{\infty}^{2}+2\cdot 2^{2j_{n}}\sqrt{\frac{\log\log n}{2n}}\|\phi^{\prime}\|_{\infty}\|\phi\|_{\infty},\quad a.s. \tag{22}\]
Observe that
\[\frac{2^{3j_{n}}\left(\frac{\log\log n}{2n}\right)}{2^{2j_{n}}\sqrt{\frac{\log\log n}{2n}}}=\frac{1}{\sqrt{2}}\left(\frac{j_{n}2^{2j_{n}}}{n}\right)^{1/2}\left(\frac{\log\log n}{j_{n}}\right)^{1/2},\]
which, by hypothesis (H.4), converges to 0 as \(n\rightarrow\infty\). Then \(2^{3j_{n}}\left(\frac{\log\log n}{2n}\right)=o\left(2^{2j_{n}}\sqrt{\frac{\log\log n}{2n}}\right)\). That is, for \(d=2\),
\[|\xi_{k}(X_{i1},X_{i2})|=O\left(2^{2j_{n}}\sqrt{\frac{\log\log n}{n}}\right).\]
Iterating the bound (22), we have, for all \(d\geq 2\),
\[|\xi_{\mathbf{k}}(X_{i1},\ldots,X_{id})|\leq 2^{\frac{3}{2}dj_{n}}\left(\frac{\log\log n}{n}\right)^{\frac{d}{2}}\|\phi^{\prime}\|_{\infty}^{d}+\cdots+d\cdot 2^{\frac{3}{2}j_{n}}\sqrt{\frac{\log\log n}{n}}2^{\frac{d-1}{2}j_{n}}\|\phi^{\prime}\|_{\infty}\|\phi\|_{\infty}^{d-1}. \tag{23}\]
Note that the number of terms in the summation on the right-hand side of inequality (23) is finite. Moreover, as we observed for the case \(d=2\), all these terms are dominated (_small-o's_) by the last one, which is of order \(O\left(2^{\frac{3}{2}j_{n}}\sqrt{\frac{\log\log n}{n}}2^{\frac{d-1}{2}j_{n}}\right)\). Then
\[|\xi_{\mathbf{k}}(X_{i1},\ldots,X_{id})|=O\left(2^{\frac{3}{2}j_{n}}\sqrt{ \frac{\log\log n}{n}}2^{\frac{d-1}{2}j_{n}}\right).\]
and
\[|\hat{\alpha}_{j_{n},\mathbf{k}}-\tilde{\alpha}_{j_{n},\mathbf{k}}| \leq \frac{1}{n}\sum_{i=1}^{n}|\xi_{\mathbf{k}}(X_{i1},\ldots,X_{id})|\] \[= O\left(2^{(\frac{2+d}{2})j_{n}}\sqrt{\frac{\log\log n}{n}}\right)\]
Finally, by using the boundedness of the function \(\theta_{\phi}(x)=\sum_{l=1}^{2^{j_{n}}}|\phi(x-l)|\), we obtain
\[|\hat{c}_{j_{n}}(\mathbf{u})-\tilde{c}_{j_{n}}(\mathbf{u})| \leq \sum_{\mathbf{k}\in\{1,\ldots,2^{j_{n}}\}^{d}}|\hat{\alpha}_{j_{n},\mathbf{k}}-\tilde{\alpha}_{j_{n},\mathbf{k}}|2^{\frac{d}{2}j_{n}}\prod_{m=1}^{d}|\phi(2^{j_{n}}u_{m}-k_{m})|\] \[= O\left[\|\theta_{\phi}\|_{\infty}^{d}2^{\frac{2+2d}{2}j_{n}}\sqrt{\frac{\log\log n}{n}}\right]\] \[= O\left[\left(\frac{j_{n}2^{(1+d)j_{n}}}{n}\right)^{1/2}\left(\frac{\log\log n}{j_{n}}\right)^{1/2}\right]\]
which, by hypothesis (H.4), converges to 0, as \(n\to\infty\). Hence
\[\sup_{\mathbf{u}\in I^{d}}|R_{n}(\mathbf{u})|\longrightarrow 0,n\to\infty,\quad a.s. \tag{24}\]
To handle the last term \(B_{n}(\mathbf{u})\) corresponding to the bias of \(\tilde{c}_{j_{n}}\), we make use of approximation properties in Besov spaces. Let \(K_{j_{n}}\) denote the orthogonal projection kernel onto the sub-space \(V_{j_{n}}\). That is
\[K_{j_{n}}(c)(\mathbf{u})=\int_{I^{d}}K_{j_{n}}(\mathbf{u},\mathbf{v})c( \mathbf{v})d\mathbf{v},\quad\mathbf{u}\in I^{d}.\]
Then, we can write
\[B_{n}(\mathbf{u})=\mathbb{E}\tilde{c}_{j_{n}}(\mathbf{u})-c(\mathbf{u})=K_{j_ {n}}(c)(\mathbf{u})-c(\mathbf{u}).\]
Since \(\phi\) satisfies _Condition_\(1(N)\) and \(c\in B_{\infty,\infty}^{t}\), \(0<t<N+1\), then applying Theorem 9.4 in [15] gives :
\[\|K_{j_{n}}(c)-c\|_{\infty}\leq A2^{-j_{n}t},\]
where \(A\) is a positive constant depending on the Besov norm \(\|c\|_{t,\infty,\infty}\). Hence
\[\sup_{{\bf u}\in I^{d}}|B_{n}({\bf u})|=O(2^{-j_{n}t}). \tag{25}\]
Combining (15), (24) and (25) gives the proof of the theorem. \(\Box\)
_Remark. 3_ Theorem 3.1 implies that if \(2^{j_{n}}\simeq(n/\log n)^{\frac{1}{2t+d}}\), then the rank-based linear estimator \(\hat{c}_{j_{n}}\) achieves the optimal rate of convergence in supremum norm, \((\log n/n)^{\frac{t}{2t+d}}\), over Besov balls in \(B^{t}_{\infty,\infty}\). This rate is the best possible, as far as the supremum norm loss is concerned \((p=\infty)\), when the estimated density is defined on a compact set [see, e.g., [16] for optimality in minimax theory]. However, notice that this rate is slower than the rates obtained for quadratic and pointwise loss functions, established in [11] and [1] for the wavelet linear estimators.
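As a small numerical illustration (ours; the constant hidden in \(\simeq\) is taken to be 1), the resolution level can be computed as follows.

```python
import numpy as np

def resolution_level(n, t, d):
    """j_n with 2**j_n ~ (n / log n)**(1 / (2*t + d)), rounded to an integer."""
    return max(0, round(float(np.log2((n / np.log(n)) ** (1.0 / (2 * t + d))))))

# e.g. n = 5000 observations, assumed regularity t = 2, dimension d = 2
print(resolution_level(5000, t=2, d=2))   # -> 2, i.e. 2**j_n = 4 cells per axis
```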
_Remark. 4_ For copula densities in general Besov spaces \(B^{t}_{pq}\) with \(t>1/p\), we also have optimal rates for the wavelet linear estimator \(\hat{c}_{j_{n}}\). Indeed, if \(c\in B^{t}_{pq}\) with \(t>1/p\), the Sobolev embedding properties entail \(B^{t}_{pq}\subset B^{t-1/p}_{\infty,\infty}\). Thus, if \(2^{j_{n}}\simeq(n/\log n)^{\frac{1}{2(t-1/p)+d}}\), then \(\hat{c}_{j_{n}}\) attains the optimal rate: \(\mathbb{E}(\|\hat{c}_{j_{n}}-c\|_{\infty})=O\left((\log n/n)^{\frac{t-1/p}{2(t-1/p)+d}}\right)\) [see, e.g., [6], Theorem 1].
_Remark. 5_ As established in [11] for the quadratic norm, we have proved, by another approach using Chung's (1949) LIL, that the error term associated with the use of ranks (coming from the pseudo-observations) is also negligible in the supremum norm case. That is, resorting to pseudo-observations instead of genuine observations does not affect the convergence rate of the linear wavelet estimators of the copula density.
## Appendix A : Useful results on empirical process
**Bernstein's inequality (maximal version):**
Let \(Z_{1},\ldots,Z_{n}\) be independent random variables with \(\mathbb{E}(Z_{i})=0,i=1,\ldots,n\) and \(\mathrm{Var}(\sum_{i=1}^{n}Z_{i})\leq\nu\). Assume further that for some constant \(M>0,\ |Z_{i}|<M\), \(i=1,...,n\). Then for all \(t>0\)
\[\mathbb{P}\left(\max_{q\leq n}\left|\sum_{i=1}^{q}Z_{i}\right|>t\right)\leq 2\exp\left\{\frac{-t^{2}}{2\nu+(2/3)Mt}\right\}. \tag{26}\]
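As a quick sanity check (a simulation sketch of ours, not part of the original appendix), the empirical tail of the maximal partial sums of bounded centered summands can be compared with the right-hand side of (26):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, t, M = 200, 20_000, 15.0, 1.0        # |Z_i| <= M for U(-1,1) summands
Z = rng.uniform(-1.0, 1.0, size=(reps, n))    # centered, independent
max_partial = np.abs(Z.cumsum(axis=1)).max(axis=1)

nu = n / 3.0                                  # Var(sum) = n * Var(U(-1,1)) = n/3
empirical = (max_partial > t).mean()
bound = 2.0 * np.exp(-t**2 / (2.0 * nu + (2.0 / 3.0) * M * t))
print(empirical, bound)                       # empirical tail <= Bernstein bound
```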
**Lemma A.1** [Einmahl and Mason (2000)]: Let \(\mathcal{F}\) and \(\mathcal{G}\) be two classes of real-valued measurable functions on \(\mathcal{X}\) satisfying
\[|f(x)|\leq F(x),\quad f\in\mathcal{F},\quad x\in\mathcal{X},\]
where \(F\) is a finite valued measurable envelope function on \(\mathcal{X}\);
\[\|g\|_{\infty}\leq M,\quad g\in\mathcal{G},\]
where \(M>0\) is a finite constant. Assume that for all probability measure \(Q\) with \(0<Q(F^{2})<\infty\),
\[N(\varepsilon(Q(F^{2}))^{1/2},\mathcal{F},d_{Q})\leq C_{1}\varepsilon^{-\nu_{1 }},\qquad 0<\varepsilon<1\]
and
\[N(\varepsilon M,\mathcal{G},d_{Q})\leq C_{2}\varepsilon^{-\nu_{2}},\qquad 0< \varepsilon<1\]
where \(\nu_{1},\nu_{2},C_{1},C_{2}\geq 1\) are suitable constants. Then we have, for all probability measures \(Q\) with \(0<Q(F^{2})<\infty\),
\[N(\varepsilon M(Q(F^{2}))^{1/2},\mathcal{F}\mathcal{G},d_{Q})\leq C_{3} \varepsilon^{-\nu_{1}-\nu_{2}},\qquad 0<\varepsilon<1\]
for some finite constant \(0<C_{3}<\infty\).
**Proposition 2 [Einmahl and Mason (2000)]:**
Let \(Z,Z_{1},Z_{2},\dots\), be a sequence of i.i.d. random vectors taking values in \(\mathbb{R}^{m},m\geq 1\). For each \(n\geq 1\), consider the empirical distribution function based on the first \(n\) of these random vectors, defined by
\[G_{n}(s)=\frac{1}{n}\sum_{i=1}^{n}1_{Z_{i}\leq s},\quad s\in\mathbb{R}^{m},\]
where as usual \(z\leq s\) means that each component of \(z\) is less than or equal to the corresponding component of \(s\). For any measurable real valued function \(g\) defined on \(\mathbb{R}^{m}\), set
\[G_{n}(g)=\int_{\mathbb{R}^{m}}g(s)dG_{n}(s),\qquad\mu(g)=\mathbb{E}g(Z)\qquad\text{and}\quad\bar{\sigma}^{2}(g)=\mathrm{Var}(g(Z)).\]
Let \((a_{n}:n\geq 1)\) denote a sequence of positive constants converging to zero. Consider a sequence \(\mathcal{G}_{n}=\{g_{i}^{(n)}:i=1,\ldots,k_{n}\}\) of sets of real-valued measurable functions on \(\mathbb{R}^{m}\), satisfying, whenever \(g_{i}^{(n)},g_{j}^{(n)}\in\mathcal{G}_{n}\):
\[\mathbb{P}(g_{i}^{(n)}(Z)\neq 0,\;g_{j}^{(n)}(Z)\neq 0)=0,\quad i\neq j,\quad\text{and}\quad\sum_{i=1}^{k_{n}}\mathbb{P}(g_{i}^{(n)}(Z)\neq 0)\leq 1/2.\]
Further assume that :
For some \(0<r<\infty\), \(a_{n}k_{n}\to r\), as \(n\to\infty\).
For some \(-\infty<\mu_{1}\leq\mu_{2}<\infty\), uniformly in \(i=1,\dots,k_{n}\), for all large \(n\), \(a_{n}\mu_{1}\leq\mu(g_{i}^{(n)})\leq a_{n}\mu_{2}\);
For some \(0<\sigma_{1}<\sigma_{2}<\infty\), uniformly in \(i=1,\dots,k_{n}\), for all large \(n\), \(\sigma_{1}\sqrt{a_{n}}\leq\bar{\sigma}(g_{i}^{(n)})\leq\sigma_{2}\sqrt{a_{n}}\);
For some \(0<B<\infty\), uniformly in \(i=1,\dots,k_{n}\), for all large \(n\), \(|g_{i}^{(n)}|\leq B\)
**Proposition 3.2**: _Under these assumptions, with probability one, for each \(0<\varepsilon<1\), there exists \(N_{\varepsilon}\) such that for \(n\geq N_{\varepsilon}\),_
\[\max_{1\leq i\leq k_{n}}\frac{\sqrt{n}\{G_{n}(g_{i}^{(n)})-\mu(g_{i}^{(n)})\}}{ \bar{\sigma}(g_{i}^{(n)})\sqrt{2|\log a_{n}|}}\geq 1-\varepsilon.\]
**Talagrand's inequality**:
Let \(X_{i},i=1,\ldots,n\), be an independent and identically distributed sample of \(X\) with probability law \(P\) on \(\mathbb{R}\), and \(\mathcal{G}\) a \(P\)-centered (i.e., \(\int gdP=0\) for all \(g\in\mathcal{G}\)) countable class of real-valued functions on \(\mathbb{R}\), uniformly bounded by the constant \(\mathrm{U}\). Let \(\sigma\) be any positive number such that \(\sigma^{2}\geq\sup_{g\in\mathcal{G}}\mathbb{E}(g^{2}(X))\). Then, Talagrand's (1996) inequality implies that there exists a universal constant \(L\) such that, for all \(t>0\),
\[\mathbb{P}\left(\max_{q\leq n}\left\|\sum_{i=1}^{q}g(X_{i})\right\|_{\mathcal{G}}>E+t\right)\leq L\exp\left\{\frac{-t}{L\mathrm{U}}\log\Big{(}1+\frac{t\mathrm{U}}{V}\Big{)}\right\}, \tag{27}\]
with
\[E=\mathbb{E}\left\|\sum_{i=1}^{n}g(X_{i})\right\|_{\mathcal{G}}\ \ \text{and}\ \ V= \mathbb{E}\left\|\sum_{i=1}^{n}(g(X_{i}))^{2}\right\|_{\mathcal{G}}.\]
Further, if \(\mathcal{G}\) is a VC-type class of functions, with characteristics \(A,\,v\), then there exists a universal constant \(B\) such that [see, e.g., Gine and Guillou (2001)]:
\[E\leq B\left[v\mathrm{U}\log\frac{A\mathrm{U}}{\sigma}+\sqrt{v}\sqrt{n\sigma^{2}\log\frac{A\mathrm{U}}{\sigma}}\right] \tag{28}\]
Next, if \(\sigma<\frac{\mathrm{U}}{2}\), the constant \(A\) may be replaced by 1 at the price of changing the constant \(B\), and then if, moreover, \(n\sigma^{2}>C_{0}\log\left(\frac{\mathrm{U}}{\sigma}\right)\), we have
\[E\leq C_{1}\sqrt{n\sigma^{2}\log\left(\frac{\mathrm{U}}{\sigma}\right)},\ \ \text{and}\ \ V\leq L^{\prime}n\sigma^{2}, \tag{29}\]
where \(C_{1},L^{\prime}\) are constants depending only on \(A,v,C_{0}\). Finally, it follows from (27) and (29) that, for all \(t>0\) satisfying \(C_{1}\sqrt{n\sigma^{2}\log\left(\frac{\mathrm{U}}{\sigma}\right)}\leq t\leq C_{2}\frac{n\sigma^{2}}{\mathrm{U}}\), for any constant \(C_{2}\geq C_{1}\),
\[\mathbb{P}\left(\max_{n_{k-1}\leq n\leq n_{k}}\left\|\sum_{i=1}^{n}g(X_{i}) \right\|_{\mathcal{G}}>t\right)\leq R\exp\left\{\frac{-1}{C_{3}}\frac{t^{2}}{ n\sigma^{2}}\right\}, \tag{30}\]
where \(C_{3}=\log(1+C_{2}/L^{\prime})/RC_{2}\) and \(R\) a constant depending only on \(A\) and \(v\).
## Appendix B : Proof of Proposition 3.1
### Upper Bound
**Lemma 3.2**: _Under the assumptions of Proposition 3.1, one has almost surely_
\[\limsup_{n\to\infty}r_{n}\sup_{\boldsymbol{u}\in I^{d}}\frac{|D_{n}( \boldsymbol{u})|}{\sqrt{\int_{\mathbb{R}^{d}}K^{2}(\boldsymbol{x},2^{j_{n}} \boldsymbol{u})d\boldsymbol{x}}}\leq\sqrt{\|c\|_{\infty}}. \tag{31}\]
**Proof :** Given \(\lambda>1\), define \(n_{k}=[\lambda^{k}],k\in\mathbb{N}\), where \([a]\) denotes the integer part of a real \(a\). Let \(\delta_{m}=1/m\), \(m\geq 1\) integer, then we can cover the set \(I^{d}\) by a number \(l_{k}\) of small cubes \(S_{k,r}\), each of side length \(\delta_{m}2^{-j_{n_{k}}}\), with
\[l_{k}\leq\left(\frac{1}{\delta_{m}2^{-j_{n_{k}}}}+1\right)^{d}\leq\left(\frac{ 2}{\delta_{m}2^{-j_{n_{k}}}}\right)^{d}, \tag{32}\]
for \(k\) large enough. Let us choose points \(\textbf{u}_{k,r}\in S_{k,r}\cap I^{d},\ r=1,\ldots,l_{k}\). We want to prove Lemma 3.2 over the discrete grid of points \(\{\textbf{u}_{k,r}:r=1,\ldots,l_{k}\}\). For all \(\eta\in(0,1)\) we claim that
\[\limsup_{k\to\infty}\sqrt{\frac{n_{k}}{(2d\log 2)j_{n_{k}}2^{dj_{n_{k}}}}}\max_{1\leq r\leq l_{k}}\max_{n_{k-1}\leq n\leq n_{k}}|D_{n}(\textbf{u}_{k,r})|\leq(1+\eta)\sqrt{\|c\|_{\infty}[K^{2}]}, \tag{33}\]
where we note
\[[K^{2}]=\int_{\mathbb{R}^{d}}\textbf{K}^{2}(\textbf{x},2^{j_{n}}\textbf{u})d \textbf{x}.\]
To prove (33), we apply the maximal version of Bernstein's inequality (see Appendix A above). Given \(\textbf{u}\in I^{d}\) and \(k\in\mathbb{N}\), for all \(n\) satisfying \(n_{k-1}\leq n\leq n_{k}\), let
\[Z_{i}(\textbf{u})=\textbf{K}(2^{j_{n}}\textbf{U}_{i},2^{j_{n}}\textbf{u})- \mathbb{E}\textbf{K}(2^{j_{n}}\textbf{U}_{i},2^{j_{n}}\textbf{u}),\ \ i=1,\ldots,n.\]
Observe that for each \(n\), the \(Z_{i}(\textbf{u})^{\prime}s\) are independent and identically distributed zero-mean random variables, and for all \(\textbf{u}\in I^{d}\),
\[D_{n}(\textbf{u})=\frac{2^{dj_{n}}}{n}\sum_{i=1}^{n}Z_{i}(\textbf{u}). \tag{34}\]
By hypothesis (H.2), we have
\[\left|\textbf{K}(2^{j_{n}}\textbf{U}_{i},2^{j_{n}}\textbf{u})\right| = \prod_{m=1}^{d}\left|\tilde{K}(2^{j_{n}}U_{im},2^{j_{n}}u_{m}) \right|\leq\prod_{m=1}^{d}\Phi(2^{j_{n}}(U_{im}-u_{m}))\leq\|\Phi\|_{\infty}^ {d},\]
where \(\|\Phi\|_{\infty}=\sup_{x\in\mathbb{R}}|\Phi(x)|\). This implies
\[\left|\mathbb{E}\textbf{K}(2^{j_{n}}\textbf{U}_{i},2^{j_{n}}\textbf{u})\right|\leq\mathbb{E}\|\Phi\|_{\infty}^{d}=\|\Phi\|_{\infty}^{d}.\]
Thus, for all \(\textbf{u}\in I^{d}\),
\[|Z_{i}(\textbf{u})|\leq 2\|\Phi\|_{\infty}^{d}:=M.\]
Since \(Z_{i}(\textbf{u})^{\prime}s\) are independent and centered, we can write for \(n=n_{k}\)
\[Var\left(\sum_{i=1}^{n_{k}}Z_{i}(\textbf{u})\right)=n_{k}Var(Z_{1}(\textbf{u}) )=n_{k}\mathbb{E}(Z_{1}^{2}(\textbf{u})).\]
Then using the change of variables \(\textbf{s}=2^{-j_{n_{k}}}\textbf{x},\textbf{s}=(s_{1},\ldots,s_{d})\), \(\textbf{x}=(x_{1},\ldots,x_{d})\), we obtain
\[\mathbb{E}(Z_{1}^{2}(\textbf{u})) \leq \mathbb{E}\textbf{K}^{2}(2^{j_{n_{k}}}\textbf{U}_{1},2^{j_{n_{k}}}\textbf{u})\] \[\leq \int_{I^{d}}\textbf{K}^{2}(2^{j_{n_{k}}}\textbf{s},2^{j_{n_{k}}}\textbf{u})c(\textbf{s})d\textbf{s}\] \[\leq 2^{-dj_{n_{k}}}\|c\|_{\infty}\int_{[0,2^{j_{n_{k}}}]^{d}}\textbf{K}^{2}(\textbf{x},2^{j_{n_{k}}}\textbf{u})d\textbf{x}\]
which yields
\[Var\left(\sum_{i=1}^{n_{k}}Z_{i}(\textbf{u})\right)\leq n_{k}2^{-dj_{n_{k}}}\|c\|_{\infty}\int_{\mathbb{R}^{d}}\textbf{K}^{2}(\textbf{x},2^{j_{n_{k}}}\textbf{u})d\textbf{x}:=\sigma_{k}^{2}.\]
Now, applying the maximal version of Bernstein's inequality, for each point \(\textbf{u}_{k,r}\), we obtain, for all \(t>0\),
\[\mathbb{P}\left(\max_{n_{k-1}\leq n\leq n_{k}}\left|\sum_{i=1}^{n}Z_{i}( \textbf{u}_{k,r})\right|>t\right)\leq 2\exp\left\{\frac{-t^{2}}{2\sigma_{k}^{2}+(2/3) Mt}\right\}, \tag{35}\]
which yields
\[\mathbb{P}\left(\max_{1\leq r\leq l_{k}}\max_{n_{k-1}\leq n\leq n_{k}}\left|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})\right|>t\right) = \mathbb{P}\left(\bigcup_{r=1}^{l_{k}}\left\{\max_{n_{k-1}\leq n\leq n_{k}}\left|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})\right|>t\right\}\right)\] \[\leq \sum_{r=1}^{l_{k}}\mathbb{P}\left(\max_{n_{k-1}\leq n\leq n_{k}}\left|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})\right|>t\right)\] \[\leq 2l_{k}\exp\left\{\frac{-t^{2}}{2\sigma_{k}^{2}+(2/3)Mt}\right\}.\]
Let \(t=\sqrt{2(1+\eta)n_{k}2^{-dj_{n_{k}}}\log 2^{dj_{n_{k}}}\|c\|_{\infty}[K^{2}]}\). Then, for \(k\) large enough, \(t\rightarrow\infty\). Combining this with (32), we obtain, after some little algebra,
\[\mathbb{P}\left(\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})|}{\sqrt{2n_{k}2^{-dj_{n_{k}}}\log 2^{dj_{n_{k}}}\|c\|_{\infty}[K^{2}]}}>\sqrt{1+\eta}\right) \leq 2l_{k}\exp\left\{\frac{-t^{2}}{\frac{t^{2}}{(1+\eta)\log 2^{dj_{n_{k}}}}+\frac{4}{3}\|\Phi\|_{\infty}^{d}t}\right\}\] \[\leq 2l_{k}\exp\left\{-(1+\eta)\log 2^{dj_{n_{k}}}\right\}\] \[\leq 2^{d+1}\delta_{m}^{-d}2^{-d\eta j_{n_{k}}}.\]
Since the series \(\sum_{k\geq 0}2^{-d\eta j_{n_{k}}}<\infty\), Borel-Cantelli lemma yields
\[\mathbb{P}\left(\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})|}{\sqrt{(2d\log 2)n_{k}j_{n_{k}}2^{-dj_{n_{k}}}\|c\|_{\infty}[K^{2}]}}>\sqrt{1+\eta},\ \text{infinitely often}\right)=0. \tag{36}\]
That is
\[\limsup_{k\rightarrow\infty}\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})|}{\sqrt{(2d\log 2)n_{k}j_{n_{k}}2^{-dj_{n_{k}}}\|c\|_{\infty}[K^{2}]}}\leq\sqrt{1+\eta},\quad a.s. \tag{37}\]
Since the function \(x\mapsto x2^{-dx}\) is decreasing for \(x>1/(d\log 2)\), we have, for \(n_{k-1}\leq n\leq n_{k}\) and \(k\) large enough,
\[\sqrt{\frac{n_{k}j_{n_{k}}2^{-dj_{n_{k}}}}{nj_{n}2^{-dj_{n}}}}\leq\sqrt{\frac{n_{k}}{n}}\leq\sqrt{\frac{n_{k}}{n_{k-1}}}\leq\sqrt{\lambda}. \tag{38}\]
In view of inequality (38), Statement (37) yields
\[\limsup_{k\rightarrow\infty}\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}|\sum_{i=1}^{n}Z_{i}(\textbf{u}_{k,r})|}{\sqrt{(2d\log 2)nj_{n}2^{-dj_{n}}\|c\|_{\infty}[K^{2}]}}\leq\sqrt{\lambda(1+\eta)},\quad a.s. \tag{39}\]
Now, multiplying the numerator and the denominator of the fraction in (39) by the factor \(2^{dj_{n}}/n\), and recalling the expression of \(D_{n}(\mathbf{u})\) in (34), we finally get, for all \(\eta\in(0,1)\),
\[\limsup_{k\to\infty}\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}\sqrt{n}\,|D_{n}(\mathbf{u}_{k,r})|}{\sqrt{(2d\log 2)j_{n}2^{dj_{n}}}}\leq\sqrt{\lambda(1+\eta)\|c\|_{\infty}[K^{2}]}, \tag{40}\]
which proves Lemma 3.2 over the discrete grid.
Next, to prove Lemma 3.2 between the grid points, we shall make use of Talagrand's (1996) inequality (27). Let us introduce the sequence of functions defined as follows : for all \(n\geq 1\), \(k\geq 1\), \(1\leq r\leq l_{k}\) and any fixed \(\mathbf{u}\in S_{k,r}\), define
\[g_{k,r}^{(n)}(\mathbf{s},\mathbf{u})=\mathbf{K}(2^{j_{n_{k}}}\mathbf{s},2^{j_{ n_{k}}}\mathbf{u}_{k,r})-\mathbf{K}(2^{j_{n}}\mathbf{s},2^{j_{n}}\mathbf{u}), \qquad\mathbf{s}\in I^{d}. \tag{41}\]
and set, for all \(\lambda>1\),
\[\mathcal{G}_{k,r}(\lambda)=\left\{g:\mathbf{s}\mapsto g_{k,r}^{(n)}(\mathbf{ s},\mathbf{u}):\mathbf{u}\in S_{k,r}\cap I^{d},\ n_{k-1}\leq n\leq n_{k}\right\}.\]
Let \(\mathbf{S}=(S_{1},\ldots,S_{d})\) be a vector of \([0,1]\)-uniform random variables. We now have to check the following conditions in order to apply Talagrand's (1996) inequality:
* The classes \(\mathcal{G}_{k,r}(\lambda),1\leq r\leq l_{k}\), are of VC-type with characteristics \(A\) and \(v\) ;
* \(\forall g\in\mathcal{G}_{k,r}(\lambda)\), \(\|g\|_{\infty}\leq\mathrm{U};\)
* \(\forall g\in\mathcal{G}_{k,r}(\lambda)\), \(\mathrm{Var}[g(\mathbf{S})]\leq\sigma_{k}^{2},\)
* \(\sigma_{k}<\frac{\mathrm{U}}{2}\) and \(n_{k}\sigma_{k}^{2}>C_{0}\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right)\), \(C_{0}>0\).
These conditions will be checked below.
Recall that \(\mathbf{U}_{i}=(F_{1}(X_{i1}),\ldots,F_{d}(X_{id})),\ i=1,\ldots,n\), is a sequence of independent and identically distributed vectors with \([0,1]\)-uniform components. We show below that each class \(\mathcal{G}_{k,r}(\lambda)\) satisfies all the conditions i), ii), iii) and iv) for \(\mathrm{U}=2\|\Phi\|_{\infty}^{d}\) and \(\sigma_{k}^{2}=D_{0}2^{-dj_{n_{k}}}\|c\|_{\infty}\omega_{\phi}^{2}(\delta_{m})\), where \(\omega_{\phi}\) is the modulus of continuity of \(\phi\) defined in (49) below and \(D_{0}\) is a positive constant depending on \(\|\Phi\|_{\infty}\) and \(d\). Then, Talagrand's (1996) inequality gives: for all \(t>0\),
\[\mathbb{P}\left(\max_{n_{k-1}\leq n\leq n_{k}}\left\|\sum_{i=1}^{n}(g(\mathbf{ U}_{i})-\mathbb{E}g(\mathbf{U}_{i}))\right\|_{\mathcal{G}_{k,r}(\lambda)}>t\right) \leq R\exp\left\{\frac{-1}{C_{3}}\frac{t^{2}}{n_{k}\sigma_{k}^{2}}\right\}, \tag{42}\]
which yields, by taking the maximum over \(r\) and \(t=C_{1}\sqrt{n_{k}\sigma_{k}^{2}\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right)}\),
\[\mathbb{P}\left(\max_{1\leq r\leq l_{k}}\max_{n_{k-1}\leq n\leq n_{k}}\left\| \sum_{i=1}^{n}(g(\mathbf{U}_{i})-\mathbb{E}g(\mathbf{U}_{i}))\right\|_{ \mathcal{G}_{k,r}(\lambda)}>t\right)\leq Rl_{k}\exp\left\{\frac{-C_{1}^{2}}{C_ {3}}\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right)\right\}. \tag{43}\]
Whenever \(m\to\infty\), \(\omega_{\phi}(\delta_{m})\to 0\). Hence, for any \(\varepsilon>0\), there exists \(m_{0}\in\mathbb{N}\) such that \(\omega_{\phi}(\delta_{m})<\varepsilon\) for \(m\geq m_{0}\). Using this fact, we can replace \(\sigma_{k}^{2}\) by \(4D\varepsilon 2^{-dj_{n_{k}}}\|c\|_{\infty}\), for \(m\) large enough. We also have, for \(k\) large enough,
\[\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right)=\log\left(\frac{\mathrm{U}}{4D \varepsilon\|c\|_{\infty}}\right)+j_{n_{k}}\log 2\sim j_{n_{k}}\log 2\]
and thus, for \(k,m\) large enough,
\[t=C_{1}\sqrt{n_{k}\sigma_{k}^{2}\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right) }\sim\sqrt{4DC_{1}^{2}n_{k}2^{-dj_{n_{k}}}d\,j_{n_{k}}\log 2\,\varepsilon\|c\|_{ \infty}}.\]
By combining these facts with (32), we obtain, with \(A_{0}=C_{1}\sqrt{2D}\),
\[\mathbb{P}\left(\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}\|\sum_{i=1}^{n}(g(\mathbf{U}_{i})-\mathbb{E}g(\mathbf{U}_{i}))\|_{\mathcal{G}_{k,r}(\lambda)}}{\sqrt{(2d\log 2)n_{k}j_{n_{k}}2^{-dj_{n_{k}}}}}>A_{0}\sqrt{\varepsilon\|c\|_{\infty}}\right)\leq 2^{d}R\delta_{m}^{-d}2^{-(C_{1}^{2}/C_{3}-d)j_{n_{k}}}. \tag{44}\]
Now, we can choose the constant \(C_{1}\) in such a way that \(C_{1}^{2}/C_{3}-d>0\); in which case the series \(\sum_{k\geq 0}2^{-(C_{1}^{2}/C_{3}-d)j_{n_{k}}}\) converges. Thus, the Borel-Cantelli lemma implies
\[\mathbb{P}\left(\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k}}\|\sum_{i=1}^{n}(g(\mathbf{U}_{i})-\mathbb{E}g(\mathbf{U}_{i}))\|_{\mathcal{G}_{k,r}(\lambda)}}{\sqrt{(2d\log 2)n_{k}j_{n_{k}}2^{-dj_{n_{k}}}}}>A_{0}\sqrt{\varepsilon\|c\|_{\infty}},\ \text{infinitely often}\right)=0, \tag{45}\]
that is
\[\limsup_{k\to\infty}\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k} }\|\sum_{i=1}^{n}(g(\mathbf{U}_{i})-\mathbb{E}g(\mathbf{U}_{i}))\|_{\mathcal{ G}_{k,r}(\lambda)}}{\sqrt{(2d\log 2)n_{k}j_{n_{k}}2^{-dj_{n_{k}}}}}\leq A_{0} \sqrt{\varepsilon\|c\|_{\infty}},\quad a.s. \tag{46}\]
Arguing as in the discrete case, with Statement (38) in view, we conclude that
\[\limsup_{k\to\infty}\max_{1\leq r\leq l_{k}}\frac{\max_{n_{k-1}\leq n\leq n_{k }}\sqrt{n}\left\|D_{n}(\mathbf{u}_{k,r})-D_{n}(\mathbf{u}))\right\|_{\mathcal{ G}_{k,r}(\lambda)}}{\sqrt{(2d\log 2)j_{n}2^{dj_{n}}}}\leq A_{0}\sqrt{ \lambda\varepsilon\|c\|_{\infty}},\quad a.s., \tag{47}\]
which completes the proof of Lemma 3.2 between the grid points.
Now recapitulating, we can infer from (40) and (47) that
\[\limsup_{k\to\infty}\max_{n_{k-1}\leq n\leq n_{k}}\sup_{\mathbf{u}\in I^{d}}\frac{\sqrt{n}\left|D_{n}(\mathbf{u})\right|}{\sqrt{(2d\log 2)j_{n}2^{dj_{n}}}}\leq\sqrt{\lambda(1+\eta)\|c\|_{\infty}[K^{2}]}+A_{0}\sqrt{\lambda\varepsilon\|c\|_{\infty}},\quad a.s. \tag{48}\]
Since \(\eta\) and \(\varepsilon\) are arbitrary, letting \(\lambda\to 1\) completes the proof of Lemma 3.2.\(\square\)
### Checking conditions i), ii), iii), iv)
**Checking i):** Observe that the elements of the class \(\mathcal{G}_{k,r}(\lambda)\) may be rewritten as
\[g_{k,r}^{(n)}(\mathbf{s},\mathbf{u})=\prod_{m=1}^{d}\widetilde{K}(2^{j_{n_{k} }}s_{m},2^{j_{n_{k}}}u_{k,r,m})-\prod_{m=1}^{d}\widetilde{K}(2^{j_{n}}s_{m},2^ {j_{n}}u_{m}),\]
where \(\widetilde{K}(x,y)=\sum_{l=1}^{2^{j_{n}}}\phi(x-l)\phi(y-l)\), with \(\phi\) compactly supported and of bounded variation. For \(m=1,\ldots,d\), define the classes of functions \(\mathcal{F}_{m}=\{v\mapsto\sum_{l\in\mathbb{Z}}\phi(2^{j}w-l)\phi(2^{j}v-l):w\in[0,1],j\in\mathbb{N}\}\). By Lemma 2 in [14], \(\mathcal{F}_{1},\ldots,\mathcal{F}_{d}\) are VC-type classes of functions. Moreover, \(\mathcal{F}_{1},\ldots,\mathcal{F}_{d}\) are uniformly bounded. Indeed, for all \(w\in[0,1],\ j\in\mathbb{N}\), we have \(\left|\sum_{l=1}^{2^{j_{n}}}\phi(2^{j}w-l)\phi(2^{j}\cdot-l)\right|\leq\|\phi\|_{\infty}\|\theta_{\phi}\|_{\infty}\), as the function \(\theta_{\phi}(x)=\sum_{l=1}^{2^{j_{n}}}|\phi(x-l)|\) is bounded. By Lemma A.1 in [7], this implies that the product \(\mathcal{F}_{1}\cdots\mathcal{F}_{d}\) is also a VC-type class of functions. Now, using properties (iv) and (v) of Lemma 2.6.18 in [24], we can infer that the classes of functions \(\mathcal{G}_{k,r}(\lambda)\) are of VC-type for all \(k,r\) fixed.
**Checking ii):** For all \(k\geq 1,\ 1\leq r\leq l_{k},\ n_{k-1}\leq n\leq n_{k}\), using hypothesis (H.2), we can write
\[\left|g^{(n)}_{k,r}(\cdot,{\bf u})\right| \leq \left|{\bf K}(2^{j_{n_{k}}}\cdot,2^{j_{n_{k}}}{\bf u}_{k,r})\right|+\left|{\bf K}(2^{j_{n}}\cdot,2^{j_{n}}{\bf u})\right|\] \[\leq \prod_{m=1}^{d}\Phi(2^{j_{n_{k}}}(\cdot-u_{k,r,m}))+\prod_{m=1}^{d}\Phi(2^{j_{n}}(\cdot-u_{m}))\leq 2\|\Phi\|_{\infty}^{d}\]
and ii) holds with \({\rm U}=2\|\Phi\|_{\infty}^{d}\).
**Checking iii):** For all \(k\geq 1,\ 1\leq r\leq l_{k},\ n_{k-1}\leq n\leq n_{k}\). As in [14], we choose \(\lambda>1\) close enough to 1, so that \(j_{n}=j_{n_{k}}\). By the change of variables \(\mathbf{s}=\mathbf{u}+2^{-j_{n_{k}}}\mathbf{x},\ \mathbf{s}=(s_{1},\ldots,s_{d}),\ \mathbf{x}=(x_{1},\ldots,x_{d}),\ \mathbf{u}=(u_{1},\ldots,u_{d})\), we have
\[\mathbb{E}\left[\left(g^{(n)}_{k,r}({\bf S},{\bf u})\right)^{2}\right] = \mathbb{E}\left[\left({\bf K}(2^{j_{n_{k}}}{\bf S},2^{j_{n_{k}}}{\bf u}_{k,r})-{\bf K}(2^{j_{n_{k}}}{\bf S},2^{j_{n_{k}}}{\bf u})\right)^{2}\right]\] \[= \int_{I^{d}}\left({\bf K}(2^{j_{n_{k}}}{\bf s},2^{j_{n_{k}}}{\bf u}_{k,r})-{\bf K}(2^{j_{n_{k}}}{\bf s},2^{j_{n_{k}}}{\bf u})\right)^{2}c({\bf s})d{\bf s}\] \[\leq 2^{-dj_{n_{k}}}\|c\|_{\infty}\int_{[-2^{j_{n_{k}}},2^{j_{n_{k}}}]^{d}}\left({\bf K}(2^{j_{n_{k}}}{\bf u}+{\bf x},2^{j_{n_{k}}}{\bf u}_{k,r})-{\bf K}(2^{j_{n_{k}}}{\bf u}+{\bf x},2^{j_{n_{k}}}{\bf u})\right)^{2}d{\bf x}\] \[\leq 2^{-dj_{n_{k}}}\|c\|_{\infty}\int_{\mathbb{R}^{d}}\left({\bf K}(2^{j_{n_{k}}}{\bf u}+{\bf x},2^{j_{n_{k}}}{\bf u}_{k,r})-{\bf K}(2^{j_{n_{k}}}{\bf u}+{\bf x},2^{j_{n_{k}}}{\bf u})\right)^{2}d{\bf x}.\]
To simplify, let us take \({\bf w}=2^{j_{n_{k}}}{\bf u}+{\bf x}\), then \(d{\bf x}=d{\bf w}\), \(w=(w_{1},\ldots,w_{d})\in\mathbb{R}^{d}\).
Put \(A({\bf w})={\bf K}({\bf w},2^{j_{n_{k}}}{\bf u}_{k,r})-{\bf K}({\bf w},2^{j_{n_ {k}}}{\bf u})\) ; using the multiplicativity of the kernel \({\bf K}\), we can rewrite \(A({\bf w})\) as
\[A({\bf w}) = \prod_{l=1}^{d}\widetilde{K}(w_{l},2^{j_{n_{k}}}u_{k,r,l})-\prod_{l=1}^{d}\widetilde{K}(w_{l},2^{j_{n_{k}}}u_{l})\] \[= \sum_{l=1}^{d}\left[\widetilde{K}(w_{l},2^{j_{n_{k}}}u_{k,r,l})-\widetilde{K}(w_{l},2^{j_{n_{k}}}u_{l})\right]\prod_{p=1,p\neq l}^{d}\widetilde{K}(w_{p},2^{j_{n_{k}}}u_{k,r,p})\]
For any \(\delta>0\), the modulus of continuity of \(\phi\) is defined as
\[\omega_{\phi}(\delta)=\sup\{|\phi(x)-\phi(y)|:|x-y|\leq\delta\}. \tag{49}\]
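For intuition, \(\omega_{\phi}\) can be approximated on a grid; the piecewise-linear "hat" function below is only an illustrative continuous \(\phi\), not one of the wavelets used in the paper.

```python
import numpy as np

def modulus_of_continuity(phi, delta, grid=None):
    """Grid approximation of omega_phi(delta) = sup{|phi(x)-phi(y)|: |x-y| <= delta}."""
    if grid is None:
        grid = np.linspace(-1.0, 2.0, 4001)
    vals = phi(grid)
    step = grid[1] - grid[0]
    shifts = int(np.ceil(delta / step))
    return max(np.abs(vals[k:] - vals[:-k]).max() for k in range(1, shifts + 1))

hat = lambda x: np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, None)
print(modulus_of_continuity(hat, 0.1))    # ~0.2, since omega(delta) = 2*delta here
```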
Recall that \(\widetilde{K}(x,y)=\sum_{h=1}^{2^{j_{n}}}\phi(x-h)\phi(y-h).\) Combining these facts with the inequality \((a_{1}+\cdots+a_{d})^{2}\leq d(a_{1}^{2}+\cdots+a_{d}^{2})\), and Fubini's Theorem, we get
\[\int_{\mathbb{R}^{d}}|A(\mathbf{w})|^{2}d\mathbf{w} \leq d\int_{\mathbb{R}^{d}}\sum_{l=1}^{d}\left[\widetilde{K}(w_{l},2^{j_{n_{k}}}u_{k,r,l})-\widetilde{K}(w_{l},2^{j_{n_{k}}}u_{l})\right]^{2}\prod_{p=1,p\neq l}^{d}\widetilde{K}^{2}(w_{p},2^{j_{n_{k}}}u_{k,r,p})d\mathbf{w}.\]
Then
\[\int_{\mathbb{R}^{d}}|A(\mathbf{w})|^{2}d\mathbf{w} \leq d\sum_{l=1}^{d}\int_{\mathbb{R}^{d}}\left[\sum_{h=1}^{2^{j_{n}}}\phi(w_{l}-h)\big{[}\phi(2^{j_{n_{k}}}u_{k,r,l}-h)-\phi(2^{j_{n_{k}}}u_{l}-h)\big{]}\right]^{2}\prod_{p=1,p\neq l}^{d}\widetilde{K}^{2}(w_{p},2^{j_{n_{k}}}u_{k,r,p})d\mathbf{w}\] \[= d\sum_{l=1}^{d}\int_{\mathbb{R}}\left[\sum_{h=1}^{2^{j_{n}}}\phi(w_{l}-h)\big{[}\phi(2^{j_{n_{k}}}u_{k,r,l}-h)-\phi(2^{j_{n_{k}}}u_{l}-h)\big{]}\right]^{2}dw_{l}\prod_{p=1,p\neq l}^{d}\int_{\mathbb{R}}\widetilde{K}^{2}(w_{p},2^{j_{n_{k}}}u_{k,r,p})dw_{p}.\]
Now, since the family \(\{\phi(\cdot-h):h=1,\ldots,2^{j_{n}}\}\) is orthonormal, each inner integral equals \(\sum_{h=1}^{2^{j_{n}}}\big{[}\phi(2^{j_{n_{k}}}u_{k,r,l}-h)-\phi(2^{j_{n_{k}}}u_{l}-h)\big{]}^{2}\). As \(\phi\) is supported on \([0,B]\), at most \(M_{0}:=2(B+1)\) of these terms are nonzero, and each of them is bounded by \(\omega_{\phi}^{2}(\delta_{m})\), because \(2^{j_{n_{k}}}|u_{k,r,l}-u_{l}|\leq\delta_{m}\); thus
\[\int_{\mathbb{R}^{d}}|A(\mathbf{w})|^{2}d\mathbf{w} \leq M_{0}d\omega_{\phi}^{2}(\delta_{m})\sum_{l=1}^{d}\prod_{p=1,p\neq l}^{d}\int_{\mathbb{R}}\widetilde{K}^{2}(w_{p},2^{j_{n_{k}}}u_{k,r,p})dw_{p}\] \[\leq M_{0}d^{2}\omega_{\phi}^{2}(\delta_{m})D,\]
where we use Hypothesis (H.2) for the last inequality, with \(D\) a positive constant depending on \(\|\Phi\|_{\infty}\). Finally, we obtain
\[\mathbb{E}\left[\left(g_{k,r}^{(n)}(\mathbf{S},\mathbf{u})\right)^{2}\right] \leq M_{0}d^{2}D2^{-dj_{n_{k}}}\|c\|_{\infty}\omega_{\phi}^{2}(\delta_{m}), \tag{50}\]
and iii) holds with
\[\sigma_{k}^{2}=D_{0}2^{-dj_{n_{k}}}\|c\|_{\infty}\omega_{\phi}^{2}(\delta_{m} ),\qquad D_{0}=M_{0}d^{2}D. \tag{51}\]
**Checking iv):** For \(m>0\) fixed, we have
\[\frac{\sigma_{k}}{\mathrm{U}}=\frac{D_{0}^{1/2}2^{-\frac{d}{2}j_{n_{k}}}\|c\| _{\infty}^{1/2}\omega_{\phi}(\delta_{m})}{2\|\Phi\|_{\infty}^{d}}\to 0,k\to\infty,\]
which implies that \(\frac{\sigma_{k}}{\mathrm{U}}<\varepsilon\), for all \(\varepsilon>0\) and \(k\) large enough. Hence, for \(\varepsilon=1/2\), we have \(\sigma_{k}<\frac{\mathrm{U}}{2}\). We also have, for all large \(k\),
\[\frac{n_{k}\sigma_{k}^{2}}{\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right)}=\frac{D_{0}n_{k}\|c\|_{\infty}\omega_{\phi}^{2}(\delta_{m})}{j_{n_{k}}2^{dj_{n_{k}}}\ \log 2}\longrightarrow\infty,\]
by Hypothesis (H.4). This readily implies that, for any constant \(C_{0}>0\), \(n_{k}\sigma_{k}^{2}>C_{0}\log\left(\frac{\mathrm{U}}{\sigma_{k}}\right)\) for all large \(k\), and iv) holds. \(\Box\)
### Lower Bound
**Lemma 3.3**: _Under the assumptions of Proposition 3.1, one has almost surely_
\[\liminf_{n\to\infty}r_{n}\sup_{\mathbf{u}\in I^{d}}\frac{|D_{n}(\mathbf{u})|}{\sqrt{\int_{\mathbb{R}^{d}}\mathbf{K}^{2}(\mathbf{x},2^{j_{n}}\mathbf{u})d\mathbf{x}}}\geq\sqrt{\|c\|_{\infty}}. \tag{52}\]
**Proof :** It is an adaptation of the proof of Proposition 2 in [14], which is itself inspired by Proposition 2 in [7]. According to this latter proposition, (52) holds if, for all \(\tau>0\) and all large \(n\), there exist \(k_{n}=:k_{n}(\tau)\) points \(\mathbf{z}_{i,n}=(z_{1,i,n},\ldots,z_{d,i,n})\in I^{d},\ i=1,\ldots,k_{n}\), such that, for the functions \(g_{i}^{(n)}(\mathbf{s})=\mathbf{K}(2^{j_{n}}\mathbf{s},2^{j_{n}}\mathbf{z}_{i,n}),\ \mathbf{s}\in I^{d}\), and for \(\mathbf{U}=(U_{1},\ldots,U_{d})\) a random vector with joint density \(c\), the following conditions hold:
* \(\mathbb{P}(g_{i}^{(n)}(\mathbf{U})\neq 0,\ g_{i^{\prime}}^{(n)}(\mathbf{U}) \neq 0)=0,\quad\forall i\neq i^{\prime};\)
* \(\sum_{i=1}^{k_{n}}\mathbb{P}(g_{i}^{(n)}(\mathbf{U})\neq 0)\leq 1/2;\)
* \(2^{-j_{n}}k_{n}\longrightarrow r\in]0,\infty[;\)
* \(\exists\ \mu_{1},\mu_{2}\in\mathbb{R}:2^{-dj_{n}}\mu_{1}\leq\mathbb{E}g_{i}^{(n)}(\mathbf{U})\leq 2^{-dj_{n}}\mu_{2},\quad\forall i=1,\ldots,k_{n};\)
* \(\exists\ \sigma_{1},\sigma_{2}>0:2^{-dj_{n}}\sigma_{1}^{2}\leq\mathrm{Var}[g_{i}^{(n)}(\mathbf{U})]\leq 2^{-dj_{n}}\sigma_{2}^{2},\quad\forall i=1,\ldots,k_{n};\)
* \(\|g_{i}^{(n)}\|_{\infty}<\infty,\quad\forall i=1,\ldots,k_{n};\ \forall n\geq 1;\)
Now, we have to check these conditions. By hypothesis, the copula density \(c\) is continuous and bounded on \(I^{d}\); then there exists some orthotope \(D\subset I^{d}\) such that \(\max_{\mathbf{s}\in D}c(\mathbf{s})=\|c\|_{\infty}\). Thus, for all \(\tau>0\), there exists \(\mathbf{s}_{0}\in D\) such that \(c(\mathbf{s}_{0})\geq(1-\tau)\|c\|_{\infty}\). Let
\[D_{\tau}=\{\mathbf{s}\in D:c(\mathbf{s})\geq(1-\tau)\|c\|_{\infty}\}, \tag{53}\]
and choose a subset \(D_{0}\subset D_{\tau}\) such that \(\ \mathbb{P}(\mathbf{U}\in D_{0})\leq\frac{1}{2}\). Suppose that \(D_{0}=\prod_{j=1}^{d}[a_{j},b_{j}]\), with \(0\leq a_{j}<b_{j}\leq 1,\quad\text{and}\ b_{j}-a_{j}=\ell,\ \forall j=1,\ldots,d\).
Set \(\delta=3B\) and define
\[z_{j,i,n}=a_{j}+i\delta 2^{-j_{n}},\quad i=1,\ldots,\left[\frac{\ell}{\delta 2^{-j_{n}}}\right]-1:=k_{n},\quad j=1,\ldots,d,\]
where \([x]\) denotes the integer part of a real \(x\).
**Checking C.1) :** Recall that \(\phi\) is supported on \([0,B]\); then
\[g_{i}^{(n)}(\mathbf{U})\neq 0\iff\exists\,l_{1},\ldots,l_{d}\in\mathbb{Z}:\quad\left\{\begin{array}{ll}0\leq&2^{j_{n}}U_{j}-l_{j}&\leq B,\ j=1,\ldots,d\\ 0\leq&2^{j_{n}}z_{j,i,n}-l_{j}&\leq B,\ j=1,\ldots,d\end{array}\right. \tag{1}\]
and
\[g_{i^{\prime}}^{(n)}(\mathbf{U})\neq 0\iff\exists\,l^{\prime}_{1},\ldots,l^{\prime}_{d}\in\mathbb{Z}:\quad\left\{\begin{array}{ll}0\leq&2^{j_{n}}U_{j}-l^{\prime}_{j}&\leq B,\ j=1,\ldots,d\\ 0\leq&2^{j_{n}}z_{j,i^{\prime},n}-l^{\prime}_{j}&\leq B,\ j=1,\ldots,d\end{array}\right. \tag{2}\]
Combining (1) and (2), and using the triangle inequality (both \(2^{j_{n}}U_{j}-l_{j}\) and \(2^{j_{n}}z_{j,i,n}-l_{j}\) lie in \([0,B]\), and similarly with \(l^{\prime}_{j}\) and \(z_{j,i^{\prime},n}\)), gives, for every \(j=1,\ldots,d\),
\[|z_{j,i,n}-z_{j,i^{\prime},n}|\leq 2^{-j_{n}}\cdot 2B.\quad(3)\]
But, by definition, for all \(i\neq i^{\prime}\), \(|z_{j,i,n}-z_{j,i^{\prime},n}|\geq\delta 2^{-j_{n}}=3B2^{-j_{n}}\), which contradicts (3). Hence, the event \(\{g_{i}^{(n)}(\textbf{U})\neq 0,g_{i^{\prime}}^{(n)}(\textbf{U})\neq 0\}\) is empty for \(i\neq i^{\prime}\), and condition C.1) holds.
**Checking C.2) :** For all \(n\geq 1\), the sets \(\{g_{i}^{(n)}(\textbf{U})\neq 0\},\)\(i=1,\ldots,k_{n}\) are disjoint in view of Condition C.1). Then, we have
\[\sum_{i=1}^{k_{n}}\mathbb{P}(\{g_{i}^{(n)}(\textbf{U})\neq 0\})=\mathbb{P} \left(\bigcup_{i=1}^{k_{n}}\{g_{i}^{(n)}(\textbf{U})\neq 0\}\right).\]
Now, it suffices to show that \(\bigcup_{i=1}^{k_{n}}\{g_{i}^{(n)}(\textbf{U})\neq 0\}\subset\{\textbf{U}\in D_{0}\}.\) From statement (1) above, we can write, for all \(j=1,\ldots,d\),
\[-B\leq 2^{j_{n}}(U_{j}-z_{j,i,n})\leq B\] \[z_{j,i,n}-2^{-j_{n}}B\leq U_{j}\leq z_{j,i,n}+2^{-j_{n}}B\] \[a_{j}\leq a_{j}+(3i-1)2^{-j_{n}}B\leq U_{j}\leq a_{j}+(3i+1)2^{-j_{n}}B\leq b_{j}.\]
That is \(U_{j}\in[a_{j},b_{j}]\), and hence \(\textbf{U}=(U_{1},\ldots,U_{d})\in\prod_{j=1}^{d}[a_{j},b_{j}]=D_{0}.\) It follows that,
\[\forall\ i=1,\ldots,k_{n},\quad\{g_{i}^{(n)}(\textbf{U})\neq 0\} \subset\{\textbf{U}\in D_{0}\}\] \[\bigcup_{i=1}^{k_{n}}\{g_{i}^{(n)}(\textbf{U})\neq 0\}\subset\{ \textbf{U}\in D_{0}\}\] \[\mathbb{P}\left(\bigcup_{i=1}^{k_{n}}\{g_{i}^{(n)}(\textbf{U}) \neq 0\}\right)\leq\mathbb{P}(\{\textbf{U}\in D_{0}\})\leq\tfrac{1}{2}.\]
Hence, C.2) is fulfilled.
**Checking C.3):** It is immediate, since
\[2^{-j_{n}}k_{n}=2^{-j_{n}}\left(\left[\frac{\ell}{\delta 2^{-j_{n}}}\right]-1\right)\longrightarrow\frac{\ell}{\delta}=:r>0,\ n\rightarrow\infty.\]
**Checking C.4) :** Using a change of variables \(\textbf{s}=2^{-j_{n}}\textbf{x},\ \textbf{s}=(s_{1},\ldots,s_{d}),\ \textbf{x}=(x_{1}, \ldots,x_{d})\), we have
\[|\mathbb{E}g_{i}^{(n)}(\textbf{U})| \leq \int_{I^{d}}\left|\textbf{K}(2^{j_{n}}\textbf{s},2^{j_{n}}\textbf{z}_{i,n})\right|c(\textbf{s})d\textbf{s}\] \[\leq 2^{-dj_{n}}\|c\|_{\infty}\int_{\mathbb{R}^{d}}\left|\textbf{K}(\textbf{x},2^{j_{n}}\textbf{z}_{i,n})\right|d\textbf{x}\] \[\leq 2^{-dj_{n}}\|c\|_{\infty}\int_{\mathbb{R}^{d}}\prod_{j=1}^{d}|\widetilde{K}(x_{j},2^{j_{n}}z_{j,i,n})|dx_{j}\] \[\leq 2^{-dj_{n}}\|c\|_{\infty}\int_{\mathbb{R}^{d}}\prod_{j=1}^{d}\Phi(x_{j}-2^{j_{n}}z_{j,i,n})dx_{j}\] \[\leq 2^{-dj_{n}}\mu,\]
where \(\mu=\|c\|_{\infty}\int_{\mathbb{R}^{d}}\prod_{j=1}^{d}\Phi(x_{j}-2^{j_{n}}z_{j,i,n})dx_{j}=\|c\|_{\infty}\left(\int_{\mathbb{R}}\Phi(x)dx\right)^{d}\) exists, because the function \(\Phi\) is integrable by hypothesis (H.2). The last inequality is equivalent to
\[-2^{-dj_{n}}\mu\leq\mathbb{E}g_{i}^{(n)}(\textbf{U})\leq 2^{-dj_{n}}\mu,\ \forall i=1,\cdots,k_{n}.\]
That is C.4) holds.
**Checking C.5) :** For \(n\geq 1,\ i=1,\ldots,k_{n}\), using a change of variables \({\bf s}=2^{-j_{n}}{\bf x}+{\bf z}_{i,n},\ {\bf s}=(s_{1},\ldots,s_{d}),\ {\bf x}=(x_{1}, \ldots,x_{d}),\ {\bf z}_{i,n}=(z_{1,i,n},\ldots,z_{d,i,n})\), we can write
\[{\rm Var}[g_{i}^{(n)}({\bf U})] \leq \mathbb{E}\left[\left(g_{i}^{(n)}({\bf U})\right)^{2}\right]\] \[\leq \int_{I^{d}}{\bf K}^{2}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_{i,n})c ({\bf s})d{\bf s}\] \[\leq 2^{-dj_{n}}\|c\|_{\infty}\int_{\mathbb{R}^{d}}{\bf K}^{2}({\bf x }+2^{j_{n}}{\bf z}_{i,n},2^{j_{n}}{\bf z}_{i,n})d{\bf x}.\]
Putting \(\sigma_{2}^{2}:=\|c\|_{\infty}\int_{\mathbb{R}^{d}}{\bf K}^{2}({\bf x}+2^{j_{n}}{\bf z}_{i,n},2^{j_{n}}{\bf z}_{i,n})d{\bf x}\) yields
\[{\rm Var}[g_{i}^{(n)}({\bf U})]\leq 2^{-dj_{n}}\sigma_{2}^{2},\]
which is the upper bound in condition C.5). For the lower bound, we have
\[{\rm Var}[g_{i}^{(n)}({\bf U})]=\mathbb{E}\left[\left(g_{i}^{(n)}( {\bf U})\right)^{2}\right]-\left[\mathbb{E}g_{i}^{(n)}({\bf U})\right]^{2}\] \[= \int_{I^{d}}{\bf K}^{2}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_{i,n})c ({\bf s})d{\bf s}-\left(\int_{I^{d}}{\bf K}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_ {i,n})c({\bf s})d{\bf s}\right)^{2}.\]
Put \(\mu_{n}^{2}=\left(\int_{I^{d}}{\bf K}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_{i,n}) c({\bf s})d{\bf s}\right)^{2}.\) Noting that \(D_{\tau}\subset I^{d}\), by a change of variables \({\bf x}=2^{j_{n}}{\bf s}\), we obtain
\[{\rm Var}[g_{i}^{(n)}({\bf U})] \geq \int_{D_{\tau}}{\bf K}^{2}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_{i,n })c({\bf s})d{\bf s}-\mu_{n}^{2}\] \[\geq (1-\tau)\|c\|_{\infty}\int_{D_{\tau}}{\bf K}^{2}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_{i,n})d{\bf s}-\mu_{n}^{2}\] \[\geq (1-\tau)\|c\|_{\infty}2^{-dj_{n}}\int_{\mathbb{R}^{d}}{\bf K}^{2} ({\bf x},2^{j_{n}}{\bf z}_{i,n})d{\bf x}-\mu_{n}^{2}.\]
Proceeding again to the same change of variables, and observing from hypothesis (H.3) that \(\int_{\mathbb{R}^{d}}{\bf K}({\bf x},2^{j_{n}}{\bf z}_{i,n})d{\bf x}=1\), we can write
\[\mu_{n}^{2} \leq \left(\|c\|_{\infty}2^{-dj_{n}}\int_{\mathbb{R}^{d}}{\bf K}({\bf x },2^{j_{n}}{\bf z}_{i,n})d{\bf x}\right)^{2}\leq\|c\|_{\infty}^{2}2^{-2dj_{n}},\]
which implies \(-\mu_{n}^{2}\geq-\|c\|_{\infty}^{2}2^{-2dj_{n}}.\) Thus, for \(n\) large enough, we obtain the lower bound in condition C.5), i.e.
\[{\rm Var}[g_{i}^{(n)}({\bf U})] \geq 2^{-dj_{n}}(1-\tau)\|c\|_{\infty}\int_{\mathbb{R}^{d}}{\bf K}^{2}({\bf x},2^{j_{n}}{\bf z}_{i,n})d{\bf x}-\|c\|_{\infty}^{2}2^{-2dj_{n}}\] \[\geq 2^{-dj_{n}}\big{(}\sigma_{1}^{2}+o(1)\big{)},\]
with \(\sigma_{1}^{2}:=(1-\tau)\|c\|_{\infty}\int_{\mathbb{R}^{d}}K^{2}({\bf x},2^{j_{n}}{ \bf z}_{i,n})d{\bf x}\). Finally, C.5) holds.
Moreover, letting \(\tau\to 0\), we get \(\sigma_{2}^{2}=\sigma_{1}^{2}=\|c\|_{\infty}\int_{\mathbb{R}^{d}}{\bf K}^{2}({\bf x},2^{j_{n}}{\bf z}_{i,n})d{\bf x}\).
**Checking C.6) :** For all \({\bf s}\in I^{d}\), \(n\geq 1\), \(i=1,\ldots,k_{n}\), by using hypotheses (H.1-2) and the multiplicativity of the kernel \({\bf K}\), we have
\[|g_{i}^{(n)}({\bf s})| = |{\bf K}(2^{j_{n}}{\bf s},2^{j_{n}}{\bf z}_{i,n})|=\prod_{m=1}^{d}|\widetilde{K}(2^{j_{n}}s_{m},2^{j_{n}}z_{i,n,m})|\] \[\leq \prod_{m=1}^{d}\sum_{l=1}^{2^{j_{n}}}|\phi(2^{j_{n}}s_{m}-l)\phi(2^{j_{n}}z_{i,n,m}-l)|\] \[\leq \|\phi\|_{\infty}^{d}\prod_{m=1}^{d}\sum_{l=1}^{2^{j_{n}}}|\phi(2^{j_{n}}s_{m}-l)|\] \[\leq \|\phi\|_{\infty}^{d}\|\theta_{\phi}\|_{\infty}^{d}.\]
Hence, \(\sup_{n\geq 1,1\leq i\leq k_{n}}\|g_{i}^{(n)}\|_{\infty}\leq\|\phi\|_{\infty}^{d}\|\theta_{\phi}\|_{\infty}^{d}\), and C.6) holds.
Since Conditions C.1-2-3-4-5-6) are fulfilled, we can now apply Proposition 2 in [7] to complete the proof of Lemma 3.3. \(\Box\)
Finally, Lemma 3.2 and Lemma 3.3 together give the proof of Proposition 3.1. \(\Box\)
## Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2305.19180 | Prognostic Adjustment with Efficient Estimators to Unbiasedly Leverage
Historical Data in Randomized Trials | Although randomized controlled trials (RCTs) are a cornerstone of comparative
effectiveness, they typically have much smaller sample size than observational
studies because of financial and ethical considerations. Therefore there is
interest in using plentiful historical data (either observational data or prior
trials) to reduce trial sizes. Previous estimators developed for this purpose
rely on unrealistic assumptions, without which the added data can bias the
treatment effect estimate. Recent work proposed an alternative method
(prognostic covariate adjustment) that imposes no additional assumptions and
increases efficiency in trial analyses. The idea is to use historical data to
learn a prognostic model: a regression of the outcome onto the covariates. The
predictions from this model, generated from the RCT subjects' baseline
variables, are then used as a covariate in a linear regression analysis of the
trial data. In this work, we extend prognostic adjustment to trial analyses
with nonparametric efficient estimators, which are more powerful than linear
regression. We provide theory that explains why prognostic adjustment improves
small-sample point estimation and inference without any possibility of bias.
Simulations corroborate the theory: efficient estimators using prognostic
adjustment compared to without provides greater power (i.e., smaller standard
errors) when the trial is small. Population shifts between historical and trial
data attenuate benefits but do not introduce bias. We showcase our estimator
using clinical trial data provided by Novo Nordisk A/S that evaluates insulin
therapy for individuals with type II diabetes. | Lauren D. Liao, Emilie Højbjerre-Frandsen, Alan E. Hubbard, Alejandro Schuler | 2023-05-30T16:24:31Z | http://arxiv.org/abs/2305.19180v4 | # Transfer Learning With Efficient Estimators to Optimally Leverage Historical Data in Analysis of Randomized Trials
###### Abstract
Randomized controlled trials (RCTs) are a cornerstone of comparative effectiveness because they remove the confounding bias present in observational studies. However, RCTs are typically much smaller than observational studies because of financial and ethical considerations. Therefore it is of great interest to be able to incorporate plentiful observational data into the analysis of smaller RCTs. Previous estimators developed for this purpose rely on unrealistic additional assumptions without which the added data can bias the effect estimate. Recent work proposed an alternative method (prognostic adjustment) that imposes no additional assumption and increases efficiency in the analysis of RCTs. The idea is to use the observational data to learn a prognostic model: a regression of the outcome onto the covariates. The predictions from this model, generated from the RCT subjects' baseline variables, are used as a covariate in a linear model. In this work, we extend this framework to work when conducting inference with nonparametric efficient estimators in trial analysis. Using simulations, we find that this approach provides greater power (i.e., smaller standard errors) than without prognostic adjustment, especially when the trial is small. We also find that the method is robust to observed or unobserved shifts between the observational and trial populations and does not introduce bias. Lastly, we showcase this estimator leveraging real-world historical data on a randomized blood transfusion study of trauma patients.
**Keywords**: _Causal inference, Randomized Trials, Trauma, Prognostic Score, Historical Data_
## 1 Introduction
Many research questions ask about causal relationships beyond associations. Randomized controlled trials (RCTs), or experiments, are conducted explicitly to estimate causal effects. RCTs are especially powerful due to the randomization of the treatment assignment. This randomization step in trial design ensures the integrity of causal analysis and interpretation by eliminating all confounding [1].
Although RCTs are ideal for causal inference, researchers face difficulties in execution due to practical financial and ethical issues. Clinical trials require large sums of money to recruit participants and manage facilities and facilitators [2]. Large parts of the financial resources are allocated to ensure data quality during practical implementation. Researchers collecting the data in practice can encounter difficulties such as participant noncompliance or attrition [3]. These practical challenges and costs tend to scale with the sample size. Moreover, researchers need to consider multiple layers of human protection of trial safety. Researchers investigating the efficacy of new medication therapy in an RCT need to evaluate the ethical considerations beyond safety, addressing whether receiving no new active treatment or placebo can potentially
harm the control participants [4]. These resource and ethical constraints often hinder RCTs from having a large representative sample size.
Researchers often integrate prior knowledge learned from observational studies to reduce trial uncertainty of the estimated treatment effect on an outcome. RCTs are generally constructed based on prior or biological evidence measuring a hypothesized effect [5]. However, there is great interest in using historical data more directly during the analysis of the trial. One common method previously proposed is directly pooling the trial data with prior observational studies, referred to as data fusion [6, 7]. See Colnet et al. for a recent review of data fusion methods [8]. Another usage of prior studies is through Bayesian methods that naturally rely on assumptions in the form of specified priors to develop causal inference [9, 10]. Considerations for either combining the data sets together for analysis or relying on prior data to form assumptions are often addressed in generalizability and transportability research [7, 11, 12]. Although researchers have been examining this question in multiple aspects, in practice, assumptions about the structure of the observational data or the relationship between the observational and trial data are hard to assess and validate. Specifically, the underlying population may be different between the prior study and the current trial sample. These differences can be present in measured ways, such as shifting the age of the trial to a younger population to assess whether an earlier intervention can be useful when the observational study consists of mainly older individuals. However, the differences in age may require additional consistency assumptions with previous methods. When these assumptions are violated, the inference drawn on extrapolated functional forms may result in unreliable analysis. For example, if an analysis falsely claims a medication is beneficial when it is not, due to unsatisfied assumptions, then it can result in consequential harm when distributed. Distributional shifts can be corrected for under reasonable assumptions as long as the variable(s) in question are observed, but shifts in unobserved variables are impossible to detect or correct.
Simultaneously, nonparametric estimators have become much more popular in recent years because of their theoretical and practical advantages over traditional parametric methods. Specifically, semi-parametrically efficient estimators attain the asymptotically smallest variance, thus reducing the uncertainty of the treatment effect estimate. In causal inference, estimators are called "doubly robust" when correct specification of either the treatment or the outcome model is sufficient for unbiased estimation. Doubly robust estimators leverage machine learning internally to estimate the treatment or the outcome model, or both; for example, the augmented inverse probability weighting estimator (AIPW) and the targeted maximum likelihood estimator (TMLE) are commonly used to evaluate the average treatment effect [13, 14, 15, 16, 17]. RCTs correctly specify the treatment model as random assignment by design, so the variance of efficient estimators depends predominantly on the accuracy of the outcome model [18]. These estimators are semi-parametrically efficient and provide maximum power while maintaining coverage of the true treatment effect. Thus, when the trial sample is large, no other estimator can improve upon efficient estimators, with or without relying on historical data. However, this guarantee is asymptotic, and in practice there may still be opportunities for variance reduction in small samples.
Recent studies proposed doubly robust estimators to integrate prior observational studies into trial analyses [19, 20]. Although those proposed estimators have some desirable properties, they rely on assumptions about the prior data, specifically, outcome mean exchangeability between the observational and trial data. There are no existing methods that integrate prior data in conjunction with efficient estimators without adding additional assumptions.
We propose transferring knowledge from prior studies, "historical data," via a prognostic score to inform trial analysis while maintaining strict type I error control. We target reducing estimation errors without imposing additional assumptions on the historical data. The idea extends covariate adjustment and prognostic score integration from previous studies to optimally leverage historical data in finite-sample trials [21, 22]. Importantly, using a prognostic score for covariate adjustment in the trial analysis with efficient estimators will not increase the bias or variance of the estimate, even when the outcome model from the historical data is misspecified.
This approach is closely related to the transfer learning literature in machine learning. In transfer learning, the goal is to use a large amount of data from a "source task" to benefit performance (typically prediction) on a "target task" [23, 24, 25]. While machine learning has been commonly used in causal inference for modeling the treatment and outcome [26, 27, 17], there are limited discussions explicitly linking these concepts together. One recent estimator, Causal-Batle, proposed by Aoki and Ester, uses transfer learning integrated with causal inference through a Bayesian neural network architecture to improve modeling [28]. Although Causal-Batle is Bayesian in nature, which does not ensure strict type I error control, our proposed method is similar in spirit to their work. The source task is analogous to outcome modeling on the historical data, and the target task to outcome modeling on the trial data. As with typical transfer learning applications, our approach should be most powerful (relative to target-only approaches) when the amount of source data is large and the amount of target data is small.
## 2 Set up
We follow the causal inference framework and roadmap from Petersen and van der Laan [29]. First, we define each observational unit \(i\in\{1,...,n\}\) as an independent, identically distributed random variable \(O_{i}\) with true distribution \(P\). In our setting, each random variable \(O=(W,A,Y,D)\) contains \(p\) associated baseline covariates \(W\in\mathbb{R}^{p}\), a binary treatment \(A\), an outcome \(Y\), and an indicator \(D\) denoting membership in either the trial or historical sample. We are interested only in the trial analysis, \(D=1\); we denote the historical data by \(D=0\).
The fundamental problem of causal inference comes from not being able to observe the outcome under both treatment types, denoted as control, \(a=0\), or treated, \(a=1\). Hence, we can only observe \(Y^{A}\), the outcome \(Y\) given the treatment \(A\) assigned to each unit. Suppose unit \(i\) is treated, \(A_{i}=1\); then we are only able to observe \(Y_{i}=Y_{i}^{1}\). We are unable to observe unit \(i\)'s counterfactual outcome, \(Y_{i}^{0}\), the outcome if unit \(i\) were in the control group instead. This scenario is similar if unit \(i\) is in the control group; we cannot observe the counterfactual outcome if unit \(i\) were treated. To calculate the causal parameter of interest, we define the full data to be \((W,Y^{1},Y^{0},D)\). In this study, we are interested in the causal parameter, the average treatment effect (ATE), between the treated and control groups in the trial population, as in Equation 1.
\[\Psi=E[Y^{1}-Y^{0}\mid D=1] \tag{1}\]
Although inherently unobservable, with applicable identifiability assumptions, we can equate what we observe statistically from the data to this causal parameter [1]. The standard identifying assumptions for the ATE are consistency of treatment or the standard of care, no unmeasured confounding, and all units having an equal probability of receiving treatment. Note that in this framework, no interference is assumed since all units are independent. Since our ATE is calculated from an RCT, all identifying assumptions are naturally satisfied.
To leverage external data, we also define the prognostic score \(\rho\), as proposed by Hansen [30]. The prognostic score \(\rho_{d}\) is the expected observed outcome conditional on covariates in a given data set \(d\), as shown in Equation 2. In Hansen, the prognostic score is defined as the expected observed outcome given covariates and treatment set to zero. In practice this is estimated from a historical data set including only controls. Here we generalize this so that we can work with historical data where the given treatment may vary or be different than either treatment in the trial.
\[\rho_{d}(W):=E[Y\mid W,D=d] \tag{2}\]
Let \(\mathcal{L}\) denote a machine learning algorithm that maps a data set with outcome \(\boldsymbol{Y}=[Y_{1},...,Y_{n}]\) and the observed covariates \(\boldsymbol{X}=[X_{1},...,X_{n}]\), where \((Y,X)\in(\mathbb{R}\times\mathbb{R}^{m})\) to a learned function \(f\) that estimates the conditional mean \(E[Y|X]\). The algorithm \(\mathcal{L}\) may include detailed internal model selection and parameter tuning, and the algorithm works with predictors and data of any dimension (i.e., \(m,n\) are arbitrary). Let \(\boldsymbol{\widetilde{Y}},\boldsymbol{\widetilde{W}}\) represent the historical data set of size \(n_{0}\), which is a draw from \(P^{n_{0}}(Y,W\ |\ D=0)\). We use \(\hat{\rho}_{0}=\mathcal{L}\big{(}\boldsymbol{\widetilde{Y}},\boldsymbol{ \widetilde{W}}\big{)}\) (or just \(\hat{\rho}\)) to refer to an estimate of prognostic score learned from the historical data.
Since the identifying assumptions hold, with simple algebra to establish equivalency, the statistical parameter is equivalent to the causal parameter. This allows for a causal interpretation of the statistical estimand. We can further write the statistical parameter with the estimated prognostic score (Equation 3).
\[\begin{split}\Psi&=E[E[Y\mid A=1,W,D=1]]-E[E[Y\mid A=0,W,D=1]]\\ &=E[E[Y\mid W,\hat{\rho}(W),A=1,D=1]]-E[E[Y\mid W,\hat{\rho}(W),A=0,D=1]]\end{split} \tag{3}\]
which shows that additionally conditioning on an estimated prognostic score does not create bias, no matter what that estimated function is.
Let \((\boldsymbol{Y},\boldsymbol{A},\boldsymbol{W})\) represent the trial data set of size \(n_{1}\), which is a draw from \(P^{n_{1}}(Y,W\ |\ D=1)\). In a slight abuse of notation, let \(\widehat{\psi}=\widehat{\psi}(\boldsymbol{Y},\boldsymbol{A},\boldsymbol{W})\) denote the mapping between trial data and our estimate \(\widehat{\psi}\) using an efficient estimator. For example, \(\widehat{\psi}\) could denote the cross-fit AIPW estimator described in Schuler 2021 [18].
## 3 Method
Our method builds on the prognostic adjustment method of Schuler et al. [21]. Specifically, we first fit a prognostic model to the historical data (\(D=0\)) by regressing the outcome on the covariates with a machine learning algorithm, \(\hat{\rho}=\mathcal{L}\big{(}\boldsymbol{\widetilde{Y}},\boldsymbol{\widetilde{W}}\big{)}\). We then evaluate the prognostic score for every trial unit by feeding in their baseline covariates: \(\boldsymbol{R}=\hat{\rho}(\boldsymbol{W})\). Lastly, we generate our ATE estimate from the trial data, augmented with the prognostic score as an additional covariate: \(\widehat{\psi}(\boldsymbol{Y},\boldsymbol{A},[\boldsymbol{W},\boldsymbol{R}])\).
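To make these three steps concrete, below is a minimal Python sketch of the procedure, assuming NumPy arrays and scikit-learn interfaces, and a known randomization probability \(\pi=P(A=1)\) (1/2 in our simulations). Here `GradientBoostingRegressor` stands in for the super learner discussed below, and the helper name `prognostic_aipw` is ours, not part of any package; it is an illustration, not the exact estimator used in our experiments.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def prognostic_aipw(Y_hist, W_hist, Y, A, W, pi=0.5, n_folds=5):
    """Cross-fit AIPW for an RCT with known P(A=1)=pi, with prognostic adjustment.

    Y_hist, W_hist: historical outcomes/covariates; Y, A, W: trial arrays.
    """
    # Step 1: learn the prognostic score E[Y | W, D=0] from the historical data.
    rho = GradientBoostingRegressor().fit(W_hist, Y_hist)
    # Step 2: evaluate it on trial covariates and append it as an extra covariate.
    X = np.column_stack([W, rho.predict(W)])
    # Step 3: cross-fit arm-specific outcome regressions, then the AIPW estimate.
    n = len(Y)
    mu1, mu0 = np.zeros(n), np.zeros(n)
    folds = np.arange(n) % n_folds   # arbitrary fold labels; rows are exchangeable
    for k in range(n_folds):
        tr, te = folds != k, folds == k
        m1 = GradientBoostingRegressor().fit(X[tr & (A == 1)], Y[tr & (A == 1)])
        m0 = GradientBoostingRegressor().fit(X[tr & (A == 0)], Y[tr & (A == 0)])
        mu1[te], mu0[te] = m1.predict(X[te]), m0.predict(X[te])
    phi = mu1 - mu0 + A * (Y - mu1) / pi - (1 - A) * (Y - mu0) / (1 - pi)
    return phi.mean(), phi.std(ddof=1) / np.sqrt(n)  # estimate, standard error
```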
For an efficient estimator, adding a fixed function of the covariates as an additional covariate cannot change the asymptotic behavior [31]. Therefore, using the estimated prognostic score as a covariate cannot introduce bias, even if the prognostic score is poor.
However, in theory, adding the prognostic covariate also cannot reduce asymptotic variance! Nonetheless, we find that the _finite-sample_ variance of efficient estimators is far enough from the efficiency bound that using the prognostic score as a covariate generally decreases the variance (without introducing bias). Mechanistically, this happens because the prognostic score accelerates the learning curve of the outcome regression models within the efficient
estimator, such that more accurate predictions can be made with less data. If the prognostic score \(E[Y|W,D=0]\) is similar to the true outcome regression in either arm of the trial, \(E[Y|W,A=a,D=1]\), then the estimated prognostic score will prove very useful for the efficient estimator to learn the outcome regressions. We expect this to be the case as long as the trial and historical populations and treatments are similar enough. But even if they are not identical, the prognostic score is still likely to contain very useful information about the conditional outcome mean. Metaphorically, the prognostic score "jump-starts" the outcome regressions, resulting in better predictions that absorb more unexplained trial outcome variance with fewer data.
While the aforementioned machine learning algorithm \(\mathcal{L}\) is not of a specified functional form, the choice does affect the ATE estimation performance. First, the purpose of the prognostic model, built on the historical data, is to learn the relationship between the covariates \(W\) and outcome \(Y\). We suggest using a cross-validated ensemble learner, the super learner, to achieve this task. The super learner, developed by Polley, van der Laan, and Hubbard, is an algorithm that optimally combines multiple candidate machine learning algorithms in a set (library) selected using cross-validation and a common loss measure [26]. The advantages of using the super learner include its "oracle property," which indicates that the super learner has been proven to perform as well as the best machine learning algorithm included in the library [26, 27]. The library in the super learner should include various nonparametric and parametric learners such as gradient boosting, random forest, elastic net, and linear models with different specifications, because this guarantees the fastest convergence via the oracle property [27, 32]. Secondly, the choice of the learner is also important for modeling the nuisance parameters of the efficient estimators. We recommend using the cross-validated ensemble learner in this setting as well, due to the oracle property. However, the issue of overfitting can arise when including adaptive learners in the library, especially with smaller trial sample sizes. To avoid this in trial analysis, using the discrete super learner (the same as choosing the lowest-error model after cross-validation) and including simpler parametric models in the library are essential [32].
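As a rough illustration of the discrete super learner idea (a sketch, not the SuperLearner package itself), one can select the candidate with the lowest cross-validated mean squared error and refit it on the full data; the three-member library below is an illustrative assumption, while our actual library also includes xgboost variants and MARS.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

def discrete_super_learner(X, y, library=None, cv=5):
    """Return the library member with the lowest cross-validated MSE, refit on all data."""
    if library is None:
        library = [LinearRegression(),
                   RandomForestRegressor(n_estimators=200),
                   GradientBoostingRegressor()]
    cv_mse = [-cross_val_score(m, X, y, cv=cv,
                               scoring="neg_mean_squared_error").mean()
              for m in library]
    return library[int(np.argmin(cv_mse))].fit(X, y)
```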
## 4 Simulation
Our simulation aims to demonstrate the utility of an efficient estimator with the addition of a prognostic score. We examine how our method performs in different scenarios (e.g. heterogeneous vs. constant effect), across different data set sizes, and when there are distributional shifts from the historical to the trial population.
### Setup
The simulation is based on the structural causal model given in Equation 4. The trial data-generating process (DGP) has the form
\[\begin{split} W&\sim P_{W}\\ U&\sim\text{Unif}(0,1)\\ A&\sim\text{Bern}\Big{(}\frac{1}{2}\Big{)}\\ Y^{a}\,|\,W,U&=\mu_{a}(W,U)+\mathcal{N}(0,2)\\ Y&=AY^{1}+(1-A)Y^{0}\end{split} \tag{4}\]
We simulate 20 baseline covariates \(W_{1},\cdots,W_{20}\). We let \(P_{W}\) denote the following data-generating process: \(W_{1}\) is drawn randomly from a uniform distribution bounded by -2 and 1. \(W_{2}\) is drawn randomly from a normal distribution with a mean of 0 and a standard deviation
of 3. \(W_{3}\) is drawn randomly from an exponential distribution with a rate of 0.8. \(W_{4}\) is drawn randomly from a gamma distribution with a rate of 5 and a scale of 10. \(W_{5}\) is also drawn from a gamma distribution with a rate of 2 and a scale of 1. \(W_{6}\) to \(W_{20}\) are drawn randomly from uniform distribution bounded by -1 and 1. \(U\) denotes an unobserved covariate drawn from a uniform distribution bounded by 0 and 1.
We examine two different scenarios for the conditional outcome mean. In our "heterogeneous effect" simulation:
\[\begin{split}\mu_{1}(W,U)&=\left(\sin(|W_{1}|* \pi)*10\right)^{2}+\mathrm{I}(U>1.01)*8+\mathrm{I}(U>1.55)*15-42\\ \mu_{0}(W,U)&=\left(\sin(|W_{1}|*\pi)*10\right)+ \mathrm{I}(U>1.01)*8+\mathrm{I}(U>1.55)*15\end{split} \tag{5}\]
and in our "constant effect" simulation:
\[\begin{split}\mu_{1}(W,U)&=\left(\sin(|W_{1}|*\pi)*1 0\right)+\mathrm{I}(U>1.21)*20+\mathrm{I}(U>1.55)*15-0.8\\ \mu_{0}(W,U)&=\left(\sin(|W_{1}|*\pi)*10\right)+ \mathrm{I}(U>1.21)*20+\mathrm{I}(U>1.55)*15\end{split} \tag{6}\]
To begin, we use the same data-generating process for the historical and trial populations, but in what follows, we loosen this assumption by changing the historical data generating distribution with varying degrees of observed and unobserved covariates shifts. In line with the prognostic score originally proposed by Hansen [30], we only simulate controls in the historical sample, meaning that for the historical data the DGP is the same except that \(A=0\) deterministically.
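For reference, the following Python sketch reproduces the heterogeneous-effect DGP as we read it. The gamma parameterizations are assumptions on our part, since both a "rate" and a "scale" are quoted above; we interpret them as shape/scale pairs. Note that with \(U\sim\text{Unif}(0,1)\) the indicator terms are inactive unless \(U\) is shifted, as in the historical-shift scenarios described below.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_covariates(n):
    W = np.column_stack([
        rng.uniform(-2, 1, n),        # W1
        rng.normal(0, 3, n),          # W2
        rng.exponential(1 / 0.8, n),  # W3: rate 0.8 -> scale 1/0.8
        rng.gamma(5, 10, n),          # W4: read as shape 5, scale 10
        rng.gamma(2, 1, n),           # W5: read as shape 2, scale 1
        rng.uniform(-1, 1, (n, 15)),  # W6..W20
    ])
    U = rng.uniform(0, 1, n)          # unobserved covariate
    return W, U

def heterogeneous_means(W, U):
    base = np.sin(np.abs(W[:, 0]) * np.pi) * 10
    shift = 8 * (U > 1.01) + 15 * (U > 1.55)  # zero unless U is shifted
    return base ** 2 + shift - 42, base + shift  # mu1, mu0

def simulate_trial(n):
    W, U = draw_covariates(n)
    A = rng.binomial(1, 0.5, n)
    mu1, mu0 = heterogeneous_means(W, U)
    Y = np.where(A == 1, mu1, mu0) + rng.normal(0, 2, n)
    return W, A, Y
```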
For simplicity, we use the same specification of the super learner for both the prognostic model and all regressions required by our efficient estimators. Specifically, we use the discrete super learner with a library containing linear regression, gradient boosting with varying tree tuning specifications (xgboost) [33], and Multivariate Adaptive Regression Splines [34]. Specifications for tuning parameters are in Appendix A.
We consider 3 estimators for the trial: unadjusted (difference-in-group-means), linear regression, and TMLE. Note that TMLE leverages machine learning, the super learner, for the estimation steps of the outcome models. All estimators return an effect estimate and an estimated standard error, which we use to construct Wald 95% confidence intervals and corresponding p-values. The naive unadjusted estimator cannot leverage any covariates, but both linear and TMLE estimators can. We compare and contrast results from the linear and TMLE estimators with and without fitted prognostic score covariate adjustment ("fitted"), which allows a comparison against Schuler et al. [21]. We also consider the oracle version of the prognostic score ("oracle") for a benchmark comparison; the oracle prognostic score perfectly models the control outcome in the trial, \(E[Y|W,D=0]\), without influence from the unobserved covariate \(U\) or random noise. The oracle prognostic score only serves as a best-case comparison and cannot be calculated in practice.
We examine several scenarios: first, we analyze the trial (\(n_{1}=250\)) under the heterogeneous and constant treatment effect DGPs, where the historical sample (\(n_{0}=1{,}000\)) is from the same DGP as the trial sample. Second, we vary the historical and trial sample sizes for the heterogeneous treatment effect simulation. To vary the historical sample sizes, we first fix the trial sample size (\(n_{1}=250\)) and vary the historical sample size (with \(n_{0}=100\), 250, 500, 750, and 1,000). To vary the trial sample sizes, we first fix the historical sample size (\(n_{0}=1{,}000\)) and vary the trial sample size (with \(n_{1}=100\), 250, 500, 750, and 1,000). Third, we examine the effect of distributional shifts between the historical and trial populations. Specifically, we shift the observed covariate \(W_{1}\), originally bounded by -2 and 1, to be bounded by -5 and -2 for a small observed shift, and by -7 and -4 for a large observed shift. We also shift the unobserved covariate \(U\), originally bounded by 0 and 1, to be bounded by 0.5 and 1.5 for a small unobserved shift, and by 1 and 2 for a large unobserved shift. The shifts in the unobserved covariate induce shifts in the conditional mean relationship between the observed covariates and the outcome (see Appendix B for an explicit explanation).
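Reusing the helpers from the DGP sketch above, the shifted historical draws can be written as follows; the default arguments encode the "small" shifts, and the "large" shifts replace them with (-7, -4) and (1, 2).

```python
def draw_shifted_historical(n, w1_range=(-5, -2), u_range=(0.5, 1.5)):
    """Historical controls under the observed (W1) and unobserved (U) shifts."""
    W, _ = draw_covariates(n)
    W[:, 0] = rng.uniform(*w1_range, n)   # observed shift of W1
    U = rng.uniform(*u_range, n)          # unobserved shift of U
    _, mu0 = heterogeneous_means(W, U)
    Y = mu0 + rng.normal(0, 2, n)         # A = 0 deterministically
    return W, Y
```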
We repeat all simulation scenarios 200 times to calculate all performance metrics. We calculate the average estimated standard error, empirical power (the percentage of significant p-values, i.e., Wald CIs excluding 0), estimation root mean square error (RMSE), and coverage.
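A small helper makes these metrics explicit, under the convention that significance means the Wald CI excludes 0:

```python
import numpy as np

def summarize(estimates, ses, true_ate, z=1.96):
    """Coverage, empirical power, and RMSE over repeated simulations."""
    lo, hi = estimates - z * ses, estimates + z * ses
    coverage = np.mean((lo <= true_ate) & (true_ate <= hi))
    power = np.mean((lo > 0) | (hi < 0))          # Wald CI excludes 0
    rmse = np.sqrt(np.mean((estimates - true_ate) ** 2))
    return coverage, power, rmse
```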
### Results
The simulation results, summarized in Table 1, demonstrate an increase in power when using a prognostic score via an efficient estimator across different DGPs. First, our simulation demonstrates that the TMLE estimator with a fitted prognostic score yields the highest power across most data generating distributions. Under the heterogeneous treatment effect DGP, the fitted prognostic score yields an 11% increase in power relative to no prognostic score. In fact, the TMLE with a fitted prognostic score performs similarly to the TMLE with an oracle (best possible) prognostic score.
Using larger historical data sets increases performance. Figure 1.A shows a more detailed view of this phenomenon in terms of the average estimated standard error for each estimator as the historical data set grows in size. In effect, the larger the historical data, the smaller the resulting confidence intervals tend to be in the trial (while still preserving coverage). On the other hand, the relative benefit of prognostic adjustment is larger in smaller trials. In Table 1 we see an 11% increase in power comparing the TMLE with vs. without the fitted prognostic score when \(n_{1}=250\), but an 80% increase when \(n_{1}=100\). Figure 1.B shows the change in estimated standard error as the trial size varies.
| DGP | Unadjusted | TMLE (none) | TMLE (fitted) | TMLE (oracle) | Linear (none) | Linear (fitted) | Linear (oracle) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| heterogeneous effect | 0.435 | 0.645 | **0.720** | 0.745 | 0.405 | 0.42 | 0.42 |
| constant effect | 0.150 | 0.655 | 0.790 | 0.780 | 0.240 | **0.800** | 0.840 |
| small observed shift | 0.435 | 0.640 | **0.715** | 0.740 | 0.405 | 0.405 | 0.415 |
| small unobserved shift | 0.425 | 0.640 | **0.695** | 0.745 | 0.390 | 0.410 | 0.390 |
| small historical sample \((n_{0},n_{1})=(100,250)\) | 0.455 | 0.610 | **0.650** | 0.735 | 0.425 | 0.420 | 0.435 |
| small trial sample \((n_{0},n_{1})=(1000,100)\) | 0.205 | 0.215 | **0.380** | 0.365 | 0.200 | 0.210 | 0.205 |

Table 1: Highlighted simulation results for empirical power given different data generating processes. Except for the constant effect DGP, all conditional means are shared with the heterogeneous effect DGP. Highest power (excluding oracle estimators) is bolded.
Overall, the TMLE consistently outperforms the linear estimator with or without a prognostic score. Due to the high heterogeneity of effect, the linear estimator in our simulation performs worse than the unadjusted estimator when adjusting for all 20 covariates. Even without the prognostic score, the TMLE estimator internally uses machine learning for estimation, which yields improved performance and robustness to parametric model misspecification. The linear estimator is the most powerful when including a prognostic score under the constant treatment effect DGP, which is consistent with the optimality property previously discussed in Schuler et al. [21].
We also observe that our method is relatively robust to distribution shifts. When the shift (observed or unobserved) is large, the prognostic score may be unhelpful or uninformative, but including it produces no efficiency loss when compared to the TMLE without a prognostic score (Figure 2). For smaller shifts, the prognostic score is still useful in reducing the uncertainty of the trial estimate.
Importantly, coverage was nominal (95%) in all of our simulations for all estimators (and thus strict type I error control was attained). Including the prognostic score did not affect coverage in any case, even when the trial and historical populations were different. These results along with RMSEs may be found in Appendix C.
## 5 Case Study
We reanalyzed the Pragmatic, Randomized Optimal Platelet and Plasma Ratios (PROPPR) trial, which investigated the efficacy and effectiveness of the 1:1:1 versus 1:1:2 blood transfusion strategies. The PROPPR trial was a phase 3 study that randomized 680 severely injured trauma patients who needed a massive blood transfusion. The study consisted of patients who arrived at 12 level-I trauma centers across North America between August 2012 and December 2013 [35].
Our corresponding historical sample came from the PRospective, Observational Multicenter Major Trauma Transfusion (PROMMTT) study that enrolled 1,245 individuals at 10 level-I trauma centers in the United States [36]. The PROMMTT study included patients who arrived at the emergency department, survived at least 30 minutes and received at least one unit of red blood cells within 6 hours of arrival. Both data sets were de-identified before their use in this analysis.
For the PROPPR trial reanalysis in our study, we included patient measures of demographic background, injury mechanism, and clinically relevant measures of the trauma injuries. For demographic background, measures of body mass index, Hispanic ethnicity, age, gender,
Figure 2: Estimated standard errors across estimators when observed (Figure 2.A) and unobserved shifts (Figure 2.B) are present in the historical sample relative to the trial sample. The small shifts represent cases where the historical samples remain informative to some degree; the large shifts represent cases where the historical samples provide little to no additional benefit.
race, and anticoagulant use were included. For injury type, either blunt or penetrating injuries were recorded. Clinically relevant records obtained on arrival at the emergency department measured base deficit, Glasgow coma scale, initial hemoglobin, heart rate, international normalized ratio, injury severity score, platelet count, partial thromboplastin time, and systolic blood pressure. The treatment variable is only specified in the PROPPR trial and indicates the random assignment of whether the patient received a 1:1:1 or 1:1:2 transfusion ratio. The outcome of interest is mortality. We evaluated mortality outcomes 24 hours post-emergency department admission.
To clean both data sets, we included missingness indicators and imputed the corresponding covariates using random forest [37]. For the PROMMTT data, the normalized root mean square error is 0.072 for continuous covariates, and the proportion of falsely classified entries is 0.052. The variables with imputed values are: age, anticoagulant use, BMI, and Hispanic ethnicity. For the PROPPR data, the normalized root mean square error is 0.380 for continuous covariates, and the proportion of falsely classified entries is 0.065. The variables with imputed values are: anticoagulant use, BMI, race, base deficit, initial hemoglobin, heart rate, international normalized ratio, platelet count, partial thromboplastin time, and systolic blood pressure. Note that the historical sample, PROMMTT, has indicator variables that do not overlap with PROPPR; we drop such variables (the missingness indicators for age and Hispanic ethnicity) when creating the prognostic model. We retain all 28 covariates present in the PROPPR trial, which includes dichotomizing the race variable and 10 missing indicator variables.
We report the results of 5 estimators: unadjusted, logistic regression, logistic regression with a prognostic score, TMLE, and TMLE with a prognostic score (shown in Figure 3). To be comparable with our other estimators, the logistic regression estimator is used as a plug-in to target the marginal risk difference, with a robust estimated standard error [38, 39, 40]. For this application, we expanded the library of the super learner to a more comprehensive set of machine learning models than in the simulation, including random forest [41], k-nearest neighbors, and a more comprehensive set of tuning parameters for the xgboost model, in addition to the previously specified library (Appendix A).
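For illustration, a sketch of the point-estimate logic of such a plug-in follows, assuming scikit-learn; the robust standard error (e.g., via the sandwich estimator or bootstrap) is omitted, and the function name is ours. The prognostic score would simply be appended to `X` as in the method sketch above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def plugin_risk_difference(Y, A, X):
    """G-computation plug-in for the marginal risk difference from a logistic model."""
    fit = LogisticRegression(max_iter=1000).fit(np.column_stack([A, X]), Y)
    p1 = fit.predict_proba(np.column_stack([np.ones_like(A), X]))[:, 1]
    p0 = fit.predict_proba(np.column_stack([np.zeros_like(A), X]))[:, 1]
    return (p1 - p0).mean()
```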
Figure 3: Estimates for average treatment effect of blood transfusion using 5 estimators.
Although the unadjusted estimator indicates a significant effect, the estimators leveraging covariates decrease the variance. The results for TMLE with or without the prognostic score consistently show that the 1:1:2 blood transfusion ratio yielded a negligible improvement in survival outcome over the 1:1:1 transfusion ratio. Since the upper bound is close to 0, there is no clear clinical significance.
Separately, we obtained the correlation of the fitted prognostic score against the trial outcome. The correlation with the outcome is 0.455 with control subjects and 0.376 with treated subjects, indicating that adjustment for the score should result in an improvement over unadjusted estimation [21]. Therefore the lack of improvement due to the prognostic score in this case is likely due to the fact that the covariate-outcome relationship is simple enough to model with the trial data alone. Nonetheless, this result demonstrates that prognostic adjustment in RCTs (like covariate adjustment) does not typically change point estimates and instead functions by modifying variance.
## 6 Discussion
In this study, we demonstrate the utility of incorporating historical data via the prognostic score in an efficient estimator while maintaining strict type I error control, without imposing additional assumptions on the historical data. This method is most useful in randomized trials with smaller sample sizes. Using the prognostic score via covariate adjustment improves the performance of the efficient estimator overall by decreasing the standard error. Our proposed method is shown to be robust against bias even when the historical sample is drawn from a different population.
In contrast to existing methods, prognostic adjustment requires no assumptions to continue to guarantee unbiased causal effect estimates. However, this comes with a tradeoff: without introducing the risk of bias, there is a limit on how much power can be gained and in what scenarios. For example, the method of Li et al. (which imposes an additional but rather light assumption) can asymptotically benefit from the addition of historical data, whereas our method can only provide gains in small samples [20]. However, these gains are _most important_ precisely in small samples because estimated effects are likely to be of borderline significance, whereas effects are more likely to be clear in very large samples regardless of the estimator used.
Besides being assumption-free, our method has other practical advantages relative to data fusion approaches. For one, we do not require a single, well-defined treatment in the historical data. This was demonstrated in our real-world case study, where there is no established standard of care in the historical data. Moreover, we do not require an exact overlap of the covariates measured in the historical and trial data sets; as long as there is some overlap and informativeness, the historical data can be leveraged.
It is also easy to utilize multiple historical data sets: if they are believed to be drawn from substantially different populations, separate prognostic scores can be built from each of them and included as covariates in the trial analysis. As long as one of these scores is a good approximation of the outcome-covariate relationship in one or more arms of the trial, there will be added benefits to power. The addition of multiple covariates poses no risk for efficient estimators that use data-adaptive machine learning methods, which is another advantage of marrying prognostic adjustment to efficient estimation as opposed to marrying it to traditional parametric adjustment.
Prognostic adjustment with efficient estimators can also be used with pre-built or public prognostic models: the analyst does not need direct access to the historical data if they can
query a model for predictions. This is helpful in cases where data is "federated" and cannot move (e.g. when privacy must be protected or data has commercial value). Specifically, individual subject data is not necessary to perform this step.
Lastly, since we use efficient estimators, we can leverage the results of Schuler 2021 to prospectively calculate power with prognostic adjustment [18]. In fact, we suspect the methods of power calculation described in that work would improve in accuracy with prognostic adjustment since the outcome regressions are "jump-started" with the prognostic score. Verification of this fact and empirical demonstration will be left to future work.
## Acknowledgments.
The authors would like to thank study participants and staff for their contributions. This research was conducted on the Savio computational cluster resource provided by the Berkeley Research Computing program at the University of California, Berkeley. This computing resource was supported by the UC Berkeley Chancellor, Vice Chancellor for Research, and Chief Information Officer. The authors thank Christopher Paciorek for answering Savio-related inquiries.
This research was made possible by funding from the National Science Foundation (DGE 2146752) and global development grant (OPP1165144) from the Bill & Melinda Gates Foundation to the University of California, Berkeley, CA, USA. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
|
2308.07342 | Emergent communication for AR | Mobile augmented reality (MAR) is widely acknowledged as one of the
ubiquitous interfaces to the digital twin and Metaverse, demanding unparalleled
levels of latency, computational power, and energy efficiency. The existing
solutions for realizing MAR combine multiple technologies like edge, cloud
computing, and fifth-generation (5G) networks. However, the inherent
communication latency of visual data imposes apparent limitations on the
quality of experience (QoE). To address the challenge, we propose an emergent
semantic communication framework to learn the communication protocols in MAR.
Specifically, we train two agents through a modified Lewis signaling game to
emerge a discrete communication protocol spontaneously. Based on this protocol,
two agents can communicate about the abstract idea of visual data through
messages with extremely small data sizes in a noisy channel, which leads to
message errors. To better simulate real-world scenarios, we incorporate channel
uncertainty into our training process. Experiments have shown that the proposed
scheme has better generalization on unseen objects than traditional object
recognition used in MAR and can effectively enhance communication efficiency
through the utilization of small-size messages. | Ruxiao Chen, Shuaishuai Guo | 2023-08-12T16:45:39Z | http://arxiv.org/abs/2308.07342v1 | # Emergent Semantic Communications for Mobile Augmented Reality: Basic Ideas and Opportunities
###### Abstract
Mobile augmented reality (MAR) is widely acknowledged as one of the ubiquitous interfaces to the digital twin and Metaverse, demanding unparalleled levels of latency, computational power, and energy efficiency. The existing solutions for realizing MAR combine multiple technologies like edge, cloud computing, and fifth-generation (5G) networks. However, the inherent communication latency of visual data imposes apparent limitations on the quality of experience (QoE). To address the challenge, we propose an emergent semantic communication framework to learn the communication protocols in MAR. Specifically, we train two agents through a modified Lewis signaling game to emerge a discrete communication protocol spontaneously. Based on this protocol, two agents can communicate about the abstract idea of visual data through messages with extremely small data sizes in a noisy channel, which leads to message errors. To better simulate real-world scenarios, we incorporate channel uncertainty into our training process. Experiments have shown that the proposed scheme has better generalization on unseen objects than traditional object recognition used in MAR and can effectively enhance communication efficiency through the utilization of small-size messages.
## I Introduction
As one of the primary interfaces to the digital twin and Metaverse, mobile augmented reality (MAR) enables users to blend digital content seamlessly with the real world. It allows individuals to view and interact with virtual objects and information overlaid on their physical surroundings through the device's camera and display. The realization of MAR tasks often involves processing computationally intensive visual data. The underlying technologies like the fifth generation (5G) networks and mobile edge computing have accelerated the implementations of MAR. However, MAR's requirements on latency, mobility, and endurance are stringent, and with numerous pictures to process per frame, the conventional data-oriented communication method is overwhelmingly burdensome. Data-oriented communication tends to transmit the symbols representing the original data, which has been well-studied and realized. However, this level of communication is approaching the physical constraints of the underlying infrastructure [1]. Semantic communication, on the other hand, goes beyond the mere transmission of data by focusing on comprehending the essence and concepts embedded within the information and has been gaining increasing attention [2, 3]. Due to the above features, semantic communication is promising for MAR tasks that involve handling extensive raw data and require significant computational power.
A large body of artificial intelligence (AI)-based algorithms for processing video streams has been proposed for MAR tasks. For example, Ren _et al._ proposed a motion-aware scheduler [4] to select keyframes from the video source for offloading, thereby improving the computational efficiency of the edge system. In [5], Shuwaili _et al._ leveraged the inherent collaborative nature of MAR, namely that mobile devices connected to the same base station have partly shared input and output video streams, to avoid redundant computation, and then used a successive convex approximation (SCA) method to solve the non-convex optimization problem. In [6], Lee _et al._ proposed a reinforcement learning-based server-client controlling scheme that conducts class-wise characteristic analysis from experience so that it can control the MAR service quality adaptively.
The aforementioned works primarily focus on the first level of communication, specifically centered on transmitting the original data. To the best of our knowledge, no existing research has explored the utilization of semantic-level communication in MAR. For video processing tasks like object recognition in MAR scenarios, existing methods commonly employ two approaches. The first involves utilizing a convolutional neural network (CNN) to extract feature vectors, which are then transmitted to servers for subsequent computation [7]. The second approach entails directly transmitting the original video stream to servers for processing. Regardless of the method, the amount of data transferred is enormous. In specific scenarios, it is apparent that not all video features are required. Instead, only specific segments of the video hold relevance [8]. For instance, the usage case of MAR analyzed in this paper focuses on bird recognition and displaying related information to the user; extraneous objects such as trees or people need not be considered, and processing all these videos without selection is a huge waste of computation. By employing emergent communication for video transmission, we can effectively encapsulate each video frame's concept and abstract ideas within a concise set of messages [9]. These messages facilitate communication efficiency between user devices and servers. Notably, the data size of these messages is orders of magnitude smaller than in the two previously described methods.
To accomplish this objective, our system employs two intelligent agents, the speaker and the listener, which are trained via a modified Lewis signaling game [10]. In this game, the speaker is tasked with generating a message based on a concept, utilizing stored pictures of the concept. Subsequently, the listener uses this message to determine which pictures correspond to the given concept. To some extent, this approach leverages the memory of stored concept pictures, trading communication time against computation, which experimental evidence shows to be useful when employed in MAR
tasks. Besides, the emergent communication protocol can be generalized to concepts unseen in the training process [11]. For instance, if blue squares and green circles are seen while training, the generated messages are able to describe blue circles to the listener. Traditional object recognition like YOLO [12] is unable to generalize to this level. Additionally, our training process takes into account channel uncertainty, which is a more complete simulation of the real world [13]. Experimental evidence has demonstrated that it improves the robustness and flexibility of the system when encountering different channels.
Drawing on all the insights above, this article proposes an emergent semantic communication framework for MAR. The main contributions of this article can be summarized as follows:
* Establishing a basic idea and structure for implementing emergent communication in the field of MAR for visual data processing.
* Demonstrating the emergent semantic communication's superior generalization ability and significantly smaller communication data sizes, identifying its potential benefits in industrial, societal, and business applications.
* Considering the channel uncertainty in real-world scenarios in the training process of emergent communication, we achieve increased robustness when confronted with varying channels.
## II System Architecture
We consider a typical MAR application of education, as depicted in Fig. 1 (a). The system encompasses a substantial user population within a specific area, all of whom are connected to a base station. The base station, in turn, establishes a connection with multiple edge and cloud servers. During operation, the users' devices offload their computational tasks to the base station, which continues their execution on one of the servers. The results of the computation are then transmitted back to users' devices for rendering or other functions. We examine a specific education-oriented scenario illustrated in Fig. 1 (b). Mobile devices are employed to capture desired types of birds within the video stream according to users' requirements. Subsequently, these devices seamlessly overlay pertinent information about the identified birds on the original video.
When encountering an unfamiliar bird, users can describe it based on features such as the color of its wings, the shape of its bill, etc. These descriptions can be regarded as concepts that can be mathematically expressed and transmitted to speakers located on servers. Examples of all concepts are stored in the server memory in the first place; when a concept is required, the speaker can extract the specific examples of the required concept for learning. After learning, the speaker can summarize the concept into a message, denoted as \(m\), which is a discrete sequence of length \(L\) with a vocabulary size of \(V\). The message \(m\) is then transmitted to the listener on the user's device. It is worth noting that during transmission, the message will experience interference from channel uncertainty; the training process should take this into consideration. The message after undergoing the channel uncertainty can be represented as \(\hat{m}\). Utilizing the message \(\hat{m}\), listeners are able to select the relevant video streams given by the AR device that belong to the concept the user required. Since the emergent communication channel is discrete, the channel uncertainty can be realized and represented using an error rate \(\epsilon\). It refers to the likelihood of the current character in a message undergoing a transformation into a different character.
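A minimal sketch of this symbol-flipping channel, with the flip-to-a-different-character behavior made explicit, could look as follows (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_channel(message, vocab_size, error_rate):
    """Flip each symbol to a different symbol, uniformly, with probability error_rate."""
    message = np.asarray(message)
    flip = rng.random(message.shape) < error_rate
    repl = rng.integers(0, vocab_size - 1, message.shape)
    repl = np.where(repl >= message, repl + 1, repl)  # skip the original symbol
    return np.where(flip, repl, message)

m = np.array([3, 0, 12, 7])            # a message with L = 4 and V = 14
print(noisy_channel(m, 14, 0.08))      # m-hat after the noisy channel
```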
To further illustrate the problem at hand, we now conduct a detailed modeling analysis of the aforementioned MAR application case.
Fig. 1: (a) System architecture: users located in the same area share the same base station. (b) System application scenario: a MAR-aided bird education system for students.
The pipeline can be divided into six interconnected components: a video capturer, a feature extractor, a mapper, a tracker, an object recognizer, and a renderer. The video capturer is a camera capturing the raw video frame, which is then processed by the feature extractor to extract its feature points. The mapper leverages the feature points to construct a digital model of the 3-dimensional (3D) environment, while the object recognizer employs the feature points to identify specific objects. The tracker is responsible for tracking the identified object across subsequent frames. Finally, the renderer combines all the positional and image information, enabling the overlaying of virtual content onto the original video, such as the introduction of the birds.
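Purely for illustration, one iteration of this loop can be sketched as below; the stage names and signatures are ours, not an actual MAR API, and each stage is supplied as a callable.

```python
def mar_step(frame, extractor, mapper, recognizer, tracker, renderer, state):
    """One iteration of the six-stage loop; state starts as an empty dict."""
    feats = extractor(frame)                            # feature points
    state["world"] = mapper(feats, state.get("world"))  # 3D environment model
    detections = recognizer(feats)                      # offloaded in our design
    state["tracks"] = tracker(detections, state.get("tracks"))
    return renderer(frame, state["tracks"], state["world"])  # overlay content
```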
In this application case of MAR, the object recognition component accounts for most of the computation resources and transmission bandwidth. Thus, this component is where emergent semantic communication can be implemented to improve the quality of service. The prevailing approaches commonly adopt YOLO [12] to achieve this goal: a CNN extracts the feature vectors of each video stream for further computation. As illustrated above, these methods typically involve extracting the feature vectors beforehand and transmitting them to the server or directly sending the entire video. However, despite their undeniable accuracy, both approaches suffer from the challenge of substantial data size for transmission and demanding computational requirements.
## III Basic Ideas of Emergent Semantic Communications
To make the communication protocol emerge, the training process of our system implements a modified Lewis signaling game, as depicted in Fig. 2. Lewis signaling game [10] is widely used in the training of emergent communication that contains two intelligent agents, the speaker, and the listener. In this game, the speaker is given a target picture to generate a discrete message, and the listener is given a set of pictures including the target picture and several distracting pictures. Based on the given message, the listener's goal is to successfully identify the target picture from other distracting pictures.
In our game setting, however, the focus is not on specific objects but rather on the concept of an object. In our definition, a concept consists of several attributes, and each attribute can have different values [14]. For example, a concept \(c\) can have two attributes: the color of wings and the shape of bills. The values of the attribute color of wings can be green and blue, and the values of the attribute shape of bills can be dagger and hooked.
Given a concept and a set of objects belonging to that concept, along with other distracting objects, the speaker generates a message that conveys the abstract idea of the concept based on its observation. The listener then uses this message to distinguish the pictures belonging to the concept from the distracting ones. In order to enhance generalization capability, the pictures provided to the listener and speaker are different, despite belonging to the same concept. The speaker and the listener each contain a CNN and a recurrent neural network (RNN). The CNN is responsible for extracting the feature vectors of a given picture, and the RNN is responsible for encoding or decoding messages. A concept \(c\) can be represented as a binary matrix, with 0 meaning the absence of an attribute and 1 meaning the presence of an attribute. The actual value \(Y_{L}\) and the prediction \(\hat{Y_{L}}\) can be represented as binary matrices with 0 meaning a distracting picture and 1 meaning a target picture. To fine-tune their neural network parameters, the two agents continuously adjust them based on the disparity between the prediction and the actual value.
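As a minimal PyTorch sketch of the two agents (the layer sizes, GRU encoders, and straight-through Gumbel-softmax relaxation are our illustrative assumptions, not necessarily the exact training setup), the speaker maps pooled CNN features of the concept examples to a discrete message, and the listener scores each candidate picture against the decoded message; training would minimize binary cross-entropy between these scores and the 0/1 target matrix \(Y_{L}\).

```python
import torch
import torch.nn as nn

L, V, FEAT = 4, 14, 64  # message length, vocabulary size, image-feature dimension

class Speaker(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEAT, 128, batch_first=True)
        self.out = nn.Linear(128, V)

    def forward(self, concept_feats):                 # (B, FEAT) pooled CNN features
        h = concept_feats.unsqueeze(1).repeat(1, L, 1)
        logits = self.out(self.rnn(h)[0])             # (B, L, V)
        # straight-through Gumbel-softmax keeps the message discrete but trainable
        return nn.functional.gumbel_softmax(logits, tau=1.0, hard=True)

class Listener(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(V, 128, batch_first=True)
        self.proj = nn.Linear(128, FEAT)

    def forward(self, message, candidate_feats):      # (B, L, V), (B, N, FEAT)
        code = self.proj(self.rnn(message)[1][-1])    # (B, FEAT) decoded message
        return torch.einsum("bf,bnf->bn", code, candidate_feats)  # per-picture logits

loss_fn = nn.BCEWithLogitsLoss()  # compared against the binary target matrix Y_L
```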
The message data size is typically small, as it is a discrete sequence of length \(L\) over a vocabulary of size \(V\). Thus, a single-bit error caused by channel uncertainty might lead to a substantial misinterpretation of the final outcome. To account for the influence of channel uncertainty, we integrate different message error rates into both the training and the simulation process.
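As a rough illustration of such a discrete message error channel, the sketch below (an assumption on our part, since the exact corruption model is not specified here) independently replaces each symbol with a uniformly random one at the given error rate:

```python
import random

def corrupt(message, error_rate, vocab_size):
    """Model a discrete message error channel: with probability
    `error_rate`, a symbol is replaced by a uniformly random symbol."""
    return [random.randrange(vocab_size) if random.random() < error_rate else s
            for s in message]

msg = [3, 7, 0, 12]            # L = 4 symbols over a vocabulary of size V = 14
print(corrupt(msg, 0.08, 14))  # e.g., [3, 7, 5, 12]
```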
## IV Evaluation
The experiments were conducted in a simulated MAR system, with video processing tasks generated every time interval. During the training process, we set the message length \(L\) to \(4\) and the maximum vocabulary size \(V\) to \(14\). The training utilized the Caltech-UCSD Birds dataset1, and we trained the model for 100 epochs to ensure comprehensive learning.
Footnote 1: [http://www.vision.caltech.edu/datasets/cub_200_2011/](http://www.vision.caltech.edu/datasets/cub_200_2011/)
We simulate the real channel environment as a discrete message error channel and investigate error rates ranging from \(0\) to \(0.10\). Our first finding is that a higher error rate leads to slower convergence, as demonstrated in Fig. 3. For example, when the error rate was \(0\), the system converged at epoch \(29\), while at an error rate of \(0.08\), convergence occurred at epoch \(62\). This observation suggests that an appropriate error rate enables the neural network to explore a wider range of possibilities, preventing it from becoming trapped in a local optimum.
We first test the accuracy of our trained models on the concepts seen during training at different message error rates, as shown in Fig. 4. The identification accuracy of the model trained without message errors declined as the message error rate rose. The results from the models with training error rates of \(0.02\) and \(0.04\) suggest that a moderate error rate can enhance the system's robustness in the presence of message errors.
Fig. 2: The training process of a modified Lewis signaling game that focuses on concept identification.
The lines of these two models remained more stable than that of the model trained without message errors as the message error rate increased. Interestingly, increasing the message error rate beyond a certain point not only enhances the stability of the system but also improves the accuracy. This coincides with previous studies by Kucinski _et al._ [13]. This outcome can be attributed to the random shuffling of the message, which helps the model explore a wider range of possibilities.
Subsequently, we evaluate the generalization capability of the trained models by examining their performance on unseen concepts at different message error rates. For ease of comprehension and maximum utilization of the message size, in the ideal protocol each character in the message represents one attribute [14]. This compositional property enables the protocol to arrange and combine different attributes, which is why it can generalize to unseen concepts. The results in Fig. 5 demonstrate a certain degree of generalization to unseen concepts; however, the accuracy achieved is still suboptimal.
To better illustrate the communication efficiency of our model, we conduct a theoretical comparison between our system and the two other approaches mentioned above. The pictures used in training and evaluation range from \(70\) to \(100\) kilobytes. The feature vector extracted by the CNN network of YOLOv5 is about \(384\) bytes [12]. For the message length and vocabulary used in our system, each character has \(14\) possible values and can thus be represented by a one-hot vector of length \(14\), i.e., \(14\) bits. The data size of a message with four characters is thus \(4\times 14=56\) bits. The data size that needs to be transmitted for each video frame is listed in Table I.
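The arithmetic behind this comparison can be reproduced directly (a trivial sketch; the frame size uses the upper end of the 70-100 kilobyte range quoted above):

```python
frame_bits   = 100 * 1024 * 8   # raw video frame: up to ~100 KB
feature_bits = 384 * 8          # YOLOv5 feature vector: ~384 bytes
message_bits = 4 * 14           # L = 4 symbols, each a one-hot vector of length V = 14

print(message_bits)                        # 56 bits per message
print(round(frame_bits / message_bits))    # ~14629x smaller than the raw frame
print(round(feature_bits / message_bits))  # ~55x smaller than the feature vector
```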
Mainstream object detection algorithms such as YOLO remain more accurate: the latest version, YOLOv7-E6, achieves an average precision of \(73.5\)% at a frame rate of \(56\) fps [12]. Although our proposed system cannot compete with it in accuracy, its communication efficiency and its ability to generalize to unseen concepts make it a promising solution for AR-oriented tasks. Besides, the structure of our neural network is relatively small compared to that of YOLO. Recent research indicates that increasing the scale of emergent communication training [15] or introducing a more rigorous environment may yield additional benefits in generating an ideal communication protocol; we leave these directions to future work.
## V Opportunity and Challenges
Implementing emergent semantic communication in the field of MAR was previously unexplored, and it presents untapped potential when it comes to visual data. For example, vehicles can extract the abstract idea of the video they capture, such as obstacles and road conditions, and communicate this information to other vehicles through the Internet of Vehicles to enable self-driving. Moreover, emergent semantic communication is not limited to processing visual data; the
Fig. 4: Average accuracy for each model on the seen concept
Fig. 5: Average accuracy for each model on the unseen concept
Fig. 3: Average training loss for each epoch
semantic information can also be expressed in the form of tactile signals using vibrating bracelets. In this way, it can be utilized in real-time exoskeleton robot control or navigation for visually impaired people. In the future, we may even combine five-dimensional communication involving sight, smell, taste, touch, and hearing to create an immersive experience or even the Metaverse. This integration of all senses can be used in healthcare, enabling physicians to command a telerobot at the patient's location, allowing remote surgery with full MAR and haptic feedback. In education and training, teachers will not only have visual access to learners but also gain a sense of their movements when engaging in tasks that require precise motor skills. This capability enables teachers to provide real-time corrections and guidance to optimize learning outcomes. In addition to the small message size and generalization ability mentioned above, emergent semantic communication can also be adopted to improve communication reliability by leveraging semantic understanding to predict missing or corrupted information and fill gaps in the message context.
Several challenges still exist in implementing emergent semantic communication. Firstly, there is a lack of systematic analysis regarding the impact of adding message error rates during the training process. While the experimental results in this paper demonstrate enhanced immunity to channel uncertainty, the optimal error rate and the extent of its potential improvement still lack mathematical analysis. Additionally, although our experimental results align with Kucinski _et al._'s conclusion that an appropriate message error rate promotes the emergence of an ideal communication protocol [13], the mechanism or mathematical proof behind it remains unclear. Our initial hypothesis is that the perturbations induced by an appropriate message error rate facilitate the exploration of a wider range of possibilities.
Secondly, the existing research on emergent semantic communication primarily focuses on experimental studies, and it lacks a concise theoretical framework to guide its implementations in different scenarios. This framework should address critical aspects such as determining optimal strategies for selecting message length, vocabulary size, and neural network architectures that align with real-world requirements. Furthermore, the absence of quantitative metrics to measure performance is also a noteworthy limitation.
Finally, the scalability of emergent semantic communication in computing networks for tasks like MAR is not sufficiently explored. This paper has shown that, for a specific task, two agents are able to develop an efficient and distributed communication protocol. However, as the number of agents increases, the complexity of emergent communication systems grows significantly, and coordinating and managing communication among a large number of agents might lead to inefficiencies or breakdowns in the communication process. Thus, scalability tests need to be conducted and modifications to the current training framework need to be made.
## VI Conclusion
In this article, we proposed a basic idea and framework for emergent semantic communication for MAR tasks. By using discrete messages to communicate the abstract idea of the original video stream, this framework achieves high communication efficiency and can generalize to unseen concepts. Experimental analysis has revealed the significant advantages of our system compared to data-oriented or feature-oriented communications. To better model the real environment, we take different message error rates into account in our training process. The experimental results demonstrate that an appropriate error rate during training can improve message error immunity and even raise the accuracy of the system. Finally, we discussed the opportunities and challenges in this area, providing directions for future research.
|
2301.04340 | Proportional Fairness in Obnoxious Facility Location | We consider the obnoxious facility location problem (in which agents prefer
the facility location to be far from them) and propose a hierarchy of
distance-based proportional fairness concepts for the problem. These fairness
axioms ensure that groups of agents at the same location are guaranteed to be a
distance from the facility proportional to their group size. We consider
deterministic and randomized mechanisms, and compute tight bounds on the price
of proportional fairness. In the deterministic setting, not only are our
proportional fairness axioms incompatible with strategyproofness, the Nash
equilibria may not guarantee welfare within a constant factor of the optimal
welfare. On the other hand, in the randomized setting, we identify
proportionally fair and strategyproof mechanisms that give an expected welfare
within a constant factor of the optimal welfare. | Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani, Toby Walsh | 2023-01-11T07:30:35Z | http://arxiv.org/abs/2301.04340v1 | # Proportional Fairness in Obnoxious Facility Location
###### Abstract
We consider the obnoxious facility location problem (in which agents prefer the facility location to be far from them) and propose a hierarchy of distance-based proportional fairness concepts for the problem. These fairness axioms ensure that groups of agents at the same location are guaranteed to be a distance from the facility proportional to their group size. We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness. In the deterministic setting, not only are our proportional fairness axioms incompatible with strategyproofness, the Nash equilibria may not guarantee welfare within a constant factor of the optimal welfare. On the other hand, in the randomized setting, we identify proportionally fair and strategyproof mechanisms that give an expected welfare within a constant factor of the optimal welfare.
## 1 Introduction
In the _obnoxious facility location problem (OFLP)_, some undesirable facility such as a garbage dump or an oil refinery is to be located on a unit interval (i.e. the domain of locations is \([0,1]\)), and the agents along the interval wish to be as far from the facility as possible (Feigenbaum _et al._, 2020; Cheng _et al._, 2011; Ibara and Nagamochi, 2012; Cheng _et al._, 2019). In this problem, agents have single-dipped preferences, contrasting with the single-peaked preferences of agents in the classic facility location problem (in which agents prefer to be located as close as possible to the facility).
The obnoxious facility location problem models many real-world facility placements which negatively impact nearby agents, such as a prison or a power plant (Church and Drezner, 2022). Aside from the geographic placement of an obnoxious facility, the OFLP can also be applied to
various collective decision making problems. For instance, when agents are averse to their worst possible social outcomes (represented by their locations), the problem captures issues where a decision needs to be made on a social policy or a budget composition. When a socially sensitive or politically undesirable policy needs to be implemented, its placement in the space of outcomes may need to take equity considerations into account.
It is known that placing the facility at one of the interval endpoints maximizes the sum of agent distances (Cheng _et al._, 2013), but such a solution may not be 'proportionally fair' for the agents. To build intuition, consider the instance depicted in Figure 1 where there are two agents at \(0.1\) and five agents at \(0.8\). The optimal utilitarian solution (which maximizes the sum of agent distances) places the facility at \(0\), disproportionately disadvantaging the agents at \(0.1\), who are located only \(0.1\) distance from the facility. A facility location of \(0.45\) results in both groups of agents having the same distance from the facility, and would be considered more 'fair' in the egalitarian sense. However, it is not proportionally fair: despite having over twice as many agents, the group of agents at \(0.8\) has the same distance from the facility as the group of agents at \(0.1\). A proportionally fair solution places the facility at \(0.3\), which makes the distance between each group of agents and the facility proportional to the size of the group.
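As a quick sanity check of this example, the following sketch (assuming the 7-agent profile described above) tests each candidate location against the 2-UFS guarantee formalized in Section 3, under which every co-located group of size \(|S|\) must be at distance at least \(\frac{|S|}{2n}\) from the facility:

```python
profile = [0.1] * 2 + [0.8] * 5   # two agents at 0.1, five at 0.8
n = len(profile)

def satisfies_2ufs(y, profile):
    """Check that every co-located group of size |S| is at distance
    at least |S| / (2n) from the facility location y."""
    groups = {loc: profile.count(loc) for loc in set(profile)}
    return all(abs(y - loc) >= size / (2 * n) for loc, size in groups.items())

for y in (0.0, 0.3, 0.45):        # utilitarian, proportional, egalitarian
    print(y, satisfies_2ufs(y, profile))
# 0.0 False, 0.3 True, 0.45 False (the egalitarian point narrowly fails)
```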
In this work, we pursue notions of _proportional fairness_ as a central concern for the problem. Specifically, we formulate a hierarchy of proportional fairness axioms which guarantee that each group of agents at the same location are a distance from the facility proportional to the relative size of the group. While proportional fairness axioms have been formulated and studied in the classic facility location problem (Aziz _et al._, 2021), they have not yet been applied to the OFLP. Our paper provides a comprehensive overview of proportionally fair solutions for the obnoxious facility location problem, examining the interplay between proportional fairness and utilitarian/egalitarian welfare, and investigating concerns of agent strategic behaviour in both the deterministic and randomized settings.
### Contributions
* We formalize (approximate) proportional fairness concepts such as 2-Individual Fair Share (2-IFS) and 2-Unanimous Fair Share (2-UFS) in the context of the obnoxious facility location problem. Several of the definitions are natural adaptations of axioms from fair division and participatory budgeting.
* We find tight bounds on the price of 2-IFS and 2-UFS fairness for the objectives of egalitarian and utilitarian welfare, in both the deterministic and randomized settings.
Figure 1: OFLP with agent location profile \((0.1,0.1,0.8,0.8,0.8,0.8,0.8)\), with agents represented by x. The facility locations (represented by \(\bullet\)) correspond to a utilitarian outcome, \(f^{*}_{UW}=0\); a proportionally fair outcome, 2-UFS \(=0.3\); and an egalitarian outcome, \(f^{*}_{EW}=0.45\).
* We prove that our proportional fairness axioms are incompatible with strategyproofness in the deterministic setting, and give strategyproof randomized mechanisms that satisfy these proportional fairness axioms in expectation and either have a constant approximation ratio for utilitarian welfare or are optimal for egalitarian welfare.
* For the deterministic mechanisms that maximize utilitarian welfare under the constraints of 2-IFS and 2-UFS, we prove that a pure \(\epsilon\)-Nash equilibrium always exists and find linear bounds on the corresponding \(\epsilon\)-prices of anarchy.
* Finally, we give two possible extensions of our model: the fairness axiom of 2-Proportional Fairness (2-PF), which is stronger than 2-UFS as it captures proportional fairness concerns for groups of agents near but not necessarily at the same location, and the hybrid model, which additionally includes 'classic' agents which want to be near the facility (along with 'obnoxious' agents which want to be far away from the facility). We give existence results for both extensions.
Table 1 summarizes some of our results. Results lacking proofs are proven in the appendix.
**Related Work.** Facility location problems have been explored in the fields of computer science, economics and operations research. In the latter field, an optimization approach is usually taken, aiming to minimize transport costs. Summaries of results and approaches in the operations research literature are given by Hekmatfar (2009) and Melo _et al._ (2009). On the other hand, research on the facility location problem at the intersection of computer science and economics often takes an approximate mechanism design approach, assuming that agent locations are private information and finding strategyproof mechanisms which approximate the optimal social cost. The seminal paper on this approach was written by Procaccia and Tennenholtz (2013), and for a recent and comprehensive survey on facility location mechanism design, we refer the reader to Chan _et al._ (2021). Our paper lies at the intersection of these two approaches, analyzing the agent strategic behaviour in the optimal mechanisms which satisfy our proportional fairness axioms as well as identifying a randomized strategyproof and proportionally fair mechanism.
The papers most relevant to our research are those that treat the facility as obnoxious: agents prefer the facility to be as far as possible. Similar to the classical facility location problem, early operations research on the OFLP apply an optimization approach to compute solutions; a review
| Setting | Objective | Price of Fairness: 2-IFS | Price of Fairness: 2-UFS | Best Known Approx. by 2-UFS SP Mech. |
| --- | --- | --- | --- | --- |
| Deterministic | UW | 2 (Thm. 1) | 2 (Thm. 2) | **Incompatible** (Prop. 4) |
| Deterministic | EW | 1 (Prop. 3) | \(n-1\) (Thm. 3) | **Incompatible** (Prop. 4) |
| Randomized | UW | 12/11 (Cor. 3) | 1.09384... (Cor. 4) | 1.5 (Thm. 8) |
| Randomized | EW | 1 (Prop. 3) | 1 (Cor. 2) | 1 (Prop. 6) |

Table 1: Table of price of fairness and welfare approximation results.
of these approaches is given by Church and Drezner (2022). There have been several recent papers on the obnoxious facility location problem that assume agents' location are private information, and thus aim to design strategyproof facility location mechanisms. Some of the earliest research applying a mechanism design approach was initiated by Cheng _et al._ (2011, 2013), in which they define an agent's utility as its distance from the facility, and design strategyproof mechanisms which approximate the optimal utilitarian welfare on the path and network metrics, respectively. Other recent examples of related papers include (Cheng _et al._, 2019; Feigenbaum _et al._, 2020; Ibara and Nagamochi, 2012; Xu _et al._, 2021). These papers do not pose or study the fairness concepts that we explore in this paper.
Notions of fairness in various collective decision problems have been widely explored over the last few decades (Moulin, 2003; Nash, 1950; Shapley, 1953). Fairness objectives specifically relevant to the facility location problem include maximum cost/egalitarian welfare (see, e.g. (Procaccia and Tennenholtz, 2013; Wang _et al._, 2021)) and maximum total/average group cost (Zhou _et al._, 2022). Rather than optimize/approximate fairness objectives, we focus on solutions enforcing proportional fairness axioms, in which groups of agents with similar or identical preferences (represented in our setting as their location) have a minimum utility guarantee relative to the group size. The axioms of proportional fairness that we present stem from several related areas of social choice. Individual Fair Share (IFS) is closely related to the axiom of proportionality proposed by Steinhaus (1948), and appears in participatory budgeting along with Unanimous Fair Share (UFS) (Bogomolnaia _et al._, 2005; Aziz _et al._, 2019). Most recently, all of our proportional fairness axioms have been studied in the classical facility location problem by Aziz _et al._ (2021).
In our paper, we also analyse the loss of efficiency, defined as the price of fairness, of implementing the proportional fairness axioms that we have proposed. There have been many recent results on the price of fairness in various social choice contexts. For instance, Barman _et al._ (2020), Caragiannis _et al._ (2012) and Bei _et al._ (2021) find price of fairness bounds for axioms such as envy-freeness and equitability in fair division, Bertsimas _et al._ (2011) look at the price of proportional fairness in resource allocation, and Michorzewski _et al._ (2020) explore the areas of budget division and probabilistic social choice. There has also been work on price of fairness bounds for the facility location problem, such as when there is a lexicographic minimax objective (Buzna _et al._, 2014). Wang and Zhang (2021) assume that facilities have preferences over subsets of agents, observing the concepts of fairness and efficiency from the facilities' perspectives.
As strategyproofness is impossible in our deterministic setting, we present results on the existence of pure Nash equilibria and the price of anarchy. Similar models where such results are proven include a variation of the Hotelling-Downs model where clients have limited attraction ranges (Feldman _et al._, 2016), and two-stage facility location games where both facilities and clients act strategically (Krogmann _et al._, 2021). In the classic facility location problem, Aziz _et al._ (2021) characterize the pure Nash equilibria of strictly monotonic facility location mechanisms satisfying UFS and show that the resulting facility location (under the pure Nash equilibria) is also guaranteed to satisfy UFS. In our setting, the price of anarchy is not well-defined for certain proportionally fair mechanisms, as a pure Nash equilibrium may not exist for a given location profile. As a result, we prove the existence of an approximate equilibrium notion, called pure \(\epsilon\)-Nash equilibrium. Examples of papers applying this notion to other settings include (Chien and Sinclair, 2011; Mylvaganam _et al._, 2015).
The second half of our paper focuses on the randomized setting to overcome the incompatibility with strategyproofness. The use of randomized mechanisms to overcome impossibility results is
prevalent in many social choice contexts (see, e.g., [1, 16]). Additionally, Aziz _et al._[2022] use a randomized approach in the classic facility location problem to achieve stronger notions of proportional fairness, providing a unique characterization of universally anonymous and universally truthful mechanisms satisfying an axiom called Strong Proportionality. The use of randomized mechanisms also results in better approximation ratio/price of fairness bounds. This is common in many variants of the facility location problem, such as when agents have fractional or optional preferences [12, 13], or in the hybrid facility location model [14].
## 2 Model
Let \(N=\{1,\ldots,n\}\) be a set of agents, and let \(X:=[0,1]\) be the domain of locations.1 Agent \(i\)'s location is denoted by \(x_{i}\in X\); the profile of agent locations is denoted by \(x=(x_{1},\ldots,x_{n})\in X^{n}\). We also assume the agent locations are ordered such that \(x_{1}\leq\cdots\leq x_{n}\). A _deterministic mechanism_ is a mapping \(f\ :\ X^{n}\to X\) from a location profile \(\hat{x}\in X^{n}\) to a facility location \(y\in X\). We define a _randomized mechanism_ as a probability distribution over deterministic mechanisms. Given a facility location \(y\in X\), agent \(i\)'s utility2 is equal to its distance from the facility \(u(y,x_{i}):=|y-x_{i}|\). We are interested in maximizing the objectives of _Utilitarian Welfare_ (UW), defined for a facility location \(y\) and location profile \(x\) as the sum of agent utilities \(\sum_{i}u(y,x_{i})\), and _Egalitarian Welfare_ (EW), defined as the minimum agent utility \(\min_{i}u(y,x_{i})\).
Footnote 1: Our results can be naturally extended to any compact interval on \(\mathbb{R}\).
Footnote 2: This definition is consistent with [13].
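For concreteness, the two objectives can be written as small helper functions (a direct transcription of the definitions above; the function names are ours):

```python
def utilitarian_welfare(y, xs):
    """Sum of agent distances from the facility location y."""
    return sum(abs(y - x) for x in xs)

def egalitarian_welfare(y, xs):
    """Minimum agent distance from the facility location y."""
    return min(abs(y - x) for x in xs)
```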
Note that the preferences in OFLP can be viewed as _single-dipped_. In contrast, the classical facility location problem (FLP) concerns _single-peaked preferences_. The underlying model of both FLP and OFLP is the same except that the agents' preferences have a different structure.
Unless specified otherwise, we will state results for the obnoxious facility location problem (OFLP). For the first half of the paper, we will discuss the deterministic setting, and then move to the randomized setting for the second half.
## 3 Proportional Fairness Axioms
In this section, we introduce proportional fairness axioms for the obnoxious facility location problem.
### Individual Fair Share
We first present an adaptation of Individual Fair Share (IFS), the weakest of our proportional fairness axioms (as studied by Aziz _et al._[2021] in the context of the classic facility location problem). IFS provides a minimum distance guarantee between each agent and the facility, requiring that each agent has at least \(\frac{1}{n}\) utility. By placing two agents at \(\frac{1}{4}\) and \(\frac{3}{4}\), it is easy to see that an IFS solution may not exist. As a result, we turn to approximations of IFS.
**Definition 1** (\(\alpha\)-Individual Fair Share (IFS)).: _Given a profile of locations \(x\), a facility location \(y\) satisfies \(\alpha\)-Individual Fair Share (\(\alpha\)-IFS) if_
\[u(y,x_{i})\geq\frac{1}{\alpha n}\qquad\forall i\in N.\]
We find that the lowest value of \(\alpha\) such that an \(\alpha-\)IFS solution always exists is \(\alpha=2\). Intuitively, with \(\alpha=2\), each agent has a ball of radius \(\frac{1}{2n}\) around its location. The sum of ball lengths is \(1\), meaning there will always be a 2-IFS solution. Furthermore, for any \(\alpha<2\), the sum of ball lengths will exceed \(1\), so an \(\alpha-\)IFS solution may not always exist.
**Proposition 1**.: _The lowest value of \(\alpha\) for which an \(\alpha\)-IFS solution always exists is \(\alpha=2\)._
A polynomial time mechanism (which we denote as \(f^{*}_{2IFS}\)) that maximizes the utilitarian welfare under the constraint of 2-IFS simply iterates through the endpoints of the intervals which satisfy the constraint and outputs the optimal facility location, breaking ties in favour of the leftmost optimal location.
### Unanimous Fair Share
We now present Unanimous Fair Share (UFS), a strengthening and generalization of IFS to groups of agents at the same location. Informally, if there are \(k\) agents at the same location, then UFS requires that the facility is placed at least \(\frac{k}{n}\) distance from these agents. Again, we focus on approximations of UFS as a UFS solution may not exist.
**Definition 2** (\(\alpha\)-Unanimous Fair Share (Ufs)).: _Given a profile of locations \(\mathbf{x}\), a facility location \(y\) satisfies \(\alpha\)-Unanimous Fair Share (\(\alpha\)-UFS) if for any set of agents \(S\) with identical location,_
\[u(y,x_{i})\geq\frac{|S|}{\alpha n}\qquad\forall i\in S.\]
Note that \(\alpha\)-UFS implies \(\alpha\)-IFS. As with \(\alpha\)-IFS, we find that the optimal value of \(\alpha\) for which an \(\alpha\)-UFS solution always exists is \(\alpha=2\). The proof intuition is similar to that of Proposition 1, but the balls vary in size depending on the number of agents in the group.
**Proposition 2**.: _The lowest value of \(\alpha\) for which an \(\alpha\)-UFS solution always exists is \(\alpha=2\)._
Similar to \(f^{*}_{2IFS}\), a polynomial time mechanism (which we denote as \(f^{*}_{2UFS}\)) that computes the optimal 2-UFS facility location for utilitarian welfare iterates through the endpoints of the intervals satisfying 2-UFS and outputs the optimal facility location, breaking ties in favour of the leftmost optimal location.
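A minimal sketch of this procedure is given below; it implements \(f^{*}_{2UFS}\) (and reduces to \(f^{*}_{2IFS}\) when all groups have size one) by enumerating the endpoints of the feasible intervals left after removing a ball of radius \(\frac{|S|}{2n}\) around each group. The function name and the numerical tolerance are our own choices:

```python
def optimal_2ufs(xs):
    """Sketch of f*_2UFS: return the leftmost utilitarian-welfare-maximizing
    location among the endpoints of the 2-UFS-feasible intervals.
    Feasibility is guaranteed by Proposition 2."""
    n = len(xs)
    groups = {loc: xs.count(loc) for loc in set(xs)}
    # Candidate locations: domain endpoints plus ball boundaries, clipped to [0, 1].
    candidates = {0.0, 1.0}
    for loc, size in groups.items():
        r = size / (2 * n)
        candidates |= {max(loc - r, 0.0), min(loc + r, 1.0)}
    feasible = [y for y in sorted(candidates)
                if all(abs(y - loc) >= size / (2 * n) - 1e-12  # float tolerance
                       for loc, size in groups.items())]
    # Highest welfare wins; ties broken toward the leftmost location.
    return max(feasible, key=lambda y: (sum(abs(y - x) for x in xs), -y))

print(optimal_2ufs([0.1, 0.1, 0.8, 0.8, 0.8, 0.8, 0.8]))  # ~0.243 (= 0.1 + 2/14)
```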
## 4 Deterministic Setting
We begin with the deterministic setting, analyzing the price of proportional fairness and agent strategic behaviour. All results stated in this section are for the deterministic setting.
### Price of Fairness
In this section, we analyze the price of fairness for our (approximate) fairness axioms.3 Informally, the price of fairness measures the loss of efficiency from imposing a certain fairness constraint. We focus on the objectives of utilitarian and egalitarian welfare, defined as the sum of utilities and the minimum agent utility, respectively.
Footnote 3: The price of fairness can also be interpreted as the approximation ratio for the respective optimal mechanism satisfying the fairness constraint.
A _fairness property_\(P\) is a mapping from an agent location profile \(x\in X^{n}\) to a (possibly empty) set of facility locations \(P(x)\subseteq X\). Every facility location in \(P(x)\) satisfies the fairness property \(P\). The price of fairness for property \(P\) is the worst-case ratio between the optimal welfare and the optimal welfare attainable by a facility location satisfying \(P\).
#### 4.1.1 Utilitarian Welfare
The utilitarian welfare of an instance is a standard measure of efficiency. Finding the price of our proportional fairness axioms for utilitarian welfare quantifies the impact on efficiency when the OFLP system is constrained to be proportionally fair.
**Definition 3** (Price of Fairness for Utilitarian Welfare).: _Let \(f^{*}_{UW}\) be the mechanism that returns the solution maximizing utilitarian welfare. For utilitarian welfare and fairness property \(P\), we define the price of fairness as the worst case ratio (over all location profiles) between the optimal utilitarian welfare and the optimal utilitarian welfare achieved by a facility location satisfying fairness property \(P\):_
\[\max_{x\in[0,1]^{n}}\frac{\sum_{i}u(f^{*}_{UW}(x),x_{i})}{\max_{y\in P(x)}\sum _{i}u(y,x_{i})}.\]
We now move to compute the price of 2-IFS fairness for utilitarian welfare. Recall that the solution maximizing utilitarian welfare must be either \(0\) or \(1\)[Cheng _et al._, 2013].
**Lemma 1**.: _The price of 2-IFS for utilitarian welfare is at least 2._
Proof.: Suppose \(n\) is even, and that the agents are located at \(\frac{1}{2n}-\epsilon\), \(\frac{3}{2n}-2\epsilon\),..., \(\frac{n-1}{2n}-\frac{n}{2}\epsilon\), \(\frac{n+1}{2n}+\frac{n}{2}\epsilon\),..., \(\frac{2n-3}{2n}+2\epsilon\), \(\frac{2n-1}{2n}+\epsilon\) for some sufficiently small \(\epsilon\) (see, e.g., Figure 2). Under this symmetric profile, a facility location of either \(0\) or \(1\) leads to the maximum utilitarian welfare of \(\frac{n}{2}\). The only facility locations satisfying 2-IFS are within the interval \([\frac{1}{2}-\frac{n}{2}\epsilon,\frac{1}{2}+\frac{n}{2}\epsilon]\). Any location in this interval gives the same utilitarian welfare as there are an equal number of agents on both sides, so suppose the facility is at \(\frac{1}{2}\). This corresponds to a utilitarian welfare of \(2\cdot\frac{1}{2n}(1+3+\cdots+(n-1))+2\epsilon(1+2+\cdots+\frac{n}{2})=\frac{n}{4}+\epsilon\frac{n}{2}(1+\frac{n}{2})\). Taking the limit \(\epsilon\to 0\) gives a ratio of \(2\).
**Remark 1**.: _The above example places the facility at the midpoint, which lies in the median interval of the profile; the median is known to minimize the sum of distances._
**Theorem 1**.: _The price of 2-IFS for utilitarian welfare is 2, and this bound is tight._
We next compute bounds on the price of 2-UFS fairness for utilitarian welfare.
**Theorem 2**.: _The price of 2-UFS for utilitarian welfare is 2, and this bound is tight._
As the price of fairness for utilitarian welfare is the same for both proportional fairness axioms, it may be desirable to implement 2-UFS in favour of 2-IFS when loss of utilitarian welfare is the primary concern.
#### 4.1.2 Egalitarian Welfare
The egalitarian welfare is an alternate measure of fairness frequently observed in the literature, focussing on the worst off agent. Our price of fairness analysis gives an insight into the tradeoff between egalitarian welfare/maximin fairness and proportional fairness in the OFLP.
**Definition 4** (Price of Fairness for Egalitarian Welfare).: _Let \(f_{EW}^{*}\) be the mechanism that returns the solution maximizing Egalitarian Welfare. For egalitarian welfare and fairness property \(P\), we define the price of fairness as the worst case ratio (over all location profiles) between the optimal egalitarian welfare and the optimal egalitarian welfare achieved by a facility location satisfying fairness property \(P\):_
\[\max_{x\in[0,1]^{n}}\frac{\min_{i}u(f_{EW}^{*}(x),x_{i})}{\max_{y\in P(x)}\min_ {i}u(y,x_{i})}.\]
Our first result is that the price of 2-IFS is \(1\), meaning that a mechanism that maximizes egalitarian welfare is guaranteed to satisfy 2-IFS. The intuition is that since a 2-IFS solution (in which every agent obtains at least \(\frac{1}{2n}\) utility) always exists, a solution which maximizes the worst off agent's utility would therefore result in each agent obtaining at least \(\frac{1}{2n}\) utility.
**Proposition 3**.: _The price of 2-IFS for egalitarian welfare is \(1\)._
On the other hand, we find that the price of 2-UFS is noticeably worse, taking a linear factor of \(n-1\). The intuition behind this is that a coalition of \(n-1\) agents at one point can ensure that the facility is distant from their location (and closer to the remaining agent's location) by a 'factor' of \(n-1\).
**Theorem 3**.: _The price of 2-UFS for egalitarian welfare is \(n-1\)._
Proof.: We first prove that the lower bound is \(n-1\). It suffices to consider \(n\geq 3\). Consider the location profile with \(1\) agent at \(\frac{1}{2n}-\epsilon\) and \(n-1\) agents at \(\frac{n+1}{2n}+\epsilon\) for sufficiently small \(\epsilon>0\), (see, e.g. Figure 3). The optimal solution places the facility at \(1\) resulting in an egalitarian welfare of \(\frac{n-1}{2n}-\epsilon\). The only 2-UFS solutions are in the interval \([\frac{1}{n}-\epsilon,\frac{1}{n}+\epsilon]\), and the solution of \(\frac{1}{n}+\epsilon\) results in an egalitarian welfare of \(\frac{1}{2n}+2\epsilon\). As \(\epsilon\to 0\), the ratio approaches \(n-1\).
Figure 2: The instance in the proof of Lemma 1 for \(n=4\). \(f_{UW}^{*}\) represents the utilitarian welfare maximizing facility placement, whilst \(f_{2IFS}^{*}\) represents the facility that maximizes utilitarian welfare under the constraints of 2-IFS. The red intervals denote locations that are infeasible under 2-IFS.
We now prove that the upper bound is \(n-1\). Firstly, it clearly suffices to consider location profiles where groups contain at most \(n-1\) agents. Now suppose there exists such an \(x\) where \(\min_{i}u(f^{*}_{EW}(x),x_{i})\geq\frac{n-1}{2n}\), i.e. there is a solution where every agent has at least \(\frac{n-1}{2n}\) utility. Then this also satisfies 2-UFS and results in an egalitarian ratio of \(1\). Therefore the maximum ratio must have \(\min_{i}u(f^{*}_{EW}(x),x_{i})<\frac{n-1}{2n}\). Due to 2-UFS, we also have \(\max_{y\in 2UFS(x)}\min_{i}u(y,x_{i})\geq\frac{1}{2n}\). The theorem statement follows from dividing these two terms.
### Incompatibility with Strategyproofness
In mechanism design, the normative property of _strategyproofness_ is often sought as it disincentivizes agents from misreporting their true location.
**Definition 5** (Strategyproofness).: _A (deterministic) mechanism \(f\) is strategyproof if for every agent \(i\in N\), we have for every \(x_{i}\), \(x^{\prime}_{i}\) and \(\hat{x}_{-i}\),_
\[u(f(x_{i},\hat{x}_{-i}),x_{i})\geq u(f(x^{\prime}_{i},\hat{x}_{-i}),x_{i}).\]
We say that a randomized mechanism is _strategyproof in expectation_ if no agent can improve its expected utility by misreporting its own location.
We note that no strategyproof and deterministic mechanism can achieve any approximation of IFS (and therefore also UFS). This follows from the characterization of deterministic strategyproof mechanisms for the OFLP by Feigenbaum and Sethuraman (2015) which we describe below.
**Definition 6** (Feigenbaum and Sethuraman (2015)).: _Let \(f\) be a deterministic mechanism s.t. \(|R^{f}_{n}|=|\{f_{n}(\mathbf{x}):\mathbf{x}\in X^{n}\}|\leq 2\) for all \(n\in\mathbb{N}\). For each \(n\in\mathbb{N}\), let \(R^{f}_{n}=\{\alpha_{n},\beta_{n}\}\) s.t. \(\beta_{n}\geq\alpha_{n}\), and let \(m_{n}=\frac{\alpha_{n}+\beta_{n}}{2}\). For any \(n\in\mathbb{N}\), for every profile \(\mathbf{x}\in X^{n}\), consider the partition of the agents \(L^{\mathbf{x}}=\{i\in N:x_{i}<m_{n}\}\), \(M^{\mathbf{x}}=\{i\in N:x_{i}=m_{n}\}\), and \(E^{\mathbf{x}}=\{i\in N:x_{i}>m_{n}\}\). We say that \(f\) is a midpoint mechanism if it satisfies the following property: for any \(n\in\mathbb{N}\), let \(\mathbf{x},\mathbf{y}\in X^{n}\) be any profiles s.t. \(f(\mathbf{x})=\beta_{n}\) and \(f(\mathbf{y})=\alpha_{n}\). If \(\beta_{n}>\alpha_{n}\), then there exists an agent \(i\) which satisfies one of the following:_
1. \(i\in L^{\mathbf{x}}\) _and_ \(i\in M^{\mathbf{y}}\)__
2. \(i\in L^{\mathbf{x}}\) _and_ \(i\in E^{\mathbf{y}}\)__
3. \(i\in M^{\mathbf{x}}\) _and_ \(i\in E^{\mathbf{y}}\)_._
Figure 3: The instance in the proof of Theorem 3. \(f^{*}_{EW}\) represents the egalitarian welfare maximizing facility placement, whilst \(2UFS(x)\) represents the interval of facility placements satisfying 2-UFS. The red intervals denote locations that are infeasible under 2-UFS.
This definition has the following intuition: the mechanism can switch the facility location from right to left or from left to right only when an agent crosses the midpoint in the opposite direction.
**Proposition 4**.: _There exists no strategyproof mechanism that achieves any approximation of IFS._
Proof.: Feigenbaum and Sethuraman (2015) proved that the midpoint mechanisms characterize all strategyproof mechanisms. Consider any profile which locates at least one agent at each point in \(R_{n}^{f}\). The facility is then placed at some agent's location, giving that agent zero utility, so such a mechanism does not satisfy any approximation of IFS.
In other words, for every midpoint mechanism, there exists a location profile where the mechanism places the facility at an agent's location.
Since strategyproofness is incompatible with our fairness axioms, we are interested in the performance of proportionally fair mechanisms in our model when accounting for agent strategic behaviour. Such performance can be quantified by the price of anarchy.
### \(\epsilon\)-Price of Anarchy
In this section, we compute the worst case loss of efficiency by agents misreporting their location under the mechanisms \(f_{2IFS}^{*}\) and \(f_{2UFS}^{*}\). Recall these are the mechanisms which maximize utilitarian welfare under the constraints of 2-IFS and 2-UFS, respectively. Typically, this efficiency loss is quantified by the _price of anarchy_(Koutsoupias and Papadimitriou, 1999; Nisan _et al._, 2007), defined as the worst case ratio between the utilitarian welfare corresponding to the truthful agent location profile, and the minimum utilitarian welfare corresponding to a pure Nash equilibrium of reports.
**Definition 7**.: _Given a (truthful) profile of agent locations \(x\) and a deterministic mechanism \(f\), a pure Nash equilibrium is a profile of reported agent locations \(x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{n})\) such that no single agent can improve its own utility (with respect to its true location) by changing its reported location._
However, for \(f_{2IFS}^{*}\) and \(f_{2UFS}^{*}\), a pure Nash equilibrium may not necessarily exist, and hence the price of anarchy is not well-defined.
**Proposition 5**.: _A pure Nash equilibrium may not exist for \(f_{2IFS}^{*}\) or \(f_{2UFS}^{*}\)._
As a result, we turn to proving existence of the approximate notion of pure \(\epsilon\)-Nash equilibria, and computing the corresponding notion of \(\epsilon\)-price of anarchy.
**Definition 8** (Tardos and Vazirani (2007)).: _A pure \(\epsilon\)-Nash equilibrium is a profile of reported agent locations \(x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{n})\) such that no single agent can improve its own utility (with respect to its true location) by strictly more than \(\epsilon\) by changing its reported location. A pure Nash equilibrium is a pure \(\epsilon\)-Nash equilibrium where \(\epsilon=0\)._
**Theorem 4**.: _For any \(\epsilon>0\), a pure \(\epsilon\)-Nash equilibrium always exists for \(f_{2IFS}^{*}\)._
**Theorem 5**.: _For any \(\epsilon>0\), a pure \(\epsilon\)-Nash equilibrium always exists for \(f_{2UFS}^{*}\)._
For a mechanism \(f\), the \(\epsilon\)-price of anarchy is defined as the worst case ratio (over all location profiles \(x\)) between the utilitarian welfare corresponding to all agents reporting truthfully and the minimum utilitarian welfare corresponding to agents reporting in a pure \(\epsilon\)-Nash equilibrium.
**Definition 9**.: _Given \(f\) and \(x\), define the set of pure \(\epsilon\)-Nash equilibria location profiles as \(\epsilon\)-\(Equil(f,x)\). The \(\epsilon\)-price of anarchy for utilitarian welfare is defined as:_
\[\epsilon\text{-}PoA(f):=\max_{x\in X^{n}}\frac{\sum_{i}u(f(x),x_{i})}{\min_{x^ {\prime}\in\epsilon\text{-}Equil(f,x)}\sum_{i}u(f(x^{\prime}),x_{i})}.\]
We must show that the \(\epsilon\)-price of anarchy is well-defined by proving that a pure \(\epsilon\)-Nash equilibrium always exists. Note that we cannot simply apply the early theorems of [1, 10, 11], which show existence of a pure Nash equilibrium when basic strategy space conditions are satisfied and players have continuous and quasiconcave payoff functions.4 This is because our payoff function is neither continuous nor quasiconcave.
Footnote 4: A function \(f\) is _quasiconcave_ if \(f(\lambda x+(1-\lambda)y)\geq\min\{f(x),f(y)\}\).
Nevertheless, Theorems 4 and 5 show that a pure \(\epsilon\)-Nash equilibrium always exists for \(f^{*}_{2IFS}\) and \(f^{*}_{2UFS}\).
We now proceed to find \(\epsilon\)-price of anarchy bounds for utilitarian welfare. The same proof arguments can be applied to find identical bounds for both \(f^{*}_{2IFS}\) and \(f^{*}_{2UFS}\).
**Theorem 6**.: _For any \(\epsilon\in(0,\frac{1}{n})\), the \(\epsilon\)-price of anarchy for \(f^{*}_{2IFS}\) and \(f^{*}_{2UFS}\) of utilitarian welfare is at least \(\frac{2n-1+n\epsilon}{1-n\epsilon}\). The price of anarchy is unbounded for \(\epsilon\geq\frac{1}{n}\)._
**Theorem 7**.: _For any \(\epsilon\in(0,\frac{1}{2n})\), the \(\epsilon-\)price of anarchy for \(f^{*}_{2IFS}\) and \(f^{*}_{2UFS}\) of utilitarian welfare is at most \(\frac{2n}{1-2n\epsilon}\)._
Proof.: Firstly, we note that under a pure \(\epsilon\)-Nash equilibrium, each agent must have at least \(\frac{1}{2n}-\epsilon\) utility. To see this, suppose there is a profile of reports \(x^{\prime}\) where some agent \(i\) has strictly less than \(\frac{1}{2n}-\epsilon\) utility. By switching its report to its truthful location, agent \(i\) can strictly improve its utility by greater than \(\epsilon\), as the facility must be at least \(\frac{1}{2n}\) distance from the agent's truthful location, hence \(x^{\prime}\) is not a pure \(\epsilon\)-Nash equilibrium. Therefore the utilitarian welfare under a pure \(\epsilon\)-Nash equilibrium must be at least \(\frac{1}{2}-n\epsilon\). Now the utilitarian welfare under any instance is at most \(n\), from all agents being located at \(0\) and the facility being placed at \(1\). The theorem statement immediately follows.
By setting \(\epsilon=0\) in the \(\epsilon\)-price of anarchy bounds of Theorems 6 and 7, we achieve the following result.
**Corollary 1**.: _If a pure Nash equilibrium exists, the price of anarchy for \(f^{*}_{2IFS}\) and \(f^{*}_{2UFS}\) of utilitarian welfare is between \(2n-1\) and \(2n\)._
As the \(\epsilon\)-price of anarchy of our proportional fairness axioms in the deterministic setting is linear, it may be desirable to use a randomized, strategyproof mechanism when the agent locations are private information. We give examples of such mechanisms in the upcoming section.
## 5 Randomized Mechanisms
By using randomized mechanisms, we can achieve a better price of fairness for 2-IFS and 2-UFS, and overcome the incompatibility with strategyproofness. We define a randomized mechanism
as a probability distribution over deterministic mechanisms, and an agent's utility as its expected distance from the facility.
In the randomized setting, the optimal approximation of IFS and UFS for which a solution always exists is \(\alpha=2\). This can easily be seen by placing a single agent at \(\frac{1}{2}\). Our fairness axioms are adapted as follows:
**Definition 10** (\(\alpha\)-Individual Fair Share (IFS) in expectation).: _A mechanism \(f\) satisfies \(\alpha\)-Individual Fair Share in expectation (\(\alpha\)-IFS in expectation) if for any location profile \(x\),_
\[\mathbb{E}[u(f(x),x_{i})]\geq\frac{1}{\alpha n}\qquad\forall i\in N.\]
**Definition 11** (\(\alpha\)-Unanimous Fair Share (UFS) in expectation).: _A mechanism \(f\) satisfies \(\alpha\)-Unanimous Fair Share in expectation (\(\alpha\)-UFS in expectation) if for any location profile \(x\) and any set of agents \(S\) at the same location,_
\[\mathbb{E}[u(f(x),x_{i})]\geq\frac{|S|}{\alpha n}\qquad\forall i\in S.\]
### Strategyproofness
From Proposition 4, we know that in the deterministic setting, strategyproofness is incompatible with our proportional fairness axioms. In the randomized setting, the space of mechanisms is much larger and hence we are able to overcome this impossibility.
We first consider **Mechanism 2** from [Cheng _et al._, 2013]. Denoting the numbers of agents located in \([0,1/2]\) and \((1/2,1]\) by \(n_{1}\) and \(n_{2}\) respectively, **Mechanism 2** places the facility at \(0\) with probability \(\alpha\) and at \(1\) with probability \(1-\alpha\), where \(\alpha=\frac{2n_{1}n_{2}+n_{2}^{2}}{n_{1}^{2}+n_{2}^{2}+4n_{1}n_{2}}\). This mechanism is known to be group strategyproof (in expectation) and \(\frac{3}{2}\)-approximates the optimal utilitarian welfare. As we will show, this mechanism satisfies 2-UFS (and therefore also 2-IFS).
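A sketch of **Mechanism 2** follows directly from this description (we return the lottery over \(\{0,1\}\) rather than sampling from it):

```python
def mechanism_2(xs):
    """Sketch of Mechanism 2 (Cheng et al., 2013): a lottery over {0, 1}
    with probability alpha on 0, where alpha depends on how many agents
    lie in [0, 1/2] (n1) versus (1/2, 1] (n2)."""
    n1 = sum(1 for x in xs if x <= 0.5)
    n2 = len(xs) - n1
    alpha = (2 * n1 * n2 + n2 ** 2) / (n1 ** 2 + n2 ** 2 + 4 * n1 * n2)
    return {0.0: alpha, 1.0: 1 - alpha}

print(mechanism_2([0.1, 0.1, 0.8, 0.8, 0.8, 0.8, 0.8]))  # {0.0: ~0.652, 1.0: ~0.348}
```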
**Theorem 8**.: _Mechanism 2 satisfies 2-UFS in expectation._
### Egalitarian Welfare
We now provide some results on egalitarian welfare. Specifically, we give a randomized, strategyproof mechanism which maximizes egalitarian welfare subject to the constraints of 2-IFS and 2-UFS in expectation.
**Randomized Egalitarian Welfare mechanism**
* If all agents are in \([0,\frac{1}{2}]\), place the facility at \(1\).
* If all agents are in \((\frac{1}{2},1]\), place the facility at \(0\).
* Otherwise, place the facility at \(0\) with probability \(\frac{1}{2}\) and at \(1\) with probability \(\frac{1}{2}\).
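A direct transcription of this mechanism into code (returning the lottery over \(\{0,1\}\)) might look as follows:

```python
def randomized_egalitarian(xs):
    """Sketch of the Randomized Egalitarian Welfare mechanism: a lottery
    over facility locations {0, 1}."""
    if all(x <= 0.5 for x in xs):
        return {1.0: 1.0}               # all agents on the left half
    if all(x > 0.5 for x in xs):
        return {0.0: 1.0}               # all agents on the right half
    return {0.0: 0.5, 1.0: 0.5}         # agents on both sides
```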
By considering cases, it is easy to see that this mechanism is strategyproof in expectation.
**Proposition 6**.: _Randomized Egalitarian Welfare mechanism is strategyproof in expectation._
Before analyzing the optimality and approximation ratio of this mechanism, we prove a lemma that shows that in the randomized setting, it suffices to consider mechanisms which can only place the facility at \(0\) or \(1\).
**Lemma 2**.: _Consider an arbitrary agent location profile \(x\). For every 2-IFS/UFS randomized mechanism that gives strictly positive probability to a facility placement between \(0\) and \(1\), there exists a 2-IFS/UFS randomized mechanism that only gives positive support to a facility placement at \(0\) or \(1\) that leads to weakly higher expected utility for each agent._
We now proceed to prove that **Randomized Egalitarian Welfare mechanism** is egalitarian welfare-optimal.
**Proposition 7**.: _The **Randomized Egalitarian Welfare mechanism** is optimal for egalitarian welfare and satisfies 2-UFS._
Proof.: The cases where all agents are in \([0,\frac{1}{2}]\) and all agents are in \((\frac{1}{2},1]\) are trivial.
We now examine the case where both intervals have at least one agent. An agent at \(x_{i}\) has \(\frac{1}{2}x_{i}+\frac{1}{2}(1-x_{i})=1/2\) expected distance from the facility, hence this mechanism satisfies 2-UFS in expectation. By Lemma 2, it suffices to only consider mechanisms which can only place the facility at 0 or 1. Suppose that instead of having \(\frac{1}{2}\) probability of placing the facility at either endpoint, we place the facility at \(1\) with \(\frac{1}{2}+p\) probability and at \(0\) with \(\frac{1}{2}-p\) probability, where \(p\in(0,\frac{1}{2}]\). The expected utility of the rightmost agent is \(x_{n}(\frac{1}{2}-p)+(1-x_{n})(\frac{1}{2}+p)=\frac{1}{2}+p(1-2x_{n})<\frac{1}{2}\). By a symmetric argument, if the facility was placed at \(1\) with \(\frac{1}{2}-p\) probability and at \(0\) with \(\frac{1}{2}+p\) probability, the expected utility of the leftmost agent would be strictly less than \(\frac{1}{2}\). Hence, our mechanism is optimal in this case.
In other words, the approximation ratio of this mechanism for egalitarian welfare is \(1\). Recall that the price of fairness can be interpreted as the approximation ratio of the respective optimal mechanism that satisfies the fairness constraint. This leads us to the following corollary.
**Corollary 2**.: _In the randomized setting, the price of fairness of 2-UFS for egalitarian welfare is \(1\)._
This is in stark contrast to the deterministic setting where the respective price of fairness is \(n-1\).
### 2-Ifs
We now analyze utilitarian welfare, beginning with the axiom of 2-IFS. Consider the randomized mechanism below which maximizes the utilitarian welfare subject to the 2-IFS constraint:
**2-IFS Randomized mechanism**
* If \(\sum_{i=1}^{n}x_{i}=\frac{n}{2}\), place the facility at \(0\) with probability \(\frac{1}{2}\) and at \(1\) with probability \(\frac{1}{2}\).
* If \(\sum_{i=1}^{n}x_{i}>\frac{n}{2}\),
* If \(x_{1}\geq\frac{1}{2n}\), place the facility at \(0\) with probability \(1\).
* If \(x_{1}<\frac{1}{2n}\), place the facility at \(0\) with probability \(1-\alpha\), and at \(1\) with probability \(\alpha\), where \(\alpha=\frac{1-2nx_{1}}{2n(1-2x_{1})}\).
* If \(\sum_{i=1}^{n}x_{i}<\frac{n}{2}\),
* If \(x_{n}\leq 1-\frac{1}{2n}\), place the facility at \(1\) with probability \(1\).
* If \(x_{n}>1-\frac{1}{2n}\), place the facility at \(0\) with probability \(1-\beta\), and at \(1\) with probability \(\beta\), where \(\beta=\frac{1-2nx_{n}}{2n(1-2x_{n})}\).
The intuition behind this mechanism is as follows. When \(\sum_{i=1}^{n}x_{i}=\frac{n}{2}\), the facility locations \(0\) and \(1\) are tied in terms of maximizing utilitarian welfare, and placing the facility at either location with probability \(\frac{1}{2}\) achieves 2-IFS in expectation. When \(\sum_{i=1}^{n}x_{i}>\frac{n}{2}\), the optimal facility location is \(0\), so the mechanism places the facility there if doing so does not violate 2-IFS for any agent; otherwise it also places the facility at \(1\) with the minimum probability that ensures 2-IFS for all agents. The case where \(\sum_{i=1}^{n}x_{i}<\frac{n}{2}\) is symmetric.
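The following sketch transcribes the mechanism, returning the probability of placing the facility at \(1\) (the complementary mass goes to \(0\)):

```python
def two_ifs_randomized(xs):
    """Sketch of the 2-IFS Randomized mechanism: probability of placing
    the facility at 1; the remaining mass is placed at 0."""
    n, xs = len(xs), sorted(xs)
    total = sum(xs)
    if total == n / 2:
        return 0.5
    if total > n / 2:                   # optimal deterministic location is 0
        x1 = xs[0]                      # leftmost agent
        if x1 >= 1 / (2 * n):
            return 0.0
        return (1 - 2 * n * x1) / (2 * n * (1 - 2 * x1))
    xn = xs[-1]                         # optimal deterministic location is 1
    if xn <= 1 - 1 / (2 * n):
        return 1.0
    return (1 - 2 * n * xn) / (2 * n * (1 - 2 * xn))
```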
Our proof of the mechanism's welfare-optimality is based on its intuition.
**Lemma 3**.: _2-IFS Randomized mechanism is optimal for utilitarian welfare amongst all randomized mechanisms satisfying 2-IFS in expectation._
We now prove a tight, constant approximation ratio for this mechanism.
**Theorem 9**.: _2-IFS Randomized mechanism has an approximation ratio for utilitarian welfare of \(\frac{12}{11}\approx 1.091\)._
This leads to the following price of fairness result for 2-IFS.
**Corollary 3**.: _In the randomized setting, the price of fairness of 2-IFS for utilitarian welfare is \(\frac{12}{11}\approx 1.091\)._
### 2-Ufs
We now move to analyze the axiom of 2-UFS in the context of utilitarian welfare. As in the previous subsection, we begin by describing a randomized mechanism which maximizes the utilitarian welfare subject to the 2-UFS constraint:
**2-UFS Randomized mechanism**
* Order the \(m\)**unique** agent locations so that \(x_{1}\) is the smallest agent location and \(x_{m}\) is the largest agent location.
* Let \(S_{1},\ldots,S_{m}\) denote the groups of agents at the \(m\) unique agent locations.
* If \(\sum_{i=1}^{m}|S_{i}|x_{i}=\frac{n}{2}\), place the facility at \(0\) with probability \(\frac{1}{2}\) and at \(1\) with probability \(\frac{1}{2}\).
* If \(\sum_{i=1}^{m}|S_{i}|x_{i}>\frac{n}{2}\),
* Let \(k\) denote the index of the largest unique agent location satisfying \(x_{k}<\frac{1}{2}\).
* For \(i\) in \(\{1,\ldots,k\}\), set \(\alpha_{i}=\frac{|S_{i}|-2nx_{i}}{2n(1-2x_{i})}\).
* Letting \(\alpha=\max\{\alpha_{1},\ldots,\alpha_{k}\}\), place the facility at \(0\) with probability \(1-\alpha\) and at \(1\) with probability \(\alpha\).
* If \(\sum_{i=1}^{m}|S_{i}|x_{i}<\frac{n}{2}\),
* Let \(k\) denote the index of the smallest unique agent location satisfying \(x_{k}>\frac{1}{2}\).
* For \(i\) in \(\{k,\ldots,m\}\), set \(\alpha_{i}=\frac{|S_{i}|-2nx_{i}}{2n(1-2x_{i})}\).
* Letting \(\alpha=\min\{\alpha_{k},\ldots,\alpha_{m}\}\), place the facility at \(0\) with probability \(1-\alpha\) and at \(1\) with probability \(\alpha\).
This mechanism is similar to the 2-IFS Randomized mechanism, but we must now iterate through the groups of agents to find the optimal value of \(\alpha\) that guarantees 2-UFS for all agents. Specifically, if \(\sum_{i=1}^{m}|S_{i}|x_{i}>\frac{n}{2}\), then \(\alpha_{i}\) denotes the smallest probability weight on location \(1\) such that 2-UFS is achieved for \(S_{i}\). Hence by setting \(\alpha\) to be the largest \(\alpha_{i}\), we achieve 2-UFS for all agents.
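A sketch of this mechanism, with the probabilities clamped to \([0,1]\) (a detail the description above leaves implicit), reads:

```python
def two_ufs_randomized(xs):
    """Sketch of the 2-UFS Randomized mechanism: probability of placing
    the facility at 1; the remaining mass is placed at 0."""
    n = len(xs)
    groups = sorted({loc: xs.count(loc) for loc in set(xs)}.items())
    total = sum(size * loc for loc, size in groups)
    if total == n / 2:
        return 0.5
    def alpha_i(loc, size):             # weight on 1 that makes group i tight
        return (size - 2 * n * loc) / (2 * n * (1 - 2 * loc))
    if total > n / 2:                   # optimal location is 0; lift weight on 1
        return max([alpha_i(l, s) for l, s in groups if l < 0.5] + [0.0])
    # total < n / 2: optimal location is 1; cap weight on 1 for right groups
    return min([alpha_i(l, s) for l, s in groups if l > 0.5] + [1.0])
```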
Again, our proof of this mechanism's optimality is based on the aforementioned intuition.
**Lemma 4**.: _2-UFS Randomized mechanism is optimal for utilitarian welfare amongst all randomized mechanisms satisfying 2-UFS in expectation._
Surprisingly, imposing the stronger fairness axiom of 2-UFS as opposed to 2-IFS has a minimal effect on the welfare-optimal mechanism's approximation ratio.
**Theorem 10**.: _2-UFS Randomized mechanism has an approximation ratio of \(\frac{2}{7}(1+2\sqrt{2})\approx 1.09384\)._
From Theorem 10, we have the following corollary.
**Corollary 4**.: _In the randomized setting, the price of fairness of 2-UFS for utilitarian welfare is \(\frac{2}{7}(1+2\sqrt{2})\approx 1.09384\)._
## 6 Extension 1: Proportional Fairness
In our analyses of price of fairness and randomized mechanisms, we have considered 2-IFS and 2-UFS, which give minimum distance guarantees for individual agents and groups of agents at the same location, respectively. One downside of the 2-UFS definition is that agents located near each other but not at the same location are considered to be in separate groups. An axiom which accounts for groups of agents located relatively close to each other is Proportional Fairness (PF), from [1]. As with IFS and UFS, a PF solution may not exist so we define approximate \(\alpha-\)PF as follows:
**Definition 12** (\(\alpha\)-Pf).: _Given a profile of locations \(\mathbf{x}\), a facility location \(y\) satisfies \(\alpha\)-PF if for any set of agents \(S\) within range \(r:=\max_{i\in S}\{x_{i}\}-\min_{i\in S}\{x_{i}\}\),_
\[u(y,x_{i})\geq\frac{|S|}{\alpha n}-r\qquad\forall i\in S.\]
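Since the binding subsets under this definition are contiguous groups of sorted agent locations (enlarging \(S\) to its contiguous closure preserves \(r\) while only increasing \(|S|\)), the condition can be checked by enumerating windows, as in the sketch below for \(\alpha=2\):

```python
def satisfies_2pf(y, xs):
    """Check the 2-PF condition: every group S within range r must be at
    distance at least |S|/(2n) - r from the facility location y."""
    xs, n = sorted(xs), len(xs)
    for i in range(n):
        for j in range(i, n):
            r = xs[j] - xs[i]                    # range of the window
            bound = (j - i + 1) / (2 * n) - r    # |S|/(2n) - r
            if any(abs(y - x) < bound for x in xs[i:j + 1]):
                return False
    return True
```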
Note that \(\alpha-\)PF implies \(\alpha-\)UFS, and therefore also implies \(\alpha-\)IFS.
However, \(\alpha-\)UFS does not imply \(\alpha-\)PF, hence \(\alpha-\)PF is a stronger notion than \(\alpha-\)UFS.
**Lemma 5**.: _For \(\alpha=2\), there exists an \(\alpha-\)UFS facility location \(y\) that does not satisfy \(\alpha-\)PF._
It follows from Proposition 2 that the smallest value of \(\alpha\) for which an \(\alpha\)-PF solution exists for all location profiles is greater than or equal to \(2\). We now show that a 2-PF solution always exists.
**Theorem 11**.: _A 2-PF solution always exists._
Proof Sketch.: We prove the theorem by induction on the number of groups. Suppose we have \(m\) groups of agents where each group consists of agents at the same location. When \(m=1\), i.e, all the agents are at the same point, \(2\)-PF existence follows from \(2\)-UFS existence. Now, we assume for any \(k\) groups of agents where \(k\leq m\) that there exists a \(2\)-PF solution and we extend that for \(k=m+1\).
Suppose we have \(m+1\) groups of agents placed at centers \(c_{1},c_{2},\ldots,c_{m+1}\), ordered from left to right. Place a ball \(B_{i}\) with radius \(\frac{|S_{i}|}{2n}\) around each center \(c_{i}\). We consider several cases based on the intersection of the balls. If all the balls are disjoint, it can be shown that there exists a point \(y\in[0,1]\) which lies outside the union \(B_{1}\cup\cdots\cup B_{m+1}\) and satisfies the \(2\)-PF inequality. If two balls, say \(B_{1}\) and \(B_{2}\), intersect each other, they are merged, with their agents placed at a new center \(c_{1}^{\prime}\). We then set a ball \(B_{1}^{\prime}\) centered at \(c_{1}^{\prime}\) with radius \(\frac{|S_{1}|}{2n}+\frac{|S_{2}|}{2n}\). Now we have \(m\) groups of agents placed at \(c_{1}^{\prime},c_{3},\ldots,c_{m+1}\), and from our inductive assumption, a \(2\)-PF solution exists.
Thus, 2-PF is the optimal approximation of PF for the obnoxious facility location problem.
## 7 Extension 2: Hybrid Model
In the hybrid model, agents either want to be located close to the facility (as in the classic facility location model), or wish to be located far away from the facility (as in our obnoxious facility location model). Such a model has several real-world applications such as the placement of schools or religious places of worship; families with children or religious people would want to live near the facility for convenience, whilst others would want to be far from the facility due to the increased noise and traffic. In our model, we say an agent is type \(C\) if it is a classic agent and prefers to be closer to the facility, and an agent is type \(O\) if it is an obnoxious agent and prefers to be further away from the facility.5 We denote the set of classic agents as \(N_{C}\) and the set of obnoxious agents as \(N_{O}\).
Footnote 5: Our model is based on the model presented by Feigenbaum and Sethuraman (2015).
A type \(C\) agent has utility \(u(y,x_{i})=1-d(y,x_{i})\) and a type \(O\) agent has utility \(u(y,x_{i})=d(y,x_{i})\).6
Footnote 6: This choice of utility function is adapted from Feigenbaum and Sethuraman (2015); Aziz _et al._ (2021). We refer the reader to those papers for a justification of the utility model.
When defining IFS and UFS in the hybrid model, we use definitions consistent with Aziz _et al._ (2021) and this paper. Our definition of Hybrid-Individual Fair Share (H-IFS) provides an appropriate distance guarantee for each agent.
**Definition 13** (Hybrid-Individual Fair Share (H-IFS)).: _Given a profile of locations \(x\), a facility location \(y\) satisfies Hybrid-Individual Fair Share (H-IFS) if for all \(i\in N_{C}\),_
\[u(y,x_{i})\geq\frac{1}{n}\quad\text{or, equivalently,}\quad d(y,x_{i})\leq 1- \frac{1}{n},\]
_and for all \(i\in N_{O}\),_
\[u(y,x_{i})\geq\frac{1}{2n}\quad\text{or, equivalently,}\quad d(y,x_{i})\geq\frac{1}{2n}.\]
When defining UFS, we aim to capture proportional fairness guarantees for subsets of agents of the same type at the same location. Consider every subset \(S\subseteq N\) of agents at the same location, where \(S=S_{C}\cup S_{O}\). \(S_{C}\) denotes the agents of \(S\) that are of type \(C\), and \(S_{O}\) denotes the agents of \(S\) that are of type \(O\).
**Definition 14** (Hybrid-Unanimous Fair Share (H-UFS)).: _Given a profile of locations \(x\) such that a subset of \(S_{j}\subseteq N\) agents7 share the same type and location, a facility location \(y\) satisfies Hybrid-Unanimous Fair Share (H-UFS) if for all \(i\in S_{C}\),_
Footnote 7: \(j\in\{C,O\}\)
\[u(y,x_{i})\geq\frac{|S_{C}|}{n}\quad\text{or, equivalently,}\quad d(y,x_{i}) \leq 1-\frac{|S_{C}|}{n},\]
_and for all \(i\in S_{O}\),_
\[u(y,x_{i})\geq\frac{|S_{O}|}{2n}\quad\text{or, equivalently,}\quad d(y,x_{i}) \geq\frac{|S_{O}|}{2n}.\]
**Example 1**.: Suppose there are \(n-k\) type \(C\) agents and \(k\) type \(O\) agents, all at the same location. The facility needs to be between \(\frac{k}{2n}\) and \(\frac{k}{n}\) distance from the group.
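A quick numeric check of Example 1, instantiating the two bounds of Definition 14; the helper below is ours, purely for illustration.

```python
# Hedged sketch: the H-UFS window for Example 1, where n - k classic agents
# and k obnoxious agents share one location. Definition 14 forces the facility
# to be at least |S_O|/(2n) and at most 1 - |S_C|/n away from the group.

def hufs_window(n, k):
    lower = k / (2 * n)        # obnoxious agents: d >= |S_O| / (2n)
    upper = 1 - (n - k) / n    # classic agents:   d <= 1 - |S_C| / n = k / n
    return lower, upper

print(hufs_window(n=10, k=4))  # (0.2, 0.4), i.e. between k/(2n) and k/n
```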
Although our definitions have a discrepancy in utility functions between the classic and obnoxious agents, we have specified them to be consistent with related literature and to be the optimal bounds such that a solution is guaranteed to exist. Furthermore, existence of a H-UFS solution under our definition implies existence of a solution under a weaker definition where a set \(S_{C}\) of classic agents at the same location instead have a utility guarantee of \(\frac{|S_{C}|}{2n}\).
**Theorem 12**.: _Under the hybrid model, a H-UFS solution always exists._
## 8 Discussion
In this paper we have formulated proportional fairness axioms for the obnoxious facility location problem, and given welfare-optimal deterministic and randomized mechanisms satisfying these axioms. In both the deterministic and randomized setting, we prove tight price of fairness bounds for 2-IFS and 2-UFS, for the objectives of utilitarian and egalitarian welfare. These correspond to the approximation ratios of the respective welfare-optimal mechanisms. For the deterministic utilitarian welfare-optimal mechanisms, we also prove existence of pure \(\epsilon\)-Nash equilibria and linear \(\epsilon\)-price of anarchy bounds. We also give a randomized, strategyproof mechanism satisfying 2-UFS with a constant utilitarian approximation ratio.
There are several future directions to this work, such as those stemming from our proposed extensions of 2-PF and the hybrid model. For example, the price of anarchy and price of fairness for these two extensions could be computed. We could also supplement our price of anarchy results
with bounds on the price of stability. Further extensions to the price of fairness results could involve different objective and utility functions. It is also worth analyzing the Nash equilibria of the randomized utilitarian welfare-optimal mechanisms, as they are not strategyproof in expectation. Although our proportional fairness axioms are incompatible with strategyproofness in the deterministic setting, we may consider weaker notions of strategyproofness which may be compatible with our fairness properties.
## Acknowledgements
We would like to acknowledge the helpful feedback and suggestions from Minming Li.
|
2301.12522 | Optimal Service Provisioning in IoT Fog-based Environment for QoS-aware
Delay-sensitive Application | This paper addresses the escalating challenges posed by the ever-increasing
data volume, velocity, and the demand for low-latency applications, driven by
the proliferation of smart devices and Internet of Things (IoT) applications.
To mitigate service delay and enhance Quality of Service (QoS), we introduce a
hybrid optimization of Particle Swarm (PSO) and Chemical Reaction (CRO) to
improve service delay in FogPlan, an offline framework that prioritizes QoS and
enables dynamic fog service deployment. The method optimizes fog service
allocation based on incoming traffic to each fog node, formulating it as an
Integer Non-Linear Programming (INLP) problem, considering various service
attributes and costs. Our proposed algorithm aims to minimize service delay and
QoS degradation. The evaluation using real MAWI Working Group traffic data
demonstrates a substantial 29.34% reduction in service delay, a 66.02% decrease
in service costs, and a noteworthy 50.15% reduction in delay violations
compared to the FogPlan framework. | Soroush Hashemifar, Amir Rajabzadeh | 2023-01-29T19:34:14Z | http://arxiv.org/abs/2301.12522v2 | HPCDF: Optimal Service Provisioning in IoT Fog-based Environment for QoS-aware Delay-sensitive Application
###### Abstract
Due to the explosive growth of smart devices, 5G, and the Internet of Things (IoT) applications in recent years, the volume and velocity of generated data, and consequently, delay-sensitive applications are increasing endlessly. This paper aims to improve the service delay and Quality of Service (QoS) by introducing HPCDF (Hybrid PSO-CRO Delay-improved for FogPlan) - an offline QoS-aware framework to deploy and release fog services dynamically. The proposed method provisions, i.e., deploy and release fog services to reduce service delay, based on the aggregated incoming traffic to each fog node. We formulate a cost function as an Integer Non-Linear Programming (INLP) problem by considering each service attributes, including required resources and associated traffic. This problem integrates storage, processing, deployment, communication costs, delay violation, high fog utilization reward, high traffic nodes cost, and service delay penalty. A hybrid binary PSO-CRO (Particle Swarm and Chemical Reaction Optimization) algorithm is proposed to achieve the lowest service delay and QoS loss to address this problem. The evaluation is performed on real-world traffic traces, provided by MAWI Working Group, under three different experiments to study the impact of various parameters of the hybrid binary PSO-CRO algorithm and the proposed framework on service delay. The evaluation results reveal that our proposed algorithm reduces service delay by 29.34%, service cost by 66.02%, and violates the delay 50.15% less in comparison to FogPlan framework.
Service Provisioning; Quality of Service; Service Delay; Fog Computing; Internet of Things (IoT)
## 1 Introduction
In recent years, the Internet of Things (IoT) has emerged to facilitate various applications, such as smart homes, wearable devices, smart vehicles, and connected healthcare. Consequently, the amount of data generated by IoT devices is increasing massively. Looking to the future, a forecast from International Data Corporation (IDC) estimates that there will be 41.6 billion connected IoT devices generating 79.4 zettabytes (ZB) of data in 2025 [1]. Organizations and businesses are intensely investing in IoT solutions to capture and store massive amounts of data to help them get insights. These insights would help them develop new products, accelerate decision-making processes, improve processes, reduce costs, and detect anomalies before they happen. The challenge lies in their ability to use data streams and react immediately to critical events.
Knowing that edge devices usually do not have the computational and storage capabilities for local processing of generated data, the traditional approach is to transfer data to the cloud, which has massive storage and processing resources. This approach seems to be costly concerning latency, network bandwidth, storage, and energy use. The cloud seems not to satisfy delay and Quality of Service (QoS) requirements of delay-sensitive IoT applications due to its long distance from different terminal devices and even privacy issues [2].
Taking compute, storage, and networking resources closer to data sources, i.e., IoT devices, not only addresses the problems mentioned earlier [3] but also enhances user-experience [4]. Fog computing [5], as a geo-distributed computing paradigm, can put this idea into practice and results in a tangible reduction in delay, bandwidth usage, energy consumption, and improves service quality. Notice that it is not replacing cloud systems; it is just a complementary computing paradigm. These fog services are provided as virtual machines and containers. While there are many challenges associated with a fog computing environment, we will only focus on two of these challenges. More specifically, in the following sections, we will study the service provisioning problem for better fog resource management, improving service delay, and QoS.
Service provisioning is a fundamental problem in fog computing due to fog nodes' limited computing capacity and the rapid development of delay-sensitive applications. On the other hand, fog can increase service delay and energy consumption if the service provisioning on cloud servers or fog nodes is not well-balanced, especially when each service's resource usage varies over
time. Thus, a dynamic service provisioning mechanism is crucial to deploy services to computing resources efficiently. There exist two families of application services [6]:
* Delay-sensitive: A service with a strict delay requirement and must be deployed closer to IoT devices, i.e., the edge of the fog network.
* Availability-sensitive: A delay-tolerant service that can be deployed further from IoT devices, i.e., other fog nodes or on the cloud (to reduce resource cost).
However, if the fog service is deployed too far from IoT devices, QoS degradation results in increased transmission delay [7]. The provisioning task can be performed statically. In this case, it is essential to choose a fair number of services to deploy, which results in resource cost reduction, and prevents delay from surpassing the threshold. On the other hand, incoming traffic to fog nodes changes over time, which causes changes in services' resource usage [6], so the provisioning strategy is unable to adapt itself to these changes. A dynamic fog service placement to deploy and release fog services dynamically can address the mentioned problem.
This paper formulates the fog service placement process as an optimization problem using Integer Non-Linear Programming (INLP). Since dynamic fog service placement is an NP-hard problem, a meta-heuristic algorithm is proposed based on hybrid binary particle swarm optimization (BPSO) [8] and chemical reaction optimization (CRO) [9]. The algorithm aims to minimize the average service delay and reduce the overall QoS loss subject to the computing and storage capacities of the resources. To achieve this objective, we defined a cost function integrating storage, processing, deployment, and communication costs, together with delay violation and wrong-provisioning penalties. Since meta-heuristic algorithms are slow, we run the algorithm offline, which allows us to determine a placement that remains in effect for a period of time. During this period, the Fog Service Controller (FSC) provisions services according to the last placement until the algorithm determines a new one. To make this approach online, one could predict the requests and provision them before running the algorithm in the FSC. We have left this as future work and describe it in detail in the last section of the paper.
To evaluate the HPCDF (Hybrid PSO-CRO Delay-improved for FogPlan), three different experiments are performed to investigate the impact of hyper-parameters of the framework and the hybrid binary PSO-CRO1 (HBPCRO) algorithm. An additional experiment is performed to compare the proposed framework's optimality against the FogPlan framework methods [6]. All the experiments are performed on real-world traffic traces, provided by MAWI2 Working Group, which contains 2 hours of traffic split into chunks of 15 minutes intervals.
Footnote 1: Particle Swarm- Chemical Reaction Optimization
Footnote 2: Measurement and Analysis on the WIDE Internet
The main contributions of this paper are summarized as follows:
* _QoS-aware framework_: We have included QoS violation as a part of the cost function, and a penalty is assigned to it. The meta-heuristic algorithm tries to keep QoS violation less than the customer expected threshold for a certain fraction of the time.
* _Optimal service provisioning_: We have proposed a hybrid binary PSO-CRO algorithm for fog service provisioning. This hybrid meta-heuristic algorithm has been proved to find the optimal optimization solution.
* _Minimum service information is required_: We consider only the aggregated incoming service traffic to fog nodes; no additional service information is required for the provisioning task.
* _Offline approach_: Unlike the state-of-the-art approaches, HPCDF performs in an offline manner. Hence it has enough time to explore the search space. It also adapts itself to changes that occur on incoming traffic to fog nodes.
The rest of this paper is organized as follows. Section 2 presents a brief of state-of-the-art methods that have been proposed for fog service provisioning. The service provisioning problem is formulated in Section 3. Then, the proposed approach is described in Section 4. Section 5 provides the evaluation settings and results with numerical simulations. Finally, Section 6 and 7 give the conclusion and discussion and also state the future work.
## 2 Related work
As fog computing has emerged as a relatively new paradigm, many open problems exist in it. One of these open problems is optimizing the placement of tasks on available resources, i.e., service provisioning problems. Many studies have been done on solving task scheduling in heterogeneous environments.
Authors in Yousefpour et al. [6] have proposed an application-independent framework for provisioning stateless service in a hybrid cloud-fog environment, named FogPlan. It has also proposed two greedy online approaches. MinViol deploys services with high traffic and releases them if their traffic decreases. On the other hand, MinCost deploys services in order to reduce costs and increase revenue. Each of these algorithms is invoked periodically to update the placement of services. FogPlan does not need any information about the mobility of users or spec of IoT devices. The experiments show that MinCost performs faster than MinViol, but MinViol has lower service delay and delay violation. Authors in Lera et al. [10] proposed a Graph-Partition-based strategy for deployment of applications and services. Their proposed method maximizes service availability and QoS levels by minimizing network latency between unrelated services. First, they map the application to the fog community using a greedy first-fit decreasing approach to increase service availability. Deployed application services are then assigned to fog nodes
within that community to prevent deadline violations. If no resources are available on any node, the application will be deployed to the cloud. Donassolo et al. [11] proposed a service provisioning mechanism based on a divide-and-conquer approach in foggy environments. The goal is to minimize deployment costs. First, decompose the problem into multiple solution components. Each solution component then determines its placement using a greedy strategy with respect to requirements.
Huang et al. [12] formulates the general multi-replica service placement and data flow management problem and models it as a multi-objective scheduling problem. This method was developed based on Pareto-ACO (P-ACO). The fog service placement problem was introduced by Natesha et al. [13] who applied a genetic algorithm based on the elite approach to solve it. Although they optimized energy consumption, service time, and cost, the proposed scheme did not consider the fog layer's resource usage. Jafari et al. [14] used a meta-heuristic approach to simultaneously optimize latency and power consumption in the Cloud-Fog-IoT ecosystem. The authors present two meta-heuristic approaches, including Bees Algorithm (BA) and Non-Dominant Genetic Reordering Algorithm (NSGA-II), to solve the service placement problem (SPP) in the form of a multi-objective problem. BA and NSGA-II are combined with a robust differential evolution method called Minimax Differential Evolution (MMDE) to improve convergence. Simulations show that NSGA-II performs better in terms of latency and power compared to BA. Partial Offloading to Multiple Helper (POMH) is a task offloading technique that provides parallel computation of tasks and improves task latency. Tasks are split into subtasks and transferred to different fog nodes in parallel, thus reducing the total computation time [15]. The authors of [15] performed POMH task offloading using both horizontal task offloading to adjacent fog nodes and vertical offloading to the cloud. They proposed a broad framework for minimizing service delivery delays through adaptive task offload mechanisms.
Wu et al. [16] deals with SPP, meeting different QoS requirements in terms of cost and deadlines. SPP is formulated as a multi-objective optimization problem with tradeoffs between different objectives while maintaining a set of Pareto optimizations. They developed a genetic algorithm as a meta-heuristic approach based on a common parallel architecture and multiple elite operators to improve the SPP solution. To overcome the limitations of fog computing, the proposed scheme is equipped with a two-way trust management mechanism to guarantee QoS and reliability simultaneously.
The state-of-the-art works reviewed in this section are summarized in Table 1. However, to the best of our knowledge, none of the previous research works has taken service delay, deadline violation, and financial cost, along with utilization, into account for more efficient service provisioning in the cloud-fog environment.
## 3 System Model and Objective Statement
### System Model
In this section, we first define the system model of the proposed algorithm. Then the expressions for the queuing delay, service delay, and constraints are defined. Finally, the optimization problem is defined through INLP.
The FSC, as part of the Fog Service Provider (FSP), is responsible for allocating resources to users' applications. This decision is made in terms of the number of users' requests as well as the available fog and cloud resources. It is worth mentioning that an IoT application is composed of several interconnected components, called microservices. Such a service is mostly implemented as a container or virtual machine with limited resource requirements. The purpose is to place a set of services on heterogeneous fog
| Paper | Year | Delay | Deadline violation | Service cost | Utilization | Environment | Method |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Yousefpour et al. [6] | 2019 | ✓ | ✓ | ✓ | | cloud-fog | Greedy |
| Lera et al. [10] | 2019 | ✓ | ✓ | | | cloud-fog | Greedy |
| Donassolo et al. [11] | 2019 | | | ✓ | | fog | Greedy |
| Huang et al. [12] | 2020 | ✓ | | ✓ | | cloud-fog | Ant Colony |
| Natesha et al. [13] | 2021 | ✓ | | ✓ | | fog | Genetic Algorithm |
| Jafari et al. [14] | 2021 | ✓ | | | | cloud-fog | Bees Algorithm and NSGA-II |
| Tran-Dang et al. [15] | 2021 | ✓ | | | | cloud-fog | Greedy |
| Wu et al. [16] | 2022 | ✓ | ✓ | ✓ | ✓ | fog | Genetic Algorithm |
| Our method | 2022 | ✓ | ✓ | ✓ | ✓ | cloud-fog | HBPCRO |

Table 1: Summary of various related work
nodes and cloud servers to minimize a cost function under the constraints discussed further in this section, and to increase the number of satisfied requests according to the QoS level prescribed in the service-level agreement (SLA). The FSC considers the services' response delay to decide whether to migrate a service or not. As the service traffic increases, the service delay grows. At this moment, the FSP sends a container deployment request to a fog node, which forwards the service traffic to the newly deployed service. Fig. 1 shows an example of interactions inside the assumed hybrid fog-cloud environment.
In Fig. 1, we assumed that each fog node could only process two services at once. In this scenario, the third fog node cannot process the fourth service, so it is deployed on the cloud server. On the other hand, arriving requests for the first service are forwarded to newly deployed instances of the first service.
#### 3.1.1 Assumptions and decision variables
\(F\) and \(C\) denote fog nodes and cloud servers throughout this paper, respectively, and \(S\) denotes the services. Moreover, \(q_{s}\) and \(th_{s}\) are used to show the QoS level and service delay threshold, respectively. FSP has to guarantee that delay will be lower than \(th_{s}\) for \(q_{s}\) percentage of the time.
We determine the placement of services on fog nodes and cloud servers at any moment through both fog placement (\(P\)) and cloud placement (\(Q\)) matrices as follows,
\[P(s,f,t)=\begin{cases}1&\text{if service s is deployed on node f}\\ 0&\text{otherwise}\end{cases} \tag{1}\]
\[Q(s,k,i)=\begin{cases}1&\text{if service s is deployed on server k}\\ 0&\text{otherwise}\end{cases} \tag{2}\]
Each row of matrices \(P\) and \(Q\) shows the service number. Each column of matrix \(P\) indicates the fog node number and each column of matrix \(Q\) indicates the cloud server number. All notations are shown in Table 2.
Figure 1: Service deployment in a hybrid fog-cloud environment
#### 3.1.2 Required processing resources
The number of instructions that arrive at a fog node per second depends on the processing capacity the service requires per request, i.e., \(R_{proc}(s)\), as well as on the amount of incoming traffic to the fog node due to service residency, i.e., \(T^{fog}(s,f,t)\). \(R_{proc}(s)\) and \(T^{fog}(s,f,t)\) are measured and provided by the FSP monitoring system. In fact, the FSP's traffic monitor module monitors the total incoming traffic rate of IoT requests to fog nodes through traffic monitoring agents such as a Software-Defined Networking (SDN) controller. Commercial SDN controllers such as OpenDayLight and ONOS have comprehensive traffic-monitoring APIs to monitor application-level traffic. Thus, we have formulated the incoming workload (in instructions per second) of service \(s\) on fog node \(f\), and on cloud server \(k\), as follows,
\[\psi^{fog}(s,f,t)\ =\ R_{proc}(s)\ T^{fog}(s,f,t) \tag{3}\]

\[\psi^{cloud}(s,k,t)\ =\ R_{proc}(s)\sum_{f\in\{f^{\prime}\,|\,h_{s}(f^{\prime})=k\}}T^{fog}(s,f,t) \tag{4}\]

where \(h_{s}(f)\) denotes the cloud server to which fog node \(f\) forwards the traffic of service \(s\) when the service is not hosted in the fog.

#### 3.1.3 Allocated processing resources

The processing capacity allocated to service \(s\) on fog node \(f\) or cloud server \(k\) is proportional to the processing requirement \(R_{proc}(s)\) associated with the service \(s\).
\[\Gamma^{fog}(s,f,t)\ =\frac{R_{proc}(s)}{\sum_{s^{\prime}}P\left(s^{\prime},f,t \right)R_{proc}(s^{\prime})}\ M^{fog}_{proc}(f) \tag{5}\]
\[\Gamma^{cloud}(s,k,t)\ =\ \frac{R_{proc}(s)}{\sum_{s^{\prime}}Q\left(s^{\prime},k,t \right)R_{proc}(s^{\prime})}\ M^{cloud}_{proc}(k) \tag{6}\]
where, \(\ M^{fog}_{proc}(f)\) and \(\ M^{cloud}_{proc}(k)\) denote the processing capacity of the fog and cloud servers, respectively.
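The proportional allocation in (5)-(6), together with the workload bound in (7), can be sketched as follows; the names and numbers are illustrative, not the paper's code.

```python
# A minimal sketch of the proportional capacity allocation in Eqs. (5)-(6):
# each service deployed on a node receives a share of the node's MIPS
# proportional to its per-request processing requirement R_proc.

def allocated_capacity(deployed, r_proc, node_mips):
    """deployed: list of service ids on one fog node (or cloud server);
    r_proc: dict service id -> required MI per request;
    returns dict service id -> allocated MIPS (Gamma)."""
    total = sum(r_proc[s] for s in deployed)
    return {s: r_proc[s] / total * node_mips for s in deployed}

r_proc = {"s1": 50, "s2": 150}        # MI per request, cf. Sec. 5.1.2
gamma = allocated_capacity(["s1", "s2"], r_proc, node_mips=1000)
print(gamma)                          # {'s1': 250.0, 's2': 750.0}

# Constraint (7) then requires the arriving workload psi = R_proc * traffic
# (requests/s) to stay below the allocated share:
traffic = {"s1": 4.0, "s2": 3.0}
ok = all(r_proc[s] * traffic[s] < gamma[s] for s in gamma)
print(ok)                             # True: 200 < 250 and 450 < 750
```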
#### 3.1.4 Service delay
Service delay defines the period between sending the service request and receiving the response by the IoT node. This delay includes propagation delay, the delay associated with waiting in the queue of fog and cloud servers, processing delay, and round-trip transmission delay between IoT and fog nodes. The service delay, \(\ d_{service}(s,f,t)\), is formulated as described in [6].
### Constraints

#### 3.2.1 Processing, memory and storage capacities
Each fog node can serve a limited number of services due to limited resources. The number of incoming instructions to a server for service \(s\) should not exceed the number of instructions dedicated to this service. This constraint encourages the algorithm to choose a placement that does not violate the capacity limitations formulated in (7) and (8).
\[\psi^{fog}(s,f,t)\ <\ \Gamma^{fog}(s,f,t) \tag{7}\]

\[\psi^{cloud}(s,k,t)\ <\ \Gamma^{cloud}(s,k,t) \tag{8}\]
In addition to processing resources, memory and storage resources must be constrained to prevent further resource overload. Therefore, we expect the resource utilization of fog nodes and cloud servers not to exceed the available resources. This condition can be defined as equations 9-12.
\[\sum\nolimits_{s}P\left(s,f,t\right)R_{stor}(s)\ <\ M^{fog}_{stor}(f) \tag{9}\]
\[\sum\nolimits_{s}P\left(s,f,t\right)R_{mem}(s)\ <\ M^{fog}_{mem}(f) \tag{10}\]
\[\sum\nolimits_{s}Q\left(s,k,t\right)R_{stor}(s)\ <\ M^{cloud}_{stor}(k) \tag{11}\]
\[\sum\nolimits_{s}Q\left(s,k,t\right)R_{mem}(s)\ <\ M^{cloud}_{mem}(k) \tag{12}\]
where, \(M^{fog}_{stor}(f)\) and \(M^{cloud}_{stor}(k)\) denote the storage capacity of the fog and cloud servers, respectively. The memory capacities of the fog and cloud servers are denoted by \(M^{fog}_{mem}(f)\) and \(M^{cloud}_{mem}(k)\), respectively. Besides, \(R_{stor}(s)\) and \(R_{mem}(s)\) denote the storage and memory requirements of service \(s\), respectively. In the latter equations, the aggregate required memory and storage of the deployed services must not exceed the available memory and storage capacities on fog nodes and cloud servers.
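As a sketch, the memory and storage constraints (9)-(12) reduce to a per-node feasibility check like the following; the data layout is our own illustration.

```python
# Hedged sketch of the capacity constraints (9)-(12): a placement row is
# feasible on a node only if the summed memory and storage requirements of
# its deployed services stay below the node's capacities.

def placement_feasible(P_row, r_mem, r_stor, m_mem, m_stor):
    """P_row: dict service id -> 0/1 deployment flag for one node;
    r_mem / r_stor: per-service requirements; m_mem / m_stor: capacities."""
    used_mem = sum(r_mem[s] for s, on in P_row.items() if on)
    used_stor = sum(r_stor[s] for s, on in P_row.items() if on)
    return used_mem < m_mem and used_stor < m_stor

row = {"s1": 1, "s2": 1, "s3": 0}
print(placement_feasible(row,
                         r_mem={"s1": 2, "s2": 3, "s3": 4},    # GB
                         r_stor={"s1": 5, "s2": 6, "s3": 7},   # GB
                         m_mem=8, m_stor=15))                  # True: 5 < 8, 11 < 15
```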
#### 3.2.2 Release constraints
If there exists at least a single fog node forwarding the traffic of service \(s\) to cloud server \(k\), i.e., \(\sum_{f\in\{f^{\prime}\,|\,h_{s}(f^{\prime})=k\}}T^{fog}\left(s,f,t\right)>0\), the controller is not able to release the service from that cloud server. This can be formulated as (13).
\[Q(s,k,t)\ =\ \begin{cases}1&\text{if}\ \sum_{f\in\{f^{\prime}\,|\,h_{s}(f^{\prime})=k\}}T^{fog}(s,f,t)\ >\ 0\\ 0\ \text{or}\ 1&\text{otherwise}\end{cases} \tag{13}\]
This means service \(s\) can be released from the cloud only if there is no incoming request to the cloud for it. This constraint ensures that release from cloud servers occurs safely.
It is worth mentioning that the equations 3-13 are defined as discussed in [6].
### Objective statement
In this subsection, the total cost function is formulated with respect to the constraints, and its various components are introduced. Following the machine learning convention, where the cost is the average of the losses, we use the term loss for most components of the cost.
#### 3.3.1 Cost function
Our objective focuses on maximizing the QoS, in other words, on minimizing the service delay and delay violation, as well as on improving the utilization of servers. We have formulated the total cost function as follows,
\[\begin{split} P1:\ \text{minimize}\ &\left(\frac{1}{|F|\times|S|}\right)\times\left[\sum_{f\in F}\sum_{s\in S}\left(\begin{matrix}L^{fog}_{proc}+L^{fog}_{stor}+L^{fog}_{violation}\\ +L^{FC}_{comm}(s,f,h_{s}(f),t)\\ +L^{fog}_{dep}+L^{fog}_{delay}\end{matrix}\right)\right]\\ &+\left(\frac{1}{|F|^{2}\times|S|}\right)\times\left[\sum_{f\in F}\sum_{f^{\prime}\in F\setminus\{f\}}\sum_{s\in S}L^{FF}_{comm}\left(s,f,f^{\prime},t\right)\right]\\ &+\left(\frac{1}{|F|\times|S|}\right)\times\sum_{f\in F}\sum_{s\in S}L^{fog}_{utilization}(s,f,t)\\ &+\left(\frac{1}{|C|\times|S|}\right)\times\left[\sum_{k\in C}\sum_{s\in S}\left(L^{cloud}_{proc}+L^{cloud}_{stor}\right)\right]\end{split} \tag{14}\]
where \(|F|\), \(|S|\), and \(|C|\) denote the sizes of the sets of fog nodes, services, and cloud servers. The first three terms denote losses related to fog nodes, and the remaining term denotes the losses of cloud servers. In (14), \(L^{fog}_{proc}\), \(L^{fog}_{stor}\), \(L^{cloud}_{proc}\), and \(L^{cloud}_{stor}\) denote the processing and storage losses in fog and cloud, respectively. \(L^{FC}_{comm}\) measures the loss of communication between fog nodes and cloud servers, and \(L^{fog}_{violation}\), \(L^{fog}_{dep}\), and \(L^{fog}_{delay}\) denote delay violation, deployment, and service delay losses in fog, respectively. \(L^{fog}_{utilization}\) measures the loss of low resource utilization of fog nodes, and \(L^{FF}_{comm}\) denotes the communication loss between fog nodes because of offloading a service onto another fog node. All losses are described in [6], except for \(L^{fog}_{utilization}\), \(L^{fog}_{dep}\), and \(L^{fog}_{delay}\).
#### 3.3.2 High fog utilization reward
We encourage the service provisioning method to find the best matching resource for each service in order to achieve the optimal placement, i.e., least service delay as well as improved resource utilization and QoS. So we consider the improvement of fog utilization as a reward for the meta-heuristic algorithm [22]. We have considered this reward to encourage the algorithm to utilize fog nodes as much as possible. The utilization of a fog node \(f\) is calculated as stated in [23], proposed in (15).
\[L^{fog}_{utilization}(s,f,t)\ =-\ \alpha\ \times\frac{\sum_{s}P\left(s,f,t \right)\ \times\ R_{mem}(s)}{M^{fog}_{mem}(f)}\ -\ (1-\alpha)\times\frac{\sum_{s}P\left(s,f,t \right)\ \times\ R_{stor}(s)}{M^{fog}_{stor}(f)} \tag{15}\]
where \(\alpha\), called the "impact coefficient", adjusts the balance between the resources. The impact coefficient is a real number in the range _[0, 1]_. In the above equation, \(R_{mem}\) returns the required memory capacity for a specific service. Each fraction in (15) denotes what portion of each resource is used by the deployed services, and as the resource utilization increases, (15) returns a larger reward, i.e., a more negative loss.
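A minimal sketch of Eq. (15), using the same per-node placement row as above; the names are illustrative.

```python
# Hedged sketch of the utilization reward in Eq. (15): a weighted sum of the
# memory and storage fractions a node's deployed services occupy, negated so
# that higher utilization lowers the total cost.

def utilization_reward(P_row, r_mem, r_stor, m_mem, m_stor, alpha=0.5):
    mem_frac = sum(r_mem[s] for s, on in P_row.items() if on) / m_mem
    stor_frac = sum(r_stor[s] for s, on in P_row.items() if on) / m_stor
    return -alpha * mem_frac - (1 - alpha) * stor_frac

row = {"s1": 1, "s2": 1}
print(utilization_reward(row, {"s1": 2, "s2": 4}, {"s1": 5, "s2": 10},
                         m_mem=12, m_stor=20))   # -0.625: -(0.5*0.5 + 0.5*0.75)
```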
#### 3.3.3 Service delay and deploy delay losses
The last loss we have taken into consideration is the service delay loss. This loss penalizes the FSP for each millisecond of delay and hence forces the algorithm to find the placement with the lowest service delay. This delay consists of both the service delay and the deploy delay, formulated as follows,
\[L_{delay}(s,f,t)\ =\ C_{delay}(f)\ d_{service}(s,f,t)\ \tau\ +\ C_{deploy}\left(1-P(s,f,t-1)\right)P(s,f,t)\ d_{deploy}(s,f,t)\ \tau \tag{16}\]
where \(C_{delay}\) and \(C_{deploy}\) denote the costs of service and deploy delays per millisecond, respectively. The service delay also impacts the delay violation, so the latter loss can lower the delay violation and improve the QoS level of the service as well. In (16), the term \(d_{deploy}(s,f,t)\) refers to the deploy delay, including the download delay of the container from the FSC and the container startup time. We have defined \(d_{deploy}(s,f,t)\) as follows,
\[d_{deploy}(s,f,t)\,=\,\frac{R_{stor}(s)}{\xi(f)}+\textit{Container startup delay} \tag{17}\]
where \(\xi(f)\) is used to denote the transmission rate between FSP's image storage and fog node \(f\), which is measured in bytes per second. The first term in (17) calculates the download delay of the container from FSP onto the server.
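The two delay terms combine as in the sketch below, a minimal illustration of Eqs. (16)-(17) with made-up parameter values; the helper names are ours.

```python
# Hedged sketch of Eqs. (16)-(17): the delay loss charges every millisecond of
# service delay, plus a one-off deploy term whenever a service is newly placed
# (P goes from 0 to 1).

def deploy_delay(r_stor_bytes, xi_bytes_per_s, startup_ms=50.0):
    """Eq. (17): container download time from the FSC plus startup time."""
    return r_stor_bytes / xi_bytes_per_s * 1000.0 + startup_ms   # in ms

def delay_loss(c_delay, d_service_ms, c_deploy, p_prev, p_now, d_deploy_ms, tau):
    """Eq. (16): tau is the length of the provisioning interval."""
    newly_deployed = (1 - p_prev) * p_now
    return c_delay * d_service_ms * tau + c_deploy * newly_deployed * d_deploy_ms * tau

d_dep = deploy_delay(r_stor_bytes=200e6, xi_bytes_per_s=1.25e9)  # 210.0 ms
print(delay_loss(c_delay=4e-3, d_service_ms=12.0,
                 c_deploy=4e-3, p_prev=0, p_now=1,
                 d_deploy_ms=d_dep, tau=1.0))
```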
Due to the dynamic incoming traffic, which changes over time, the FSC has to update the placement to satisfy the user's expected QoS level at all times. On the other hand, P1 is an NP-hard problem that must be solved periodically within a reasonable time. Since INLP problems do not scale well in time, we face a significant timing issue; hence, we have addressed it by running the algorithm in an offline manner.
## 4 Proposed Methodology
In this section, the proposed approach, HPCDF, is discussed in more detail to address the formulated problem in the previous section. Because the fog service provisioning is an NP-hard problem, we have used a meta-heuristic approach, which aims to find the best placement by investigating the problem's search space with a limited number of iterations. To achieve this goal, HPCDF operates, as stated in Fig. 2. First, all services are randomly placed on fog nodes. Then, the services' placement is optimized through executing hybrid binary PSO-CRO, and the best placement is obtained. Finally, the obtained placement is translated back to the fog placement matrix, P.
The rest of this section is organized as follows: We have first described the particle swarm and chemical reaction optimization algorithms and stated the motivation behind the hybrid meta-heuristic approach. Then, the details of the optimization algorithm are discussed. Finally, the time complexity of the proposed method has been analyzed.
### General description of PSO and CRO algorithms
Particle Swarm Optimization, proposed by Kennedy and Eberhart [17], is considered a robust meta-heuristic algorithm that possesses few parameters to adjust, has a high convergence speed, and performs well on uni-modal problems, i.e., problems without local minima. This algorithm is inspired by the social behavior of swarms of animals such as birds, fishes, and ants.
Chemical Reaction Optimization, proposed by Lam and Li [9], is a recently proposed evolutionary meta-heuristic with historical footprints in quantum and statistical mechanics. This algorithm simulates the interactions of molecules in a chemical reaction. CRO is efficient and able to handle local minima problems through its computational operators.
### Motivation of hybrid approach
The PSO algorithm performs well at exploration, i.e., global search, and the CRO algorithm functions well at exploitation, i.e., local search. On the other hand, PSO can get stuck in local minima when utilized on multi-modal problems. However, an excellent optimization algorithm establishes a balance between exploration and exploitation. The mentioned issues motivated researchers to merge these two algorithms, resulting in HBPCRO, proposed by Li et al. [18]. Our motivation for utilizing HBPCRO is to exploit the benefits of both algorithms. HBPCRO must be performed periodically to address the optimization problem formulated in the previous section. The way we adapted this algorithm is discussed below.
#### 4.2.1 Particle, position, and velocity representation
Each member of the swarm, a particle5, explores the multi-dimensional search space. The position of a particle denotes a unique placement, i.e., a possible solution. Assume the binary position vector, \(X\), is a vector of size \(|F|\times|S|\), where
Footnote 5: We have assumed a “particle” in PSO, as equal to a “molecule” in CRO, and the word “particle” is used in the rest of this paper.
\[X(i,t)\ =\ P\left(\left\lfloor\frac{i}{|F|}\right\rfloor,\ i-|F|\times\left\lfloor\frac{i}{|F|}\right\rfloor,\ t\right) \tag{18}\]
In (18), \(i\), which is equal to \(s\times|F|+f\), denotes the dimension index, where \(s\) and \(f\) denote the service index and fog node index, respectively. \(X(i,t)\in\{0,1\}\), the \(i\)-th element of the particle position, reveals whether the service with index \(\left\lfloor\frac{i}{|F|}\right\rfloor\) is deployed on the fog node with index \(i-|F|\times\left\lfloor\frac{i}{|F|}\right\rfloor\) or not.
The position vector and its equivalent matrix are shown in Fig. 3. The placement matrix (P), drawn in Fig. 3 (a), defines each service's placement on each fog node. Fig. 3 (b) shows the flat form of the placement matrix where the whole rows are arranged one after another. The HPCDF approach chooses random values as the initial position, i.e., placement, for all particles.
The velocity of a particle, which determines the speed of the particle in the search space, is a vector of \(|F|\times|S|\) real numbers, i.e., \(V(i,t)\in\mathbb{R}\). In other words, the velocity \(V(i,t)\) determines how probable the placement of the service with index \(\left\lfloor\frac{i}{|F|}\right\rfloor\) on the fog node with index \(i-|F|\times\left\lfloor\frac{i}{|F|}\right\rfloor\) is.
It is worth mentioning that, in Fig. 3 (a), fog placement matrix is a sparse binary matrix, where each row denotes a service and each column denotes a fog node. A value of 1 means the service should be deployed on the corresponding fog node. On the other hand, in Fig. 3 (b), a flattened vector of placement matrix is denoted. Values of this flattened vector are updated by the proposed method.
Figure 2: General overview of the proposed approach workflow
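For illustration, the mapping of Eq. (18) between the placement matrix and the flat position vector can be sketched as follows; the NumPy layout is our own choice.

```python
# Hedged sketch of Eq. (18): flattening the |S| x |F| placement matrix row by
# row into the position vector X, and recovering (service, fog node) from a
# flat index i = s * |F| + f. NumPy is used only for convenience.
import numpy as np

def flatten(P):                 # P: shape (num_services, num_fogs), 0/1
    return P.reshape(-1)        # X(i) = P(i // |F|, i % |F|)

def unflatten(X, num_fogs):
    return X.reshape(-1, num_fogs)

P = np.array([[1, 0, 0],
              [0, 1, 1]])       # 2 services, 3 fog nodes
X = flatten(P)
i = 1 * 3 + 2                   # service s=1 on fog node f=2
assert X[i] == P[1, 2] == 1
print(unflatten(X, 3))          # recovers P
```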
#### 4.2.2 Particle's movement
Each particle updates its position according to two values, the local best (\(p_{best}\)) and the global best (\(g_{best}\)), in each iteration of PSO. \(p_{best}\) is the least-cost position found so far by each particle. On the other hand, \(g_{best}\) is the least-cost position among all particles in the swarm. We have to update \(p_{best}\), \(g_{best}\), velocity, and position periodically. The way we update velocity and position is discussed below.
In the original version of PSO [17], \(V(i,t+1)\) is calculated using (19).
\[V(i,t+1)\;=\;\omega\;V(i,t)\;+\;\alpha\;U(0,1)\left(p_{best}-X(i,t)\right)\;+ \;\beta\;U(0,1)\left(g_{best}-X(i,t)\right) \tag{19}\]
In (19), \(\omega\), \(\alpha\), and \(\beta\) are the inertia, cognitive, and social coefficients, respectively; \(\alpha\) and \(\beta\) are also called acceleration coefficients. \(U(0,1)\) is a sample of a uniform random number in the range _[0, 1]_, which diversifies the particle's motion in various directions. The coefficients \(\omega\), \(\alpha\), and \(\beta\) are problem-dependent and should be adjusted for the particular problem; proper tuning of these hyper-parameters has been shown to yield better solutions.
On the other hand, to accelerate the algorithm, one can achieve a fast-convergence version of PSO by changing the hyper-parameters according to the distribution of the particles. For this reason, a \(spread\) factor is utilized, as stated in [8], to continuously modify the value of the inertia weight. The \(spread\) is defined as follows,
\[spread\;=\;\frac{(precision\;+\;deviation)}{2} \tag{20}\]
where \(precision\) refers to the maximum difference between particles w.r.t. fitness value, and \(deviation\) refers to the Euclidean distance between the global best particle's position and the average position of all particles. To calculate the inertia weight according to the \(spread\) factor, the following equation is used.
\[\omega\;=\;\exp\left(\frac{-iteration\_num}{spread\times MaxIter}\right) \tag{21}\]
where \(iteration\_num\) denotes the current iteration and \(MaxIter\) shows the maximum iteration of the algorithm.
On account of changes in the number of fog nodes and services over time in the fog service provisioning problem, the mentioned hyper-parameters must be adjusted dynamically. For this reason, to update the velocity, we have adopted the approach discussed in [19]. The way we update these hyper-parameters is formulated in (22) and (23).
\[\alpha\;=\;\left(\alpha_{f}-\alpha_{i}\right)\times\left(\frac{iteration\_ num}{MaxIter}\right)\;+\;\alpha_{i} \tag{22}\]
Figure 3: Services placement matrix and position vector
\[\beta\,=\,\big{(}\beta_{f}-\beta_{i}\big{)}\times\big{(}\tfrac{iteration\_num}{MaxIter}\big{)}\,+\,\beta_{i} \tag{23}\]
where \(\alpha_{i}\) and \(\beta_{i}\) are the initial values, and \(\alpha_{f}\) and \(\beta_{f}\) are the final values of the hyper-parameters \(\alpha\) and \(\beta\), respectively. \(\alpha\) and \(\beta\) are updated across iterations. To prevent particle fluctuations and sudden jumps of particles from one region of the search space to another, we have used the velocity clipping technique, which bounds the values of the velocity vector according to (24).
\[V(i,t)=\begin{cases}V_{min}&V(i,t)<V_{min}\\ V(i,t)&V_{min}<V(i,t)<V_{max}\\ V_{max}&V(i,t)>V_{max}\end{cases} \tag{24}\]
where \(V_{min}\) and \(V_{max}\) are the minimum and maximum values of velocity, respectively.
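A compact sketch of the velocity machinery in (19)-(24), using the schedule endpoints and velocity range from Table 4; all function names are ours.

```python
# Hedged sketch of the velocity update in Eqs. (19)-(24): inertia from the
# spread factor, linearly scheduled acceleration coefficients, the canonical
# PSO velocity rule, and clipping.
import math
import random

def inertia(iteration, max_iter, spread):
    return math.exp(-iteration / (spread * max_iter))             # Eq. (21)

def scheduled(c_init, c_final, iteration, max_iter):
    return (c_final - c_init) * (iteration / max_iter) + c_init   # Eqs. (22)-(23)

def velocity_step(v, x, p_best, g_best, w, a, b, v_min=-0.5, v_max=5.0):
    v_new = (w * v
             + a * random.random() * (p_best - x)
             + b * random.random() * (g_best - x))                # Eq. (19)
    return min(max(v_new, v_min), v_max)                          # Eq. (24)

w = inertia(iteration=100, max_iter=700, spread=0.8)
a = scheduled(0.9, 0.5, 100, 700)   # cognitive coefficient, Table 4
b = scheduled(0.5, 5.5, 100, 700)   # social coefficient, Table 4
print(velocity_step(v=0.2, x=1, p_best=0, g_best=1, w=w, a=a, b=b))
```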
Apart from this, the original version of PSO is defined for continuous problems, whereas service provisioning, and task scheduling in general, is a discrete problem. For this purpose, we have substituted the original version of PSO with a binary version, proposed by Kennedy and Eberhart [17], which differs in how the position is updated. Hence, we have used the sigmoid function to translate velocity into a probability in the range _[0, 1]_, which determines whether to deploy, i.e., 1, or to release, i.e., 0, the service on fog nodes. The sigmoid function is defined as follows,
\[\sigma(z)\,=\,\tfrac{1}{(1\,+\,e^{-z})} \tag{25}\]
The way we interpret a particle's velocity is discussed in [30] and is as follows: the more positive the velocity becomes, the more likely the service is deployed; the more negative the velocity becomes, the more likely the service is released. This interpretation turns the transfer function into a V-shaped function [20]. Equation (26) shows how we convert the sigmoid function to a V-shaped function and update the elements of the position vector.
\[X(i,t)=\begin{cases}1&(V(i,t)>0)\ \text{and}\ (|2\times\big{(}\sigma(V(i,t))-0.5\big{)}|\,\geq\,randu)\\ 0&(V(i,t)\,\leq\,0)\ \text{and}\ (|2\times\big{(}\sigma(V(i,t))-0.5\big{)}|\,\geq\,randu)\\ X(i,t-1)&|2\times\big{(}\sigma(V(i,t))-0.5\big{)}|\,<\,randu\end{cases} \tag{26}\]
where \(randu\) denotes a uniform random number that is sampled per dimension. Fig. 4 shows the sigmoid function's behavior according to input values and denotes how we have modified the sigmoid function using (26), which ignores the sign and only considers the magnitude of the input value.
It is worth mentioning that, in Fig. 4 (a), the original sigmoid function is denoted. This function is an S-shaped function whose range lies between 0 and 1 and can be interpreted as a probability. In Fig. 4 (b), the sigmoid is reshaped into a V-shaped function, where a very positive or very negative input value results in deploying or releasing the service, respectively.
Figure 4: Comparison between Sigmoid and Modified Sigmoid functions
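The V-shaped update of (25)-(26) can be sketched as follows; the bit-level helper is our own illustration.

```python
# Hedged sketch of Eqs. (25)-(26): the V-shaped transfer function flips a
# position bit only when |2 * (sigmoid(v) - 0.5)| beats a uniform draw; the
# sign of the velocity then decides deploy (1) versus release (0).
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))                 # Eq. (25)

def update_bit(x_prev, v):
    if abs(2.0 * (sigmoid(v) - 0.5)) < random.random():
        return x_prev                                  # magnitude too small: keep
    return 1 if v > 0 else 0                           # Eq. (26)

random.seed(0)
print([update_bit(0, v) for v in (-4.0, -0.1, 0.1, 4.0)])
```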
Algorithm 1 shows how the operator \(updatePosition\) is executed in the HBPCRO to update the position of a particle.
```
1:Input: S, F, C, Velocity vector V
2:Output: Updated placement matrices P and Q
3:for each service s do
4:for each fog node f do
5:\(randNum\gets U(0,1)\)
6:\(V_{s_{f}}\gets V(s\times|F|+f,t)\)
7:if \(V_{s_{f}}>0\) and \(|2\times(\sigma(V_{s_{f}})-0.5)|>=randNum\) then
8:if has free resources then
9: deploy service s on fog node f
10: recalculate violation
11:else
12:if \(V_{s_{f}}\leq 0\) and \(|2\times(\sigma(V_{s_{f}})-0.5)|>=randNum\) then
13: release service s from fog node f
14: recalculate violation
15:for each cloud server k do
16:if traffic of service s is forwarded to cloud server k then
17: deploy service s on cloud server k
18: recalculate violation
19:else
20: release service s from cloud server k
21: recalculate violation
```
**Algorithm 1** updatePosition
In Alg. 1, we first try to provision services on fog nodes. To achieve this, we check whether the velocity lies on the right-hand side of the V-shaped function, as shown in Fig. 4, which means the service can be deployed on the corresponding fog node. The same rule applies to the left-hand side of the V-shaped function, where it is more probable to release the service from the fog node (Lines 7-14). Then, we deploy the released services on the cloud servers and release the deployed services from the cloud servers (Lines 15-21). After each decision, we have to recalculate the service delay violation of the new placement.
The CRO operators are executed based on probability. From the CRO perspective, a particle has two kinds of energy, potential and kinetic. The potential energy reflects the value of the fitness function, i.e., the cost function, at the current position. The kinetic energy allows a particle to move to a position with higher potential energy, denoting the ability of a particle to escape a local minimum. The operator \(onWallCollision\) denotes the case in which the particle hits a wall, so the particle's structure, i.e., position and potential energy, is changed slightly. This operator is proposed in Alg. 2.
```
1:Input: S, F, selected particle, localThresh, constraints, MinKeLossPer, energy
2: Copy the selected particle into a new particle
3:for limited number of steps do
4: pick random numbers i and j in range [0, \(|F|\times|S|-1\)]
5: swap i-th and j-th elements of new particle
6:if constraints are violated then bestFit(.) is invoked
7: Calculate potential energy of new particle based on cost value
8:if potential energy is decreased through collision then
9: pick random number q in range [MinKeLossPer, 1]
10: keep q% of lost energy as new kinetic energy
11: update personal best position and cost
12: old particle \(\leftarrow\) new particle
13: localThresh \(+=1\)
14: update global best position
```
**Algorithm 2** onWallCollisionOperator
First, two random indices are selected from the position vector, and the values at these two indices are swapped (Lines 4-5). The swapping process is performed as stated in [21]. If a swap results in deploying a service, the constraints are checked (Line 6). If it violates the constraints, the best-fit algorithm is invoked to find the best destination machine. If that placement fails too, the changes are reverted, two other elements are selected randomly, and the same process is performed again until all the changes satisfy the constraints. Afterward, the potential energy is recalculated for the new particle
(Line 7). If the potential energy is cut down, the kinetic energy is updated, and the old particle is replaced with the new one (Lines 8-12). Finally, the new particle's \(localThresh\) attribute is incremented by one to denote one more step of local search in the neighborhood (Line 13).
On the other hand, the \(interMolecularCollision\) operator denotes the case in which two particles hit each other. In this case, the previous molecules become new molecules with different positions and potential energies. This operator is stated in Alg. 3. In Alg. 3, the same process as in Alg. 2 is executed, but between two different particles. We first pick two random elements from the position vectors of the two particles and swap the values at these indices crosswise (Lines 3-12). Then we check the constraints, and if they are violated, the best-fit algorithm is executed. After that, the accumulated potential energy of both particles is calculated (Line 13), and if it is cut down, the kinetic energies of both particles are updated according to the amount of energy loss, and the old particles are replaced with the new ones (Lines 14-20). Finally, the \(localThresh\) attributes of the new particles are updated (Lines 21-22).
The last two mentioned operators can rescue the algorithm from getting stuck in the local minimum position because of sudden changes in position.
```
1:\(Input\): S, F, selected particles, localThresh of both selected particles, constraints, energies of both selected particles
2: Copy the selected particles into two new particles
3:for limited number of steps do
4: pick random numbers i and j in range [0, \(|F|\times|S|-1\)]
5: swap i-th elements of first new particle and first old particle
6: swap j-th elements of first new particle and second old particle
7:if constraints are violated then revert changes
8:for limited number of steps do
9: pick random numbers i and j in range [0, \(|\)F\(|\times\)\(|\)S\(|\) - 1]
10: swap i-th elements of second new particle and first old particle
11: swap j-th elements of second new particle and second old particle
12:if constraints are violated then revert changes
13: Calculate potential energy of new particles based on cost value
14:if potential energy is decreased through collision then
15: pick random uniform number q
16: keep q% of lost energy as first new particle's kinetic energy
17: keep (1-q)% of lost energy as second new particle's kinetic energy
18: update personal best positions and costs
19: first old particle \(\leftarrow\) first new particle
20: second old particle \(\leftarrow\) second new particle
21: first new particle's localThresh \(\leftarrow\) 1
22: second new particle's localThresh \(\leftarrow\) 1
23: update global best position
```
**Algorithm 3** interMolecularCollisionOperator
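Both collision operators are built on the same swap-based neighborhood move; a minimal sketch, assuming a feasibility callback for the constraints of Section 3, is given below (not the paper's implementation).

```python
# Hedged sketch of the swap move used by the on-wall and inter-molecular
# collision operators: exchange two entries of the flat position vector and
# revert if the resulting placement violates the capacity constraints.
import random

def swap_move(position, is_feasible, max_tries=10):
    """position: list of 0/1 placement bits; is_feasible: callback that
    checks constraints (7)-(12) for a candidate position."""
    for _ in range(max_tries):
        i, j = random.sample(range(len(position)), 2)
        position[i], position[j] = position[j], position[i]
        if is_feasible(position):
            return position               # accepted neighbor
        position[i], position[j] = position[j], position[i]  # revert
    return position                       # no feasible swap found

random.seed(1)
x = [1, 0, 0, 1, 0, 0]
print(swap_move(x, is_feasible=lambda p: sum(p) <= 2))  # toy constraint
```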
#### 4.2.3 Initial population
The initial positions of the particles are populated with random binary values, and the velocity values are sampled from a uniform random distribution. It is worth mentioning that the initial position is considered the initial \(p_{best}\), and its associated cost is set to positive infinity.
#### 4.2.4 The adapted hybrid PSO-CRO algorithm
We have proposed the hybrid binary PSO-CRO algorithm in Alg. 4 to provision services in the hybrid fog-cloud environment to minimize the service delay and QoS violation.
We first initialize the swarm of particles (Lines 3-5), and the initial \(p_{best}\) of all particles and the global \(g_{best}\) are determined. Then, the attribute \(localThresh\), which denotes the number of times the CRO operators have been executed inside a randomly chosen particle, is compared with the hyper-parameter \(\gamma\) (Line 8). \(\gamma\) determines the maximum number of times a particle can investigate its neighborhood, and the comparison reveals when it is time to execute a global search by performing PSO. Afterward, the particles' \(p_{best}\) and the global \(g_{best}\) are updated in the style of PSO, and the attribute \(localThresh\) is reset so that the local search is performed again in the next \(\gamma\) iterations (Lines 9-12). \(X_{opt}\) in Alg. 4 is used to denote the global best particle, which carries the global optimal position and
cost. Afterward, the hyper-parameter \(\mathit{interProb}\) is compared with a uniform random number (Line 14), which randomly chooses between the two CRO operators in each iteration.
```
1:Input: Set of particles, iterations, interProb, \(\gamma\)
2:Output: Updated placement matrix P
3: initialize particles set
4:for each particle do
5: update personal best position, cost, and Xopt
6:for a limited number of iterations do
7: pick random particle p
8:if p.localThresh \(>\gamma\) then
9:for each particle do
10: update velocity V and positions X through updatePosition(.) operator
11: update personal best position, cost, and Xopt
12: p.localThresh = 0
13:else
14:if \(\mathit{interProb}<U(0,1)\) then
15: select another random particle p'
16: perform interMolecularCollisionOperator(.)
17:else
18: perform onWallCollisionOperator(.)
19: confirm position consistency
20: update Xopt among the swarm
21:if violation and delay thresholds are satisfied then
22: break the iterations loop
23: translate position vector of Xopt back to placement matrix P
```
**Algorithm 4** hybridBinaryPSOCRO (HBPCRO)
Whether the \(\mathit{onWallCollision}\) operator or the \(\mathit{interMolecularCollision}\) operator is executed, the position values in various dimensions are changed randomly. So we have to confirm that the resulting position values are consistent and the constraints are satisfied (Line 19).
### Time complexity
The time complexity of the hybrid binary PSO-CRO enormously depends on how PSO and CRO operators are implemented. We first describe the time complexity of each step, and then, the overall complexity is proposed.
The initialization step's complexity is \(O(|F|\times|S|\times N)\), where \(N\) denotes the number of particles, since the initializer has to iterate through the position and velocity vectors of all \(N\) particles. The time complexity of executing the PSO step is \(O(|F|\times|S|\times N)\). If the CRO operators are executed, the complexity becomes \(O(|F|\times|S|)\), because they iterate through a single position vector and update its values. Finally, the time complexity of the proposed approach is \(O(|F|\times|S|\times N)\), which can be approximated as \(O(N^{3})\) when \(|F|\) and \(|S|\) are of the same order as \(N\).
## 5 Experimental Result
In this section, we discuss the experimental setup, analyze the results of the experiments, and assess the optimality of the proposed approach. The FogPlan framework, which is implemented in Java, is used to simulate the fog-cloud datacenter. Experiments are conducted on an Intel Core i5-2430M CPU @ 2.40GHz (4 cores).
To study the impact of the hyper-parameters of HPCDF on placement optimality, we have performed three different experiments. The first experiment investigates how the hyper-parameters of hybrid binary PSO-CRO, i.e., the number of particles and the value of gamma, affect the optimality of delay and QoS, and determines the optimal values for both of these parameters. In the second experiment, we have compared the proposed method against greedy baselines w.r.t. service delay, amount of violation, and cost of placement. In the third experiment, we have examined different values for the delay threshold and studied its impact on placement optimality. The following subsections provide the experiments' configurations and results analysis, respectively.
### Experiment setup
#### 5.1.1 Network topology
Fog nodes are initially randomly connected to cloud servers, and these connections are changed over time w.r.t services' placement. As stated before, we take the aggregated incoming traffic of all services from all IoT nodes. Accordingly, there is no need to have information about the number of IoT nodes. Table 3 shows the number of machines and services used in all experiments.
#### 5.1.2 Mobile Augmented Reality (MAR) application services
We have considered MAR services in our experiments. MAR refers to a category of augmented reality applications that can be used anywhere. The size of request and response messages in this application is in the range of 10 KB to 26 KB and 10 Bytes to 20 Bytes, respectively [22]. We have sampled the required processing capacity of these services from \(U(50,200)\)6 MI per request [23].
Footnote 6: \(U(i,j)\) denotes a uniform random number from the range [i, j]
#### 5.1.3 Parameters and delays
We have sampled the QoS level of each service from \(U(0.8,0.99)\), where 0.99 denotes a strict QoS level. The delay threshold of each service is sampled from \(U(10,15)\) ms [24]. The propagation delay between IoT nodes and fog nodes is sampled from \(U(1,2)\) ms, and between fog nodes and cloud servers from \(U(15,35)\) ms [25]. We have assumed a distance of 6 to 10 hops between fog nodes and cloud servers, with a maximum of two 100 Gbps links; the other links are 10 Gbps. The transmission rate between IoT nodes and the FSC is assumed to be 10 Gbps.
As mentioned before, the deploy and release delays are less than 50 ms [26] and may be ignored; however, we have considered them in the objective in Section 3. The deployment delay includes the time needed to download containers from the FSC onto the machine, and the container startup time is considered equal to 50 ms. It is worth mentioning that we have set the impact coefficient, \(\alpha\), equal to 0.5.
The hyper-parameters of hybrid binary PSO-CRO are shown in Table 4. These values for parameters are determined using experiment 1, which is investigated in the next subsection.
#### 5.1.4 Services and machines' capacities
In our experiments, we have considered heterogeneous machines to show their impact on the placement. The processing capacity of each fog node is sampled from \(U(800,1300)\) MIPS, and the processing capacity of each cloud server is sampled from \(U(16000,26000)\) MIPS. Storage capacity for each fog node and cloud server is considered to be between 10 GB and 25 GB and between 100 GB and 250 GB, respectively. Memory capacity for each fog node and cloud server is considered to be between 4 GB and 16 GB and between 8 GB and 32 GB, respectively. Some instance types of Amazon EC2 machines7 are listed in Table 5, suitable for fog nodes and cloud servers.

Table 4: Experiments' configuration for HPCDF

| Hyper-parameter | Value |
| --- | --- |
| Number of particles | 35 |
| Iterations | 700 |
| Gamma (\(\gamma\)) | 3 |
| Initial KE | 100000 |
| Velocity range | [-0.5, 5.0] |
| minKElossPer | 0.1 |
| interProb | 0.8 |
| \(\alpha\) | Linearly decreases from 0.9 to 0.5 |
| \(\beta\) | Linearly increases from 0.5 to 5.5 |

Table 3: Number of fog nodes, cloud servers, and services for all experiments

| Experiment | Num. of fog nodes | Num. of cloud servers | Num. of services |
| --- | --- | --- | --- |
| 1 | 10 | 3 | 40 |
| 2 | 10 | 5 | 50 |
| 3 | 10 | 5 | 20 |
Footnote 7: [https://aws.amazon.com/ec2/instance-types](https://aws.amazon.com/ec2/instance-types)
#### 5.1.5 Costs
The processing cost in a fog node or a cloud server is equal to \(2\times 10^{-3}\) per million instructions. The storage cost in a fog node and in a cloud server is equal to \(4\times 10^{-3}\) per B/s and 4 per B/s, respectively. The communications between fog nodes and cloud servers and between fog nodes and the FSC cost 0.2 per Gb and 0.5 per Gb, respectively. The cost of a 1% violation is considered to be between 100 and 200 per second. Also, the cost of requests released from high-traffic nodes is considered as 2 per request per second, and the cost of service delay is equal to \(4\times 10^{-3}\) per millisecond.
#### 5.1.6 Dataset
To evaluate the proposed approach in a reasonably realistic environment, we have used real-world traffic traces provided by the MAWI working group [27]. This dataset is collected within project WIDE, where research on computer networks is carried out. MAWI datasets consist of PCAP files in tcpdump format, without payloads. Traffic traces are captured by tcpdump, and the IP addresses in the traces are then scrambled by a modified version of tcpdpriv. Traces from five sample points are available. The traffic data are selected from SamplePoint-F, of 12th April 2017 from 12:00 PM to 2:00 PM, as incoming traffic to fog nodes. We have extracted information from traced TCP flows and aggregated these statistics by geographical location and network prefix. The traffic in this dataset changes every minute. We have used 10 different subnets of the machines with the largest traffic as fog nodes.
### Numerical Results
In this subsection, we study the results of the experiments, including the effect of the delay threshold on the optimality of the proposed approach. The proposed algorithm is compared with the baselines All-Cloud, Min-Viol, Min-Cost, and BPSO. In All-Cloud, all services are provisioned on cloud servers. Min-Cost and Min-Viol, as greedy methods, provision services to decrease service cost and delay violation, respectively. BPSO improves the services' placement by using pure binary PSO. All experiments are discussed below.
#### 5.2.1 Experiment 1
In the first experiment, we investigate the effect of the number of particles and of the value of the hyper-parameter gamma (\(\gamma\)) on the service delay, delay violation, and service cost of the proposed approach.
Table 5: Instance types of Amazon EC2 machines suitable for fog nodes and cloud servers
In this experiment, various numbers of particles and values of gamma are tested using a random search for 637 iterations, and the result is depicted in Fig. 5. We have considered the ranges [2, 10] for gamma and [5, 50] for the number of particles.
As shown in Fig. 5, the x-axis denotes the amount of delay violation. The y-axis denotes the number of particles, and the z-axis denotes the amount of hyper-parameter \(\gamma\). The amount of service delay and service cost are denoted by the size and color of the points, respectively.
It is worth mentioning that the amounts of service delay and service cost are scaled to the range [20, 80] to better reveal the differences between the colors and sizes of the points in Fig. 5. As Fig. 5 reveals, increasing the number of particles and decreasing \(\gamma\) results in lower service cost, service delay, and delay violation. We have set the number of particles and the value of gamma according to Fig. 5, as shown in Table 4.
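The random search used to pick these two hyper-parameters can be summarised by the sketch below; `evaluate` is a hypothetical stand-in for one full run of the placement algorithm returning an aggregate score over delay violation, service delay, and cost.

```python
import random

def tune_hyperparameters(evaluate, iterations=637, seed=0):
    """Randomly sample (gamma, particles) and keep the best-scoring pair."""
    random.seed(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(iterations):
        gamma = random.uniform(2, 10)        # gamma searched in [2, 10]
        particles = random.randint(5, 50)    # particle count searched in [5, 50]
        score = evaluate(gamma, particles)   # one PSO-CRO run (assumed callable)
        if score < best_score:
            best_cfg, best_score = (gamma, particles), score
    return best_cfg, best_score
```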
#### 5.2.2 Experiment 2
In the second experiment, we investigate the optimality of HPCDF compared to All-Cloud, Min-Cost, and Min-Viol. The result of this experiment is an average of 10 consecutive runs of the experiment. In this experiment, traffic is changed every 60 seconds, and we have set \(\tau\) equal to 120 seconds. Fig. 6 (a) reveals the normalized traffic from 12:00 PM to 2:00 PM.
Fig. 6 (b) shows the average service cost. The proposed method improves service cost by 66.02% in comparison to Min-Viol. In the rest of this paper, we compare HPCDF with Min-Viol as a reference. All-Cloud has the highest service cost because, with this method, services with strict QoS levels are also deployed on cloud servers, which increases delay violation and, consequently, average service cost. On the other hand, HPCDF proves to be more stable in terms of service cost than the other baselines. As can be inferred from the experiment, HPCDF is considerably better than binary PSO in terms of service cost.
The average service delay is depicted in Fig. 6 (c). All-Cloud has the highest service delay due to the deployment of services with strict QoS levels on cloud servers. HPCDF improves average service delay by 29.34% in comparison to Min-Viol. BPSO does not improve service delay significantly because it gets stuck in local minima. HPCDF solves this issue with the help of the CRO algorithm, which explores global and local regions in an appropriate proportion.
Fig. 6 (d) shows the average delay violation. All-Cloud, as mentioned before, has the highest delay violation. As shown in Fig. 6 (d), Min-Cost is more unstable than the other methods and shows many variations in service delay and delay violation, because it does not take violation into account directly and only cares about improving service cost, and this improvement can occur in parameters other than delay violation. It is worth mentioning that HPCDF improves the average delay violation by 50.15%.
Fig. 6 (e) and Fig. 6 (f) show the number of services deployed on cloud and fog servers, respectively. It can be seen that HPCDF deploys more services on fog nodes than on cloud servers, which improves service delay, service cost, and delay violation. Due to the limited resources of fog nodes, it is not practical to deploy all services on fog nodes to minimize service delay, service cost, and delay violation.
Figure 5: Impact of hyper-parameters gamma and number of particles on service delay, delay violation, and service cost
Figure 6: Simulation results of the second experiment
#### 5.2.3 Experiment 3
In the third experiment, we study the effect of delay threshold on service delay, service cost, delay violation, and the number of deployed services on fog and cloud servers. The result of this experiment is also an average of 10 consecutive runs of the experiment. In this experiment, traffic is changed every 10 seconds, and we have set \(\tau\) equal to 10 seconds. Delay threshold changes from 1 millisecond to 100 milliseconds, and the results are depicted in Fig. 7.
As shown in Fig. 7 (a), with HPCDF, service cost decreases as the delay threshold increases up to 11 milliseconds. For delay thresholds larger than 11 milliseconds, service cost increases smoothly. For delay thresholds between 66 milliseconds and 76 milliseconds, service cost increases sharply and then becomes stable. As the delay threshold grows, more services are deployed on cloud servers, and hence the service cost rises. For delay thresholds larger than 81 milliseconds, a fixed number of services is deployed on the fog and cloud servers, because fog nodes keep serving services on account of the cost of utilization and delay; this is why service cost becomes stable for delay thresholds larger than 81 milliseconds. Service cost for HPCDF starts increasing later than for Min-Viol because of the cost of deploying services on fog nodes and the low utilization of fog nodes, as mentioned before. In general, as the delay threshold increases, more services are deployed on cloud servers, so the service cost increases.
As shown in Fig. 7 (b), service delay increases smoothly for delay thresholds lower than 66 milliseconds. For delay thresholds between 66 milliseconds and 81 milliseconds, service delay increases sharply because more services are deployed on cloud servers.
Fig. 7 (c) shows the effect of the delay threshold on delay violation. For delay thresholds lower than 11 milliseconds, delay violation decreases, and after that it vanishes. As the delay threshold increases, QoS-level sensitivity decreases, and delay violation gets lower.
Fig. 7 (d) and Fig. 7 (e) show the effect of the delay threshold on the number of services deployed on cloud and fog servers, respectively. As expected, the larger the delay threshold, the lower the QoS-level sensitivity, and the more services are deployed on cloud servers rather than on fog nodes. As the number of services on cloud servers increases, fewer services are deployed on fog nodes because of the increasing delay violation, and for delay thresholds above 81 milliseconds a fixed number of services is deployed on fog nodes. This leads to serving services with enough resources and to stability in the usage of fog and cloud resources.
Figure 7: Effect of delay threshold on other parameters
## 6 Discussion
The results of the experiments indicate that hybrid binary PSO-CRO can achieve a more optimal solution than greedy approaches, but the proposed approach is slower than the greedy ones. As can be seen in the experiments, the proposed meta-heuristic approach achieves lower service cost, service delay, and delay violation than greedy strategies. It is also evident from the experiments that reducing the reconfiguration interval decreases the intended parameters slightly. One could estimate the reconfiguration interval using a learning algorithm to further reduce service delay and delay violation.
A limitation of HPCDF is its slowness. One way to address this issue is to improve the performance of hybrid binary PSO-CRO by enhancing the convergence speed of PSO and CRO algorithms for high dimensional search spaces, as discussed in [28, 29, 30].
Another drawback of the proposed algorithm is that the same set of hyper-parameters of the algorithm is used for incoming traffic. One can utilize a learning strategy to estimate a unique set of hyper-parameters for every incoming traffic in a specific moment, hence obtaining an algorithm that is more customized to the current incoming traffic.
## 7 Conclusion
Fog computing has emerged as a geo-distributed computing paradigm to improve delay, bandwidth usage, energy consumption, and QoS for delay-sensitive applications. We have discussed how some of these parameters can be formulated as an optimization problem and proposed a hybrid meta-heuristic approach, named HBPCRO, to address it. We have then presented the results of our experiments on real-world data and analyzed the impact of different hyper-parameters on the algorithm's output solution. HBPCRO achieved the lowest service delay, delay violation, and service cost, which guarantees a more reliable solution than greedy methods. We have shown that reducing the reconfiguration interval slightly decreases service delay and delay violation, and that increasing the delay threshold increases all intended parameters.
As future work, the following ideas seem interesting:
* One can modify the proposed algorithm to provision stateful services, which involves migrating the state of CPU, memory, hard disk, and network configuration.
* One can use flexible resources, such as Docker containers. With this idea, the algorithm would prefer to instantly change the required resources of an already-deployed service rather than deploying more instances of that service. This would significantly reduce the cost and delay of migrations.
* One can consider the availability of each server and optimize the placement to find the most available and delay-improved placement. This would make the proposed method more realistic and acceptable to industry.
* One can pre-schedule the services to turn the proposed approach into an online, real-time method. Precisely, predict and schedule the most plausible requests according to the resources' capacities, then store the predicted requests and corresponding placements in a database. When it comes to deployment in production, the placement associated with each request is retrieved and applied in the FSC.
|
2303.04539 | Gender Segregation: Analysis across Sectoral-Dominance in the UK Labour
Market | This paper aims to evaluate how changing patterns of sectoral gender
segregation play a role in accounting for women's employment contracts and
wages in the UK between 2005 and 2020. We then study wage differentials in
gender-specific dominated sectors. We found that the propensity of women to be
distributed differently across sectors is a major factor contributing to
explaining the differences in wages and contract opportunities. Hence, the
disproportion of women in female-dominated sectors implies contractual features
and lower wages typical of that sector, on average, for all workers. This
difference is primarily explained by "persistent discriminatory constraints",
while human capital-related characteristics play a minor role. However, wage
differentials would shrink if workers had the same potential and residual wages
as men in male-dominated sectors. Moreover, this does not happen at the top of
the wage distribution, where wage differentials among women working in
female-dominated sectors are always more pronounced than those of men. | Riccardo Leoncini, Mariele Macaluso, Annalivia Polselli | 2023-03-08T12:23:37Z | http://arxiv.org/abs/2303.04539v3 | # Gender Segregation: Analysis across Sectoral-Dominance in the UK Labour Market
###### Abstract
Although the degree of gender segregation in the UK has decreased over time, women's participation in traditionally "female-dominated" sectors is disproportionately high. This paper aims to evaluate how changing patterns of sectoral gender segregation affected women's employment contracts and wages in the UK between 2005 and 2020. We then study wage differentials in gender-specific dominated sectors. We found that the propensity of women to be distributed differently across sectors is a major factor contributing to explaining the differences in wages and contract opportunities. Hence, the disproportion of women in female-dominated sectors implies contractual features and lower wages typical of that sector, on average, for all workers. This difference is primarily explained by persistent discriminatory constraints, while human capital-related characteristics play a minor role. However, wage differentials would shrink if workers had the same potential wages as men in male-dominated sectors. Moreover, this does not happen at the top of the wage distribution, where wage differentials among women in female-dominated sectors are always more pronounced than men.
**JEL codes:** J16, J2, J31, J61, J71.
**Keywords:** gender sectoral segregation, labour markets, gender inequality, wage differentials.
Introduction
Despite an upward trend in several OECD countries, the increase in female employment rates has primarily involved sectors where women are already over-represented, such as health care, food and accommodation, and service activities (OECD, 2020; Eurofound and European Commission, 2021). Between 2005 and 2020, the share of women in total employment in the United Kingdom (UK) exceeded 70% in sectors such as education, health, and households as employers. In contrast, it is below 30% in sectors like agriculture, mining and quarrying, manufacturing, construction, and transport. In a few sectors (i.e., distribution, financial and insurance services, arts and entertainment), the share of men and women is around 50% (see Table 1).
To support equal treatment of workers in the workplace and improve gender diversity across industries, the UK adopted several reforms gradually over the past decade1. One of the most significant pieces of legislation ever was the Equality Act 2010 (EA2010) which sets out several measures prohibiting, among others, gender discrimination in various areas, such as employment, pay, services and provision of goods2. Although these policies have led to more balanced participation rates (Office for National Statistics, 2022), industrial segregation is still a major factor contributing to explaining the sorting of women across occupations (see Tables 2-3) and the labour market differentials, including employment and wage gap (Olsen et al., 2018; Government Equalities Office, 2019; Irvine, 2022)3. For instance, Razzu and Singleton (2018) found that the reduction of women's representation in certain industrial sectors - such as manufacturing and banking & finance - is responsible for the shifts in female employment after the 1990s, although the
gap between women and men in terms of some observable characteristics (i.e., education levels) has closed over time. In addition, female labour supply in in-person sectors has recently faced severe disruption due to the COVID-19 outbreak - especially for young women, working mothers, and female immigrants (Czymara et al., 2020; Open Society Foundations, 2020; Johnston, 2021)4.
Footnote 4: Many recent studies have described the COVID-19 pandemic as a _she-cession_, showing that this crisis has significantly hit women with and without children, especially in female-dominated sectors (Gupta, 2020; Goldin, 2022).
Therefore, this study investigates: (i) how gender segregation across sectors affects the type of employment contracts (i.e. part-time, permanent, remote work, number of weekly working hours) and hourly wages for women and men within and between female- and male-dominated sectors; and (ii) how the gender wage differentials differ in female- and male-dominated sectors based on observable and unobservable characteristics. The analysis relies on the UK Labour Force Survey (LFS) quarterly data for the fiscal years between 2005 and 2020.
The first question is addressed through a propensity score matching (PSM) by estimating the average differences in labour market outcomes between workers in female- and male-dominated sectors with similar observed socio-demographic and working characteristics. To answer the second question, we first build on the three-fold Kitagawa (1955) - Blinder (1973) - Oaxaca (1973) (KBO) decomposition to explore the components that drive hourly wage differentials within female- and male-dominated sectors over time. While the contribution of human capital and observable skills are outlined in the Mincerian wage regression, we then look into predicted wages and the unexplained component of the KBO using residual wages from the Mincerian regression. This approach is similar to the method used in the literature on migration to calculate individual potential earnings (Parey et al., 2017) and capture the part of earnings that is uncorrelated to observed skills (Gould and Moav, 2016; Borjas et al., 2019)5.
Footnote 5: This literature highlights that immigrants could be positively/negatively selected based on both observed (e.g., higher levels of education) and unobserved determinants of labour market success (e.g. motivation, ambition and ability) that can enter into the decision to self-select into migration (Chiswick, 1978, 1986, 1999; Borjas, 1987; Bertoli et al., 2016).
Our main findings can be summarised in the following three points. First, gender
based sectoral segregation matters for the disparity in contractual opportunities, even after controlling for occupational composition. Workers in female-dominated sectors are more likely to be segregated into atypical contracts (part-time), to work fewer hours and less from home, and to earn less than their counterparts in male-dominated sectors. This is also true for men, who, in female-dominated sectors, are employed under the contract types and lower wages typical of those sectors relative to their peers in male-dominated sectors. Second, from the KBO decomposition, there are few differences in observable characteristics between men and women, so human capital plays a minor role in explaining wage differentials. Instead, most of the difference is due to the persistent discriminatory constraints6, while a component still remains unexplained, especially in male-dominated sectors, which is usually associated with behavioural traits - i.e., risk aversion, competition in risky environments, bargaining power (as explained by the literature: Gneezy et al., 2003; Gneezy and Rustichini, 2004; Booth, 2009; Bertrand, 2011; Saccardo et al., 2018). Third, wage differentials between and within female- and male-dominated sectors would shrink if workers had the same potential wages as men in male-dominated sectors. However, women in female-dominated sectors would always earn less than men in high-paid jobs due to the negative selection in the labour market, _ceteris paribus_.
Footnote 6: The “coefficient effect” from KBO is typically referred to as ongoing discriminatory constraints in the labour market for the minority group (Altonji and Blank, 1999).
While most of the literature explains gender segregation by looking at occupational and job dimensions (Blackburn et al., 1993; Watts, 1992, 1995, 1998; Petrongolo, 2004; Cortes and Pan, 2018; Folke and Rickne, 2022; Scarborough et al., 2021),7 our work is closely related to the scant literature on the role of gender segregation across sectors (Moir and Smith, 1979; Kreimer, 2004; Campos-Soria and Ropero-Garcia, 2016; Kamerade and Richardson, 2018; Scarborough et al., 2021). These papers highlight how gender division of labour is still embedded in sectors (Carvalho et al., 2019), as they drive in a significant way the labour market dynamics and wage differentials (Moir and Smith, 1979). In
addition, the disproportion of women (or men) within sectors is considered a structural factor shaping the differential effects on labour markets caused by economic recessions (Rubery, 2010; Rubery and Rafferty, 2013; Kamerade and Richardson, 2018) and the business cycle (Hoynes et al., 2012; Perivier, 2014; Doepke and Tertilt, 2016; Razzu et al., 2020; Pilatowska and Witkowska, 2022). While most of these papers focus on the role of gender segregation in explaining the gender pay gap, our contribution is triple. First, we build two indicators that measure the degree and gender-type sectoral segregation (i.e., sectoral dominance and sectoral segregation index). Second, we use PSM to estimate the average effect of segregation in gender-dominated sectors and men and women on employment contracts and hourly wages matching the worker's socio-demographic and workplace characteristics. Third, we look into the KBO to investigate each component contributing to different wage trajectories. In addition to observable skills, we explore the individual wage potential and how men and women differ in unobservable characteristics within female and male-dominated sectors among genders.
Finally, we extend the findings of the 1980s literature on the issue of "comparable worth"8(Treiman et al., 1981; Maahs et al., 1985; Bielby and Baron, 1986; Aaron and Lougy, 1987) by considering a more recent time period. This literature found that the disproportion of women in female-dominated occupations is associated with lower pay in that occupation, on average, for all employees - men and women (Treiman et al., 1981; Killingsworth, 1987). However, the negative effect on the wage of being in such jobs is more significant for men than women (Roos, 1981), even after controlling for relevant worker and job characteristics, including industry effects (Johnson and Solon, 1984). Consistent with these studies, we find that it does for the differences in the industrial sectors in which women and men are located. However, we found a more pronounced wage differential among women than men in female-dominated sectors at the top of the wage distribution. Further, using sectors allows us to obtain a more accurate estimation
of the segregation indices to measure the degree of unbalance of a sector towards women or men.
The rest of the paper is structured as follows. Section 2 describes the data and reports some descriptive analysis. Section 3 discusses the measures of gender sectoral dominance and segregation. Section 4 presents the empirical strategy. Section 5 reports the estimated results. Section 6 concludes.
## 2 Data and Descriptive Statistics
### Data Sources and Characteristics of the Sample
Our analysis is based on the Labour Force Survey (LFS) quarterly data released by the UK Office for National Statistics (ONS). LFS is the most extensive household study in the UK, providing a comprehensive source of data on workers and the labour market. Our final estimation sample includes the working-age population (aged 16-64) over the fiscal years 2005 and 2020, consisting of 1,788,945 women and 1,544,280 men. The period 2005-2020 is rather important as it covers widespread enforcement of equality legislation and includes the 2007-2008 financial and economic crisis and the recent changes caused by the COVID-19 outbreak in 2020.9
Footnote 9: Most of the literature shows that the 2007-2008 crisis had a severe impact on male-dominated sectors, such as on construction and manufacturing (Hoynes et al., 2012; Perivier, 2014; Doepke and Tertilt, 2016). In contrast, the COVID-19 crisis has hit counter-cyclical sectors (e.g., in-person services) sharply (Pilatowska and Witkowska, 2022).
The dataset includes variables on a wide range of (i) demographic characteristics (gender, age, nationality, ethnicity, religion); (ii) socio-economic factors (presence of dependent children, marital status, education, experience, full/part-time job, remote work, public sector, training opportunities, sectors and occupations); (iii) geographical information on residence and working region. We distinguish between UK natives and citizens from the European Economic Area (EEA) and immigrants from non-EEA countries. Information on wages in the LFS is the self-reported gross weekly pay for the reference week10. The classification of sectors of the economy follows the Standard Industrial Clas
sification (UK SIC) at one-digit11.
Footnote 11: Our analysis uses UK SIC 2007, the current five-digit classification used in identifying business establishments by type of economic activity. For years before 2008, we used the correspondence between the sections of SIC 2003 and SIC 2007.Sectors labelled as _O - Public administration and defence_ and _U - Extra territorial_ are removed from the sample due to the different nature of contracts and wages in their related jobs.
Table 4 reports the summary statistics of the main variables by gender. There is a strong prevalence of UK natives in both male and female samples (above 80%), followed by non-EEA immigrants and EEA citizens. The average age is similar for both men and women (around 40 years). Women in the sample are, on average, as educated as men (13 years of education, on average), and slightly less experienced (23.74 years of experience vs 24.30). Half of the women in the sample are either married or cohabiting (i.e., in a stable relationship). In addition, 37% of the women have dependent children, compared to only 28% for men. Women work on average around 31 hours per week12 while men 40 hours per week, which seems dependent on a higher share of women working part-time (43% vs 12% for men).
Footnote 12: Whenever applicable, the number of hours includes usual hours of paid overtime to the total hours worked in the main job.
A more detailed investigation of the reasons for working part-time among women, men and the total sample is in Table 5. Figures show that the share of women who could not find a full-time job is smaller than the share of those who did not want a full-time job (22% against 44%); this is the opposite for men (31% against 22%). Women in the sample choose part-time jobs to spend more time with family (32% against 9% for men), and due to domestic commitments that prevent full-time work (27% against 7% for men). On the contrary, men mainly decide not to have a full-time job because "they are financially secure and work because they want" (22% against 9% for women), and earn enough with part-time jobs (13% against 6% for women).
### Descriptive Overview on the Entry Decision
A preliminary descriptive analysis shows the contribution of socio-economic factors on the decision to enter the labour market by comparing men and women. Table 6 reports
the marginal effects of a Probit regression model by gender. Columns (1) and (3) report the estimates for the total sample (2005-2020), and Columns (2) and (4) for the 2020 sample.
European men are 2.6 percentage points (henceforth, p.p.) more likely to enter the labour force with respect to UK men, while European women are 1.7 p.p. less likely to be active with respect to UK women. In contrast, non-European men and women are less likely to be in the labour force, although the magnitudes are higher in absolute value for women (9.7 p.p.) than men (0.7 p.p.), as expected. Women in a long-term relationship (either married or in a civil partnership) tend to be out of the labour force with a probability of 3.4 p.p. in the total sample and 4.3 p.p. during Covid-19 in stark contrast to men. On average, the presence of dependent children increases the likelihood of entering the labour market by 6.5 p.p. for men and only 2.3 p.p. for women over the entire period in analysis. During Covid-19, the magnitudes for women increase to 4.3 p.p.13 Compared to individuals with low education, more educated people are less likely to be in the labour force (between 0.5 p.p. and 1.7 p.p. for men; between 1.2 p.p. and 2.3 p.p. for women). In addition, receiving benefits of any kind decreases the probability of being in the labour force by around 23-25 p.p. for both men and women.
Footnote 13: Results for women support the empirical evidence of a reduced “child penalty” – i.e., the lower labour force participation of women with the arrival of children – on mother’s labour supply over the past decades (Boushey et al., 2005; Goldin, 2006).
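A minimal sketch of the entry-decision model behind Table 6 is shown below; the synthetic data and column names are illustrative stand-ins for the LFS variables, not the paper's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the LFS sample (illustrative only).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "non_eea": rng.integers(0, 2, n),
    "married": rng.integers(0, 2, n),
    "dep_children": rng.integers(0, 2, n),
    "benefits": rng.integers(0, 2, n),
})
latent = 0.5 - 0.10 * df.non_eea - 0.03 * df.married - 1.0 * df.benefits
df["in_labour_force"] = (latent + rng.normal(size=n) > 0).astype(int)

probit = smf.probit(
    "in_labour_force ~ non_eea + married + dep_children + benefits", data=df
).fit(disp=False)
print(probit.get_margeff().summary())  # average marginal effects, as in Table 6
```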
## 3 Conceptual Framework
### Gender Sectoral Segregation Index
Following Watts (1998), gender sectoral segregation can be defined as a disproportionate share of men or women in sectors of the economy, independently of the nature of the job allocation. A sector is _female dominated (fd)_ if the share of women employed in that sector is higher than the overall share of men in that sector; it is _male dominated (md)_ otherwise. In formulae, the classification criterion for gender sectoral dominance is as
follows:
\[\text{Sectoral Dominance}=\begin{cases}\text{Female}&\text{if }\frac{W_{it}}{W_{t}}> \frac{M_{it}}{M_{t}}\\ \text{Male}&\text{otherwise}\end{cases} \tag{1}\]
where \(W_{jt}\) and \(M_{jt}\) are respectively the total number of women and men employed in sector \(j\) (SIC 1-digit) at time \(t\); \(W_{t}\) and \(M_{t}\) are respectively the total number of female and male workers at time \(t\). The classification criterion defined in (1) uses a "majority voting" rule - i.e., the group with the largest number of members (either male or female) represents the sector.14 Table 7 reports the list of female-dominated sectors and male-dominated sectors based on the aforementioned criterion.
Footnote 14: The denominators in (1) are not total employment (i.e., male plus female employees) but total employment by gender group, providing the overall share of women (or men) in a sector. The advantage of the criterion consists in avoiding the use of conservative thresholds of more than 60% (Killingsworth, 1987) to classify a sector as female-dominated.
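A minimal pandas sketch of criterion (1) for a single period is given below, assuming a worker-level DataFrame with illustrative `sector` and `female` columns:

```python
import pandas as pd

def sectoral_dominance(df):
    """Label each sector female- or male-dominated following criterion (1)."""
    w_share = df[df.female == 1].groupby("sector").size() / (df.female == 1).sum()
    m_share = df[df.female == 0].groupby("sector").size() / (df.female == 0).sum()
    shares = pd.concat([w_share, m_share], axis=1, keys=["w", "m"]).fillna(0)
    return shares.apply(lambda r: "female" if r.w > r.m else "male", axis=1)
```

Applied period by period, the same rule yields the time-varying classification used in the paper.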
Classification criterion (1) is used to construct measures of concentration of workers in female-dominated (_fd_) and male-dominated (_md_) sectors. We define the Sectoral Segregation Index (\(SSI_{t}^{s}\)) as a measure of the degree of disproportion in the distributions of men and women in female- and male-dominated sectors at each time period. The index is based on the well-known Index of Dissimilarity, which is used in labour (Watts, 1998) and education economics (Zoloth, 1976; James and Taeuber, 1985) to study group compositions, and quantify the segregation among two groups (Cortese et al., 1976).
\(SSI_{t}^{s}\) is calculated for the two gender dominated sectors (\(SSI^{fd}\) and \(SSI^{md}\)) as follows:
\[SSI_{t}^{s}=\frac{1}{2}\sum_{j\in J_{s}}\left|\frac{W_{jt}}{W_{t}}-\frac{M_{jt} }{M_{t}}\right|\ \ \text{for all $t$ and $s\in\{md,fd\}$} \tag{2}\]
where \(J_{s}\) is the set of sectors in male-dominated (\(s=md\)) or female-dominated group (\(s=fd\)). The index ranges between 0 and 1. Large values of \(SSI^{fd}\) (\(SSI^{md}\)) flag large gender imbalance towards women (men), and indicate the proportion of women (men) that would have to either leave or enter each sector to avoid gender sectoral segregation. The value of the index remains unchanged when transferring workers between sectors within each gender group.15
Because the index informs on time-varying group imbalance within gender-dominated sectors, this information can be used to define another segregation measure for high and low-segregated sectors within each group. This is done by ranking sectors from the least to the most segregated, according to the values of \(SSI_{t}^{s}\). Specifically, sectors that, on average, display high (low) segregation are classified as highly (low) segregated sectors. Because the difference in the shares of female and male employees can be extremely low in some sectors or high in others, this additional index allows us to distinguish between high and low-segregated sectors within the two gender-dominated groups. Table 7 lists female-dominated sectors and male-dominated sectors divided by the degree of segregation.
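Reusing the `sectoral_dominance` helper from the previous sketch, \(SSI_{t}^{s}\) in equation (2) can be computed one period at a time as below (again, an illustrative sketch with our own variable names):

```python
def ssi(df, dominance):
    """Sectoral Segregation Index (2) within each gender-dominated group."""
    w = df[df.female == 1].groupby("sector").size() / (df.female == 1).sum()
    m = df[df.female == 0].groupby("sector").size() / (df.female == 0).sum()
    gap = w.subtract(m, fill_value=0).abs()       # |women share - men share| per sector
    return {
        group: 0.5 * gap[dominance[dominance == group].index].sum()
        for group in ("female", "male")
    }
```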
The distributions of \(SSI^{md}\) and \(SSI^{fd}\) are displayed in Figures 1 and 2 over the entire period of study (respectively, solid and long-dashed lines) and after EA2010 (respectively, dotted and short-dashed lines). Figure 1 documents that the aggregate gender segregation in the UK labour market is relatively small in both male and female-dominated sectors. The maximum level of the index is 0.174 in female-dominated sectors in both time samples, while 0.18 in the full-time sample for male-dominated sectors, and 0.173 after EA2010. After removing values of the index before the EA2010, the distribution of the index for male-dominated sectors shifts to the left, whereas that for female-dominate remains unchanged. This means that sectoral gender segregation decreased after 2010 in male-dominated sectors but not in female-dominated sectors.
Figure 2 distinguishes between male and female-dominated sectors with high and low gender segregation. Among low segregated sectors (left panel), gender segregation appears to be smaller in male-dominated sectors (with peaks at 0.005 and 0.014) than in female-dominated sectors (with a mean of 0.015). Among highly segregated sectors (right panel), the densities for male-dominated sectors are spread all over the support, while the densities for female-dominated sectors are centred around the mean of 0.32 in both time spans. Therefore, gender segregation in highly segregated female-dominated sectors appears, on average, smaller in magnitude than the one observed in male-dominated sectors.
The trend highlighted in the graphs suggests two plausible scenarios: the UK labour market may have experienced either a higher inflow of women into male-dominated sectors (in this case, the EA2010 may have played a positive role) or a higher transition of men into unemployment. To further shed light on these scenarios, we decompose the overall effect using a shift-share sectoral analysis in the next section.
### Shift-Share Decomposition of Employment
To better understand the determinants of the change in the shares of female employment, we adopt a revised version of Olivetti and Petrongolo's (2016) shift-share decomposition16. The growth of female employment share is decomposed into a first component that captures the change in the total _employment share_ of the sector (_between component_), and a second component reflects changes in _gender composition_ within the sector (_within component_):
Footnote 16: Unlike the original paper that uses the number of worked hours, we use the employment shares. Razzu et al. (2020) present an extension of Olivetti and Petrongolo’s (2016) decomposition considering the role of changing types of employment within industry sectors according to education from 1971 to 2016 in the UK.
\[\Delta e^{f}_{st}=\underbrace{\sum_{j=1}^{J_{s}}\alpha^{f}_{jt}\Delta e_{jt}}_ {\text{Between-sector}}+\underbrace{\sum_{j=1}^{J_{s}}\alpha_{jt}\Delta e^{f}_ {jt}}_{\text{Within-sector}}\ \ \text{for all $s,t$} \tag{3}\]
where \(\Delta e^{f}_{st}=\frac{E^{f}_{st}}{E_{st}}-\frac{E^{f}_{t_{0}}}{E_{t_{0}}}\) is the difference in the share of female employment between the base time period \(t_{0}\) and the current time period \(t\); \(\Delta e_{jt}=\frac{E_{jt}}{E_{t}}-\frac{E_{jt_{0}}}{E_{t_{0}}}\) is the difference in the share of total employment in sector \(j\) between \(t_{0}\) and \(t\); \(\Delta e^{f}_{jt}=\frac{E^{f}_{jt}}{E_{jt}}-\frac{E^{f}_{jt_{0}}}{E_{jt_{0}}}\) is the difference in the share of female employment in sector \(j\); \(\alpha^{f}_{jt}=\frac{(e^{f}_{jt_{0}}+e^{f}_{jt})}{2}\) and \(\alpha_{jt}=\frac{(e_{jt_{0}}+e_{jt})}{2}\) are decomposition weights (i.e., the average share of female employment in sector \(j\) and the average share of sector \(j\), respectively). The reference year is the first available year in the dataset (\(t_{0}=2005\)); \(s\) stands for sectors classified as female/male dominated according to Equation (1).
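A sketch of decomposition (3) for one comparison period is given below, assuming DataFrames of employment counts by sector and gender (column names are ours); restricting `E0` and `E1` to the sectors of one gender-dominated group yields the group-specific components.

```python
def shift_share(E0, E1):
    """Between- and within-sector components of equation (3).

    E0, E1: DataFrames indexed by sector with columns ['female', 'male']
    holding employment counts in the base and the current period.
    """
    tot0, tot1 = E0.sum(axis=1), E1.sum(axis=1)      # sector employment totals
    e0, e1 = tot0 / tot0.sum(), tot1 / tot1.sum()    # sector shares of total employment
    f0, f1 = E0.female / tot0, E1.female / tot1      # female share within each sector
    alpha_f = (f0 + f1) / 2                          # avg female share in sector j
    alpha = (e0 + e1) / 2                            # avg share of sector j
    between = (alpha_f * (e1 - e0)).sum()
    within = (alpha * (f1 - f0)).sum()
    return between, within
```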
Figure 3 displays the shift-share decomposition of female employment. The graph shows the difference in employment in the comparison year concerning the base year (i.e., the fiscal year 2005) for women. The overall change in employment is shown in the solid
line and its decomposition into the _between_ and _within_ components, respectively, with dashed and dotted lines. The cross marks the components for female-dominated sectors, and the circle for male-dominated sectors. In this way, we can investigate which term drives the overall change in employment and assess the effect of economic downturns and policies.
Female composition (_within_ component) in the top graph started to increase gradually in male-dominated sectors after the EA2010. In contrast, it suddenly increased in female-dominated sectors after the economic crisis in 2008 but then decreased after 2012. As expected, the _between_ and _within_ components in female-dominated sectors dropped in 2020 due to the pandemic outbreak. Conversely, there was a rapid rise in female employment in male-dominated sectors. Total employment shares in female-dominated sectors (_between_ component) were almost close to the levels of the base year until 2008; after that, there was a rise in female employment in female-dominated sectors that was arrested by the Covid-19 outbreak. These results are in line with the literature assessing that during the recession period of the 2007-2008 crisis, female employment was generally affected less than male employees, while during the recovery phase, male employment recovered faster than female employment (Hoynes et al., 2012; Doepke and Tertilt, 2016; Ellieroth et al., 2019)17.
Footnote 17: However, Razzu et al. (2020) emphasise how gender segregation across industry sectors and occupations in the UK exacerbates women’s employment and pay gap during the business cycle.
Overall, the shift-share decomposition highlights interesting facts. First, the 2007-2008 crisis harshly hit male-dominated sectors while stimulating female employment in female-dominated sectors.18 On the contrary, from the first evidence, the Covid-19 outbreak arrested the overall employment in both male and female-dominated sectors. It led to a reduction in female employment in female-dominated sectors, in stark contrast to male-dominated sectors. Second, from a counterfactual perspective, the EA2010 did stimulate female employment from the demand side, as we observe a substantial increase in female composition in male-dominated sectors after 2010. This means that a higher
proportion of women were employed within each male-dominated sector at the expense of decreasing male employment (the contrast is visible in the graph for male employment in Figure B.1 in the Appendix).
## 4 Empirical Strategy
### Estimating Gender Sectoral Segregation on Employment Contracts and Wages
We now evaluate the contribution of the gender sectoral segregation on the average difference in labour market outcomes (i.e., permanent jobs, part-time jobs, working hours, remote work, and hourly wages) _between_ gender-dominated sectors among workers with similar observable skills and socio-demographic characteristics. Therefore, a propensity score matching (PSM) approach using "working in female-dominated sectors" as treatment status is adopted. The underlying assumption is that workers who choose to work in female- and male-dominated sectors only differ in the endowment of their observed skills and human capital accumulation.
Let \(p(\mathbf{X})=Pr(D|\mathbf{X})\) be the propensity score such that \(p(\mathbf{X}_{i})\in(0,1)\), where \(D\) is the treatment and \(\mathbf{X}\) a set of observable controls.19 Provided that the assumptions of the PSM are satisfied, the average treatment effect (ATE) is
Footnote 19: The propensity scores obtained from a Probit regression model are used to match control and treated units, and are reported in Table A.2. The choice of covariates is based on the relevant literature on gender segregation (e.g., Petrongolo, 2004) and the model selection performed by lasso, and include socio-demographic characteristics and work-related information (e.g., occupation). Table A.3 in Appendix reports the selected covariates from the penalised regressions.
\[\tau^{ATE}=\mathbb{E}[y_{1}-y_{0}]=\mathbb{E}\Bigg{[}\frac{D-p(\mathbf{X})}{p( \mathbf{X})(1-p(\mathbf{X}))}\ y\Bigg{]} \tag{4}\]
Controlling for the propensity score eliminates the selection bias on observable factors that arises when workers self-select into jobs (Cameron and Trivedi, 2005). The comparison _between_ gender-sectoral dominance is done for the pooled sample (men and women together), the male sample, and the female sample.
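The estimand in (4) can equivalently be computed by inverse-probability weighting; the sketch below is a hedged illustration of that formula, not the matching estimator actually used for Table 8.

```python
import numpy as np

def ipw_ate(y, d, pscore, eps=0.01):
    """ATE via the weighting identity in equation (4).

    y: outcome; d: treatment indicator (1 = female-dominated sector);
    pscore: estimated propensity scores, trimmed away from 0 and 1.
    """
    p = np.clip(pscore, eps, 1 - eps)
    return np.mean((d - p) / (p * (1 - p)) * y)
```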
A standard sensitivity analysis is conducted to check the balancing property of the covariates before and after matching in the treated and non-treated groups. The covariates are balanced if the standardised bias after matching is within \(\pm 5\%\)(Rosenbaum and Rubin, 1985). The matching method successfully builds a meaningful control group if the condition is satisfied.
### Estimating Wages in Gender-Specific Dominated Sectors
We now focus on the gendered differences in hourly wages in male- and female-dominated sectors based on observable and unobservable characteristics. For this purpose, we first perform the counterfactual KBO decomposition to examine the components that drive wage differentials within male- and female-dominated sectors. We then run Mincerian wage regressions to explore the role of human capital and retrieve the predicted and residual wages.
#### 4.2.1 Decomposing the Gender Wage Differentials
To study gender wage differentials over time within female- and male-dominated sectors, we use a three-fold KBO decomposition20. This method decomposes the average difference in log hourly wages by gender in three components: a part that is explained by observable group differences in productivity and background characteristics (_endowment effect_); a part that, due to differences in the coefficients, includes differences in the intercept (_coefficient effect_); and a residual component that cannot be explained by such
observed differences in the outcome variable (_unexplained effect_). In formulae,
\[\begin{split}\underbrace{\mathbb{E}(y_{ml})-\mathbb{E}(y_{fml})}_{ \text{overall difference}}&=\underbrace{[\mathbb{E}(\mathbf{X}_{ml})- \mathbb{E}(\mathbf{X}_{fml})]^{\prime}\boldsymbol{\beta}_{fml}}_{\text{ endowment effect}}\\ &+\underbrace{\mathbb{E}(\mathbf{X}_{fml})^{\prime}( \boldsymbol{\beta}_{ml}-\boldsymbol{\beta}_{fml})}_{\text{coefficients effect}}\\ &+\underbrace{[\mathbb{E}(\mathbf{X}_{ml})-\mathbb{E}(\mathbf{X }_{fml})]^{\prime}(\boldsymbol{\beta}_{ml}-\boldsymbol{\beta}_{fml})}_{\text{ interaction effect}}\end{split} \tag{5}\]
where \(\mathbf{X}\) is a vector containing the covariates used in Section 4.2.2, such as, socio-demographic variables, human-capital variables, work-related variables, and a constant term; and \(\boldsymbol{\beta}\) is a vector of slope parameters and the intercept; \(fml\) stands for women and \(ml\) for men.
When the _endowment effect_ is negative, female workers possess better predictors (i.e., characteristics) than their male counterparts. When the _coefficient effect_ is positive, discrimination towards women explains wage differential. In the following paragraphs, we further investigate the role of each component of the KBO decomposition, such as human capital, individual potential wages, and residual wages.
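Equation (5) translates directly into code; the sketch below assumes design matrices that already include a constant column and estimates the group-wise coefficients by OLS.

```python
import numpy as np

def kbo_threefold(X_m, y_m, X_f, y_f):
    """Three-fold decomposition of mean(y_m) - mean(y_f), equation (5)."""
    b_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)   # male-group OLS coefficients
    b_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)   # female-group OLS coefficients
    dX = X_m.mean(axis=0) - X_f.mean(axis=0)
    endowment = dX @ b_f                              # explained by characteristics
    coefficient = X_f.mean(axis=0) @ (b_m - b_f)      # returns to characteristics
    interaction = dX @ (b_m - b_f)                    # simultaneous differences
    return endowment, coefficient, interaction
```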
#### 4.2.2 The Contribution of Human Capital
We use a Mincerian regression to analyse how women's human capital and observable skills affect wage differences between sectors and genders, as follows:
\[\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\delta_{t}+\boldsymbol{\epsilon} \tag{6}\]
where \(\mathbf{y}\) is hourly wages in logarithm; \(\mathbf{X}\) is \(N\times k\) matrix of control variables (i.e., socio-demographic, human-capital and work-related variables); and \(\delta_{t}\) are the time fixed effects.
The set of controls includes three groups of variables as follows. _Socio-demographic variables_ include age and its square, nationality, ethnicity, religion, being in a stable relationship, having dependent children and the interaction of the last two. _Human-capital variables_ are education, experience and its square, years in education and its
square, and training offered by the current employer. _Work-related variables_ include a dummy for female-dominated sectors, a dummy for low gender sector segregation, a dummy for working in the public sector, and the type of occupation. Working region dummies are included. Equation (6) is estimated using OLS.
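A hedged sketch of the Mincerian specification (6) with statsmodels follows; the column names are illustrative stand-ins for the controls listed above.

```python
import statsmodels.formula.api as smf

def mincer_fit(df):
    """OLS estimate of equation (6) on a worker-level DataFrame."""
    formula = (
        "log_hourly_wage ~ age + I(age**2) + experience + I(experience**2)"
        " + years_education + I(years_education**2) + training"
        " + female_dominated + low_segregation + public_sector"
        " + C(occupation) + C(region) + C(year)"   # C(year): time fixed effects
    )
    return smf.ols(formula, data=df).fit(cov_type="HC1")
```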
#### 4.2.3 The Role of Predicted and Residual Wages
In this section, we use the results of Mincerian regressions presented in Section 4.2.2 to: (i) calculate the predicted hourly wages - that measure the individual wage potential based on observable factors; and (ii) the residual wages - that capture the part of wage uncorrelated with skills for each sub-group of workers. The sub-groups include: men in male-dominated sectors (_ml_, _ml-dom_), women in male-dominated sectors (_fml_, _ml-dom_), men in female-dominated sectors (_ml_, _fml-dom_), women in female-dominated sectors (_fml_, _fml-dom_). This approach is similar to the one used in the migration literature for selection based on predicted wages (Parey et al., 2017) and unobservables (Gould and Moav, 2016; Borjas et al., 2019).
We conduct a counterfactual exercise in which we examine the trajectory of wage potentials and residuals for each subgroup if the workers had the same estimated coefficients of men working in male-dominated sectors,
\[\hat{\mathbf{y}}^{c}_{g,gdom} =\mathbf{X}_{g,gdom}\hat{\boldsymbol{\beta}}_{ml,ml-dom} \tag{7}\] \[\hat{\mathbf{u}}^{c}_{g,gdom} =\mathbf{y}_{g,gdom}-\hat{\mathbf{y}}^{c}_{g,gdom} \tag{8}\]
where \(g=\{ml,fml\}\) and \(gdom=\{ml-dom,fml-dom\}\). Predicted and residual wages are sorted and used to construct the Cumulative Distribution Functions (CDF) by gender and gender sectoral dominance. We can then compare the CDFs of men and women _between_ and _within_ gender-sectoral dominance. The Kolmogorov-Smirnov (K-S) test checks whether the distributions of the (actual and counterfactual) predicted and residual wages are statistically different among the four sub-groups.
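Equations (7)-(8) and the distributional comparison can be sketched as follows; `beta_ml_mldom` denotes the coefficients estimated on men in male-dominated sectors, and the K-S comparison uses SciPy's two-sample test.

```python
import numpy as np
from scipy.stats import ks_2samp

def counterfactual_wages(X_group, y_group, beta_ml_mldom):
    """Equations (7)-(8): counterfactual predicted and residual wages."""
    y_hat = X_group @ beta_ml_mldom   # potential wage at male/male-dominated returns
    resid = y_group - y_hat           # part of the wage uncorrelated with skills
    return y_hat, resid

# Example comparison of residual-wage distributions between two subgroups,
# e.g., women vs men within female-dominated sectors:
# stat, pval = ks_2samp(resid_fml_fmldom, resid_ml_fmldom)
```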
## 5 Estimation Results
### Estimation Results for the PSM on Contracts and Wages
Table 8 reports the average treatment effects (ATE) after matching for several labour outcome variables by samples (i.e., pooled, men, and women)21.
Footnote 21: The propensity scores for matching treated and control units come from estimates reported in Table A.2 in the Appendix. The table shows the likelihood of a worker being employed in a female-dominated sector based on socio-demographic characteristics and working environment. Being a woman in a stable relationship without dependent children decreases the probability of working in a female-dominated sector. Having dependent children, being non-European and working in operative jobs, technical and secretarial occupations reduces the likelihood of being in female-dominated occupations.
Looking at the contractual features in the pooled sample, we found that if a worker in a female-dominated sector were hired in a male-dominated sector, they would work 13.5 p.p. less part-time and 4.4 p.p. more from home, and their worked hours would increase by 12.5 p.p. This remains valid when we examine the effect for men and women separately. That is, both men and women in female-dominated sectors would work more hours (11.6 and 11.8 p.p., respectively), less part-time (13.1 and 13.3 p.p., respectively), and more from home (2.8 and 5.6 p.p., respectively) if they were employed in male-dominated sectors. All estimates are significantly different from zero at a 1% significance level.
The difference in having a permanent job between a worker employed in female-dominated sectors and one in male-dominated sectors is not significant in the pooled sample. In other words, there is no difference in the types of contracts (permanent vs temporary) offered to similar workers in the two gender-dominated sectors. When estimating the effect by gender, we observe that the difference is significant for men but not for women. In particular, men in female-dominated sectors would be hired with temporary contracts by 3 p.p. more if they were in male-dominated sectors.
Regarding wage differentials, any worker in female-dominated sectors would be paid 9.4 p.p. more if employed in male-dominated sectors. Women in female-dominated sectors earn 8.7 p.p. less than their counterparts in male-dominated sectors. However, the wage differential is more pronounced among men. That is, men in female-dominated
sectors earn 11.4 p.p. less than their peers in male-dominated sectors. All estimates are significant at the 1% level. This is consistent with the findings of the "comparable worth" literature, namely that jobs or occupations dominated by women pay all employees less, on average (Treiman et al., 1981; Killingsworth, 1987), and that the wage effect of being in such jobs is more negative for men than for women (Roos, 1981; Johnson and Solon, 1984).
The sensitivity analysis in Figure 4 confirms that the balancing property is satisfied for all samples since all covariates are well balanced - with standardised bias after matching between \(\pm 5\%\). Overall, the matching method effectively built a valid control group.
These results suggest that gender sectoral segregation can explain observed differences in employment contracts (i.e. part-time, permanent, remote work, number of weekly working hours) and wage differentials. We indeed observe that contractual features typical of a specific gender (e.g., part-time jobs and low wages for women) are more common in sectors dominated by that group.
### 5.2 Estimation Results for Wages
#### 5.2.1 Results for the KBO
The evolution of the three components of the KBO decomposition and their sum over time is shown in Figure 5 by gender sectoral segregation. Women are contrasted to men within the same gender-dominated sector. The dashed line represents the _coefficient effect_, the long-dashed line the _endowment effect_ and the dotted line the part of the "unexplained" component of the three-fold decomposition (or _interaction effect_). The shadowed regions are the corresponding 95% confidence intervals. The solid line is the sum of the three effects and reveals their overall contribution22
Footnote 22: For the contribution of each of the socio-demographic characteristics, human capital attributes and sectoral indicators, see Tables A.4-A.6 in the Appendix.
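For reference, the three components correspond to the standard three-fold Kitagawa-Blinder-Oaxaca decomposition of the mean wage gap between men (\(m\)) and women (\(f\)); a sketch with women as the reference group (the paper's exact reference-group convention may differ):
\[\overline{W}_{m}-\overline{W}_{f}=\underbrace{(\overline{X}_{m}-\overline{X}_{f})^{\prime}\hat{\beta}_{f}}_{\text{endowment effect}}+\underbrace{\overline{X}_{f}^{\prime}(\hat{\beta}_{m}-\hat{\beta}_{f})}_{\text{coefficient effect}}+\underbrace{(\overline{X}_{m}-\overline{X}_{f})^{\prime}(\hat{\beta}_{m}-\hat{\beta}_{f})}_{\text{interaction effect}}\]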
We observe that the _coefficient effect_ is positive in both gender-dominated sectors. While it seems to vary around a trend in male-dominated sectors, it steadily decreases over time in female-dominated sectors. This suggests that women should be paid more than men to prevent discriminatory constraints between the two groups.
The dynamics of the _endowment effect_ differ in male- and female-dominated sectors. Specifically, women working in male-dominated sectors have, on average, better human capital than men before 2010 and after 2018. However, the _endowment effect_ is positive between 2010 and 2018, meaning that women have worse observed characteristics than men. Conversely, men and women employed in female-dominated sectors are, on average, similar in terms of human capital, as the _endowment effect_ is very close to zero.
The _unexplained component_ in male-dominated sectors positively contributes to pushing the differential wage upwards before 2010 but negatively afterwards. This captures the remaining potential effects of differences in unobserved factors other than human capital contributing to shaping the trajectories of wages in these sectors. The literature usually associates these factors with behavioural traits, such as self-esteem, ambition, bargaining power, risk aversion, lack of competition, etc. (Gneezy et al., 2003; Gneezy and Rustichini, 2004; Booth, 2009; Bertrand, 2011; Saccardo et al., 2018).
Overall, the _coefficient effect_ prevails over the other two, despite being partly offset by the negative _unexplained effect_ in male-dominated sectors.
#### 5.2.2 Results based on Human Capital Factors
Table 9 reports the estimated coefficients of the Mincerian wage regression23 by gender for pooled sectors (Columns 1-2) and gender-dominated sectors (Columns 3-6).
Footnote 23: Usual worked hours per week and its square are not included in the regression specifications because of possible endogeneity issues due to reverse causality. In addition, because hourly wages are calculated based on usually worked hours per week, estimates would be downward biased due to the division bias (Borjas, 1980).
Looking at socio-demographic characteristics, age positively contributes to higher wages, despite the small magnitudes. On average, European (EEA) and non-European (non-EEA) workers earn less than UK natives in all samples. However, the reduction is, on average, larger for EEA than non-EEA workers; it is larger for EEA workers in male-dominated sectors but for non-EEA workers in female-dominated sectors. The presence of dependent children has a strong negative correlation with women's wages in all samples, in stark contrast with male estimates, which are positive. The above effect is attenuated for married women with dependent children (2.0 p.p. in the total sample), with a higher magnitude in male-dominated sectors (4.2 p.p.). The estimates are non-significant for men.
Looking at the human capital variables, workers with higher educational attainment24 earn, as expected, more than those with low education; magnitudes are slightly higher for women than men for high education in all samples. As expected, more years of education increase wages but with a diminishing effect (the square is negative). From the estimates of years of education, we find that the optimal number of years in education that maximises wages is approximately 15.8 years for men as opposed to 19.5 years for women in the total sample25. Therefore, women are required to have higher education than men who need only a degree to earn optimal wages.26 Potential working experience has significant diminishing returns (the coefficient of experience is positive and its square negative but very small), and receiving training increases the hourly wage, especially in male-dominated sectors.
Footnote 24: In the Mincerian regression, we included both the categorical variable for education band (low, intermediate, and higher education) and the continuous variable for years of education and its square. The OLS assumption of the absence of perfect multicollinearity is not violated because years of education capture the intensity of the returns of education within an education band. The information provided by the two variables is complementary.
Footnote 25: The figures come from the following calculations: \(0.158/(2\times 0.005)\approx 16\) for men, and \(0.117/(2\times 0.003)\approx 20\) for women.
Footnote 26: The optimal number of years of education for women in female-dominated sectors is 18 (\(=0.108/(2\times 0.003)\)) while approximately 16 (\(\approx 0.130/(2\times 0.004)\)) for men; in male-dominated sectors it is 16 years for men and 17 for women.
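The optimal years of education quoted in these footnotes come from the first-order condition on the quadratic schooling term in the Mincerian specification; as a short worked step (with \(s\) denoting years of education and generic coefficients \(\beta_{1}>0>\beta_{2}\)):
\[\ln w=\cdots+\beta_{1}s+\beta_{2}s^{2}+\cdots,\qquad\frac{\partial\ln w}{\partial s}=\beta_{1}+2\beta_{2}s=0\implies s^{*}=\frac{\beta_{1}}{2|\beta_{2}|},\]
so, e.g., \(s^{*}=0.158/(2\times 0.005)=15.8\) for men in the total sample.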
As for workplace characteristics, working in the public rather than in the private sector is associated with higher wages for women, with higher magnitudes than for men. However, the coefficients are non-significant in male-dominated sectors. This suggests that the private sector pays more in male-dominated sectors while the public sector offers better remuneration in female-dominated sectors. As expected, working part-time is negatively correlated with hourly wage (magnitudes are higher for men, suggesting a higher penalty for them). Working in sectors with low gender sectoral segregation is associated with higher wages for male workers only in the pooled sample but negatively correlated with wages for women in both female- and male-dominated sectors. Working in female-dominated sectors as opposed to male-dominated sectors is negatively correlated with hourly wages (16.3 p.p. for men vs. 15.8 p.p. for women). The interaction term between female-dominated sectors and low gender segregation is positive and significant for women only.
#### 5.2.3 Results based on Predicted and Residual Wages
This section discusses empirical evidence on the differences in the selection of workers in male- and female-dominated sectors in terms of observable (predicted wages) and unobservable (residual wages) characteristics.
Figures 6 and 7 respectively display the CDFs of the potential and residual wages for men and women employed in male- and female-dominated sectors. The CDFs on the left sort the actual predicted and residual wages calculated using the estimated coefficients for each subgroup of Table 9. The CDFs on the right report sorted counterfactual predicted and residual wages calculated with the estimated coefficients of men working in male-dominated sectors. The solid line is for men in male-dominated sectors (_ml_, _ml-dom_), the short-dashed line for women in male-dominated sectors (_fml_, _ml-dom_), the long-dashed line for men in female-dominated sectors (_ml_, _fml-dom_), the dash-dotted line for women in female-dominated sectors (_fml_, _fml-dom_).
From the left graph in Figure 6, women who work in female-dominated sectors have lower predicted wages than those working in male-dominated sectors and all male workers (their CDFs always lie to the left). For low levels of potential wages, men employed in female-dominated sectors earn much less than women in male-dominated sectors. However, the gap vanishes completely when moving to the top of the distribution. Looking at the counterfactual exercise on the right, the horizontal distance between the four CDFs shrinks considerably when the estimated coefficients of men in male-dominated sectors are used to predict hourly wages. This means that if workers had the same potential wages as men in male-dominated sectors, then wage differentials of men and women across female- and male-dominated sectors would be smaller. Interestingly, for low levels of potential counterfactual wages, women employed in female-dominated sectors would earn slightly more than men in female-dominated sectors. But as the potential counterfactual wages increase, the two CDFs cross and diverge, so that men would earn more. Women in female-dominated sectors would always be paid less than those in male-dominated sectors, who would be rewarded much more in low-paid jobs than men in male-dominated sectors. However, these women would always earn less than men in male-dominated sectors. The differential increases considerably as we move to the top of the wage distribution.27
Footnote 27: These findings contrast with Roos (1981) and Johnson and Solon (1984), who always find a more pronounced wage differential for men than women.
In the left graph of Figure 7, the CDFs of residual wages of women employed in female-dominated sectors do not coincide with the other three curves, lying to their right for low residual wages and to their left for high values. In other words, these women earn more in low-paid jobs but much less in high-paid jobs than the other groups of workers for reasons other than their skills and human capital. The counterfactual exercise (to the right) helps assess the residual difference in wages across sub-groups as we fix the estimated coefficients to those of men in male-dominated sectors. All curves shift to the left of the CDF of male workers in male-dominated sectors, showing that all other sub-groups are negatively selected with respect to the former. Their counterfactual residuals are smaller than those of the benchmark. In particular, the CDF of women in female-dominated sectors is the most distant from the benchmark, especially at the top of the distribution. However, at the bottom of the distribution, we no longer observe a positive selection of women in female-dominated sectors. This suggests that differences in wages in high-paid jobs cannot be attributed to acquired skills or accumulated human capital only.
From the K-S test reported in Table 10, all test statistics are significant at the 1% level. Therefore the null hypothesis of equality of distributions among the four sub-groups is strongly rejected, confirming that the distributions of (actual and counterfactual) predicted and residual wages of men and women across sectors differ.
Overall, female-dominated sectors are not as rewarding as male-dominated sectors in monetary terms, especially for middle and low-paid jobs. Observed and counterfactual results document the negative selection of women in female- and male-dominated sectors with respect to men in the same gender-sectoral dominance, especially at the top of the wage distribution. Negative selection of women suggests that their returns will always be lower than those of comparable men based on both observable and unobservable characteristics.
## 6 Conclusion
This work investigated how sectoral gender segregation in the UK shapes differences in contracts (i.e., permanent jobs, part-time jobs, working hours, remote work) and hourly wages across male and female workers between fiscal years 2005 and 2020. We further analysed how wages differ in female- and male-dominated sectors by looking at both observable and unobservable characteristics.
Our measures of sectoral gender segregation (Sectoral Dominance Indicator and Sectoral Segregation Index) first suggest a reduction in the level of segregation across years, especially after the gradual implementation of reforms by the UK government aiming at promoting gender equality. However, our empirical analysis suggests that the persistent imbalance in the shares of men or women in some sectors contributes to explaining the differences in employment contracts and wages. We found that contractual characteristics typical of a specific gender (e.g., part-time for women) are much more common in sectors dominated by that group. This means that even controlling for the occupational composition, any worker employed in female-dominated sectors is working on average more part-time, fewer hours and less from home than their counterparts in male-dominated sectors. Interestingly, men in female-dominated sectors would be offered, on average more temporary jobs if hired in male-dominated sectors. In addition, sectors with higher shares of women offer lower wages than those dominated by men. That is, workers employed in female-dominated sectors are, on average, paid 9.4 p.p. less than those in male-dominated sectors. This result is confirmed when we compare the same gender between male- and female-dominated sectors.
The decomposition of wage differentials by gender shows that ongoing discriminatory constraints mainly explain the difference within male- and female-dominated sectors. In stark contrast, differences in human capital and observable characteristics play a minor role. This means that women have observable attributes similar to men regarding accumulated human capital, and without these discriminatory constraints, wage differentials between women and men within male- and female-dominated sectors would be lower. However, women in female-dominated sectors have lower predicted wages than those working in male-dominated sectors and all male workers. Overall, predicted wage differentials between and within female- and male-dominated sectors would be smaller if workers had the same potential wages as men in male-dominated sectors but not at the top of the wage distribution. Accounting for unobserved factors, women in female-dominated sectors would always earn less than men in female-dominated sectors and workers in male-dominated sectors for reasons other than differences in skills due to the negative selection in the labour market, _ceteris paribus_.
This analysis has policy implications. Gender segregation in the labour market may be responsible for causing more challenges for women than their male counterparts regarding labour participation, access to jobs and career opportunities. This gap could potentially widen in the post-pandemic. Our findings can provide policy-makers with empirical evidence supporting appropriate reforms favouring vulnerable categories of workers (i.e., women, mothers, and immigrants) and policies designed to sustain long-run economic growth, especially as the UK is facing new challenges (i.e., pandemic and Brexit). |
2310.04401 | Neighbour Sum Patterns : Chessboards to Toroidal Worlds | We say that a chessboard filled with integer entries satisfies the
neighbour-sum property if the number appearing on each cell is the sum of
entries in its neighbouring cells, where neighbours are cells sharing a common
edge or vertex. We show that an $n\times n$ chessboard satisfies this property
if and only if $n\equiv 5\pmod 6$. Existence of solutions is further
investigated for rectangular and toroidal boards, as well as on Neumann
neighbourhoods, including a nice connection to discrete harmonic functions.
Constructions of solutions on infinite boards are also presented. Finally,
answers to three dimensional analogues of these boards are explored using
properties of cyclotomic polynomials and relevant ideas are conjectured. | Sayan Dutta, Ayanava Mandal, Sohom Gupta, Sourin Chatterjee | 2023-10-06T17:50:53Z | http://arxiv.org/abs/2310.04401v1 | # Neighbour Sum Patterns : Chessboards to Toroidal Worlds
###### Abstract
We say that a chessboard filled with integer entries satisfies the _neighbour-sum property_ if the number appearing on each cell is the sum of entries in its neighbouring cells, where neighbours are cells sharing a common edge or vertex. We show that an \(n\times n\) chessboard satisfies this property if and only if \(n\equiv 5\pmod{6}\). Existence of solutions is further investigated of rectangular, toroidal boards, as well as on Neumann neighbourhoods, including a nice connection to discrete harmonic functions. Construction of solutions on infinite boards are also presented. Finally, answers to three dimensional analogues of these boards are explored using properties of cyclotomic polynomials and relevant ideas conjectured.
_key-words_: p-adic valuation, Kronecker product, spatial lattice, discrete harmonic function, cyclotomic polynomials
## 1 Introduction
The Regional Mathematical Olympiad (RMO) is the second of a series of math tests held in India, which all leads up to participation in the International Mathematical Olympiad (IMO).
Our inspiration stems from RMO 1991 Problem 8 -
_The 64 squares of an 8 \(\times\) 8 chessboard are filled with positive integers in such a way that each integer is the average of the integers on the neighbouring squares. (Two squares are neighbours if they share a common edge or a common vertex. Thus a square can have 8, 5 or 3 neighbours depending on its position). Show that all the 64 integer entries are in fact equal._
**Brief Solution**: _Any given entry must lie in between the smallest and largest entries of its neighbours. Thus, the largest entry on the board must be surrounded by identical entries. This forces all entries to be equal._
While this has a surprisingly bleak answer, a small modification to the criterion might not be so! This is the analogue we explore in this paper:
_An \(n\times n\) chessboard for \(n\geq 3\), with each square bearing an integer, is said to have the **neighbour-sum** property if each number is the sum of the numbers on the neighbouring squares (Such a collection of elements is called a **solution**). Two squares are neighbours if they share a common edge or a common vertex. A chessboard with all squares bearing the number zero is said to be a **trivial** solution._
For an \(n\times n\) chessboard, how many distinct non-trivial solutions are there?
At a first glance, the problem might seem to be rooted in combinatorics, but certain observations favour a different angle. It is clear that given a non-trivial solution (\((x_{ij})\)) of the matrix representation \(X\) of a chessboard (of dimension \(n\times n\) for some \(n\)), all matrix representations of the form \(\alpha X=((\alpha x_{ij})),\ \alpha\in\mathbb{Z}\backslash\{0\}\) are also valid non-trivial solutions. Furthermore, sum of two solutions is also another solution. This motivates the idea of a transformation which contains the vectorisation [6] of \(X\) in its kernel1.
Footnote 1: The vectorisation of a matrix \(A\), denoted by \(\operatorname{vec}(A)\), is a vector formed by stacking the columns of A in a top-down format.
## 2 Finding square boards with such solutions
Trying to resolve our problem has led us to a critical observation - sum of solutions is a solution. It remains to produce an appropriate transformation \(T\) such that for a chessboard \(\mathfrak{G}\in M_{n}(\mathbb{Z})\) (the use of this notation being a subtle foreshadowing) with its vectorisation \(\operatorname{\textbf{vec}(\mathfrak{G})}\in\mathbb{Z}^{n^{2}}\), one has
\[T\operatorname{\textbf{vec}(\mathfrak{G})}=\mathbf{0}\]
Figure 1: King’s Graph on a standard \(8\times 8\) chessboard. Image Courtesy: David Epstein
An immediate transformation is one that replaces every element with the sum of its neighbours minus the element itself. Clearly, all solutions would be in the kernel of this transformation. For a \(2\times 2\) chessboard,
\[\mathfrak{G}_{2\times 2}=\begin{array}{|c|c|}\hline x_{1}&x_{3}\\ \hline x_{2}&x_{4}\\ \hline\end{array}\]
the corresponding \(4\times 4\) transformation would be given by
\[T_{4\times 4}=\begin{pmatrix}-1&1&1&1\\ 1&-1&1&1\\ 1&1&-1&1\\ 1&1&1&-1\end{pmatrix}\]
This is non-singular, so this does not have a non-trivial kernel. Note that we are only interested in \(n\geq 3\), and the same can be shown for \(T_{3\times 3}\) for \(n=3\). But are there some values of \(n\) for which non-trivial solutions can be present? This takes us to our main result.
### Existence of solutions
**Theorem 1**.: _An \(n\times n\) chessboard has a non-trivial solution of the neighbour-sum property if and only if \(6\mid(n+1)\)._
This is a very specific result, and to prove it we will first have to introduce some key ideas.
Let \(\mathfrak{G}\) be an \(n\times n\) chessboard, and let \(T_{n}\) be the transformation mentioned above. Then, one can write \(A_{n}=T_{n}+\mathbb{I}_{n^{2}}\), where \(A_{n}\) is precisely the adjacency matrix of the square chessboard where adjacency is only amongst neighbours. Denote the \(i\)-th element of \(\mathbf{vec}(\mathfrak{G})\) as \(\mathfrak{G}_{i}\). Then, the \(i\)-th entry of the vector \(A_{n}\,\mathbf{vec}(\mathfrak{G})\) gives the sum of the neighbours of the square \(\mathfrak{G}_{i}\). This means that the neighbour-sum property can be expressed as \(A_{n}\,\mathbf{vec}(\mathfrak{G})=\mathbf{vec}(\mathfrak{G})\).
**Proposition 2**.: _The set of all vectorised solutions is in \(\ker(T_{n})\), which always contains the trivial solution._
**Proposition 3**.: _Define \(B_{n}\in M_{n}(\mathbb{Z})\) with \(B_{ij}=1\) when \(|i-j|\leq 1\) and \(B_{ij}=0\) otherwise. Then,_
\[A_{n}=B_{n}\otimes B_{n}-\mathbb{I}_{n^{2}},\qquad T_{n}=B_{n}\otimes B_{n}-2 \mathbb{I}_{n^{2}}.\]
Proof.: Note that \(B_{n}\) can be interpreted as the adjacency matrix ([2], pg. 7) of a graph \(G_{n}\) on vertices \(\{1,\ldots,n\}\), where \(i,i^{\prime}\) are neighbours when \(|i-i^{\prime}|\leq 1\). In the Cartesian product ([8], pg. 115 - 116) \(G_{n}\times G_{n}\), whose adjacency matrix is \(B_{n}\otimes B_{n}\), we have an edge between \((i,j)\) and \((i^{\prime},j^{\prime})\) precisely when \(|i-i^{\prime}|\leq 1\) and \(|j-j^{\prime}|\leq 1\). Removing self loops by subtracting \(\mathbb{I}_{n^{2}}\) from the adjacency matrix yields the transformation \(A_{n}\) as desired.
_Remark_.: The graph \(G_{n,n}\) is called the King's Graph [1, 3], because it shows the movement of a King on a chessboard. At any square (equivalent to a node in the graph), the King has 3, 5 or 8 adjacent squares to move to depending on its position on the board. The adjacency matrix of this graph is \(A_{n}=B_{n}\otimes B_{n}-\mathbb{I}_{n^{2}}\). This implicit relation is the solitary reason for the \(\mathfrak{G}\) notation.
\[B_{5}=\begin{bmatrix}1&1&0&0&0\\ 1&1&1&0&0\\ 0&1&1&1&0\\ 0&0&1&1&1\\ 0&0&0&1&1\end{bmatrix},\qquad B_{5}\otimes B_{5}=\begin{bmatrix}B_{5}&B_{5}&0&0&0\\ B_{5}&B_{5}&B_{5}&0&0\\ 0&B_{5}&B_{5}&B_{5}&0\\ 0&0&B_{5}&B_{5}&B_{5}\\ 0&0&0&B_{5}&B_{5}\end{bmatrix}\]
With this, our search for non-trivial chessboards with the neighbour-sum property reduces to finding eigenvectors of \(B_{n}\otimes B_{n}\) corresponding to the eigenvalue 2.
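This reduction is easy to check by machine. Below is a minimal numerical sketch (our own; it assumes only numpy, and the function names are illustrative) that builds \(B_{n}\) from its definition and computes the nullity of \(T_{n}\) via singular values.

```python
import numpy as np

def B(n):
    """Tridiagonal matrix with entry 1 exactly when |i - j| <= 1."""
    i, j = np.indices((n, n))
    return (np.abs(i - j) <= 1).astype(float)

def kernel_dim(n, tol=1e-8):
    """Nullity of T_n = B_n (x) B_n - 2I, i.e. the dimension of the
    space of n x n neighbour-sum solutions."""
    T = np.kron(B(n), B(n)) - 2 * np.eye(n * n)
    return int(np.sum(np.linalg.svd(T, compute_uv=False) < tol))

for n in range(3, 13):
    print(n, kernel_dim(n))   # nonzero (namely 2) exactly at n = 5 and n = 11
```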
**Fact 4**.: _The eigenvalues of \(A\otimes B\) are \(\{\lambda_{i}\mu_{j}\}\), where \(\{\lambda_{i}\}\) are the eigenvalues of \(A\), and \(\{\mu_{j}\}\) are the eigenvalues of \(B\)[11]._
**Fact 5**.: _The eigenvalues of \(B_{n}\) are \(\lambda_{k}=1+2\cos(k\pi/(n+1))\) for \(k=1,\ldots,n\). This is due to the tridiagonal Toeplitz form of \(B_{n}\) with all non-zero elements being unity [13]._
Using these two facts, we formulate the following proposition.
**Proposition 6**.: _The space \(\ker(T_{n})\) is non-trivial if and only if there exist \(p,q\in\mathbb{N}\) such that \(1\leq p,q\leq n\) and_
\[\left(1+2\cos\left(\frac{p\pi}{n+1}\right)\right)\left(1+2\cos\left(\frac{q\pi }{n+1}\right)\right)=2.\]
In order to deal with the equation in Proposition (6) and others similar to it, we require the following result.
**Theorem 7**.: _The only solutions of_
\[\left(1+2\cos(u\pi)\right)\left(1+2\cos(v\pi)\right)=2.\]
_where \(u,v\in\mathbb{Q}\cap(0,1)\) are \(u=1/3,v=1/2\) and \(u=1/2,v=1/3\)._
Proof of Theorem 7.: The given equation can be rewritten as
\[(\alpha+1+\alpha^{-1})(\beta+1+\beta^{-1})=2, \tag{1}\]
where \(\alpha=e^{iu\pi},\beta=e^{iv\pi}\) are roots of unity with positive imaginary parts.
Let \(u=p/N\), \(v=q/N\) be a solution of this equation, where \(p,q,N\in\mathbb{N}\) and \(1\leq p,q<N\). Set \(R=\mathbb{Z}[e^{\pi i/N}]\). Let \(\mathfrak{p}\) be a prime of \(R\) lying over the prime \(2\) in \(\mathbb{Z}\), and let \(v:R\to\mathbb{Q}\) be the \(\mathfrak{p}\)-adic valuation ([7] pg. 755), normalized so that \(v(2)=1\). So, Equation 1 gives
\[v(\alpha+1+\alpha^{-1})+v(\beta+1+\beta^{-1})=1. \tag{2}\]
To proceed any further, we state the following lemma.
**Lemma 8**.: _Let \(\eta\) be a primitive \(m\)-th root of unity. Then_
\[v(\eta+1+\eta^{-1})=\begin{cases}\infty&m=3\\ 1/2^{k}&m=3\cdot 2^{k+1}\text{ for }k\geq 0\\ 0&\text{otherwise}\end{cases}.\]
Now, it is clear that the only ways to decompose \(1\) as a sum of two numbers in \(\{\infty,1,1/2,1/4,\cdots,0\}\) are \(1+0\), \(0+1\) and \(1/2+1/2\). We use this in Equation 2.
**Case I**: If \(v(\alpha+1+\alpha^{-1})=1\), \(v(\beta+1+\beta^{-1})=0\), then \(\alpha\) must be a primitive \(6\)-th root of unity, forcing \(\alpha=e^{i\pi/3}\), \(u=1/3\). This in turn forces \(v=1/2\). Interchanging the roles of \(\alpha,\beta\) yields the solution \(u=1/2\), \(v=1/3\).
**Case II**: If \(v(\alpha+1+\alpha^{-1})=v(\beta+1+\beta^{-1})=1/2\), then \(\alpha,\beta\) must be primitive \(12\)-th roots of unity, forcing \(\alpha,\beta\in\{e^{i\pi/6},e^{5i\pi/6}\}\). This gives, \(\alpha+1+\alpha^{-1},\beta+1+\beta^{-1}\in\{1+\sqrt{3},1-\sqrt{3}\}\); but these do not satisfy Equation 1.
Now, we return to the proof of the Lemma we just used2.
Proof of Lemma 8.: The case \(\eta=1\) (so \(m=1\)) is easy to check by hand, so we assume that \(\eta\neq 1\) from now on.
We have
\[\eta+1+\eta^{-1}=\eta^{-1}\cdot\frac{\eta^{3}-1}{\eta-1}\]
so
\[v(\eta+1+\eta^{-1})=v(\eta^{3}-1)-v(\eta-1).\]
Now, if \(\omega\) is a primitive \(\ell\)-th root of unity, then
\[v(\omega-1)=\begin{cases}\infty&\ell=1\\ 1/2^{k}&\ell=2^{k+1}\\ 0&\text{otherwise}\end{cases}\]
as, if \(\ell=1\), then \(\zeta_{\ell}-1=0\); if two distinct primes divide \(\ell\), then \(1-\zeta_{\ell}\) is a unit; and if \(\ell=p^{k}\), then we have \((\zeta_{\ell}-1)^{\phi(\ell)}=(p)\) as ideal and hence, \(v(\zeta_{\ell}-1)\) is nonzero only when \(p=2\) and the valuation is \(\frac{1}{\phi(\ell)}\).
Since \(\eta\) is a primitive \(m\)-th root of unity, \(\eta^{3}\) is a primitive \(m/\text{GCD}(m,3)\)-th root of unity, so combining the above two equations gives the claim.
Finally, we are equipped with enough tools to provide a proof for Theorem 1.
Proof of Theorem 1.: Propositions (2) and (6) give a criterion for the existence of non-trivial \(n\times n\) chessboards with the neighbour-sum property. Theorem (7) forces both \(2\mid(n+1)\) and \(3\mid(n+1)\), from which the claim follows.
### Looking at the solutions
If the dimension \(n\) of a square board is one less than a multiple of \(6\), the transformation \(B_{n}\otimes B_{n}\) has eigenvalue \(2\) with multiplicity \(2\), which implies that \(\ker(T_{n})\) is two-dimensional. It is a matter of computation to yield the exact solutions, which correspond to the associated eigenvectors. For the smallest board satisfying the neighbour-sum property (\(n=5\)),
\[\mathfrak{G}_{5\times 5}^{(1)}=\begin{array}{|c|c|c|c|c|}\hline 1&0&-1&0&1\\ \hline 1&0&-1&0&1\\ \hline 0&0&0&0&0\\ \hline-1&0&1&0&-1\\ \hline-1&0&1&0&-1\\ \hline\end{array}\quad\text{and}\quad\quad\mathfrak{G}_{5\times 5}^{(2)}= \begin{array}{|c|c|c|c|c|}\hline 1&1&0&-1&-1\\ \hline 0&0&0&0&0\\ \hline-1&-1&0&1&1\\ \hline 0&0&0&0&0\\ \hline 1&1&0&-1&-1\\ \hline\end{array}\]
are the only solutions.
Clearly, these solutions are the transposes of each other, considered distinct due to their vectorisations being so. These are surprisingly simple solutions, with every element being in the set \(\{-1,0,1\}\). Custom solutions can be produced with linear combinations \(\lambda\mathfrak{G}_{5\times 5}^{(1)}+\mu\mathfrak{G}_{5\times 5}^{(2)}\) for \(\lambda,\mu\in\mathbb{Z}\), both not zero.
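These solutions can also be verified mechanically. The convolution below is our own restatement of the Moore neighbour-sum condition (the zero padding plays the role of a phantom boundary, introduced next):

```python
import numpy as np
from scipy.signal import convolve2d

X1 = np.array([[ 1, 0, -1, 0,  1],
               [ 1, 0, -1, 0,  1],
               [ 0, 0,  0, 0,  0],
               [-1, 0,  1, 0, -1],
               [-1, 0,  1, 0, -1]])

# Moore-neighbourhood sum of each cell: convolve with the 3x3 all-ones
# kernel and subtract the cell itself; zero padding handles the boundary.
K = np.ones((3, 3), dtype=int)
assert np.array_equal(convolve2d(X1, K, mode="same") - X1, X1)
```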
One observation is critical in understanding the kind of solutions one should expect to see for larger square boards admitting solutions. Define a _phantom boundary_ to be a boundary of cells (all containing zero) of a board such that the elements of the boundary contribute only to the neighbour-sum property of the board and not themselves3.
Footnote 3: For a \(p\times p\) board, the boundary is the collection of rows \(1,p\) and columns \(1,p\).
Consider the following \(2\times 2\) board with a phantom boundary (the outer ring of zeroes), yielding a \(4\times 4\) board.
\[\begin{array}{|c|c|c|c|}\hline 0&0&0&0\\ \hline 0&x_{1}&x_{3}&0\\ \hline 0&x_{2}&x_{4}&0\\ \hline 0&0&0&0\\ \hline\end{array}\]
The presence of the phantom boundary does not alter the conditions necessary for the \(2\times 2\) board to satisfy the neighbour-sum property, as the zeroes don't contribute to the sum. This idea is key to identifying disjoint solutions in large boards.
We can already see that the \(5\times 5\) solutions can be formed of small \(2\times 1\) units, emphasized in the following figure.
\[\mathfrak{G}^{(1)}_{5\times 5}=\begin{array}{|c|c|c|c|c|}\hline 1&0&-1&0&1\\ \hline 1&0&-1&0&1\\ \hline 0&0&0&0&0\\ \hline -1&0&1&0&-1\\ \hline -1&0&1&0&-1\\ \hline\end{array}\]
Adding a phantom boundary to this solution clearly shows that the solution can be partitioned into 6 disjoint regions (three \((1,1)\) and three \((-1,-1)\), alternating) separated by zeroes. This pattern can easily be repeated to get the two solutions for \(n=11,17,\ldots\). Since the kernel is always 2-dimensional, the solutions on \(n\times n\) formed from extensions of those on \(5\times 5\) form a basis for the eigenspace. This gives us a complete characterization of solutions of the neighbour-sum property on square boards.
It is easy to see and prove that in the standard solutions, every second column and third row in \(\mathfrak{G}^{(1)}_{n\times n}\) (resp. second row and third column in \(\mathfrak{G}^{(2)}_{n\times n}\)) contains only zero elements. The only zeroes that are common are at positions \((i,j)\) where either both \(i\) and \(j\) are even or they are both multiples of 3. So, any non-trivial linear combination \(\lambda\mathfrak{G}^{(1)}_{5\times 5}+\mu\mathfrak{G}^{(2)}_{5\times 5}\) for \(\lambda,\mu\in\mathbb{Z}\) would preserve the zeroes in those positions.
_Remark._ As the standard solutions form a basis for the kernel, there cannot be any square board with a non-trivial solution without any zero elements.
## 3 Lessening symmetry - from squares to rectangles
A simple generalization of the neighbour-sum property can be made to rectangular boards of size \(m\times n\) with \(m,n\geq 2\).4 Theorem 7 will still be key to finding solutions here.
Footnote 4: For \(m\geq 2\), \(n=1\), we get a one-dimensional strip, which has solutions when \(m\equiv 2\;(\mathrm{mod}\,3)\), with solutions easily constructible from the \(2\times 1\) units on the square board.
Following similar arguments as in the case of \(n\times n\) chessboards, it is not difficult to arrive at analogues of Propositions 3 and 6 for \(m\times n\) chessboards.
**Proposition 9**.: _The set of all \(m\times n\) solutions i.e., \(m\times n\) chessboards with the neighbour-sum property, is in \(\ker(T_{m,n})\), where_
\[T_{m,n}=B_{m}\otimes B_{n}-2\mathbb{I}_{mn}.\]
**Proposition 10**.: _The space \(\ker(T_{m,n})\) is non-trivial if and only if there exist \(p,q\in\mathbb{N}\) such that \(1\leq p\leq m\), \(1\leq q\leq n\) and_
\[\left(1+2\cos\left(\frac{p\pi}{m+1}\right)\right)\left(1+2\cos\left(\frac{q\pi }{n+1}\right)\right)=2.\]
This yields the following characterisation.
**Theorem 11**.: _Non-trivial \(m\times n\) chessboards satisfying the neighbour-sum property exist if and only if \(2\mid(m+1)\) and \(3\mid(n+1)\), or vice versa._
Proof.: Follows immediately from Proposition 10 and Theorem 7.
_Remark_.: _The dimension of \(\ker(T_{m,n})\) is at most \(2\). It is equal to \(1\) only if \(m\neq n\)._
This remark follows directly from the solution space explored before. There are two fundamental solutions for a square board, and they can only _fit_ in a rectangular board if both dimensions are large enough. In all other cases, the nullity is at most \(1\).
Some simple consequences of these are:
1. If \(T_{m,n}\) has a non-trivial kernel of dimension \(d\leq 2\), then \(T_{n,m}\) also has a non-trivial kernel of dimension \(d\). Further, the solutions are transposes of one another.
2. The standard solutions of a square board can be partitioned into disjoint non-trivial rectangular solutions.
3. A chessboard of dimensions \(m\times n\), where \(m+1\) is even and \(n+1\) is an odd multiple of \(3\) (or vice versa), has solution(s) by Theorem 11. Then, a board of dimensions \((m+1)\times(n+1)\) also has solution(s). Furthermore, if a standard solution of the \(m\times n\) board is made of \(2\times 1\) units, then a _corresponding_ standard solution on the \((m+1)\times(n+1)\) board is made of similar \(1\times 2\) units. This correspondence is clear when \(\ker(T_{m,n})\) has dimension \(1\).
## 4 Coffee Mugs
For any finite board, a plethora of boundary effects come into play while deducing solutions for the neighbour-sum property. This already prompts a natural curiosity - what if there were no boundaries? And where else to look but a good old coffee mug, _a.k.a._ a torus!
Simply speaking, a Torus in \(\mathbb{R}^{3}\) is just the Cartesian product of two circles, given by \(\mathbb{T}^{2}=\mathbb{S}^{1}\times\mathbb{S}^{1}\) ([9], pg. 5). In our case, we can form one from a rectangular \(m\times n\) board \(X\) by _wrapping around_ the board along the two dimensions5. We must first define an appropriate adjacency matrix \(A^{\circ}_{m,n}\) which endows \(X\) with the correct neighbourhood structure. Following that, we set \(T^{\circ}_{m,n}=A^{\circ}_{m,n}-\mathbb{I}_{mn}\) and examine the solution space \(\ker(T^{\circ}_{m,n})\).
### Solutions on Coffee Mugs
**Proposition 12**.: _Define \(B_{n}^{\circ}\in M_{n}(\mathbb{Z})\) with \(B_{ij}^{\circ}=1\) when \(i-j\in\{-1,0,1\}\pmod{n}\) and \(B_{ij}^{\circ}=0\) otherwise. Then,_
\[A_{m,n}^{\circ}=B_{m}^{\circ}\otimes B_{n}^{\circ}-\mathbb{I}_{mn},\qquad T_{m,n}^{\circ}=B_{m}^{\circ}\otimes B_{n}^{\circ}-2\mathbb{I}_{mn}.\]
Proof.: Note that \(B_{n}^{\circ}\) is the adjacency matrix of the graph \(G_{n}\) from Proposition 3 with the extra edge \((n,1)\). Proceeding in the same manner, we obtain the required adjacency matrix of the toroidal King's Graph \(G_{m,n}^{\circ}\).
**Fact 13**.: _The eigenvalues of \(B_{n}^{\circ}\) are \(\lambda_{k}^{\circ}=1+2\cos(2k\pi/n)\) for \(k=1,\ldots,n\). [13]_
Note that \(B_{n}^{\circ}\) is a circulant matrix [5] with all non-zero entries being unity, so its eigenvalues are the values \(f(\omega^{k})=1+\omega^{k}+\omega^{-k}\) of its associated polynomial \(f(x)=1+x+x^{n-1}\) at the \(n\)-th roots of unity \(\omega^{k}=e^{2\pi ik/n}\), which gives the above result.
**Proposition 14**.: _The space \(\ker(T_{m,n}^{\circ})\) is non-trivial if and only if there exist \(p,q\in\mathbb{N}\) such that \(1\leq p\leq m\), \(1\leq q\leq n\) and_
\[\left(1+2\cos\left(\frac{2p\pi}{m}\right)\right)\left(1+2\cos\left(\frac{2q \pi}{n}\right)\right)=2.\]
**Theorem 15**.: _Non-trivial \(m\times n\) toroidal chessboards satisfying the neighbour-sum property exist if and only if \(4\mid m\) and \(6\mid n\), or vice versa._
Proof.: Follows from Proposition 14 and Theorem 7. Note that the edge cases where \(p=m/2\) or \(p=m\) can be eliminated by hand.
Figure 2: A torus is formed from a square/rectangle by gluing opposite sides, which makes the rectangle a fundamental polygon of the torus. Image Courtesy: Ilmari Karonen, Wikimedia Commons
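The toroidal criterion can be confirmed numerically in the same spirit as the square case; a short sketch (function names are ours):

```python
import numpy as np

def B_circ(n):
    """Circulant analogue of B_n: entry 1 exactly when i - j is in {-1, 0, 1} mod n."""
    i, j = np.indices((n, n))
    d = (i - j) % n
    return ((d == 0) | (d == 1) | (d == n - 1)).astype(float)

def toroidal_kernel_dim(m, n, tol=1e-8):
    T = np.kron(B_circ(m), B_circ(n)) - 2 * np.eye(m * n)
    return int(np.sum(np.linalg.svd(T, compute_uv=False) < tol))

print(toroidal_kernel_dim(4, 6))   # 4: a 4 x 6 torus admits solutions
print(toroidal_kernel_dim(5, 6))   # 0: a 5 x 6 torus does not
```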
## 5 The _Neumann Neighbourhood_
Working with a spatial lattice yields some nice analogues, such as tori. Another way to generate analogues is by redefining the idea of neighbours. We take a look at the Neumann neighbourhood [16] in this section. An \(n\times n\) chessboard for \(n\geq 3\), with each square bearing an integer, is said to have the _Neumann-neighbour-sum (Nns)_ property if each number is the sum of the numbers on the neighbouring squares - where two squares are neighbours if and only if they share a common edge. _Solutions_ and _trivial solutions_ are defined as before.
Are there non-trivial boards with _Nns_ property?
### On Existence
**Proposition 16**.: _The set of all \(m\times n\) chessboards with the Neumann-neighbour-sum property is in \(\ker(T_{m,n}^{+})\), where_
\[T_{m,n}^{+}=B_{m}\oplus B_{n}-3\mathbb{I}_{mn}=B_{m}\otimes\mathbb{I}_{n}+ \mathbb{I}_{m}\otimes B_{n}-3\mathbb{I}_{mn}.\]
Proof.: Let \(G_{m,n}^{+}\) be the graph corresponding to an \(m\times n\) chessboard with the Neumann neighbourhood structure. We claim that its adjacency matrix is
\[A_{m,n}^{+}=B_{m}\otimes\mathbb{I}_{n}+\mathbb{I}_{m}\otimes B_{n}-2\mathbb{I }_{mn}.\]
Indeed, in the graph corresponding to the adjacency matrix \(B_{m}\otimes\mathbb{I}_{n}\), each square is connected to itself and the squares above and below it. Similarly, in the graph corresponding to \(\mathbb{I}_{m}\otimes B_{n}\), each square is connected to itself and the squares to its left and right. Adding these two graphs gives \(G_{m,n}^{+}\) except with two self loops for each square, which we remove by subtracting \(2\mathbb{I}_{mn}\) to retrieve the above expression for \(A_{m,n}^{+}\).
**Fact 17**.: _The eigenvalues of \(A\oplus B\) are \(\{\lambda_{i}+\mu_{j}\}\), where \(\{\lambda_{i}\}\) are the eigenvalues of \(A\), and \(\{\mu_{j}\}\) are the eigenvalues of \(B\)[11]._
This gives us an eigenvalue equation unlike those we've seen so far.
**Proposition 18**.: _The space \(\ker(T_{m,n}^{+})\) is non-trivial if and only if there exist \(p,q\in\mathbb{N}\) such that \(1\leq p\leq m\), \(1\leq q\leq n\) and_
\[\cos\left(\frac{p\pi}{m+1}\right)+\cos\left(\frac{q\pi}{n+1}\right)=\frac{1}{2}.\]
Now we discuss a particular case \(m=n\). We want solutions of \(2\cos\left(\frac{a\pi}{n+1}\right)+2\cos\left(\frac{b\pi}{n+1}\right)=1\) with \(a,b\in\{1,2,3,\ldots,n\}\).
**Theorem 19**.: _[_4_]_ _Suppose we have at most four distinct rational multiples of \(\pi\) lying strictly between 0 and \(\pi/2\) for which some rational linear combination of their cosines is rational but no proper subset has this property._
_Then the appropriate linear combination is proportional to one from the following list:_
\[\cos\pi/3=\frac{1}{2},\] \[-\cos\varphi+\cos(\pi/3-\varphi)+\cos(\pi/3+\varphi)=0\qquad(0<\varphi<\pi/6),\] \[\cos\pi/5-\cos 2\pi/5=\frac{1}{2},\] \[\cos\pi/7-\cos 2\pi/7+\cos 3\pi/7=\frac{1}{2},\] \[\cos\pi/5-\cos\pi/15+\cos 4\pi/15=\frac{1}{2},\] \[-\cos 2\pi/5+\cos 2\pi/15-\cos 7\pi/15=\frac{1}{2},\] \[\cos\pi/7+\cos 3\pi/7-\cos\pi/21+\cos 8\pi/21=\frac{1}{2},\] \[\cos\pi/7-\cos 2\pi/7+\cos 2\pi/21-\cos 5\pi/21=\frac{1}{2},\] \[-\cos 2\pi/7+\cos 3\pi/7+\cos 4\pi/21+\cos 10\pi/21=\frac{1}{2},\] \[-\cos\pi/15+\cos 2\pi/15+\cos 4\pi/15-\cos 7\pi/15=\frac{1}{2}.\]
**Note:** If \(a\geq\frac{n+1}{2}\), setting \(a^{\prime}:=n+1-a\leq\frac{n+1}{2}\) gives \(\cos\left(\pi\frac{a}{n+1}\right)=-\cos\left(\pi\frac{a^{\prime}}{n+1}\right)\)
**Theorem 20**.: _The equation has a solution iff \(5\mid n+1\) or \(6\mid n+1\)._
Proof.: By the Note, the problem reduces to finding solutions of \(\pm 2\cos\left(\frac{a^{\prime}\pi}{n+1}\right)\pm 2\cos\left(\frac{b^{\prime} \pi}{n+1}\right)=1\) with \(0<a^{\prime},b^{\prime}\leq\frac{n+1}{2}\).
Also, if \(a^{\prime}=\frac{n+1}{2}\), then \(2\mid n+1\) and \(\cos\left(\pi\frac{b^{\prime}}{n+1}\right)=\frac{1}{2}\implies\frac{b^{\prime}}{n+1}=\frac{1}{3}\implies 2,3\mid n+1\implies 6\mid n+1\). The same analysis holds if one of the terms is zero, i.e., \(2\cos\left(\frac{a^{\prime}\pi}{n+1}\right)=0\implies a^{\prime}=\frac{n+1}{2}\implies 6\mid n+1\).
So, we can assume that none of the terms is zero and \(0<\frac{a^{\prime}}{n+1},\frac{b^{\prime}}{n+1}<\frac{1}{2}\). By applying Theorem 19 the only \(2\) term relation is
\[\cos\left(\frac{\pi}{5}\right)-\cos\left(\frac{2\pi}{5}\right)=\frac{1}{2} \implies 2\cos\left(\frac{\pi}{5}\right)+2\cos\left(\frac{3\pi}{5}\right)=1\]
This implies \(5\mid n+1\). So, if the solution of the equation exists, then either \(6\mid n+1\) or \(5\mid n+1\).
For the converse, if \(n+1=6k\), we have \(2\cos\left(\frac{3k\pi}{n+1}\right)+2\cos\left(\frac{2k\pi}{n+1}\right)=1\).
If \(n+1=5k\), we have \(2\cos\left(\frac{k\pi}{n+1}\right)+2\cos\left(\frac{3k\pi}{n+1}\right)=1\).
Further, due to the transformation \(T_{n,n}^{+}\) being symmetric and having zero eigenvalue with multiplicity \(2\) (whenever solutions exist), the kernel is two dimensional whenever it is non-trivial.
### Solutions of Nns
We will take a look at solutions for \(n=4\), which is the smallest value of \(n\geq 3\) such that \(5\) divides \(n+1\).
\[\mathfrak{G}_{4\times 4}^{(1)+}=\begin{array}{|c|c|c|c|}\hline 0&1&1&0\\ \hline -1&0&0&-1\\ \hline -1&0&0&-1\\ \hline 0&1&1&0\\ \hline\end{array}\qquad\text{and}\qquad\mathfrak{G}_{4\times 4}^{(2)+}=\begin{array}{|c|c|c|c|}\hline 1&0&0&1\\ \hline 1&-1&-1&1\\ \hline 1&-1&-1&1\\ \hline 1&0&0&1\\ \hline\end{array}\]
Note that unlike the original case (we would from now on refer to the original neighbourhood i.e., the one where squares sharing a common edge or vertex are neighbours, as the Moore neighbourhood), here the elements of the basis are not transposes of each other. The \(n=4\) solutions can be mirrored to get extended solutions for larger boards.
When \(6\mid(n+1)\), one can check that the solutions for the Moore neighbourhood also work for the Neumann neighbourhood, and they can be similarly extended from the \(n=5\) case.
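As in the Moore case, the displayed basis can be checked mechanically; a small sketch with the plus-shaped (Neumann) kernel:

```python
import numpy as np
from scipy.signal import convolve2d

X = np.array([[ 0,  1,  1,  0],
              [-1,  0,  0, -1],
              [-1,  0,  0, -1],
              [ 0,  1,  1,  0]])

# von Neumann neighbourhood: only the four edge-sharing cells count.
K = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
assert np.array_equal(convolve2d(X, K, mode="same"), X)
```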
### An interesting problem on harmonic functions
A _discrete harmonic function_[10] on a graph \(G=(\mathcal{V},\mathcal{E})\) is defined as follows:
**Definition 21**.: _A function \(f:\mathcal{V}\to\mathbb{R}\) is harmonic at a node/vertex \(x\in\mathcal{V}\) if it satisfies the following relation_
\[f(x)=\frac{\sum_{\{x,y\}\in\mathcal{E}}f(y)}{deg(x)}\]
_where \(deg(x)\) is the degree of the vertex \(x\)._
Consider a toroidal chessboard with the Neumann neighbourhood condition i.e., two squares are neighbours iff they share a common edge. To define the associated graph \(G\), we identify the squares with vertices and draw an edge between every pair of neighbours on the torus. This creates a graph where every vertex has degree \(4\).
Since every vertex represents a cell on the toroidal board, we can use a pair of coordinates \((p,q)\) to denote it, where \(1\leq p\leq m,\;1\leq q\leq n\). It is then easy to find an appropriate transformation \(T_{m,n}^{\circ+}\) whose kernel contains the solutions of the modified neighbour-sum equation
\[f(p,q+1)+f(p,q-1)+f(p+1,q)+f(p-1,q)=4f(p,q)\]
where if \((p,q)\) represents vertex \(x\), then the tuples \((p\pm 1,q)\), \((p,q\pm 1)\) represent its Neumann neighbours. Note that this is not a neighbour-sum problem; rather, the value at each cell is one-fourth of its neighbour sum. Call this a _Neumann-neighbour-average_ property of the vertex \(x=(p,q)\).
Figure 3: Solutions for \(n=9\). The colourbar shows that the blue regions are \(-1\), orange ones are \(1\), and the rest zeroes. Notice how the \(n=4\) solutions are neatly extended to form this basis.
**Proposition 22**.: _The set of all \(m\times n\) toroidal chessboards with the Neumann-neighbour-average property is precisely \(\ker(T_{m,n}^{\circ+})\), where_
\[T_{m,n}^{\circ+}=B_{m}^{\circ}\oplus B_{n}^{\circ}-6\mathbb{I}_{mn}\]
Proof.: The proof is along the same lines as that of Proposition 16. Here the matrices \(B_{m}^{\circ},\ B_{n}^{\circ}\) correspond to adjacency matrices of circular graphs with self-loops. Further, in the final transformation \(T_{m,n}^{\circ+}\), we require the diagonal elements to be \(-4\) while keeping all off-diagonal elements unchanged, so we subtract \(6\mathbb{I}_{mn}\).
**Proposition 23**.: _The space \(\ker(T_{m,n}^{\circ+})\) is non-trivial if and only if there exist \(p,q\in\mathbb{N}\) such that \(1\leq p\leq m\), \(1\leq q\leq n\) and_
\[\cos\left(\frac{2p\pi}{m}\right)+\cos\left(\frac{2q\pi}{n}\right)=2\]
This eigenvalue equation has only the trivial solution \(\cos\left(\frac{2p\pi}{m}\right)=\cos\left(\frac{2q\pi}{n}\right)=1\), which gives \(p=m,\ q=n\). So the kernel is one-dimensional, and the corresponding eigenvalues of \(B_{m}^{\circ}\) and \(B_{n}^{\circ}\) are both \(3\).
**Fact 24**.: _The eigenvectors of \(A\oplus B\) are \(\{x_{i}\otimes y_{j}\}\) where \(\{x_{i}\}\) are the eigenvectors of \(A\) and \(\{y_{j}\}\) are the eigenvectors of \(B\)._
Note that eigenvectors of \(B_{m}^{\circ}\) over \(\mathbb{Z}^{m}\) for eigenvalue \(=3\) are of the form \(c\mathbb{1}_{m}\), where \(c\in\mathbb{Z}\backslash\{0\}\) is a constant. Then the corresponding unique solution (in vectorised format) is \(c^{\prime}\mathbb{1}_{m}\otimes\mathbb{1}_{n}=c^{\prime}\mathbb{1}_{mn},\ \ c^{\prime}\in\mathbb{Z}\backslash\{0\}\), which is just a constant.
This gives a neat result -
**Theorem 25**.: _Discrete harmonic functions on a toroidal graph are constant functions._
_Remark_.: If the same problem is specified on a finite square lattice, we could imagine the presence of a phantom boundary such that the mean-value property as defined is consistent. As the harmonic function takes the value zero on the boundary, by the maximum modulus principle for discrete harmonic functions [10], the only value it can take in the interior is identically zero. This can be shown with our linear algebra machinery, and is a nice connection between the two ideas.
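Theorem 25 is also easy to confirm numerically for small tori; a sketch (reusing the circulant construction from the toroidal section; names are ours):

```python
import numpy as np

def B_circ(n):
    i, j = np.indices((n, n))
    d = (i - j) % n
    return ((d == 0) | (d == 1) | (d == n - 1)).astype(float)

m, n = 6, 8
# Kronecker sum B_m (+) B_n minus 6I encodes the Neumann-neighbour-average
# condition; its kernel is the space of discrete harmonic functions.
T = (np.kron(B_circ(m), np.eye(n)) + np.kron(np.eye(m), B_circ(n))
     - 6 * np.eye(m * n))
print(int(np.sum(np.linalg.svd(T, compute_uv=False) < 1e-8)))   # 1: constants only
```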
## 6 Board extends to Infinity
At this juncture, it seems very natural to ask the same question for infinite chessboards. However, since an infinite board has fewer restrictions, it is much easier to answer this question.
A semi-infinite chessboard is one that is infinite in only one direction along each of the x and y axes. In that case, every square on the board can be expressed as a tuple \((i,j),\ i,j\in\mathbb{N}\). We number the squares akin to rows and columns, starting from the cell \((1,1)\), which has only three neighbours sharing a common vertex or edge.
An infinite chessboard, which is infinite in both directions along the x and y axes is numbered as follows. Take a row and column and call them the \(0\)-th row and column. Columns left of the \(0\)-th column (resp. rows above the \(0\)-th row) will be numbered by the negative integers while those right of (resp. below) would be numbered by the positive integers. The element at the intersection of the \(i\)-th row and \(j\)-th column is represented by the tuple \((i,j)\).
### Semi-infinite chessboard
For a semi-infinite board, denote the value at any cell enumerated by \((i,j)\) as \(x_{ij}\).
We can take any two sequences \(\{a_{i}\}_{i=1}^{\infty}\) and \(\{b_{i}\}_{i=1}^{\infty}\) (with \(a_{1}=b_{1}\)) to fill up the positions \(\{(1,i)\}_{i=1}^{\infty}\) and \(\{(i,1)\}_{i=1}^{\infty}\) respectively (enumeration begins from top left). Notice that the values at \((1,1),(1,2)\) and \((2,1)\) fixes \((2,2)\), call this value \(x_{22}\).
\(x_{22}\) along with the other given values fixes \(x_{23}\) and \(x_{32}\), and it is easy to check that done recursively, this fixes row \(2\) and column \(2\).
Note that in this semi-infinite scenario, we could have used a phantom boundary beyond the first row and column to show that the existence of two adjacent filled rows and columns fixes the third row and column along the semi-infinite axes. Recursively fixing elements, we can generate the solution.
### Infinite chessboard
For an infinite board, it is again a matter of choosing the sequences \(\{a_{i}\}_{i\in\mathbb{Z}\setminus\{0\}}\), \(\{b_{i}\}_{i\in\mathbb{Z}\setminus\{0\}}\), \(\{c_{i}\}_{i\in\mathbb{Z}\setminus\{0\}}\), and \(\{d_{i}\}_{i\in\mathbb{Z}\setminus\{0\}}\) satisfying
\[a_{1}=b_{1},\qquad a_{-1}=d_{1},\qquad c_{1}=b_{-1},\qquad c_{-1}=d_{-1}\]
and putting them along any two rows and columns; recursively fixing elements, as in the semi-infinite case, then generates the full solution.
## 7 Wrapping up
So, a simple problem inspired by a Math Olympiad paper has brought us a long way.
We first formulated a linear algebraic problem to tackle the problem, which led us to an eigenvalue equation. The key to solving that was rooted in certain tools of algebraic number theory and cyclotomic fields. This gave us a very powerful result to not only find explicit solutions for square boards, but also on rectangular and toroidal boards.
Changing the neighbourhood structure from Moore to von Neumann also introduced some new interesting conditions for existence of solutions, and helped us explore a very interesting connection with discrete harmonic functions on graphs. Finally, we discussed in brief the simpler cases of semi-infinite and infinite boards (under the Moore neighbourhood) - which makes for a very comprehensive take on the problem in two dimensions.
The next two sections contain some rigorous mathematical insight into solving the relevant eigenvalue equations on higher dimensional grids (or lattice graphs), such as hypercubes. While there are a lot of different non-intuitive solutions that pop up in higher dimensions, we have tried to look at some easy-to-prove conditions that are generally true. The prospect of future work on those analogues is immense.
## 8 A brief Digression - some more Algebra
We wanted to get the integer \(2\) as products of algebraic integers of the form
\[1+2\cos\left(\frac{2\pi p}{2(n+1)}\right)=1+\zeta_{m}^{a}+\zeta_{m}^{-a}=: \lambda_{a,m}\]
where \(\frac{a}{m}\) is the reduced form of \(\frac{p}{2(n+1)}\). Naturally, we hoped that if one of these algebraic integers involving primitive \(m\)-th root of unity appeared in the product, all of its conjugates would too. This is true in two-dimensions, but a counter-example in three dimensions is as follows:
\[\left(1+2\cos\left(\frac{2\pi}{24}\right)\right)\left(1+2\cos\left(\frac{22 \pi}{24}\right)\right)\left(1+2\cos\left(\frac{20\pi}{24}\right)\right)=2.\]
So, to get a sufficient condition for the existence of a solution, we calculated the product of the conjugates, i.e., the usual field norm [15] of \(\lambda_{a,m}\), say \(g(m)\), which would be an integer. If the product of some of these norms equals \(2\) and the total number of terms counting conjugates equals \(d\), we get a solution.
Motivated by this, let us formally define
\[g(m):=\prod_{\begin{array}{c}1\leq a<\frac{m}{2}\\ \gcd(a,m)=1\end{array}}\left(1+2\cos\left(\frac{2\pi a}{m}\right)\right).\]
Observe that if we can write \(2\) as the product of \(g(m_{i})\)'s where each \(m_{i}\mid 2(n+1)\) and the "total length" of the product is \(d\), we will have a solution in \(d\) dimensions,
\[\prod_{i=1}^{d}\left(1+2\cos\left(\frac{p_{i}}{n+1}\right)\right)=2.\]
Here, by the length of \(g(m_{i})\) we mean the number of terms appearing in the product, i.e. \(\frac{\phi(m_{i})}{2}\), and by total length we mean \(\sum_{m_{i}}\frac{\phi(m_{i})}{2}\). As we'll see in Theorem 28, \(3\mid(n+1)\) is a necessary condition. If additionally \(2\mid(n+1)\), so that \(4\mid 2(n+1)\), we can choose \(m_{1}=6\) and \(m_{2}=\cdots=m_{d}=4\). This gives us a solution. To generalize the idea and get a better sufficient condition, we calculate \(g(m)\) for different primes.
**Theorem 26**.: \[g(m)=\Psi_{m}(-1)(-1)^{\frac{\phi(m)}{2}}=\Phi_{m}(\zeta_{3})(\zeta_{3})^{\frac{-\phi(m)}{2}}(-1)^{\frac{\phi(m)}{2}}\]
_where \(\Psi_{m}\) is the minimal polynomial of \(2\cos\left(\frac{2\pi}{m}\right)\)[14, 17] and \(\Phi_{m}\) is the \(m\)-th cyclotomic polynomial ([12] pg. 308), and \(\zeta_{3}\) is a primitive 3rd root of unity._
Proof.: \[\Psi_{m}(x)=\prod_{\begin{array}{c}1\leq a<\frac{m}{2}\\ \gcd(a,m)=1\end{array}}\left(x-2\cos\left(\frac{2\pi a}{m}\right)\right)\]
Putting \(x=-1\), we get the first equality. One can easily check that \(\Psi_{n}\left(z+z^{-1}\right)=z^{-\frac{\phi(n)}{2}}\Phi_{n}(z)\). Putting \(z=\zeta_{3}\), we get the second equality.
Now using this we can explicitly calculate \(g(m)\) for any \(m\in\mathbb{N},m>3\). We'll show some of the relevant calculations here.
**Theorem 27**.: _Let \(p>3\) be a prime. Then,_
\[g(p)=\left(\frac{3}{p}\right)=\begin{cases}1&\text{if }p\equiv\pm 1\pmod{12} \\ -1&\text{if }p\equiv\pm 5\pmod{12}\end{cases}\]
_where \((\cdot)\) is the Legendre symbol. Moreover, \(g(2p)=1\) if \(p\equiv 5\pmod{12}\)._
Proof.: \(\Phi_{p}(x)=1+x+\cdots+x^{p-1}\) and \(\Phi_{2p}(x)=1-x+\cdots+x^{p-1}\).
\[(-1)^{\frac{p-1}{2}}=\begin{cases}1&\text{if }p\equiv 1\pmod{4}\\ -1&\text{if }p\equiv-1\pmod{4}\end{cases},\qquad\zeta_{3}^{-\frac{p-1}{2}}\Phi_{p}(\zeta_{3})=\begin{cases}1&\text{if }p\equiv 1\pmod{3}\\ -1&\text{if }p\equiv-1\pmod{3}\end{cases}.\]
Also, \(\phi(p)=\phi(2p)\) for any odd prime and \(\zeta_{3}^{-(\frac{p-1}{2})}\Phi_{2p}(\zeta_{3})=1\) if \(p\equiv-1\pmod{3}\). This along with Theorem 26 gives us the proof.
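Both the definition of \(g(m)\) and Theorem 27 are straightforward to verify numerically; the sketch below evaluates the defining product directly (the final rounding assumes, as shown above, that \(g(m)\) is an integer):

```python
from math import cos, gcd, pi

def g(m):
    """Product of 1 + 2cos(2*pi*a/m) over 1 <= a < m/2 with gcd(a, m) = 1."""
    prod = 1.0
    for a in range(1, (m + 1) // 2):
        if gcd(a, m) == 1:
            prod *= 1 + 2 * cos(2 * pi * a / m)
    return round(prod)

print([(m, g(m)) for m in (4, 5, 6, 7, 12, 13)])
# [(4, 1), (5, -1), (6, 2), (7, -1), (12, -2), (13, 1)]
```

In particular \(g(6)=2\) and \(g(4)=1\), which is what drives solutions of the form \(g(6)g(4)\cdots g(4)=2\) used below.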
## 9 Higher Dimensions and other generalisations
Using the tools developed in the previous sections, we will look at some nice results for higher dimensional analogues of the neighbour sum problem, such as on hypercubes. This is a natural extension of all our work till now, yet the conditions for existence or explicit solutions might be completely different from what we have observed. The problem can also be extended to arbitrary graphs or lattice-subsets in \(\mathbb{Z}^{d}\), but the problem becomes too complicated to even get a flavour of the kinds of solutions expected. Different tools might be needed to tackle them.
For example, start by considering the same problem on a \(d\)-dimensional hypercube. The analogue of the equation in Proposition 6 for this case is
\[\prod_{i=1}^{d}\left(1+2\cos\left(\frac{p_{i}\pi}{n+1}\right)\right)=2\]
which is equivalent to trying to find the number of solutions of the equation
\[\sum_{i=1}^{d}v(\alpha_{i}+1+\alpha_{i}^{-1})=1\]
which now has a lot more combinations than was possible in the \(d=2\) case. A criterion for existence of solutions is no longer as straightforward as on rectangles or squares. However, a necessary condition for existence is easy to figure out.
**Theorem 28**.: _If an \(n^{d}\) board satisfies the neighbour sum problem, then \(3|(n+1)\)._
Proof.: As already established, if \(n^{d}\) board satisfies the problem, then the equation
\[\sum_{i=1}^{d}v(\alpha_{i}+1+\alpha_{i}^{-1})=1\]
has a solution.
But this means that there is some \(i\) such that \(v(\alpha_{i}+1+\alpha_{i}^{-1})>0\). By Lemma 8, \(\alpha_{i}\) must then be a primitive \(m\)-th root of unity with \(m=3\) or \(m=3\cdot 2^{k+1}\); since \(m\) divides \(2(n+1)\), we get \(3\mid 2(n+1)\), and hence \(3\mid(n+1)\), completing the proof.
**Theorem 29**.: _If \(6\mid(n+1)\) or \(15\mid(n+1)\), there are solutions of_
\[\prod_{i=1}^{3}\left(1+2\cos\left(\frac{p_{i}\pi}{n+1}\right)\right)=2.\]
Proof.: Let \(6\mid(n+1)\), which gives \(n+1=6k\) for some \(k\in\mathbb{N}\). Then \(p_{1}=2k\), \(p_{2}=p_{3}=3k\) solves the eigenvalue equation.
Let \(n+1=15k\). Then \(p_{1}=5k\), \(p_{2}=3k\), \(p_{3}=9k\) is a solution.
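Both branches are easy to check in floating point; a quick sketch (with the integer-\(p_{i}\) convention of Proposition 6):

```python
from math import cos, pi

def lam(p, n):
    return 1 + 2 * cos(p * pi / (n + 1))

n, k = 11, 2   # n + 1 = 6k
print(lam(2 * k, n) * lam(3 * k, n) ** 2)             # 2.0

n, k = 14, 1   # n + 1 = 15k
print(lam(5 * k, n) * lam(3 * k, n) * lam(9 * k, n))  # 2.0 (up to rounding)
```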
Numerical considerations urge us to make the following conjecture which, if true, should not be very difficult to prove for someone with the necessary expertise in Galois theory:
_Conjecture_. The converse of Theorem 29 is true.
It is also not very difficult to arrive at a not-so-interesting sufficient condition for the problem for the \(n^{d}\) board. We have discussed the required language in Section 8.
By using Theorem 27, we give the sufficient condition for the existence of solutions.
**Theorem 30**.: _Let \(n+1=3^{a_{0}}p_{1}^{a_{1}}\cdots p_{r}^{a_{r}}q_{1}^{b_{1}}\cdots q_{s}^{b_{s}}\) where \(q_{i}\equiv 7\pmod{12}\) and \(p_{i}\not\equiv 7\pmod{12}\)._
1. _If any \(p_{i}=2\), a solution of the form_ \(g(6)g(4)\cdots g(4)=2\) _exists._
2. _If all_ \(p_{i}\)_'s are odd primes, then if there are integers_ \(x_{1},\cdots,x_{r},y_{1},\cdots,y_{s}\geq 0\) _such that_ \[x_{1}\left(\frac{p_{1}-1}{2}\right)+\cdots+x_{r}\left(\frac{p_{r}-1}{2}\right) +2y_{1}\left(\frac{q_{1}-1}{2}\right)+\cdots+2y_{s}\left(\frac{q_{s}-1}{2} \right)=d-1\] _a solution exists and is given by_ \[g(6)g(p_{1}^{\prime})^{x_{1}}\cdots g(p_{r}^{\prime})^{x_{r}}g(q_{1})^{2y_{1} }\cdots g(q_{s})^{2y_{s}}=2\] _Here_ \(p_{i}^{\prime}=2p_{i}\) _if_ \(p_{i}\equiv 5\pmod{12}\) _and_ \(p_{i}^{\prime}=p_{i}\) _otherwise._
Proof.: It is enough to explicitly compute the form of the solutions.
We have also previously established that for \(d=2\), whenever we have a solution, the solution space is two dimensional. This motivates us to ask the question for higher dimensional analogues of the problem. For this question, we do not have any useful results to present. We give the sequences \(\{a_{n}^{d}\}_{n\geq 2}\) of the number of solutions for \(n^{d}\) boards, obtained numerically.
d=3 :
0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 15, 0, 0, 6, 0, 0, 3, 0, 0, 0, 0, 0, 15, 0, 0, 0, 9, 0, 0, 0, 0, 15, 0, 0, 0, 0, 3, 0, 0, 6, 0, 0, 15,...
d=4 :
0, 0, 0, 4, 0, 0, 0, 0, 0, 88, 0, 0, 24, 0, 0, 4, 0, 0, 0, 0, 0, 136, 0, 0, 0, 0, 0, 220, 0, 0, 0, 0, 0, 88, 0, 0, 48, 0, 0, 52, 0, 0, 24, 0, 0, 136,...
d=5 :
0, 0, 0, 5, 0, 0, 0, 0, 0, 335, 0, 0, 480, 0, 0, 485, 0, 0, 540, 0, 0, 1295, 0, 0, 0, 0, 1865, 0, 0, 0, 0, 815, 0, 0, 0, 0, 1385, 0, 0, 480, 0, 0, 2255,...
One might be interested to find patterns in these sequences - a task left to the inquisitive reader.
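For readers who want to experiment, the following is a minimal sketch (ours, not from the paper) of how one can search numerically for solutions of the eigenvalue equation; the function name `count_solutions` and the floating-point tolerance are our own choices, whether its counts match the tabulated board counts depends on the indexing convention, and exact arithmetic over cyclotomic fields would be needed to certify borderline cases.

```python
import numpy as np
from itertools import product

def count_solutions(n, d, tol=1e-9):
    # Count ordered tuples (m_1, ..., m_d), 1 <= m_i <= n, such that
    # prod_i (1 + 2*cos(m_i*pi/(n+1))) == 2; here p_i = m_i*pi in the
    # notation of the eigenvalue equation above.
    vals = 1 + 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
    return sum(1 for t in product(vals, repeat=d)
               if abs(np.prod(t) - 2.0) < tol)

# e.g. count_solutions(5, 3) returns 3 (cf. the first nonzero d=3 entry).
```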
We conclude our paper with a small remark on extending the neighbour-sum property to custom graphs. In our paper, we defined a transformation whose kernel contains the relevant solutions. The key step was to decompose the graph (a subset of \(\mathbb{Z}^{2}\)) into Cartesian products of line graphs (or cycle graphs), which translates neatly over to the adjacency matrix representations through Kronecker products and sums. Similar decompositions for custom graphs can help in reducing the difficulty of the problem and introduce several interesting ideas. One might need a thorough understanding of spectral graph theory and linear algebra to tackle such general problems.
## 10 Acknowledgments
We would like to acknowledge Prof. Dr. David E. Speyer for his contributions to formulating the proof of our main existence theorem on square boards, as well as for ideas regarding its generalisation. We would also like to thank our batchmates in CS for helping out with the generation of numerical solutions and the relevant sequences. We are indebted to them for their help in successfully completing this paper early.
|
2310.14612 | Triangular solution to the planar elliptic three-body problem in the
parametrized post-Newtonian formalism | A triangular solution [Phys. Rev. D 107, 044005 (2023)] has recently been
found to the planar circular three-body problem in the parametrized
post-Newtonian (PPN) formalism, for which they focus on a class of fully
conservative theories characterized by the Eddington-Robertson parameters
$\beta$ and $\gamma$. The present paper extends the PPN triangular solution to
quasi-elliptic motion, for which the shape of the triangular configuration
changes with time at the PPN order. The periastron shift due to the PPN effects
is also obtained. | Yuya Nakamura, Hideki Asada | 2023-10-23T06:42:59Z | http://arxiv.org/abs/2310.14612v2 | Triangular solution to the planar elliptic three-body problem in the parametrized post-Newtonian formalism
###### Abstract
A triangular solution [Phys. Rev. D 107, 044005 (2023)] has recently been found to the planar circular three-body problem in the parametrized post-Newtonian (PPN) formalism, for which they focus on a class of fully conservative theories characterized by the Eddington-Robertson parameters \(\beta\) and \(\gamma\). The present paper extends the PPN triangular solution to quasi-elliptic motion, for which the shape of the triangular configuration changes with time at the PPN order. The periastron shift due to the PPN effects is also obtained.
pacs: 04.25.Nx, 45.50.Pk, 95.10.Ce, 95.30.Sf
## I Introduction
The three-body problem is among the classical problems in physics, and it led to the study of chaos [1]. Particular solutions, notably Euler's collinear solution and Lagrange's equilateral one [2; 4], represent regular orbits, which have attracted a lot of interest, e.g. [5; 6; 7; 8; 9].
Nordtvedt [10] pointed out that, in gravitational experimental tests, the position of the triangular points is very sensitive to the ratio between the gravitational mass and the inertial one, though the post-Newtonian (PN) terms were only partly considered.
For the restricted three-body problem in the PN approximation, Krefetz [11] and Maindl [12] found the PN triangular configuration for a general mass ratio between two masses. These studies were extended to the PN three-body problem for general masses [13; 14; 15; 16; 17; 18], where the PN counterparts for Euler's collinear [13; 14] and Lagrange's equilateral solutions [15; 16] were found. It should be noted that the PN triangular solutions are not necessarily equilateral for general mass ratios and they are equilateral only for either the equal mass case or two test masses. The stability of the PN solution and the radiation reaction at 2.5PN order were also studied [17; 18].
In a scalar-tensor theory of gravity, a collinear configuration for three-body problem was discussed [19]. In addition to such fully classical treatments, a possible quantum gravity correction to the Lagrange points was argued [20; 21].
Moreover, the recent discovery of a relativistic hierarchical triple system including a neutron star [22] has sparked renewed interest in the relativistic three-body problem and the related gravitational experiments [23; 24; 25].
In the PPN formalism, collinear and triangular solutions to the planar circular three-body problem have recently been found [33], where they focus on a class of fully conservative theories characterized by the Eddington-Robertson parameters \(\beta\) and \(\gamma\), because the two parameters are the most important ones; \(\beta\) measures how much nonlinearity there is in the superposition law for gravity and \(\gamma\) measures how much space curvature is produced by unit rest mass [26; 27]. See e.g. [28] for the celestial mechanics in this class of PPN theories.
In Newtonian gravity, triangular solutions exist not only for the circular three-body problem but also for the elliptic one [2; 3]. Can a (quasi-)elliptic orbit of triangular solutions be found in the PPN case? The point is that the PPN force seems to be too complicated to admit elliptic orbits for a triple system. The main purpose of the present paper is to find such a solution in the class of fully conservative theories.
This paper is organized as follows. In Section II, basic methods and equations are presented. Section III discusses the PPN triangular solution to the planar elliptic three-body problem. Section IV summarizes this paper. Throughout this paper, \(G=c=1\). \(A,B\) and \(C\in\{1,2,3\}\) label the three masses.
## II Basic methods and equations
### Newtonian planar elliptic triangular solution
Let us begin by briefly summarizing the triangular solution to the Newtonian planar elliptic three-body problem [2; 3]. A homothetic solution is possible and it represents the Lagrange equilateral solution in elliptic motion. See e.g. Section 5 of Reference [2] for more detail.
The equation of motion (EOM) for the three masses (\(M_{A}\) at the position \(\mathbf{R}_{A}\)) reads

\[M_{A}\mathbf{a}_{A}=-\sum_{B\neq A}\frac{M_{A}M_{B}}{(R_{AB})^{2}}\mathbf{n}_{AB}, \tag{1}\]
where \(\mathbf{a}_{A}\) denotes the acceleration of the \(A\)-th mass, \(\mathbf{R}_{AB}\equiv\mathbf{R}_{A}-\mathbf{R}_{B}\), \(R_{AB}\equiv|\mathbf{R}_{AB}|\), and \(\mathbf{n}_{AB}\equiv\mathbf{R}_{AB}/R_{AB}\).
By taking the cross product of \(\mathbf{R}_{1}\) and Eq. (1) for
\(A=1\), we obtain
\[\mathbf{R}_{1}\times\mathbf{R}_{2}\left(\frac{1}{(R_{12})^{3}}-\frac{1}{(R_{31})^{3}} \right)=0, \tag{2}\]
where the coordinate center is chosen as the center of mass (COM), so that \(\sum_{A}M_{A}\mathbf{R}_{A}=0\). For a triangular configuration, \(\mathbf{R}_{1}\nparallel\mathbf{R}_{2}\). From Eq. (2), we thus obtain \(R_{12}=R_{31}\). By cyclic arguments, we obtain an equilateral solution [2; 3].
In elliptic motion, the arm length \(R_{A}\) becomes \(R_{1}=af_{N}\sqrt{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}},\ R_{2}=af_{N}\sqrt{\nu_{3}^{2}+\nu_{3} \nu_{1}+\nu_{1}^{2}},\) and \(R_{3}=af_{N}\sqrt{\nu_{1}^{2}+\nu_{1}\nu_{2}+\nu_{2}^{2}}\), where the total mass is \(M\equiv\sum_{A}M_{A}\), the mass ratio is defined as \(\nu_{A}\equiv M_{A}/M\), \(a\) is some constant, and \(f_{N}\) denotes the dilation factor [2; 3; 15; 16]. In circular motion, \(f_{N}=1\), while \(f_{N}\) is a function of time in elliptic motion [2; 3].
From the total energy and angular momentum, an elliptic orbit is obtained as [2; 3]
\[f_{\rm N}=\frac{\mathcal{A}_{\rm N}(1-e_{\rm N}^{2})}{1+e_{\rm N}\cos\theta}, \tag{3}\]
where \(\theta\) denotes the true anomaly, \(e_{\rm N}\) is the eccentricity of the elliptic orbit, given by \(e_{\rm N}=\sqrt{1+2L_{\rm N}^{2}\mathcal{E}_{\rm N}M^{-2}\mu^{-3}}\) for the total energy \(\mathcal{E}_{\rm N}\), the total angular momentum \(L_{\rm N}\) and \(\mu\equiv M(\nu_{1}\nu_{2}+\nu_{2}\nu_{3}+\nu_{3}\nu_{1})\), and \(\mathcal{A}_{\rm N}\equiv-\mu M/(2a\mathcal{E}_{\rm N})\). Here, \(\theta=0\) is chosen as the periastron.
For simplicity, we refer to \(A\equiv a\mathcal{A}_{\rm N}\) as the semi-major axis and to \(P\equiv a\mathcal{A}_{\rm N}(1-e_{\rm N}^{2})\) as the semi-latus rectum. For instance, the semi-major axis for the elliptic orbit of \(M_{1}\) is \(a\mathcal{A}_{\rm N}\sqrt{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}}\).
From the total angular momentum, the angular velocity \(\omega_{\rm N}\) of the triangular configuration is obtained as
\[\omega_{\rm N}= (1+e_{\rm N}\cos\theta)^{2}\sqrt{\frac{M}{P^{3}}}. \tag{4}\]
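As a quick numerical illustration (our own sketch, not part of the original derivation), the arm lengths and the angular velocity at a given true anomaly follow directly from Eqs. (3) and (4); the function below takes the semi-latus rectum \(P\) and the eccentricity \(e_{\rm N}\) as inputs, works in \(G=c=1\) units, and its name and signature are our own.

```python
import numpy as np

def newtonian_triangle(nu, P, e, theta, M=1.0):
    # Arm lengths R_A of the equilateral elliptic solution at true
    # anomaly theta, together with the angular velocity of Eq. (4).
    # nu = (nu1, nu2, nu3) are the mass ratios, with sum(nu) == 1.
    nu1, nu2, nu3 = nu
    af = P / (1.0 + e * np.cos(theta))          # a*f_N, from Eq. (3)
    R1 = af * np.sqrt(nu2**2 + nu2*nu3 + nu3**2)
    R2 = af * np.sqrt(nu3**2 + nu3*nu1 + nu1**2)
    R3 = af * np.sqrt(nu1**2 + nu1*nu2 + nu2**2)
    omega = (1.0 + e * np.cos(theta))**2 * np.sqrt(M / P**3)
    return (R1, R2, R3), omega
```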
### EOM in the PPN formalism
In a class of fully conservative theories including only the Eddington-Robertson parameters \(\beta\) and \(\gamma\), the PPN EOM becomes [26; 27]
\[\mathbf{a}_{A}= -\sum_{B\neq A}\frac{M_{B}}{R_{AB}^{2}}\mathbf{n}_{AB}\] \[-\sum_{B\neq A}\frac{M_{B}}{R_{AB}^{2}}\bigg{\{}\gamma v_{A}^{2} -2(\gamma+1)(\mathbf{v}_{A}\cdot\mathbf{v}_{B})\] \[\qquad+(\gamma+1)v_{B}^{2}-\frac{3}{2}(\mathbf{n}_{AB}\cdot\mathbf{v}_{B} )^{2}-\bigg{(}2\gamma+2\beta+1\bigg{)}\frac{M_{A}}{R_{AB}}\] \[\qquad-2(\gamma+\beta)\frac{M_{B}}{R_{AB}}\bigg{\}}\mathbf{n}_{AB}\] \[+\sum_{B\neq A}\frac{M_{B}}{R_{AB}^{2}}\bigg{\{}\mathbf{n}_{AB}\cdot[2 (\gamma+1)\mathbf{v}_{A}-(2\gamma+1)\mathbf{v}_{B}]\bigg{\}}(\mathbf{v}_{A}-\mathbf{v}_{B})\] \[+\sum_{B\neq A}\sum_{C\neq A,B}\frac{M_{B}M_{C}}{R_{AB}^{2}}\bigg{[} \frac{2(\gamma+\beta)}{R_{AC}}+\frac{2\beta-1}{R_{BC}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\frac{1}{2}\frac{R_{AB}}{R_ {BC}^{2}}(\mathbf{n}_{AB}\cdot\mathbf{n}_{BC})\bigg{]}\mathbf{n}_{AB}\] \[-\frac{1}{2}(4\gamma+3)\sum_{B\neq A}\sum_{C\neq A,B}\frac{M_{B}M_ {C}}{R_{AB}R_{BC}^{2}}\mathbf{n}_{BC}+O(c^{-4}), \tag{5}\]
where \(\mathbf{v}_{A}\) denotes the velocity of the \(A\)-th mass.
Figure 1: Schematic figure for the PPN triangular configuration of three masses. The inequilateral triangle is characterized by \(\varepsilon_{AB}\). In the Newtonian limit, \(\varepsilon_{AB}\) vanishes and \(R_{AB}\) becomes \(af_{\rm N}\).
## III PPN planar elliptic triangular solution
### PPN planar elliptic orbit
In order to obtain a PPN solution as a perturbation around the Newtonian equilateral elliptic solution, we assume a quasi-common dilation \(R_{AB}=af(1+\varepsilon_{AB})\) for the three masses, where \(\varepsilon_{AB}\) denotes a PPN distortion. The dilation is perfectly common at the Newtonian order, whereas at the PPN order it fails to be common by \(\varepsilon_{AB}\). See also Figure 1.
In the same way as deriving Eq. (2), we take the cross product of \(\mathbf{R}_{1}\) and Eq. (5) for \(M_{1}\) to obtain
\[\ell_{1}^{2}\frac{d}{dt}\bigg{(}f^{2}\omega\bigg{)}(\mathbf{\lambda} \times\mathbf{\rho})\] \[= (\mathbf{\lambda}\times\mathbf{\rho})\left\{-\frac{\sqrt{3}}{2}\frac{M}{ f_{\rm N}a}\nu_{2}\nu_{3}\right.\] \[\times\bigg{[}3(\varepsilon_{12}-\varepsilon_{31})+\frac{M}{2a}( \nu_{3}-\nu_{2})\bigg{(}\frac{1}{f_{\rm N}}-\frac{1}{\mathcal{A}_{\rm N}} \bigg{)}\] \[\quad\left.+\frac{3}{8}a^{2}\{\dot{f}_{\rm N}(1+3\nu_{1})+\sqrt{3} f_{\rm N}\omega_{\rm N}(1-\nu_{1}-2\nu_{2})\}\right.\] \[\quad\quad\times\{\dot{f}_{\rm N}(1-\nu_{1}-2\nu_{2})+\sqrt{3}f_{ \rm N}\omega_{\rm N}(1-\nu_{1})\}\] \[\quad\quad-\frac{M}{4f_{\rm N}a}(\nu_{2}-\nu_{3})(8\beta-3)\bigg{]}\] \[-\frac{\sqrt{3}}{4}Ma\nu_{2}\bigg{(}\nu_{3}\frac{\dot{f}_{\rm N}} {f_{\rm N}}+\frac{\omega_{\rm N}}{\sqrt{3}}(\nu_{1}-\nu_{2}-1)\bigg{)}\] \[\quad\quad\times\bigg{(}(4\gamma+3+\nu_{2}-\nu_{1})f_{\rm N}- \sqrt{3}\nu_{3}f_{\rm N}\omega_{\rm N}\bigg{)}\] \[+\frac{\sqrt{3}}{4}Ma\nu_{3}\bigg{(}\nu_{2}\frac{\dot{f}_{\rm N}} {f_{\rm N}}-\frac{\omega_{\rm N}}{\sqrt{3}}(\nu_{1}-\nu_{3}-1)\bigg{)}\] \[\quad\quad\times\bigg{(}(4\gamma+3+\nu_{3}-\nu_{1})\dot{f}_{\rm N }+\sqrt{3}\nu_{2}f_{\rm N}\omega_{\rm N}\bigg{)}\bigg{\}}\] \[+O(c^{-4}), \tag{6}\]
where \(\ell_{1}\equiv a\sqrt{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}}\)[2; 3; 16; 33], and we introduce an orthonormal basis \(\mathbf{\lambda}\) and \(\mathbf{\rho}\). Here, \(\mathbf{\lambda}\equiv\mathbf{R}_{1}/R_{1}\), and \(\mathbf{\rho}\) is the 90 degree rotation of \(\mathbf{\lambda}\). It is more convenient to use the orthonormal basis than \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\), because the right-hand side of Eq. (5) relies upon not only the positions but also the velocities. In elliptic motion, the velocity is not always orthogonal to the position vector, though it is in circular motion.
From the PPN total angular momentum, we find
\[\frac{d}{dt}\bigg{(}f^{2}\omega\bigg{)}\] \[= -\frac{M\dot{f}_{\rm N}\omega}{4a}\frac{13\nu_{1}\nu_{2}\nu_{3}-8 \{(\gamma+1)\eta-\zeta\}}{\eta}+O(c^{-4}), \tag{7}\]
where the dot denotes the time derivative, and we denote \(\eta\equiv\nu_{1}\nu_{2}+\nu_{2}\nu_{3}+\nu_{3}\nu_{1}\) and \(\zeta\equiv\nu_{1}^{2}\nu_{2}^{2}+\nu_{2}^{2}\nu_{3}^{2}+\nu_{3}^{2}\nu_{1}^{2}\). Eq. (7) reduces to \(d(f_{\rm N}^{2}\omega_{\rm N})/dt=0\) in the Newtonian limit, which recovers the Newtonian case of the planar elliptic triangular solution. It follows that Eq. (7) can also be derived from the sum of Eq. (6) for \(A=1,2,3\).
By substituting Eq. (7) into the left-hand side of Eq. (6), we obtain
\[\varepsilon_{12}-\varepsilon_{31}\] \[= \frac{M}{8A}(\nu_{3}-\nu_{2})(3\nu_{1}+1)\] \[-\frac{M}{12P}(\nu_{3}-\nu_{2})(9\nu_{1}+8\beta-2)(1+e_{\rm N} \cos\theta)\] \[+\frac{M}{4P}(\nu_{3}-\nu_{2})(3\nu_{1}-1)(1+e_{\rm N}\cos\theta) ^{2}\] \[-\frac{\sqrt{3}e_{\rm N}M}{72\nu_{2}\nu_{3}P}\sin\theta(1+e_{\rm N }\cos\theta)\] \[\quad\times\bigg{[}34\nu_{1}\nu_{2}\nu_{3}+16\nu_{1}(\nu_{2}^{2}+ \nu_{3}^{2})\] \[\quad\quad+9\nu_{2}\nu_{3}\{1-3\nu_{1}^{2}+(\nu_{2}-\nu_{3})^{2}\}\] \[\quad\quad-\frac{4(\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2})(13\nu_ {1}\nu_{2}\nu_{3}+8\zeta)}{\eta}\bigg{]}+O(c^{-4}), \tag{8}\]
where Eq. (3) is used for \(f_{N}\). By cyclic arguments, \(\varepsilon_{23}-\varepsilon_{12}\) and \(\varepsilon_{31}-\varepsilon_{23}\) are obtained.
Following Reference [16], the gauge fixing is chosen as \(\varepsilon_{12}+\varepsilon_{23}+\varepsilon_{31}=0\), which keeps the triangular area. From this gauge fixing and Eq. (8), we obtain
\[\varepsilon_{12}\] \[= \frac{M}{24A}[3\{\nu_{1}(\nu_{3}-2\nu_{2})+\nu_{3}(1+\nu_{2})\}-1]\] \[-\frac{M}{36P}[2(4\beta-1)(3\nu_{3}-1)\] \[\quad\quad+9\{\nu_{1}(\nu_{3}-\nu_{2})+\nu_{2}(\nu_{3}-\nu_{1})\}] (1+e_{\rm N}\cos\theta)\] \[+\frac{M}{12P}[1-3(\nu_{3}^{2}+2\nu_{1}\nu_{2})](1+e_{\rm N}\cos \theta)^{2}\] \[-\frac{e_{\rm N}M\sqrt{3}}{108P}(\nu_{1}-\nu_{2})\sin\theta(1+e_{ \rm N}\cos\theta)\] \[\quad\times\bigg{[}\frac{8\nu_{3}(1-\nu_{3})}{\nu_{1}\nu_{2}}+27 \nu_{3}-1\] \[\quad\quad\quad+\frac{2(\nu_{1}\nu_{2}-\nu_{3}^{2})(13\nu_{1}\nu_{2} \nu_{3}+8\zeta)}{\nu_{1}\nu_{2}\nu_{3}\eta}\bigg{]}+O(c^{-4}). \tag{9}\]
It is worthwhile to mention that \(\gamma\) makes no contribution to Eq. (9), while \(\beta\) is included in it. This is because the space curvature produced by unit rest mass is irrelevant to the asymmetry characterized by \(\varepsilon_{12}\).
We can also obtain \(\varepsilon_{23}\) and \(\varepsilon_{31}\) cyclically. We thus obtain the PPN triangular quasi-elliptic solution. Note that this solution does not follow a perfectly elliptic motion, though the Newtonian orbit as the zeroth-order solution is elliptic. The PPN solution does not persist on a long time scale, owing to the periastron shift by PPN effects as shown below. Namely, the above solution is obtained in the sense of an osculating orbit [2; 3].
### Periastron shift
After direct calculations, the PPN expression of the total energy [29; 30] for the PPN planar quasi-elliptic triangular solution can be rewritten as
\[\left(\frac{du}{d\theta}\right)^{2}+G(u)\frac{du}{d\theta}=F(u), \tag{10}\]
where \(u\equiv 1/f\). \(F(u)\) and \(G(u)\) are functions of \(u\), which are too long to write down in this paper.
The periastron shift is
\[\theta_{\rm PPN}=\int_{u_{\rm min}}^{u_{\rm max}}du\frac{1}{\left(\frac{du}{d \theta}\right)}-\pi, \tag{11}\]
where \(u_{\rm max}\) and \(u_{\rm min}\) correspond to the periastron and the apoastron, respectively.
In the same way as in the post-Newtonian calculations of the periastron shift [26; 27], by using Eq. (10) in Eq. (11), we obtain the periastron shift at the PPN order as
\[\theta_{\rm PPN}\] \[= \frac{\pi M}{36P\eta}\bigg{[}18\nu_{1}\nu_{2}\nu_{3}(9-2\beta)+ \eta(65-44\beta+72\gamma)+36\zeta\bigg{]}\] \[+O(c^{-4}). \tag{12}\]
The periastron shift per orbital period is \(2\theta_{\rm PPN}\). In GR (\(\beta=\gamma=1\)), Eq. (12) becomes
\[\theta_{\rm PPN}= \frac{\pi M}{36P\eta}(126\nu_{1}\nu_{2}\nu_{3}+93\eta+36\zeta)+O (c^{-4}). \tag{13}\]
In the test particle limit of the third mass (\(\nu_{3}\to 0\)), Eq. (12) disagrees with that of a binary system, because the restricted three-body dynamics is not equivalent to the binary dynamics [2; 3; 16]. See e.g. Eq. (66) in Reference [26] and Eq. (13.51) in [27] for the PPN periastron shift formula in the binary case.
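To make Eq. (12) easy to evaluate, here is a small sketch of ours (the function name and the dimensionless input \(M/P\) are our choices; \(G=c=1\)); setting \(\beta=\gamma=1\) reproduces the GR value of Eq. (13), and the shift per orbital period is twice the returned value.

```python
import numpy as np

def ppn_periastron_shift(nu1, nu2, nu3, M_over_P, beta=1.0, gamma=1.0):
    # theta_PPN of Eq. (12), in radians.
    eta = nu1*nu2 + nu2*nu3 + nu3*nu1
    zeta = (nu1*nu2)**2 + (nu2*nu3)**2 + (nu3*nu1)**2
    return (np.pi * M_over_P / (36.0 * eta)) * (
        18.0 * nu1*nu2*nu3 * (9.0 - 2.0*beta)
        + eta * (65.0 - 44.0*beta + 72.0*gamma)
        + 36.0 * zeta)
```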
## IV Conclusion
We found a PPN triangular solution to the planar elliptic three-body problem in a class of fully conservative theories. The distortion function \(\varepsilon_{AB}\) of the triangular solution depends on \(\beta\) but not on \(\gamma\). In the circular limit, the present solution recovers the PPN triangular circular solution in Reference [33]: in the limit \(e_{\rm N}\to 0\), Eq. (9) agrees with Eq. (41) in [33].
The periastron shift of the PPN triangular solution was also obtained. Because of the three-body interactions, the periastron shift in the PPN triangular solution is different from that of a binary system.
It is left for future work to study the stability of the present solution.
## V Acknowledgments
We thank Yuuiti Sendouda and Marcus Werner for encouraging comments. This work was supported in part by Japan Science and Technology Agency (JST) SPRING, Grant Number, JPMJSP2152 (Y.N.), and in part by Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research, No. 20K03963 (H.A.).
|
2305.01917 | Splittings for C*-correspondences and strong shift equivalence | We present an extension of the notion of in-splits from symbolic dynamics to
topological graphs and, more generally, to C*-correspondences. We demonstrate
that in-splits provide examples of strong shift equivalences of
C*-correspondences. Furthermore, we provide a streamlined treatment of Muhly,
Pask, and Tomforde's proof that any strong shift equivalence of regular
C*-correspondences induces a (gauge-equivariant) Morita equivalence between
Cuntz-Pimsner algebras. For topological graphs, we prove that in-splits induce
diagonal-preserving gauge-equivariant *-isomorphisms in analogy with the
results for Cuntz-Krieger algebras. Additionally, we examine the notion of
out-splits for C*-correspondences. | Kevin Aguyar Brix, Alexander Mundey, Adam Rennie | 2023-05-03T06:17:36Z | http://arxiv.org/abs/2305.01917v2 | # Splittings for C\({}^{*}\)-correspondences and strong shift equivalence
###### Abstract
We present an extension of the notion of in-splits from symbolic dynamics to topological graphs and, more generally, to \(C^{*}\)-correspondences. We demonstrate that in-splits provide examples of strong shift equivalences of \(C^{*}\)-correspondences. Furthermore, we provide a streamlined treatment of Muhly, Pask, and Tomforde's proof that any strong shift equivalence of regular \(C^{*}\)-correspondences induces a (gauge-equivariant) Morita equivalence between Cuntz-Pimsner algebras. For topological graphs, we prove that in-splits induce diagonal-preserving gauge-equivariant \(*\)-isomorphisms in analogy with the results for Cuntz-Krieger algebras. Additionally, we examine the notion of out-splits for \(C^{*}\)-correspondences.
## 1 Introduction
This paper studies noncommutative dynamical systems--defined as \(C^{*}\)-correspondences over not necessarily commutative \(C^{*}\)-algebras--building on previous work [13, 14, 15, 16, 17, 18, 19]. Inspired by classical constructions of state splittings in symbolic dynamics [20], we introduce in-splits and out-splits for \(C^{*}\)-correspondences. We prove that these operations change the \(C^{*}\)-correspondence but leave the abstract dynamical system invariant, up to a notion of strong shift equivalence (conjugacy) as defined by Muhly, Pask, and Tomforde. This strong shift equivalence is reflected in the associated Cuntz-Pimsner \(C^{*}\)-algebras as gauge-equivariant Morita equivalence.
Symbolic dynamics [14] is a powerful tool in the study of smooth dynamical systems (such as toral automorphisms or Smale's Axiom A diffeomorphisms) that works by discretising time using shift spaces. Every subshift of finite type can be represented by a finite directed graph. The _conjugacy problem_ for subshifts of finite type is fundamental: _when are two shifts of finite type the same?_ Williams [20] showed that two subshifts of finite type are conjugate if and only if the adjacency matrices \(\mathsf{A}\) and \(\mathsf{B}\) of their graph representations are _strong shift equivalent_. That is, there are adjacency matrices \(\mathsf{A}=\mathsf{A}_{1},\ldots,\mathsf{A}_{n}=\mathsf{B}\) such that for each \(i=1,\ldots,n-1\) there are rectangular matrices with nonnegative integer entries \(\mathsf{R}\) and \(\mathsf{S}\) such that \(\mathsf{A}_{i}=\mathsf{RS}\) and \(\mathsf{SR}=\mathsf{A}_{i+1}\).
Williams' motivation was the observation that state splittings of graph representations change the graph but leave the associated shift space invariant up to conjugacy. The data of a state splitting is reflected in matrices \(\mathsf{R}\) and \(\mathsf{S}\) as above, and Williams proved the _decomposition theorem_: any conjugacy is a finite composition of elementary conjugacies coming from state
splittings. Deciding whether two subshifts are conjugate can be difficult in practice, and it is an open problem in symbolic dynamics to determine whether strong shift equivalence is decidable.
In [10], Cuntz and Krieger associated a \(C^{*}\)-algebra \(\mathcal{O}_{\mathsf{A}}\), now known as a Cuntz-Krieger algebra, to a subshift with adjacency matrix \(\mathsf{A}\) and showed that it is a universal simple \(C^{*}\)-algebra when \(\mathsf{A}\) is irreducible and not a permutation. The \(C^{*}\)-algebra \(\mathcal{O}_{\mathsf{A}}\) comes equipped with an action of the circle group \(\mathbb{T}\)--the _gauge action_--and a canonical commutative subalgebra--the _diagonal_. Cuntz and Krieger proved that conjugate subshifts induce Morita equivalent Cuntz-Krieger algebras.
Recently, Carlsen and Rout [10] completed the picture: \(\mathsf{A}\) and \(\mathsf{B}\) are strong shift equivalent if and only if there is a \(*\)-isomorphism \(\Phi\colon\mathcal{O}_{\mathsf{A}}\otimes\mathcal{K}\to\mathcal{O}_{\mathsf{ B}}\otimes\mathcal{K}\) that is both gauge-equivariant and diagonal-preserving (\(\mathcal{K}\) is the \(C^{*}\)-algebra of compact operators on separable Hilbert space). Cuntz-Krieger algebras have been generalised in many ways (e.g. directed graphs and their higher-rank analogues, see [21] and references therein), and we emphasise Pimsner's construction from a \(C^{*}\)-correspondence [22], later refined by Katsura [11], and applied by Katsura to his topological graphs [11].
We mention in passing that there are other moves on graphs: Parry and Sullivan's _symbol expansions_[23] and the _Cuntz splice_, both related to flow equivalence, as well as more advanced moves [10], which were utilised in the geometric classification of all unital graph \(C^{*}\)-algebras [10]. We leave open whether these moves have analogues for correspondences.
In the general setting of \(C^{*}\)-correspondences (a right Hilbert \(C^{*}\)-module with a left action [14]), we do not have access to a notion of conjugacy, but Muhly, Pask, and Tomforde [24] introduced strong shift equivalence in direct analogy with Williams' work. For regular \(C^{*}\)-correspondences, they showed that the induced Cuntz-Pimsner algebras are Morita equivalent (we verify that this Morita equivalence is in fact gauge-equivariant in the sense of [12]). It is an interesting open problem whether the weaker notion of _shift equivalence_ introduced in [11] (see also [13]) also implies gauge-equivariant Morita equivalence.
For directed graphs, an in-split is a factorisation of the range map, and the range map induces the left action on the graph correspondence. An in-split of a general correspondence is then formulated as a factorisation of the left action subject to natural conditions. Similarly, an out-split is a factorisation of the source map which is reflected in the right-module structure of the graph correspondence, and we define an out-split of a general correspondence accordingly, although this appears less natural than the in-split. Our notions of splittings of correspondences provide examples of strong shift equivalences. They exhibit the same asymmetry as in the classical setting (cf. [1]): an out-split induces a gauge-equivariant Morita equivalence, while an in-split induces a gauge-equivariant \(*\)-isomorphism of Cuntz-Pimsner algebras. We leave open the problem of whether an arbitrary strong shift equivalence of correspondences is a composition of splittings.
We specialise our splittings to the case of topological graphs and in this case the analogy with directed graphs is almost complete. For general 'non-commutative dynamics' defined by \(C^{*}\)-correspondences over not-necessarily commutative \(C^{*}\)-algebras, the analogy is as complete as it can be. It is unreasonable to expect complete characterisations of strong shift equivalence in terms of Cuntz-Pimsner algebras akin to the Carlsen-Rout result, due to the lack of a diagonal subalgebra for a general correspondence.
In Section 2 we recall what we need about \(C^{*}\)-modules, correspondences, and their associated \(C^{*}\)-algebras. Along the way we provide some proofs for results that seem to be missing from the literature. Section 3 recalls strong shift equivalence of correspondences and refines the main result of Muhly, Pask, and Tomforde [14, Theorem 3.14]. In-splits for topological graphs and general correspondences are introduced in Section 4. Within this section we also extend the idea of a diagonal subalgebra to topological graphs and show that the gauge-equivariant \(*\)-isomorphisms between a topological graph correspondence and any of its in-splits are diagonal-preserving. Finally, Section 5 defines and gives the basic properties of non-commutative out-splits.
**Acknowledgements**
K.A.B. was supported by a Carlsberg Foundation Internationalisation Fellowship and a DFF-International Postdoc 1025-00004B. A.M. was supported by ARC Project DP200100155 and University of Wollongong RevITAlising Research Grant IV036. A.M. and A.R. thank Bram Mesland and Aidan Sims for useful discussions. The authors would also like to thank Jason DeVito for helpful advice via the Mathematics StackExchange, and Paige Riddiford for careful proofreading.
## 2 Correspondences and Cuntz-Pimsner algebras
In this preliminary section we provide background information and establish notation for what we need to know about \(C^{*}\)-correspondences and their \(C^{*}\)-algebras (Toeplitz-Pimsner algebras and Cuntz-Pimsner algebras), frames, and topological graphs.
### \(C^{*}\)-modules and correspondences
We follow many of the conventions of [1] for \(C^{*}\)-modules, and Pimsner [13] and Katsura [12] for the algebras defined by \(C^{*}\)-correspondences.
A right Hilbert \(A\)-module \(X_{A}\) is a right module over a \(C^{*}\)-algebra \(A\) equipped with an \(A\)-valued inner product \((\cdot\mid\cdot)_{A}\) such that \(X_{A}\) is complete with respect to the norm induced by the inner product. The module \(X_{A}\) is _full_ if \(\overline{(X_{A}\mid X_{A})}_{A}=A\). We denote the \(C^{*}\)-algebra of adjointable operators on \(X_{A}\) by \(\operatorname{End}_{A}(X)\), the \(C^{*}\)-ideal of generalised compact operators by \(\operatorname{End}_{A}^{0}(X)\), and the finite-rank operators by \(\operatorname{End}_{A}^{00}(X)\). The finite-rank operators are generated by rank-one operators \(\Theta_{x,y}\) satisfying \(\Theta_{x,y}(z)=x\cdot(y\mid z)_{A}\), for all \(x,y,z\in X_{A}\).
**Definition 2.1**.: Let \(X_{B}\) be a right Hilbert \(B\)-module and let \(\phi_{X}\colon A\to\operatorname{End}_{B}(X)\) be a \(*\)-homomorphism. The data \((\phi_{X},{}_{A}X_{B})\) is called an \(A\)-\(B\)-_correspondence_ (or just a correspondence), and if \(\phi_{X}\) is understood we will write \({}_{A}X_{B}\). If \(A=B\) we refer to \((\phi_{X},{}_{A}X_{A})\) as a correspondence _over_\(A\).
A correspondence \((\phi_{X},{}_{A}X_{B})\) is _nondegenerate_ if \(\overline{\phi_{X}(A)X}=X\), and following [14, Definition 3.1], we say the correspondence is _regular_ if the left action is _injective_ (i.e. \(\ker(\phi_{X})=\{0\}\)) and _by compacts_ (i.e. \(\phi_{X}(A)\subseteq\operatorname{End}_{B}^{0}(X)\)).
Throughout we assume that \(A\) and \(B\) are both \(\sigma\)-unital \(C^{*}\)-algebras and that all Hilbert modules are countably generated, although many of our results do not critically rely on these assumptions.
There is a natural notion of morphism between correspondences.
**Definition 2.2**.: Let \((\phi_{X},{}_{A}X_{A})\) and \((\phi_{Y},{}_{B}Y_{B})\) be correspondences. A _correspondence morphism_\((\alpha,\beta)\colon(\phi_{X},{}_{A}X_{A})\to(\phi_{Y},{}_{B}Y_{B})\) consists of a \(*\)-homomorphism \(\alpha\colon A\to B\) and a linear map \(\beta\colon X\to Y\) satisfying:
1. \((\beta(\xi)\mid\beta(\eta))_{B}=\alpha((\xi\mid\eta)_{A})\) for all \(\xi\), \(\eta\in X\);
2. \(\beta(\xi\cdot a)=\beta(\xi)\cdot\alpha(a)\), for all \(a\in A\) and \(\xi\in X\); and
3. \(\beta(\phi_{X}(a)\xi)=\phi_{Y}(\alpha(a))\beta(\xi)\), for all \(a\in A\) and \(\xi\in X\).
A correspondence morphism is _injective_ if \(\alpha\) is injective (in which case \(\beta\) is isometric) and a _correspondence isomorphism_ if \(\alpha\) and \(\beta\) are isomorphisms. Composition of morphisms is defined by \((\alpha,\beta)\circ(\alpha^{\prime},\beta^{\prime})=(\alpha\circ\alpha^{ \prime},\beta\circ\beta^{\prime})\). If \((\phi_{Y},{}_{B}Y_{B})=(\operatorname{Id}_{B},{}_{B}B_{B})\) is the identity correspondence [1] over the \(C^{*}\)-algebra \(B\), then we call \((\alpha,\beta)\) a _representation_ of \((\phi_{X},{}_{A}X_{A})\) in \(B\).
A representation \((\alpha,\beta)\) of a \(C^{*}\)-correspondence \((\phi_{X},X_{A})\) is said to _admit a gauge action_ if there is a strongly continuous action \(\gamma^{(\alpha,\beta)}\) of \(\mathbb{T}\) on \(C^{*}(\alpha,\beta)\coloneqq C^{*}(\alpha(A)\cup\beta(X_{A}))\)--the \(C^{*}\)-algebra generated by the image of \((\alpha,\beta)\) in \(B\)--by \(*\)-automorphisms such that \(\gamma_{z}^{(\alpha,\beta)}(\alpha(a))=\alpha(a)\) for all \(a\in A\), and \(\gamma_{z}^{(\alpha,\beta)}(\beta(x))=z\beta(x)\) for all \(x\in X\).
**Definition 2.3**.: The _Toeplitz algebra_\(\mathcal{T}_{X}\) of a \(C^{*}\)-correspondence \((\phi,{}_{A}X_{A})\) is the universal \(C^{*}\)-algebra for representations of \((\phi_{X},{}_{A}X_{A})\) in the following sense. There exists a representation \((\underline{\iota}_{A},\underline{\iota}_{X})\colon(\phi,{}_{A}X_{A})\to \mathcal{T}_{X}\) such that \(\mathcal{T}_{X}=C^{*}(\underline{\iota}_{A},\underline{\iota}_{X})\), and for any other representation \((\alpha,\beta)\colon(\phi,{}_{A}X_{A})\to B\) in a \(C^{*}\)-algebra \(B\), there is a unique \(*\)-homomorphism \(\alpha\times\beta\colon\mathcal{T}_{X}\to B\) such that \((\alpha\times\beta)\circ\underline{\iota}_{A}=\alpha\) and \((\alpha\times\beta)\circ\underline{\iota}_{X}=\beta\).
To a correspondence \((\phi_{X},{}_{A}X_{A})\) we associate its _covariance ideal_
\[J_{\phi_{X}}\coloneqq\phi_{X}^{-1}(\operatorname{End}_{A}^{0}(X))\cap\ker( \phi_{X})^{\perp},\]
which is an ideal in \(A\) (cf. [13, Definition 3.2]). The covariance ideal is the largest ideal of \(A\) such that the restriction of \(\phi_{X}\) to it is both injective and has image contained in \(\operatorname{End}_{A}^{0}(X)\). We will often consider covariant morphisms which respect the covariance ideal.
A correspondence morphism \((\alpha,\beta)\colon(\phi_{X},{}_{A}X_{A})\to(\phi_{Y},{}_{B}Y_{B})\) induces a \(*\)-homomorphism of compacts \(\beta^{(1)}\colon\operatorname{End}_{A}^{0}(X)\to\operatorname{End}_{B}^{0}(Y)\) satisfying \(\beta^{(1)}(\Theta_{x_{1},x_{2}})=\Theta_{\beta(x_{1}),\beta(x_{2})}\) for all \(x_{1},x_{2}\in X\).
**Definition 2.4**.: A morphism \((\alpha,\beta)\colon(\phi_{X},{}_{A}X_{A})\to(\phi_{Y},{}_{B}Y_{B})\) is _covariant_ if
\[\beta^{(1)}\circ\phi_{X}(c)=\phi_{Y}\circ\alpha(c)\quad\text{ for all }c\in J_{\phi_{X}}.\]
In particular, we must have \(\alpha(J_{\phi_{X}})\subseteq J_{\phi_{Y}}\). If \((\phi_{Y},{}_{B}Y_{B})=(\operatorname{Id}_{B},{}_{B}B_{B})\) is the identity correspondence over \(B\), then we call \((\alpha,\beta)\) a _covariant representation_ of \((\phi_{X},{}_{A}X_{A})\) in \(B\).
**Definition 2.5**.: The _Cuntz-Pimsner algebra_\(\mathcal{O}_{X}\) of a \(C^{*}\)-correspondence \((\phi,{}_{A}X_{A})\) is the universal \(C^{*}\)-algebra for covariant representations of \((\phi_{X},{}_{A}X_{A})\) in the following sense. There exists a universal covariant representation \((\iota_{A},\iota_{X})\colon(\phi,{}_{A}X_{A})\to\mathcal{O}_{X}\) such that \(\mathcal{O}_{X}=C^{*}(\iota_{A},\iota_{X})\), and for any other covariant representation \((\alpha,\beta)\colon(\phi,{}_{A}X_{A})\to B\) on a \(C^{*}\)-algebra \(B\), there is a unique \(*\)-homomorphism \(\alpha\times\beta\colon\mathcal{O}_{X}\to B\) such that \((\alpha\times\beta)\circ\iota_{A}=\alpha\) and \((\alpha\times\beta)\circ\iota_{X}=\beta\).
The universal covariant representation \((\iota_{A},\iota_{X})\) admits a gauge action \(\gamma^{X}\colon\mathbb{T}\curvearrowright\mathcal{O}_{X}\) that we shall refer to as the _canonical gauge action_.
**Lemma 2.6**.: _Let \((\alpha,\beta)\colon(\phi_{X},{}_{A}X_{A})\to(\phi_{Y},{}_{B}Y_{B})\) be a covariant correspondence morphism, and let \((\iota_{A},\iota_{X})\) and \((\iota_{B},\iota_{Y})\) be universal covariant representations of \(\mathcal{O}_{X}\) and \(\mathcal{O}_{Y}\), respectively. Then there is an induced gauge-equivariant \(*\)-homomorphism \(\alpha\times\beta\colon\mathcal{O}_{X}\to\mathcal{O}_{Y}\) satisfying_
\[(\alpha\times\beta)\circ\iota_{A}=\iota_{B}\circ\alpha\quad\text{and}\quad( \alpha\times\beta)\circ\iota_{X}=\iota_{Y}\circ\beta.\]
_If \(\alpha\) is injective, then \(\alpha\times\beta\) is injective._
_Remark 2.7_.: The relation \((\alpha\times\beta)\circ\iota_{X}^{(1)}=\iota_{Y}^{(1)}\circ\beta\) also follows easily from the lemma and the definition of the induced \({}^{(1)}\) maps on compacts.
Proof.: The composition \((\iota_{B},\iota_{Y})\circ(\alpha,\beta)\) is a covariant representation of \((\phi_{X},X_{A})\) on \(\mathcal{O}_{Y}\), so by the universal property (and a slight abuse of notation) there is a \(*\)-homomorphism \(\alpha\times\beta\colon\mathcal{O}_{X}\to\mathcal{O}_{Y}\) satisfying \((\alpha\times\beta)\circ\iota_{A}=\iota_{B}\circ\alpha\) and \((\alpha\times\beta)\circ\iota_{X}=\iota_{Y}\circ\beta\). If \(a\in A\), then
\[(\alpha\times\beta)\circ\gamma_{z}^{X}(\iota_{A}(a))=\iota_{B}\circ\alpha(a)= \gamma_{z}^{Y}\circ(\alpha\times\beta)(\iota_{A}(a)),\]
for all \(z\in\mathbb{T}\), and if \(x\in X_{A}\), then
\[(\alpha\times\beta)\circ\gamma_{z}^{X}(\iota_{X}(x))=z(\alpha\times\beta)( \iota_{X}(x))=\gamma_{z}^{Y}\circ(\alpha\times\beta)(\iota_{X}(x)),\]
for all \(z\in\mathbb{T}\). This shows that \(\alpha\times\beta\) is gauge-equivariant. If \(\alpha\) is injective, then \((\iota_{A}\circ\alpha,\iota_{X}\circ\beta)\) is an injective representation that admits a gauge action so \(\alpha\times\beta\) is injective by the gauge invariant uniqueness theorem [13, Theorem 6.4].
To talk about Morita equivalence we isolate a special kind of correspondence.
**Definition 2.8**.: An \(A\)-\(B\)-imprimitivity bimodule between \(C^{*}\)-algebras \(A\) and \(B\) is a correspondence \((\phi,{}_{A}X_{B})\) with an additional left \(A\)-valued inner product such that the right \(B\) action is adjointable for the left inner product, and \(X\) is full as a left and as a right module. Moreover
\[\phi({}_{A}(x|y))z=x\cdot(y|z)_{B}\quad x,\,y,\,z\in X.\]
If such an imprimitivity bimodule exists then \(A\) and \(B\) are _Morita equivalent_.
There is also a group-equivariant version of Morita equivalence due to Combes, [12]. To describe equivariant Morita equivalence and the gauge action of the circle on Cuntz-Pimsner algebras we recall some definitions and results.
**Definition 2.9**.: Let \(G\) be a locally compact Hausdorff group and let \(A\) be a \(G\)-\(C^{*}\)-algebra with strongly continuous action \(\alpha\colon G\to\operatorname{Aut}(A)\). An _action of \(G\) on an \(A\)-module \(X_{A}\)_ is a strongly continuous action \(g\mapsto U_{g}\) of \(G\) on \(X_{A}\) by \(\mathbb{C}\)-linear isometries such that
* \(U_{g}(x\cdot a)=U_{g}(x)\alpha_{g}(a)\) for all \(x\in X\) and \(a\in A\); and
* \((U_{g}x\mid U_{g}y)_{A}=\alpha_{g}((x\mid y)_{A})\) for all \(x,\,y\in X\).
If \((\phi,{}_{B}X_{A})\) is a correspondence and \(B\) is a \(G\)-\(C^{*}\)-algebra with action \(\beta:G\to\operatorname{Aut}(B)\), then \(U\) is an _action on the correspondence_ if \(U\) is an action on \(X_{A}\) and in addition \(U_{g}\phi(b)=\phi(\beta_{g}(b))U_{g}\) for all \(b\in B\). The action is _covariant_, if in addition \(\beta_{g}(J_{X})=J_{X}\).
_Remark 2.10_.: The operators \(U_{g}\) on \(X_{A}\) are typically not \(A\)-linear due to condition (i).
**Lemma 2.11**.: _If \((U,\alpha)\) is an action of \(G\) on the right module \(X_{A}\), then there is an induced strongly continuous action \(\overline{\alpha}\colon G\to\operatorname{Aut}(\operatorname{End}^{0}_{A}(X))\) defined by \(\overline{\alpha}_{g}(T):=\operatorname{Ad}_{U_{g}}(T)=U_{g}TU_{g^{-1}}\). For rank-1 operators \(\overline{\alpha}_{g}(\Theta_{x,y})=\Theta_{U_{g}x,U_{g}y}\)._
An action of a group on a correspondence induces a "second quantised" action on both the associated Toeplitz and Cuntz-Pimsner algebras. This is an immediate consequence of the universal properties of both Toeplitz and Cuntz-Pimsner algebras.
**Lemma 2.12** (cf. [11]).: _If \((U,\alpha)\) is an action of \(G\) on an \(A\)-correspondence \((\phi,{}_{A}X_{A})\), then there is an induced action \(\sigma\colon G\to\operatorname{Aut}(\mathcal{T}_{X})\) on the Toeplitz-Pimsner algebra such that_
\[\sigma_{g}(\underline{\iota}_{A}(a))=\underline{\iota}_{A}(\alpha_{g}(a)) \quad\text{and}\quad\sigma_{g}(\underline{\iota}_{X}(x))=\underline{\iota}_ {X}(U_{g}x)\]
_for all \(g\in G\), \(a\in A\), and \(x\in X\). If the action \((U,\alpha)\) is covariant, then \(\sigma\) descends to an action \(\sigma\colon G\to\operatorname{Aut}(\mathcal{O}_{X})\)._
**Example 2.13**.: The action of the circle \(\mathbb{T}\) on a correspondence \((\phi,{}_{A}X_{A})\) defined by
\[U_{z}(x)=zx,\qquad\alpha_{z}(a)=a,\qquad x\in X,\ \ a\in A,\ \ z\in\mathbb{T}\]
happens to have each \(U_{z}\) adjointable, and induces the gauge actions on \(\mathcal{T}_{X}\) and \(\mathcal{O}_{X}\).
**Definition 2.14**.: Let \(A\) and \(B\) be \(C^{*}\)-algebras and suppose that \(\gamma^{A}\colon G\curvearrowright A\) and \(\gamma^{B}\colon G\curvearrowright B\) are strongly continuous actions of a locally compact Hausdorff group \(G\). Following Combes [10], we say that \(\gamma^{A}\) and \(\gamma^{B}\) are _Morita equivalent_ if:
1. there is an \(A\)-\(B\)-imprimitivity bimodule \({}_{A}X_{B}\);
2. there is a strongly continuous action of \(G\) on \(X\) by \(\mathbb{C}\)-linear isometries \(U_{g}\);
3. the action \(\operatorname{Ad}_{U}\) of \(G\) on the compacts \(A\) (respectively \(B\)) of \(X\) as a right \(B\) (respectively left \(A\)) module is the action \(\gamma^{A}\) (respectively \(\gamma^{B}\)) (see [10, page 292]).
Equivalently [10, Section 4], \(\gamma^{A}\) and \(\gamma^{B}\) are Morita equivalent if there exists a \(C^{*}\)-algebra \(C\) such that \(A\) and \(B\) are (isomorphic to) complementary full corners in \(C\), and \(C\) admits an action \(\gamma^{C}\) such that \(\gamma^{A}\) is \(\gamma^{C}|_{A}\) and \(\gamma^{B}\) is \(\gamma^{C}|_{B}\).
### Frames
An important technical and computational tool for Hilbert \(C^{*}\)-modules is the concept of a frame. This is as close as one can get to an orthonormal basis in a \(C^{*}\)-module, and it serves similar purposes. In fact, Kajiwara, Pinzari, and Watatani refer to frames as bases, see [11]. In the signal analysis literature, see for instance [13, 14], what we call a frame is also known as a _standard normalised tight frame_.
**Definition 2.15**.: Let \(X_{A}\) be a right \(A\)-module. A (right) _countable frame_ for \(X_{A}\) is a sequence \((x_{j})_{j\in\mathbb{N}}\) in \(X_{A}\) such that \(\sum_{j=1}^{\infty}\Theta_{x_{j},x_{j}}\) converges strictly to the identity operator in \(\operatorname{End}_{A}(X)\). Equivalently, we have \(x=\sum_{j=1}^{\infty}\Theta_{x_{j},x_{j}}x\) for all \(x\in X_{A}\) with the sum converging in norm.
For the strict topology, see [14], but for our purposes it is enough to know that the strict topology coincides with the \(*\)-strong topology on bounded sets.
If \((x_{j})_{j\in\mathbb{N}}\) is a frame for \(X_{A}\), then \(X_{A}\) is generated as a right \(A\)-module by the \(x_{j}\), so \(X\) is countably generated. Conversely, any countably generated \(C^{*}\)-module over a \(\sigma\)-unital \(C^{*}\)-algebra \(A\) admits a countable frame, cf. [13, Proposition 2.1].
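As a toy illustration (ours, not from the paper), take \(A=\mathbb{C}\), so that \(X_{A}\) is a Hilbert space and a frame in the above sense is exactly a Parseval frame; the identity \(\sum_{j}\Theta_{x_{j},x_{j}}=\operatorname{Id}\) can then be checked directly for the classical three-vector frame in \(\mathbb{R}^{2}\).

```python
import numpy as np

# Three equiangular vectors in R^2, scaled so that the frame operator
# sum_j Theta_{x_j, x_j} equals the identity (a Parseval frame).
angles = np.pi/2 + np.array([0.0, 2*np.pi/3, 4*np.pi/3])
frame = np.sqrt(2/3) * np.stack([np.cos(angles), np.sin(angles)], axis=1)
S = sum(np.outer(x, x) for x in frame)   # the frame operator
assert np.allclose(S, np.eye(2))
# Reconstruction x = sum_j x_j (x_j | x) then holds for every x in R^2.
```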
The following result is well-known to experts, but we were unable to find a reference. As the proof is non-trivial, we include it for completeness. We thank Bram Mesland for helpful suggestions.
**Proposition 2.16**.: _Let \(X_{A}\) be a countably generated right Hilbert \(A\)-module and let \((\phi,{}_{A}Y_{B})\) be a countably generated \(A\)-\(B\)-correspondence. Let \((x_{i})_{i\in\mathbb{N}}\) be a countable frame for \(X_{A}\) and let \((y_{j})_{j\in\mathbb{N}}\) be a countable frame for \(Y_{B}\). Then \((x_{i}\otimes y_{j})_{i,j\in\mathbb{N}}\) is a countable frame for \(X\otimes_{\phi}Y\)._
To prove Proposition 2.16 we require some technical lemmas.
**Lemma 2.17**.: _Let \(A\) be a \(C^{*}\)-algebra. Suppose \(a,b\in A\) are positive elements such that \(a\leq b\leq 1\) in the minimal unitisation \(A^{+}\). Then for each \(h\in A\) we have \(\|bh-h\|^{2}\leq\|h\|\,\|ah-h\|\)._
Proof.: Since \(0\leq 1-b\leq 1\) in \(A^{+}\) we have \((1-b)^{2}\leq 1-b\), and \(a\leq b\) gives \(1-b\leq 1-a\). The result then follows from the calculation

\[\|bh-h\|^{2}=\|h^{*}(1-b)^{2}h\|\leq\|h^{*}(1-b)h\|\leq\|h^{*}(1-a)h\|\leq\|h\|\,\|(1-a)h\|=\|h\|\,\|ah-h\|,\]

where \(1\in A^{+}\).
**Lemma 2.18**.: _Let \(X_{A}\), \((\phi,{}_{A}Y_{B})\), \((x_{i})_{i\in\mathbb{N}}\), and \((y_{i})_{i\in\mathbb{N}}\) be as in the statement of Proposition 2.16. Then for each \(N,M\in\mathbb{N}\),_
\[\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_ {j}}\Big{\|}\leq 1.\]
Proof.: Let \(\ell^{2}(Y)\coloneqq\ell^{2}(\mathbb{N})\otimes_{\mathbb{C}}Y\) with the standard right \(B\)-module structure. For each \(N\in\mathbb{N}\), we wish to define a right \(B\)-linear map \(\psi_{N}\colon X\otimes_{\phi}Y\to\ell^{2}(Y)\) on elementary tensors by
\[(\psi_{N}(x\otimes y))_{i}=\begin{cases}\phi((x_{i}\mid x)_{A})y&\text{if }i \leq N;\\ 0&\text{otherwise}.\end{cases}\]
Observe that
\[\|\psi_{N}(x\otimes y)\|_{\ell^{2}(Y)}^{2}=\Big{\|}\sum_{i=1}^{N}\big{(}\phi((x_{i}\mid x)_{A})\cdot y\;\big{|}\;\phi((x_{i}\mid x)_{A})\cdot y\big{)}_{B}\Big{\|}\] \[=\Big{\|}\sum_{i=1}^{N}(y\mid\phi((x\mid x_{i})_{A}(x_{i}\mid x)_{A})y)_{B}\Big{\|}=\Big{\|}\Big{(}y\;\Big{|}\;\phi\Big{(}\Big{(}x\;\Big{|}\;\sum_{i=1}^{N}\Theta_{x_{i},x_{i}}x\Big{)}_{A}\Big{)}y\Big{)}_{B}\Big{\|}\] \[\leq\|x\|^{2}\|y\|^{2}\Big{\|}\sum_{i=1}^{N}\Theta_{x_{i},x_{i}}\Big{\|}\leq\|x\|^{2}\|y\|^{2},\]
so that \(\|\psi_{N}\|\leq 1\). Hence, \(\psi_{N}\) extends to a bounded linear map \(\psi_{N}\colon X\otimes_{\phi}Y\to\ell^{2}(Y)\). Observe that \(\psi_{N}\) is adjointable with adjoint \(\psi_{N}^{*}((z_{i})_{i})=\sum_{i=1}^{N}x_{i}\otimes z_{i}.\) Embedding \(\psi_{N}\) in the "bottom left" corner of the \(C^{*}\)-algebra \(\operatorname{End}_{B}((X\otimes_{\phi}Y)\oplus\ell^{2}(Y))\) shows that \(\|\psi_{N}^{*}\|=\|\psi_{N}\|\leq 1\).
Now for each \(M\in\mathbb{N}\) let \(T_{M}=\sum_{j=1}^{M}\Theta_{y_{j},y_{j}}\). Observe that \(T_{M}\) acts diagonally on \(\ell^{2}(Y)\) and that as an operator on \(Y_{B}\) we have \(\|T_{M}\|_{\operatorname{End}_{B}(Y)}\leq 1\). Then for \((z_{i})_{i}\in\ell^{2}(Y)\),
\[\|T_{M}((z_{i})_{i})\|^{2}=\Big{\|}\sum_{i=1}^{\infty}(T_{M}z_{i}\mid T_{M}z_{i})_{B}\Big{\|}\leq\Big{\|}\sum_{i=1}^{\infty}\|T_{M}\|_{\operatorname{End}_{B}(Y)}^{2}(z_{i}\mid z_{i})_{B}\Big{\|}\] \[\leq\|T_{M}\|_{\operatorname{End}_{B}(Y)}^{2}\|(z_{i})_{i}\|^{2}\leq\|(z_{i})_{i}\|^{2},\]
where the first inequality follows from [10, Proposition 1.2.]. Thus, \(\|T_{M}\|_{\operatorname{End}_{B}(\ell^{2}(Y))}\leq 1\). Since we can write
\[\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}}= \psi_{N}^{*}\circ T_{M}\circ\psi_{N}\]
the result follows.
Proof of Proposition 2.16.: It suffices to show that \((\sum_{(i,j)\in\Sigma}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}})_{\Sigma\subset\subset\mathbb{N}^{2}}\) is an approximate identity for \(\operatorname{End}_{B}^{0}(X\otimes_{\phi}Y)\), where the net is indexed by the finite subsets \(\Sigma\) of \(\mathbb{N}^{2}\). Fix \(\varepsilon>0\). We first claim that for each \(\xi\in X\otimes_{\phi}Y\) there exist \(M,N\in\mathbb{N}\) such that
\[\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_ {j}}\xi-\xi\Big{\|}<\varepsilon. \tag{2.1}\]
It suffices to consider the case where \(\xi=\eta\otimes\zeta\) for some \(\eta\in X_{A}\), \(\zeta\in Y_{B}\). Take \(M\) large enough so that
\[\Big{\|}\sum_{i=1}^{M}\Theta_{x_{i},x_{i}}\eta-\eta\Big{\|}<\frac{\varepsilon} {2}\]
and take \(N\) large enough so that
\[\Big{\|}\sum_{j=1}^{N}\Theta_{y_{j},y_{j}}\phi((x_{i}\mid\eta)_{A})\zeta-\phi ((x_{i}\mid\eta)_{A})\zeta\Big{\|}<\frac{\varepsilon}{2M}\]
for all \(1\leq i\leq M\). It follows that
\[\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j},x_ {i}\otimes y_{j}}\xi-\xi\Big{\|}=\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}x_{i} \otimes y_{j}\cdot(y_{j}\mid\phi((x_{i}\mid\eta)_{A})\zeta)_{B}-\eta\otimes \zeta\Big{\|}\] \[\leq\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}x_{i}\otimes y_{j}\cdot(y _{j}\mid\phi((x_{i}\mid\eta)_{A})\zeta)_{B}-\sum_{i=1}^{M}x_{i}\otimes\phi((x_ {i}\mid\eta)_{A})\zeta\Big{\|}\] \[\quad+\Big{\|}\sum_{i=1}^{M}x_{i}\otimes\phi((x_{i}\mid\eta)_{A}) \zeta-\eta\otimes\zeta\Big{\|}\] \[\leq\sum_{i=1}^{M}\|x_{i}\|\,\Big{\|}\sum_{j=1}^{N}y_{j}\cdot(y_ {j}\mid\phi((x_{i}\mid\eta)_{A})\zeta)_{B}-\phi((x_{i}\mid\eta)_{A})\zeta\Big{\|} +\frac{\varepsilon}{2}<\varepsilon.\]
We now claim that for each \(T\in\operatorname{End}_{B}^{0}(X\otimes_{\phi}Y)\) there is a sequence \((M_{k},N_{k})_{k=1}^{\infty}\) in \(\mathbb{N}^{2}\), with each of \((M_{k})_{k}\) and \((N_{k})_{k}\) strictly increasing, such that
\[\sum_{i=1}^{M_{k}}\sum_{j=1}^{N_{k}}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_ {j}}T\to T \tag{2.2}\]
as \(k\to\infty\). If \(T\) is a rank-one operator, then (2.2) holds for \(T\), as follows from the claim (2.1), which we have proved. By taking finite sums, the claim is also true for finite-rank \(T\).
Fix \(\varepsilon>0\). Suppose that \(T\in\operatorname{End}^{0}_{B}(X\otimes_{\phi}Y)\) is arbitrary, and take a finite rank operator \(S\) such that \(\|T-S\|<\frac{\varepsilon}{3}\). Let \(M,N\in\mathbb{N}\) be such that \(\|\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}}S -S\|<\frac{\varepsilon}{3}\). Then Lemma 2.18 implies that,
\[\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j},x_ {i}\otimes y_{j}}T-T\Big{\|}\] \[\leq\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N}\Theta_{x_{i}\otimes y_{j },x_{i}\otimes y_{j}}\Big{\|}\|T-S\|+\Big{\|}\sum_{i=1}^{M}\sum_{j=1}^{N} \Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}}S-S\Big{\|}+\|T-S\|<\varepsilon.\]
To finish, fix \(\varepsilon>0\), let \(T\in\operatorname{End}^{0}_{B}(X\otimes_{\phi}Y)\), and take \(K\) large enough so that

\[\Big{\|}\sum_{i=1}^{M_{K}}\sum_{j=1}^{N_{K}}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}}T-T\Big{\|}<\frac{\varepsilon^{2}}{1+\|T\|}.\]

Lemma 2.17 then shows that for any finite set \(\Sigma\subseteq\mathbb{N}^{2}\) with \(\{(i,j)\mid 1\leq i\leq M_{K},1\leq j\leq N_{K}\}\subseteq\Sigma\) we have \(\|\sum_{(i,j)\in\Sigma}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}}T-T\|<\varepsilon\). Consequently, \((\sum_{(i,j)\in\Sigma}\Theta_{x_{i}\otimes y_{j},x_{i}\otimes y_{j}})_{\Sigma\subset\subset\mathbb{N}^{2}}\) is an approximate identity for \(\operatorname{End}^{0}_{B}(X\otimes_{\phi}Y)\), and \((x_{i}\otimes y_{j})_{i,j}\) is a frame for \(X\otimes_{\phi}Y\).
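Proposition 2.16 can be sanity-checked numerically in the simplest case \(A=B=\mathbb{C}\), where the interior tensor product is the usual tensor product of Hilbert spaces; the sketch below (ours) verifies that Kronecker products of two finite Parseval frames form a Parseval frame again.

```python
import numpy as np

# Parseval frames for C^2 (standard basis) and for C^3 (three scaled
# equiangular vectors in a coordinate plane, plus the third basis vector).
X = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Y = [np.sqrt(2/3) * np.array([np.cos(t), np.sin(t), 0.0])
     for t in (0.0, 2*np.pi/3, 4*np.pi/3)] + [np.array([0.0, 0.0, 1.0])]
# Frame operator of the tensor-product family (x_i tensor y_j) on C^6.
S = sum(np.outer(np.kron(x, y), np.kron(x, y)) for x in X for y in Y)
assert np.allclose(S, np.eye(6))
```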
### Topological graphs
Topological graphs and their \(C^{*}\)-algebras were introduced by Katsura [14] as a generalisation of directed graphs and their \(C^{*}\)-algebras. Any (partially defined) local homeomorphism on a locally compact Hausdorff (sometimes known as a _Deaconu-Renault system_) space may be interpreted as a topological graph and, in turn, any topological graph admits a boundary path space whose shift map gives a Deaconu-Renault system. The \(C^{*}\)-algebra of the topological graph is \(*\)-isomorphic to the \(C^{*}\)-algebra of its associated Deaconu-Renault system, cf. [14, 15].
**Definition 2.19**.: A _topological graph_\(E=(E^{0},E^{1},r,s)\) is a quadruple consisting of second countable locally compact Hausdorff spaces \(E^{0}\) of _vertices_ and \(E^{1}\) of _edges_, together with a continuous _range_ map \(r\colon E^{1}\to E^{0}\) and a local homeomorphism \(s\colon E^{1}\to E^{0}\) called the _source_.
Two topological graphs \(E=(E^{0},E^{1},r_{E},s_{E})\), \(F=(F^{0},F^{1},r_{F},s_{F})\) are (graph) isomorphic if there are homeomorphisms
\[\mu:E^{0}\to F^{0}\quad\text{and}\quad\nu:E^{1}\to F^{1}\]
such that \(\mu\circ s_{E}=s_{F}\circ\nu\) and \(\mu\circ r_{E}=r_{F}\circ\nu\).
If \(E^{0}\) and \(E^{1}\) are countable discrete sets, then \(E\) is a _directed graph_.
_Remark 2.20_.: The term _topological graph_ is sometimes also used to refer to the more general notion of a _topological quiver_ (see [17]). In a topological quiver the condition that \(s\) is a local homeomorphism is weakened to \(s\) being an open map with the additional requirement of a compatible family of measures on the fibres of \(s\). We do not work in this generality.
In [14, Section 4], Katsura studies the boundary path space \(E_{\infty}=(E^{0}_{\infty},E^{1}_{\infty},r_{\infty},s_{\infty})\) of a topological graph \(E\). This is again a topological graph and \(\sigma_{E}\coloneqq s_{\infty}\colon E^{1}_{\infty}\to E^{0}_{\infty}\) is a partially defined local homeomorphism. Two topological graphs \(E\) and \(F\) are then said to be _conjugate_ if the Deaconu-Renault systems on their boundary path spaces are conjugate, i.e. if there is a homeomorphism \(h\colon E^{1}_{\infty}\to F^{1}_{\infty}\) such that \(h\circ\sigma_{E}=\sigma_{F}\circ h\) and \(h^{-1}\circ\sigma_{F}=\sigma_{E}\circ h^{-1}\).
The space of _paths of length \(n\)_ in a topological graph is the \(n\)-fold fibred product
\[E^{n}\coloneqq E^{1}\times_{s,r}\dots\times_{s,r}E^{1}=\Big{\{}e_{1}e_{2} \dotsm e_{n}\in\prod^{n}E^{1}\mid s(e_{i})=r(e_{i+1})\Big{\}}.\]
equipped with the subspace topology of the product topology.
**Definition 2.21**.: To a topological graph \(E=(E^{0},E^{1},r_{E},s_{E})\), Katsura [14] associates a \(C_{0}(E^{0})\)-correspondence \(X(E)\) as follows. Equip \(C_{c}(E^{1})\) with the structure of a pre-\(C_{0}(E^{0})\)-\(C_{0}(E^{0})\)-correspondence by
\[x\cdot a(e) =x(e)a(s(e))\] \[a\cdot x(e) =a(r(e))x(e)\] \[(x\mid y)_{C_{0}(E^{0})}(v) =\sum_{s(e)=v}\overline{x(e)}y(e),\]
for all \(x,y\in C_{c}(E^{1})\), \(a\in C_{0}(E^{0})\), \(e\in E^{1}\), and \(v\in E^{0}\). The completion \(X(E)\) is a \(C_{0}(E^{0})\)-\(C_{0}(E^{0})\)-correspondence called the _graph correspondence of \(E\)_, and the Cuntz-Pimsner algebra \(\mathcal{O}_{X(E)}\) is called the _\(C^{*}\)-algebra of the topological graph \(E\)_.
We fix some terminology to discuss regular and singular points of topological graphs.
**Definition 2.22**.: Let \(\psi\colon X\to Y\) be a continuous map between locally compact Hausdorff spaces. We consider the following subsets of \(Y\):
* \(\psi\)_-sources_: \(Y_{\psi-\mathrm{src}}\coloneqq Y\setminus\overline{\psi(X)}\)
* \(\psi\)_-finite receivers_: \(Y_{\psi-\mathrm{fin}}\coloneqq\{y\in Y:\exists\text{ a precompact open neighbourhood }V\text{ of }y\text{ such that }\psi^{-1}(\overline{V})\text{ is compact}\}\)
* \(\psi\)_-infinite receivers_: \(Y_{\psi-\mathrm{inf}}\coloneqq Y\setminus Y_{\psi-\mathrm{fin}}\)
* \(\psi\)_-regular set_: \(Y_{\psi-\mathrm{reg}}\coloneqq Y_{\psi-\mathrm{fin}}\setminus\overline{Y_{ \psi-\mathrm{src}}}\)
* \(\psi\)_-singular set_: \(Y_{\psi-\mathrm{sing}}\coloneqq Y\setminus Y_{\psi-\mathrm{reg}}=Y_{\psi- \mathrm{inf}}\cup\overline{Y_{\psi-\mathrm{src}}}\).
_Remark 2.23_.: If \(E=(E^{0},E^{1},r,s)\) is a topological graph then we use the range map \(r\colon E^{1}\to E^{0}\) to construct subsets of \(E^{0}\) according to Definition 2.22. In this context we drop the map \(r\) and, for instance, write \(E^{0}_{\mathrm{reg}}=E^{0}_{r-\mathrm{reg}}\).
A topological graph \(E\) is said to be _regular_ if \(E^{0}_{\mathrm{sing}}=\varnothing\).
We recall that the behaviour of the left action \(\phi\colon C_{0}(E^{0})\to\mathrm{End}^{0}_{C_{0}(E^{0})}(X(E))\) is reflected in the singular structure of \(E\). In particular, the covariance ideal is given by \(J_{\phi}=C_{0}(E^{0}_{\mathrm{reg}})\), so that regular topological graphs induce regular graph correspondences, cf. [14].
A frame for the graph correspondence is relatively easy to describe.
**Example 2.24**.: Let \(E=(E^{0},E^{1},r,s)\) be a topological graph. Since \(E^{1}\) is second countable and locally compact, it is paracompact, so it admits a locally finite cover \(\{U_{i}\}_{i\in I}\) by precompact open sets such that the restrictions \(s\colon U_{i}\to s(U_{i})\) are homeomorphisms onto their image. Let \(\{\rho_{i}\}_{i\in I}\) be a partition of unity subordinate to \(\{U_{i}\}_{i\in I}\) and let \(x_{i}=\rho_{i}^{1/2}\). We claim that \((x_{i})_{i\in I}\) is a frame for \(X(E)\). For each \(x\in C_{c}(E^{1})\) and \(e\in E^{1}\),
\[\sum_{i}(x_{i}\cdot(x_{i}\mid x)_{C_{0}(E^{0})})(e)=\sum_{i}\sum_{s(f)=s(e)}x_{i}(e)\overline{x_{i}(f)}x(f)=\sum_{i}\rho_{i}(e)x(e)=x(e),\]

where the second equality uses that \(s|_{U_{i}}\) is injective, so only the term \(f=e\) contributes.
Since \(x\) has compact support, finitely many of the \(U_{i}\) cover \(\operatorname{supp}(x)\). Hence,
\[\Big{\|}\sum_{i}(x_{i}\cdot(x_{i}\mid x)_{C_{0}(E^{0})})-x\Big{\|}^{2}=\sup_{v \in E^{0}}\sum_{s(e)=v}\Big{|}\sum_{i}(x_{i}\cdot(x_{i}\mid x)_{C_{0}(E^{0})} )(e)-x(e)\Big{|}^{2}\to 0.\]
Since \(C_{c}(E^{1})\) is dense in \(X(E)\) it follows that \((x_{i})_{i\in I}\) is a frame for \(X(E)\).
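To make Definition 2.21 and this frame construction concrete, here is a toy sketch of ours for a finite directed graph, so that \(C_{0}(E^{0})\) and \(X(E)\) are finite-dimensional; the edge and vertex names are invented for illustration, and the frame of indicator functions of single edges plays the role of the partition of unity above.

```python
import numpy as np

# A finite directed graph with two vertices and three edges.
edges = ["e1", "e2", "e3"]
r = {"e1": "v", "e2": "v", "e3": "w"}   # range map
s = {"e1": "v", "e2": "w", "e3": "w"}   # source map

def inner(x, y):
    # (x|y)_{C(E^0)}(v) = sum over s(e) = v of conj(x(e)) y(e)
    out = {}
    for e in edges:
        out[s[e]] = out.get(s[e], 0) + np.conj(x[e]) * y[e]
    return out

def right_act(x, a):
    # (x . a)(e) = x(e) a(s(e))
    return {e: x[e] * a.get(s[e], 0) for e in edges}

def left_act(a, x):
    # (a . x)(e) = a(r(e)) x(e), the left action of Definition 2.21
    return {e: a.get(r[e], 0) * x[e] for e in edges}

# Frame of indicator functions; check the reconstruction formula
# x = sum_i x_i . (x_i | x) of Definition 2.15.
frame = [{e: float(e == f) for e in edges} for f in edges]
x = {"e1": 1.0, "e2": 2.0, "e3": -1.5}
recon = {e: sum(right_act(xi, inner(xi, x))[e] for xi in frame)
         for e in edges}
assert all(abs(recon[e] - x[e]) < 1e-12 for e in edges)
```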
## 3 Strong shift equivalence
Strong shift equivalence was introduced by Williams in [20] as an equivalence relation on _adjacency matrices_: finite square matrices with nonnegative integral entries in the context of shifts of finite type [15]. Two adjacency matrices \(\mathsf{A}\) and \(\mathsf{B}\) are elementary strong shift equivalent if there exist rectangular matrices \(\mathsf{R}\) and \(\mathsf{S}\) with nonnegative integral entries such that
\[\mathsf{A}=\mathsf{R}\;\mathsf{S}\quad\text{and}\quad\mathsf{B}=\mathsf{S}\; \mathsf{R}.\]
This is not a transitive relation. To amend this we say that \(\mathsf{A}\) and \(\mathsf{B}\) are strong shift equivalent if there are square matrices \(\mathsf{A}=\mathsf{A}_{1},\ldots,\mathsf{A}_{n}=\mathsf{B}\) such that \(\mathsf{A}_{i}\) is elementary strong shift equivalent to \(\mathsf{A}_{i+1}\) for all \(i=1,\ldots,n-1\). The _raison d'être_ for this equivalence relation is the following classification theorem due to Williams: recalling that a shift of finite type may be represented by an adjacency matrix, a pair of two-sided shifts of finite type are topologically conjugate if and only if the adjacency matrices that represent the systems are strong shift equivalent.
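As a toy illustration (ours, not from the paper), the full 2-shift, with adjacency matrix \(\mathsf{A}=[2]\), is elementary strong shift equivalent to the edge shift of the graph with two vertices and all four edges, presented by the matrix of all ones; the rectangular matrices \(\mathsf{R}\) and \(\mathsf{S}\) below realise the equivalence.

```python
import numpy as np

# Elementary strong shift equivalence: A = R S and B = S R.
R = np.array([[1, 1]])                  # 1 x 2
S = np.array([[1], [1]])                # 2 x 1
A = R @ S                               # [[2]]: two loops at one vertex
B = S @ R                               # [[1, 1], [1, 1]]
assert (A == np.array([[2]])).all()
assert (B == np.ones((2, 2), dtype=int)).all()
```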
Muhly, Pask, and Tomforde [14] introduce _strong shift equivalence_ for \(C^{*}\)-correspondences, which we recall below. They show that the induced Cuntz-Pimsner algebras of strong shift equivalent correspondences are Morita equivalent. Kakariadis and Katsoulis [16] later introduced the a priori weaker notion of _shift equivalence_ of \(C^{*}\)-correspondences, and similar notions were further studied by Carlsen, Dor-On, and Eilers [11].
**Definition 3.1** ([14, Definition 3.2]).: Correspondences \((\phi_{X},{}_{A}X_{A})\) and \((\phi_{Y},{}_{B}Y_{B})\) are _elementary strong shift equivalent_ if there are correspondences \((\phi_{R},{}_{A}R_{B})\) and \((\phi_{S},{}_{B}S_{A})\) such that
\[X\cong R\otimes_{B}S\quad\text{and}\quad Y\cong S\otimes_{A}R.\]
They are _strong shift equivalent_ if there are correspondences \(X=X_{1},\ldots,X_{n}=Y\) such that \(X_{i}\) is elementary strong shift equivalent to \(X_{i+1}\) for all \(i=1,\ldots,n-1\).
In [11, Remark 3.6], Carlsen, Dor-On, and Eilers observe that if adjacency matrices are strong shift equivalent in the sense of Williams, then their \(C^{*}\)-correspondences are also strong shift equivalent in the sense of Muhly, Pask, and Tomforde. The converse is still not known.
In this section we show that the Cuntz-Pimsner algebras of strong shift equivalent correspondences are gauge-equivariantly Morita equivalent in the sense of Definition 2.14. In the process, we revisit the Morita equivalence proof of [12] and break it into a series of instructive lemmas. The first records how Cuntz-Pimsner algebras behave with respect to direct sums of correspondences.
**Lemma 3.2**.: _Let \((\phi_{X},{}_{A}X_{A})\) and \((\phi_{Y},{}_{B}Y_{B})\) be correspondences. The inclusion \((j_{A},j_{X})\) of \((\phi_{X},{}_{A}X_{A})\) into the \(A\oplus B\)-correspondence \((\phi_{X\oplus Y},{}_{A\oplus B}X\oplus Y_{A\oplus B})\) is a covariant correspondence morphism that induces a gauge-equivariant and injective \(*\)-homomorphism \(j_{A}\times j_{X}\colon\mathcal{O}_{X}\to\mathcal{O}_{X\oplus Y}\)._
Proof.: It is clear that \((j_{A},j_{X})\) is a correspondence morphism, and for covariance we must show that \(j_{X}^{(1)}\circ\phi_{X}(c)=\phi_{X\oplus Y}\circ j_{A}(c)\) for all \(c\in J_{X}\). Let \((x_{i})_{i}\) be a frame for \(X\) and \((y_{j})_{j}\) a frame for \(Y\). A frame for \(X\oplus Y\) is given by the direct sum of the frames for \(X\) and \(Y\). Let \(P_{X}\) denote the projection in \(\operatorname{End}_{A\oplus B}(X\oplus Y)\) onto \(X\) so that \(P_{X}=\sum_{i}\Theta_{j_{X}(x_{i}),j_{X}(x_{i})}\), with the sum taken in the strict topology. It follows that
\[j_{X}^{(1)}\circ\phi_{X}(c) =\sum_{i}\Theta_{j_{X}(\phi_{X}(c)x_{i}),j_{X}(x_{i})}=\phi_{X \oplus Y}(j_{A}(c))\sum_{i}\Theta_{j_{X}(x_{i}),j_{X}(x_{i})}\] \[=\phi_{X\oplus Y}(j_{A}(c))P_{X}=\phi_{X\oplus Y}\circ j_{A}(c),\]
for all \(c\in J_{X}\). Lemma 2.6 implies that the induced \(*\)-homomorphism \(j_{A}\times j_{X}\colon\mathcal{O}_{X}\to\mathcal{O}_{X\oplus Y}\) is gauge-equivariant and injective.
**Lemma 3.3**.: _If \((\phi_{X},{}_{A}X_{B})\) and \((\phi_{Y},{}_{B}Y_{C})\) are \(C^{*}\)-correspondences, then \(J_{\phi_{X\otimes Y}}\subseteq J_{\phi_{X}}\)._
Proof.: It follows from [11, Corollary 3.7] that if \(\phi_{X}(a)\otimes\operatorname{Id}_{Y}\in\operatorname{End}^{0}_{C}(X\otimes Y)\), then \(\phi_{X}(a)\in\operatorname{End}^{0}_{A}(X\cdot\phi_{Y}^{-1}(\operatorname{ End}^{0}_{B}(Y)))\). It is clear that \(\ker(\phi_{X})\subseteq\ker(\phi_{X}\otimes\operatorname{Id}_{Y})\) so \(\ker(\phi_{X}\otimes\operatorname{Id}_{Y})^{\perp}\subseteq\ker(\phi_{X})^{\perp}\). The result now follows.
Let \((\phi_{X},{}_{A}X_{A})\) be a correspondence and \(N\) a positive integer. With \(\phi_{X^{\otimes N}}\coloneqq\phi_{X}\otimes\operatorname{Id}_{N-1}\) the pair \((\phi_{X^{\otimes N}},X^{\otimes N})\) is a correspondence over \(A\). Given a representation \((\alpha,\beta)\colon(\phi_{X},{}_{A}X_{A})\to B\) we denote by \(\beta^{\otimes N}\colon X^{\otimes N}\to B\) the induced map \(\beta^{\otimes N}(x_{1}\otimes\cdots\otimes x_{N})=\beta(x_{1})\cdots\beta(x_ {N})\), for all \(x_{1}\otimes\cdots\otimes x_{N}\in X^{\otimes N}\) and note that \((\alpha,\beta^{\otimes N})\) is a representation of \((\phi_{X^{\otimes N}},X^{\otimes N})\) in \(B\), [10, SS2]. We let \(\beta^{(N)}\coloneqq(\beta^{\otimes N})^{(1)}\colon\operatorname{End}^{0}_{A }(X^{\otimes N})\to B\).
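Concretely, for \(N=2\) and a rank-one operator this reads

\[\beta^{(2)}(\Theta_{x_{1}\otimes x_{2},\,y_{1}\otimes y_{2}})=\beta(x_{1})\beta(x_{2})\beta(y_{2})^{*}\beta(y_{1})^{*},\qquad x_{1},x_{2},y_{1},y_{2}\in X.\]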
Our next lemma records how the Cuntz-Pimsner algebra of \((\phi_{X^{\otimes N}},X^{\otimes N})\) embeds in the Cuntz-Pimsner algebra of \((\phi_{X},X_{A})\). However, we note that this embedding is not gauge-equivariant in the usual sense.
**Lemma 3.4**.: _Let \((\phi_{X},{}_{A}X_{A})\) be a nondegenerate correspondence and let \((\iota_{A},\iota_{X})\colon(\phi_{X},{}_{A}X_{A})\to\mathcal{O}_{X}\) be a universal representation. Then for each \(N\in\mathbb{N}\), \((\iota_{A},\iota_{X}^{\otimes N})\colon(\phi_{X^{\otimes N}},{}_{A}X_{A}^{ \otimes N})\to\mathcal{O}_{X}\) is an injective covariant representation. In particular, there is an induced injective \(*\)-homomorphism \(\tau_{N}\colon\mathcal{O}_{X^{\otimes N}}\to\mathcal{O}_{X}\) such that \(\tau_{N}\circ\iota_{X^{\otimes N}}=\iota_{X}^{\otimes N}\)._
_Furthermore, let \(\gamma\colon\mathbb{T}\to\operatorname{Aut}(\mathcal{O}_{X})\) denote the gauge action on \(\mathcal{O}_{X}\) and let \(\overline{\gamma}\colon\mathbb{T}\to\operatorname{Aut}(\mathcal{O}_{X^{\otimes N}})\) denote the gauge action on \(\mathcal{O}_{X^{\otimes N}}\). Consider the \(N\)-th power \(\overline{\gamma}^{N}\colon\mathbb{T}\to\operatorname{Aut}(\mathcal{O}_{X^{\otimes N}})\) of the gauge action on \(\mathcal{O}_{X^{\otimes N}}\): so \(\overline{\gamma}^{N}_{z}(\iota_{A}(a))=\iota_{A}(a)\) and \(\overline{\gamma}^{N}_{z}(\iota_{X^{\otimes N}}(x))=z^{N}\iota_{X^{\otimes N}}(x)\) for all \(a\in A\) and \(x\in X^{\otimes N}\). Then \(\tau_{N}\circ\overline{\gamma}^{N}_{z}=\gamma_{z}\circ\tau_{N}\) for all \(z\in\mathbb{T}\)._
Proof.: We need to verify that \((\iota_{A},\iota_{X}^{\otimes N})\) is covariant. It follows from Lemma 3.3 that \(J_{\phi_{X^{\otimes N}}}\subseteq J_{\phi_{X}}\). Recall that a rank-\(1\) operator in \(\operatorname{End}_{A}^{0}(X\cdot J_{\phi_{X^{\otimes N}}})\) may be written in the form \(\Theta_{x\cdot a,y}\) for \(x,y\in X_{A}\) and \(a\in J_{\phi_{X^{\otimes N}}}\). Then \(\Theta_{x\cdot a,y}\otimes\operatorname{Id}_{N-1}\in\operatorname{End}_{A}^{0}(X^{\otimes N})\) since \(J_{\phi_{X^{\otimes N}}}\subseteq J_{\phi_{X^{\otimes(N-1)}}}\). Moreover, if \((x_{i})\) is a frame for \(X_{A}^{\otimes(N-1)}\), then
\[\Theta_{x\cdot a,y}\otimes\operatorname{Id}_{N-1}=\sum_{i}\Theta_{x\otimes \phi(a)x_{i},y\otimes x_{i}}.\]
We proceed by induction, the base case being covariance of \((\iota_{A},\iota_{X})\), which is given. Suppose for induction that \((\iota_{A},\iota_{X}^{\otimes(N-1)})\) is covariant. This is equivalent to the fact that \(\iota_{A}(a)=\sum_{i}\iota_{X}^{(N-1)}(\Theta_{\phi(a)x_{i},x_{i}})\) for all \(a\in J_{\phi_{X^{\otimes(N-1)}}}\) (cf. [12, Remark 3.9]). Using the inductive hypothesis at the second-to-last equality, it follows that for any \(x,y\in X_{A}\) and \(a\in J_{\phi_{X^{\otimes N}}}\),
\[\iota_{X}^{(N)}(\Theta_{x\cdot a,y}\otimes\operatorname{Id}_{N-1}) =\iota_{X}^{(N)}\Big{(}\sum_{i}\Theta_{x\otimes\phi(a)x_{i},y\otimes x_{i}}\Big{)}=\sum_{i}\iota_{X}(x)\iota_{X}^{\otimes(N-1)}(\phi(a)x_{i})\iota_{X}^{\otimes(N-1)}(x_{i})^{*}\iota_{X}(y)^{*}\] \[=\iota_{X}(x)\iota_{X}^{(N-1)}\Big{(}\sum_{i}\Theta_{\phi(a)x_{i},x_{i}}\Big{)}\iota_{X}(y)^{*}=\iota_{X}(x)\iota_{A}(a)\iota_{X}(y)^{*}=\iota_{X}^{(1)}(\Theta_{x\cdot a,y}).\]
It follows that for any \(T\in\operatorname{End}_{A}^{0}(X\cdot J_{\phi_{X^{\otimes N}}})\) we have \(\iota_{X}^{(1)}(T)=\iota_{X}^{(N)}(T\otimes\operatorname{Id}_{N-1})\). Covariance of \((\iota_{A},\iota_{X})\) now implies that for all \(a\in J_{\phi_{X^{\otimes N}}}\),
\[\iota_{A}(a)=\iota_{X}^{(1)}(\phi_{X}(a))=\iota_{X}^{(N)}(\phi_{X}(a)\otimes \operatorname{Id}_{N-1})\]
so that \((\iota_{A},\iota_{X}^{\otimes N})\) is covariant. The universal property of \(\mathcal{O}_{X^{\otimes N}}\) yields a \(*\)-homomorphism \(\tau_{N}\colon\mathcal{O}_{X^{\otimes N}}\to\mathcal{O}_{X}\) satisfying \(\tau_{N}\circ\iota_{A}=\iota_{A}\) and \(\tau_{N}\circ\iota_{X^{\otimes N}}=\iota_{X}^{\otimes N}\).
By considering local \(N\)-th roots, the fixed point algebras \(\mathcal{O}_{X^{\otimes N}}^{\overline{\gamma}}\) and \(\mathcal{O}_{X^{\otimes N}}^{\overline{\gamma}^{N}}\) can be seen to coincide. Moreover, it is straightforward to see that \(\tau_{N}\circ\overline{\gamma}_{z}^{N}=\gamma_{z}\circ\tau_{N}\) for all \(z\in\mathbb{T}\). With minimal adjustments, the proof of the Gauge-Invariant Uniqueness Theorem found in [13, Theorem 6.4] carries over to the action \(\overline{\gamma}^{N}\), so since \((\iota_{A},\iota_{X}^{\otimes N})\) is an injective representation it follows that \(\tau_{N}\) is injective.
The next theorem is the main result of [10]: strong shift equivalent \(C^{*}\)-correspondences (that are nondegenerate and regular) have Morita equivalent Cuntz-Pimsner algebras. Here we simply sketch the proof to make it clear that the Morita equivalence Muhly, Pask, and Tomforde construct in fact implements a gauge-equivariant Morita equivalence. This is certainly known (or at least anticipated) by experts but we consider it worthwhile to mention it.
**Theorem 3.5** ([10, Theorem 3.14]).: _Suppose \((\phi_{X},{}_{A}X_{A})\) and \((\phi_{Y},{}_{B}Y_{B})\) are nondegenerate and regular correspondences. If they are strong shift equivalent, then the Cuntz-Pimsner algebras \(\mathcal{O}_{X}\) and \(\mathcal{O}_{Y}\) are gauge-equivariantly Morita equivalent._
Proof.: It suffices to assume that \(X_{A}\) and \(Y_{B}\) are elementary strong shift equivalent. Choose nondegenerate and regular correspondences \((\phi_{R},{}_{A}R_{B})\) and \((\phi_{S},{}_{B}S_{A})\) (cf. [10, Section 3]) such that
\[X_{A}\cong R\otimes_{B}S\quad\text{and}\quad Y_{B}\cong S\otimes_{A}R.\]
By Lemma 3.2 we have covariant morphisms \((j_{A},j_{X})\colon(\phi_{X},{}_{A}X_{A})\to(\phi_{X\oplus Y},{}_{A\oplus B}X\oplus Y_{A\oplus B})\) and \((j_{B},j_{Y})\colon(\phi_{Y},{}_{B}Y_{B})\to(\phi_{X\oplus Y},{}_{A\oplus B}X\oplus Y_{A\oplus B})\).
Let \(Z=S\oplus R\) be the correspondence over \(A\oplus B\) with the obvious right module structure and left action \(\phi_{Z}\colon A\oplus B\to\operatorname{End}_{A\oplus B}(Z)\) given by \(\phi_{Z}(a,b)(s,r)=(\phi_{S}(b)s,\phi_{R}(a)r)\) for all \((a,b)\in A\oplus B\) and \((s,r)\in Z\).1 Then \(Z^{\otimes 2}\) is isomorphic to \(X\oplus Y\) as \(A\oplus B\)-correspondences by [12, Proposition 3.4].
Footnote 1: Muhly, Pask, and Tomforde call \(Z\) the _bipartite inflation_.
By Lemmas 3.2 and 3.4 there are inclusions \(\lambda_{X}\colon\mathcal{O}_{X}\to\mathcal{O}_{Z}\) and \(\lambda_{Y}\colon\mathcal{O}_{Y}\to\mathcal{O}_{Z}\), obtained by composing the inclusions of \(\mathcal{O}_{X}\) and \(\mathcal{O}_{Y}\) into \(\mathcal{O}_{X\oplus Y}\cong\mathcal{O}_{Z^{\otimes 2}}\) with \(\tau_{2}\colon\mathcal{O}_{Z^{\otimes 2}}\to\mathcal{O}_{Z}\); the corresponding commuting diagram is omitted here.
As in [12, Lemma 3.12], we may construct full complementary projections \(P_{X}\) and \(P_{Y}\) in the multiplier algebra \(\operatorname{Mult}(\mathcal{O}_{Z})\) (using approximate identities in \(A\) and \(B\), respectively) such that \(\lambda_{X}(\mathcal{O}_{X})=P_{X}\mathcal{O}_{Z}P_{X}\) and \(\lambda_{Y}(\mathcal{O}_{Y})=P_{Y}\mathcal{O}_{Z}P_{Y}\) are full and \(P_{X}+P_{Y}=1_{\operatorname{Mult}(\mathcal{O}_{Z})}\).
For gauge equivariant Morita equivalence (see Definition 2.14) we will produce a circle action on \(\mathcal{O}_{Z}\) which restricts to the gauge actions on \(\mathcal{O}_{X}\) and \(\mathcal{O}_{Y}\). The action on \(\mathcal{O}_{Z}\) will not be the gauge action, as the gauge action on \(\mathcal{O}_{Z}\) runs at 'half-speed' compared to the gauge actions on \(\mathcal{O}_{X}\) and \(\mathcal{O}_{Y}\).
Define an action of \(\mathbb{T}\) on \(Z=S\oplus R\) by
\[U_{z}(s,r)=(s,zr),\qquad(s,r)\in Z,\ z\in\mathbb{T}.\]
Conjugation by the second quantisation of \(U_{z}\) on the Fock module of \(Z\) gives an action on the Toeplitz algebra of \(Z\), which descends to \(\mathcal{O}_{Z}\); see [13] and Lemma 2.12.
Since \(X=R\otimes_{B}S\) and \(Y=S\otimes_{A}R\), we see that the induced action on \(Z^{\otimes 2}\cong X\oplus Y\) is the sum of the actions of \(\mathbb{T}\) on \(X\) and \(Y\) given on \(x\in X\) and \(y\in Y\) by
\[x\mapsto zx,\qquad y\mapsto zy,\qquad z\in\mathbb{T}.\]
These actions induce the gauge actions of \(\mathcal{O}_{X}\) and \(\mathcal{O}_{Y}\), respectively.
_Remark 3.6_.: Regularity of correspondences is not required for Lemmas 3.2 and 3.4. We note however, that regularity plays a crucial role in the proof of Theorem 3.5, namely in constructing the projections \(P_{X}\) and \(P_{Y}\). There are counterexamples to Theorem 3.5 when either \(X\) or \(Y\) is not regular (see [12]).
## 4 In-splits
In this section, we recall the notion of in-splits for directed graphs, and extend the notion to both topological graphs and \(C^{*}\)-correspondences.
### In-splits for topological graphs
Let us start by recalling the classical notion of in-splitting from symbolic dynamics. Let \(E=(E^{0},E^{1},r,s)\) be a countable discrete directed graph. Fix a vertex \(w\in E^{0}\) which is not a source (i.e. \(r^{-1}(w)\neq\varnothing\)) and let \(\mathcal{P}=\{\mathcal{P}_{i}\}_{i=1}^{n}\) be a partition of \(r^{-1}(w)\) into a finite number of nonempty sets such that at most one of the partition sets \(\mathcal{P}_{i}\) is infinite.
Following [1, Section 5], the _in-split graph of \(E\) associated to \(\mathcal{P}\)_ is the directed graph \(E_{r}(\mathcal{P})\) defined by
\[E_{r}^{0}(\mathcal{P}) =\{v_{1}\mid v\in E^{0},v\neq w\}\cup\{w_{1},\ldots,w_{n}\},\] \[E_{r}^{1}(\mathcal{P}) =\{e_{1}\mid e\in E^{1},s(e)\neq w\}\cup\{e_{1},\ldots,e_{n}\mid e \in E^{1},s(e)=w\},\] \[r_{\mathcal{P}}(e_{i}) =\begin{cases}r(e)_{1}&\text{if $r(e)\neq w$}\\ w_{j}&\text{if $r(e)=w$ and $e\in\mathcal{P}_{j}$},\end{cases}\] \[s_{\mathcal{P}}(e_{i}) =s(e)_{i},\]
for all \(e_{i}\in E_{r}^{1}(\mathcal{P})\).
_Remark 4.1_.: If \(E\) is a finite graph with no sinks and no sources, then the bi-infinite paths on \(E\) define a two-sided shift of finite type (an edge shift). The in-split graph \(E_{r}(\mathcal{P})\) is again a finite graph with no sinks and no sources, and the pair of edge shifts are topologically conjugate. In fact, if \(\mathsf{A}\) and \(\mathsf{A}(\mathcal{P})\) denote the adjacency matrices of \(E\) and \(E_{r}(\mathcal{P})\), respectively, then there are rectangular nonnegative integer matrices \(\mathsf{R}\) and \(\mathsf{S}\) such that \(\mathsf{A}=\mathsf{RS}\) and \(\mathsf{SR}=\mathsf{A}(\mathcal{P})\). That is, the matrices are strong shift equivalent, cf. [1, Chapter 7].
**Example 4.2**.: Consider the directed graph \(E\) with two vertices \(w\) and \(v\) and four edges: a loop \(e\) at \(w\), an edge from \(w\) to \(v\), and two edges \(g\) and \(h\) from \(v\) to \(w\) (the accompanying figures are omitted). Note that the loop \(e\) is both an incoming and an outgoing edge for \(w\). Partition \(r^{-1}(w)=\{e,g,h\}\) into \(\mathcal{P}=\{\mathcal{P}_{1},\mathcal{P}_{2}\}\) with \(\mathcal{P}_{1}=\{e,h\}\) and \(\mathcal{P}_{2}=\{g\}\). The in-split graph with respect to \(\mathcal{P}\) then has vertices \(w_{1},w_{2},v_{1}\), with the outgoing edges from \(w\) duplicated.
The adjacency matrices of the graph and its in-split are
\[\mathsf{A}=\begin{pmatrix}1&1\\ 2&0\end{pmatrix}\qquad\text{and}\qquad\mathsf{B}=\begin{pmatrix}1&0&1\\ 1&0&1\\ 1&1&0\end{pmatrix},\]
respectively, and the rectangular matrices
\[\mathsf{R}=\begin{pmatrix}1&0\\ 1&0\\ 0&1\end{pmatrix}\qquad\text{and}\qquad\mathsf{S}=\begin{pmatrix}1&0&1\\ 1&1&0\end{pmatrix}\]
satisfy \(\mathsf{B}=\mathsf{RS}\) and \(\mathsf{SR}=\mathsf{A}\). This is an example of an (elementary) strong shift equivalence.
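Indeed, one checks directly that

\[\mathsf{R}\mathsf{S}=\begin{pmatrix}1&0\\ 1&0\\ 0&1\end{pmatrix}\begin{pmatrix}1&0&1\\ 1&1&0\end{pmatrix}=\begin{pmatrix}1&0&1\\ 1&0&1\\ 1&1&0\end{pmatrix}=\mathsf{B}\quad\text{and}\quad\mathsf{S}\mathsf{R}=\begin{pmatrix}1&0&1\\ 1&1&0\end{pmatrix}\begin{pmatrix}1&0\\ 1&0\\ 0&1\end{pmatrix}=\begin{pmatrix}1&1\\ 2&0\end{pmatrix}=\mathsf{A}.\]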
Suppose \(E\) is a graph and let \(E_{r}(\mathcal{P})\) be an in-split graph. Define a finite-to-one surjection \(\alpha\colon E^{0}_{r}(\mathcal{P})\to E^{0}\) by \(\alpha(v_{i})=v\) for all \(v_{i}\in E^{0}_{r}(\mathcal{P})\) (forgetting the labels) and use the partition to define a map \(\psi\colon E^{1}\to E^{0}_{r}(\mathcal{P})\) by
\[\psi(e)=\begin{cases}r(e)_{1}&\text{if $r(e)\neq w$,}\\ w_{i}&\text{if $r(e)=w$ and $e\in\mathcal{P}_{i}$,}\end{cases}\]
for all \(e\in E^{1}\). Note that since \(w\) is not a source, \(\alpha\) maps sources bijectively to sources, and since at most one set in \(\mathcal{P}\) contains infinitely many edges, it also follows that \(\alpha\) maps infinite receivers bijectively to infinite receivers.
Our first observation is that \(r=\alpha\circ\psi\), so that an in-split may be thought of as a factorisation of the range map \(r\colon E^{1}\to E^{0}\) through the new vertex set \(E^{0}_{r}(\mathcal{P})\). For our second observation, consider the fibred product
\[E^{1}_{r}\coloneqq E^{1}\times_{s,\alpha}E^{0}_{r}(\mathcal{P})=\{(e,v_{i})\in E^{1}\times E^{0}_{r}(\mathcal{P}):s(e)=\alpha(v_{i})\}.\]
The map from \(E^{1}_{r}(\mathcal{P})\) to \(E^{1}_{r}\) given by \(e_{i}\mapsto(e,s(e)_{i})\) induces a graph isomorphism between \(E_{r}(\mathcal{P})\) and \(E_{r}\). These observations allow us to define in-splits for more general topological graphs.
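In Example 4.2, for instance, \(\psi\) sends the edges \(e\) and \(h\) to \(w_{1}\) and \(g\) to \(w_{2}\) (and every edge with range \(v\) to \(v_{1}\)), while \(\alpha\) collapses \(w_{1}\) and \(w_{2}\) back to \(w\) and sends \(v_{1}\) to \(v\).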
**Definition 4.3**.: An _in-split_ (or _range-split_) of a topological graph \(E=(E^{0},E^{1},r,s)\) is a triple \(I=(\alpha,E^{0}_{I},\psi)\) consisting of
1. a locally compact Hausdorff space \(E^{0}_{I}\),
2. a continuous map \(\psi\colon E^{1}\to E^{0}_{I}\), and
3. a continuous and proper surjection \(\alpha\colon E^{0}_{I}\to E^{0}\) that restricts to a homeomorphism between \(E^{0}_{I,\psi-\text{sing}}\) and \(E^{0}_{\text{sing}}\),
such that \(\alpha\circ\psi=r\).
_Remark 4.4_.: For directed graphs the continuity assumptions of an in-split \(I=(\alpha,E^{0}_{I},\psi)\) are automatic. The properness of \(\alpha\) can be reinterpreted as requiring that \(|\alpha^{-1}(v)|<\infty\) for all \(v\in E^{0}\). In the case of directed graphs the notion of in-split introduced in Definition 4.3 directly generalises that of [1, Section 5] (with source and range maps flipped).
We associate a new topological graph to an in-split.
**Lemma 4.5**.: _Let \(E=(E^{0},E^{1},r,s)\) be a topological graph and let \(I=(\alpha,E^{0}_{I},\psi)\) be an in-split of \(E\). Then \(E_{I}=(E^{0}_{I},E^{1}_{I},r_{I},s_{I})\) is a topological graph, where_
1. \(E^{1}_{I}\coloneqq E^{1}\times_{s,\alpha}E^{0}_{I}=\{(e,v)\in E^{1}\times E^{ 0}_{I}\mid s(e)=\alpha(v)\}\) _equipped with the subspace topology of the product_ \(E^{1}\times E^{0}_{I}\)_; and_
2. \(r_{I}(e,v)=\psi(e)\) _and_ \(s_{I}(e,v)=v\)_, for all_ \(e\in E^{1}\) _and_ \(v\in E^{0}_{I}\)_._
_Moreover, \(E^{0}_{I,r_{I}-\text{sing}}\) and \(E^{0}_{\text{sing}}\) are homeomorphic via \(\alpha\)._
Proof.: The space \(E^{1}_{I}\) is locally compact as a closed subspace of \(E^{1}\times E^{0}_{I}\) and the maps \(r_{I}\) and \(s_{I}\) are clearly continuous. To see that \(s_{I}\) is open, take open sets \(U\) in \(E^{1}\) and \(V\) in \(E^{0}_{I}\) and consider the basic open set \(W=(U\times V)\cap E^{1}_{I}\) in \(E^{1}_{I}\). Then \(s_{I}(W)=\alpha^{-1}(s(U))\cap V\) which is open in \(E^{0}_{I}\), so \(s_{I}\) is open.
To see that \(s_{I}\) is locally injective, fix \((e,v)\in E^{1}_{I}\). Since \(s\) is locally injective, there exists an open neighbourhood \(U\) of \(e\) in \(E^{1}\) such that \(s|_{U}\) is injective. Let \(V\) be any open neighbourhood of \(v\) in
\(E^{0}_{I}\). Then \(W=(U\times V)\cap E^{1}_{I}\) is an open neighbourhood of \((e,v)\) in \(E^{1}_{I}\). If \((e^{\prime},v^{\prime}),(e^{\prime\prime},v^{\prime\prime})\in W\) are such that \(v^{\prime}=s_{I}(e^{\prime},v^{\prime})=s_{I}(e^{\prime\prime},v^{\prime \prime})=v^{\prime\prime}\), then \(s(e^{\prime})=\alpha(v^{\prime})=s(e^{\prime\prime})\) so that \(e^{\prime}=e^{\prime\prime}\). We conclude that \(s_{I}\) is a local homeomorphism and so \(E_{I}\) is a topological graph.
For the final statement we show that the \(r_{I}\)-singular and \(\psi\)-singular subsets of \(E^{0}_{I}\) coincide, and then appeal to the fact that \(\alpha\) restricts to a homeomorphism between \(E^{0}_{I,\psi-\mathrm{sing}}\) and \(E^{0}_{\mathrm{sing}}\). First observe that since \(\alpha\) is surjective, we have \(r_{I}(E^{1}_{I})=\psi(E^{1})\) and so \(E^{0}_{I,r_{I}-\mathrm{src}}=E^{0}_{I,\psi-\mathrm{src}}\).
Now fix a precompact open set \(V\subseteq E^{0}_{I}\), and observe that
\[r_{I}^{-1}(\overline{V})=\{(e,v)\in E^{1}_{I}\mid\psi(e)\in\overline{V}\}.\]
First suppose that \(r_{I}^{-1}(\overline{V})\) is compact. Let \(p_{1}\colon E^{1}\times_{s,\alpha}E^{0}_{I}\to E^{1}\) denote the projection onto the first factor and observe that \(p_{1}(r_{I}^{-1}(\overline{V}))=\psi^{-1}(\overline{V})\) since \(\alpha\) is surjective. Moreover, this set is compact as the continuous image of a compact set under \(p_{1}\), so \(E^{0}_{I,r_{I}-\mathrm{fin}}\subseteq E^{0}_{I,\psi-\mathrm{fin}}\).
Now suppose that \(\psi^{-1}(\overline{V})\) is compact. Since \(\alpha\) is proper and \(s\) is continuous, \(\alpha^{-1}(s(\psi^{-1}(\overline{V})))\) is compact in \(E^{0}_{I}\). Since \(E^{0}\) is Hausdorff, \(E^{1}_{I}\) is a closed subspace of \(E^{1}\times E^{0}_{I}\). Consequently,
\[r_{I}^{-1}(\overline{V})=\psi^{-1}(\overline{V})\times_{s,\alpha}\alpha^{-1} (s(\psi^{-1}(\overline{V})))=(\psi^{-1}(\overline{V})\times\alpha^{-1}(s(\psi ^{-1}(\overline{V}))))\cap E^{1}_{I}\]
is a closed subspace of the compact product \(\psi^{-1}(\overline{V})\times\alpha^{-1}(s(\psi^{-1}(\overline{V})))\), and therefore compact. It follows that \(E^{0}_{I,\mathrm{fin}}=E^{0}_{I,\psi-\mathrm{fin}}\), and so \(E^{0}_{I,\mathrm{sing}}=E^{0}_{I,\psi-\mathrm{sing}}\), as desired.
_Remark 4.6_.: Let \(E\) be a regular topological graph (so \(E^{0}_{\mathrm{sing}}=\varnothing\)) and \(I=(\alpha,E^{0}_{I},\psi)\) an in-split of \(E\). The condition that \(\alpha\) restricts to a homeomorphism on singular sets implies that \(E^{0}_{I,\mathrm{reg}}=E^{0}_{I}\) so \(E_{I}\) is also regular. In particular, \(\psi\) is both proper and surjective in this case.
**Definition 4.7**.: We call \(E_{I}=(E^{0}_{I},E^{1}_{I},r_{I},s_{I})\) the _in-split graph of \(E\) via \(I\)_.
Williams' [26] original motivation for introducing state splittings--such as in-splits--was that although the in-split graph may differ from the original graph, the dynamical systems they determine (the edge shifts) are topologically conjugate. We proceed to prove that this is also the case for our in-splits of topological graphs. It is interesting to note that our approach provides a new proof of this fact even in the classical case of discrete graphs. To do this we need some lemmas.
**Lemma 4.8**.: _Let \(I=(\alpha,E^{0}_{I},\psi)\) be an in-split of a topological graph \(E=(E^{0},E^{1},r,s)\). The projection onto the first factor \(\alpha_{1}\colon E^{1}_{I}=E^{1}\times_{s,\alpha}E^{0}_{I}\to E^{1}\) is continuous, proper, and surjective. Moreover, \(\psi\circ\alpha_{1}=r_{I}\); that is, the corresponding diagram (omitted here) commutes._
Proof.: It is clear that \(\alpha_{1}\) is continuous, and surjectivity follows from surjectivity of \(\alpha\). If \(K\) is a compact subset of \(E^{1}\), then
\[\alpha_{1}^{-1}(K)=K\times_{s,\alpha}\alpha^{-1}(s(K))=(K\times\alpha^{-1}(s( K)))\cap E^{1}_{I}.\]
Since \(\alpha\) is proper and \(s\) continuous, \(\alpha_{1}^{-1}(K)\) is a closed subset of the compact set \(K\times\alpha^{-1}(s(K))\), so \(\alpha_{1}\) is proper. Commutativity of the diagram follows from the definition of \(r_{I}\).
We recall that the _\(n\)-th power_ of a topological graph \(E\) is the topological graph \(E^{(n)}\coloneqq(E^{0},E^{n},r,s)\) where \(r(e_{1}\cdots e_{n})\coloneqq r(e_{1})\) and \(s(e_{1}\cdots e_{n})\coloneqq s(e_{n})\). We record how taking powers of topological graphs interacts with in-splits.
**Lemma 4.9**.: _Let \(E=(E^{0},E^{1},r,s)\) be a topological graph and \(I=(\alpha,E^{0}_{I},\psi)\) an in-split of \(E\). Then \(E^{n}_{I}\simeq E^{n}\times_{s,\alpha}E^{0}_{I}\) for all \(n\geq 1\), where \(s\colon E^{n}\to E^{0}\) is given by \(s(e_{1}\cdots e_{n})=s(e_{n})\). Moreover, if \(\psi^{(n)}\colon E^{n}\to E^{0}_{I}\) is the map defined by \(\psi^{(n)}(e_{1}\cdots e_{n})=\psi(e_{1})\), then the \(n\)-th power graph \(E^{(n)}_{I}\) may be obtained from \(E^{(n)}\) via the in-split \(I^{(n)}=(\alpha,E^{0}_{I},\psi^{(n)})\)._
Proof.: First, observe that
\[E^{n}_{I}=\{(e_{1},v_{1},\ldots,e_{n},v_{n})\mid e_{i}\in E^{1},\,v_{i}\in E^{0}_{I},\,s(e_{i})=\alpha(v_{i})\text{ for all }i,\text{ and }v_{i}=\psi(e_{i+1})\text{ for }1\leq i\leq n-1\}.\]
Since \(\alpha\circ\psi=r\) it follows that the map \((e_{1},v_{1},\ldots,e_{n},v_{n})\mapsto(e_{1}\cdots e_{n},v_{n})\) from \(E^{n}_{I}\) to \(E^{n}\times_{s,\alpha}E^{0}_{I}\) is a homeomorphism with inverse \((e_{1}\cdots e_{n},v_{n})\mapsto(e_{1},\psi(e_{2}),e_{2},\ldots,\psi(e_{n}),e _{n},v_{n})\). The final statement follows immediately.
We now show that in-splits of regular topological graphs induce topological conjugacies. Recall that for a regular topological graph \(E\) the _infinite path space_ is given by
\[E^{\infty}=\left\{e_{1}e_{2}\ldots\in\prod_{i=1}^{\infty}E^{1}\mid s(e_{i})=r (e_{i+1})\right\}\]
with a cylinder set topology making it a locally compact Hausdorff space. The _shift map_\(\sigma_{E}\colon E^{\infty}\to E^{\infty}\) is the local homeomorphism defined by \(\sigma_{E}(e_{1}e_{2}\ldots)=e_{2}e_{3}\ldots\).
**Theorem 4.10**.: _Let \(E=(E^{0},E^{1},r,s)\) be a regular topological graph and let \(I=(\alpha,E^{0}_{I},\psi)\) be an in-split of \(E\). Then the dynamical systems on the infinite path spaces \((\sigma_{E},E^{\infty})\) and \((\sigma_{E_{I}},E^{\infty}_{I})\) are topologically conjugate._
Proof.: Use Lemma 4.9 to identify \(E^{n}_{I}\) with \(E^{n}\times_{s,\alpha}E^{0}_{I}\). For each \(n\geq 1\) let \(r^{n}\colon E^{n+1}\to E^{n}\) be the map given by \(r^{n}(e_{1}\cdots e_{n+1})=e_{1}\cdots e_{n}\). Then \(r^{n}_{I}\colon E^{n+1}_{I}\to E^{n}_{I}\) satisfies \(r^{n}_{I}(e_{1}\cdots e_{n+1},v_{n})=(e_{1}\cdots e_{n},\psi(e_{n+1}))\). Define \(\psi^{n}\colon E^{n+1}\to E^{n}_{I}\) by \(\psi^{n}(e_{1}\ldots e_{n+1})=(e_{1}\cdots e_{n},\psi(e_{n+1}))\), and let \(\alpha^{n}\colon E^{n}_{I}\to E^{n}\) be the projection onto the first factor. It is then routine to verify that

\[\alpha^{n}\circ\psi^{n}=r^{n}\quad\text{and}\quad\psi^{n}\circ\alpha^{n+1}=r^{n}_{I}, \tag{4.1}\]

so the corresponding ladder diagram commutes, where \(\alpha^{\infty}\colon E^{\infty}_{I}\to E^{\infty}\) and \(\psi^{\infty}\colon E^{\infty}\to E^{\infty}_{I}\) are the maps induced by the families \((\alpha^{n})_{n}\) and \((\psi^{n})_{n}\) via the universal properties of the projective limit spaces \(E^{\infty}\simeq\varprojlim(E^{n},r^{n})\) and \(E^{\infty}_{I}\simeq\varprojlim(E^{n}_{I},r^{n}_{I})\). In particular, \(E^{\infty}_{I}\) and \(E^{\infty}\) are homeomorphic via \(\alpha^{\infty}\) and \(\psi^{\infty}\).
For conjugacy, we make the key observation that if \(s^{n}\colon E^{n+1}\to E^{n}\) is given by \(s^{n}(e_{1}\cdots e_{n+1})=e_{2}\cdots e_{n+1}\), then the shift \(\sigma_{E}\colon E^{\infty}\to E^{\infty}\) is the unique map induced by the family \((s^{n})_{n}\) on the projective limit \(E^{\infty}\simeq\varprojlim(E^{n},r^{n})\); the corresponding commuting diagram is omitted here. With the analogous description of the shift \(\sigma_{E_{I}}\colon E_{I}^{\infty}\to E_{I}^{\infty}\), it follows from (4.1) that \(\alpha^{\infty}\circ\sigma_{E_{I}}=\sigma_{E}\circ\alpha^{\infty}\).
_Remark 4.11_.: The condition that the topological graphs be regular should not be essential. A similar argument--though more technically demanding--should work for general topological graphs by replacing the path space \(E^{\infty}\) with the boundary path space and using the direct limit structure of the boundary path space outlined in either [10, SS3.3.1] or [11].
_Remark 4.12_.: We have seen that any in-split induces a conjugacy of the limit dynamical systems. In the case of shifts of finite type, this was first proved by Williams [14] where he also showed that the converse holds: any conjugacy is a composition of particular conjugacies that are induced from in-splits and their inverses. We do not know whether a similar result could hold in the case of topological graphs.
**Examples 4.13**.: Let \(E\) be a regular topological graph.
1. We refer to \(I=(\operatorname{Id}_{E^{0}},E^{0},r)\) as the _identity in-split_ since \(E_{I}\) is graph isomorphic to \(E\).
2. We refer to \(I=(r,E^{1},\operatorname{Id}_{E^{1}})\) as the _complete in-split_ of \(E\). The topological graph associated to \(I\) has vertices \(E_{I}^{0}=E^{1}\) and edges \[E_{I}^{1}=E^{1}\times_{s,r}E^{1}=\{(e^{\prime},e)\in E^{1}\times E^{1}:s(e^{ \prime})=r(e)\}\] that may be identified with \(E^{2}\), the composable paths of length \(2\). The range and source maps satisfy \(r_{I}(e^{\prime},e)=\operatorname{Id}_{E^{1}}(e^{\prime})=e^{\prime}\) and \(s_{I}(e^{\prime},e)=e\), for all \((e^{\prime},e)\in E_{I}^{1}\). We denote this in-split graph by \(\hat{E}=(E^{1},E^{2},\hat{r},\hat{s})\) and refer to it as the _dual graph_ of \(E\). When \(E\) is a regular topological graph, then \(E_{I}\) is graph isomorphic to Katsura's dual graph, cf. [11, Definition 2.3], and when \(E\) is discrete, then \(E_{I}\) is discrete and it is graph isomorphic to the dual graph as in [10, Corollary 2.6]. Iterating the dual graph construction in the case of topological graphs coincides with Katsura's iterative process in [11, Section 3].
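To make item 2 concrete, consider the discrete graph \(E\) with one vertex and two loops \(e\) and \(f\). The dual graph \(\hat{E}\) then has vertex set \(E^{1}=\{e,f\}\) and edge set \(E^{2}=\{ee,ef,fe,ff\}\), so its adjacency matrix is \(\begin{pmatrix}1&1\\ 1&1\end{pmatrix}\); both \(E\) and \(\hat{E}\) have Cuntz-Pimsner algebra \(\mathcal{O}_{2}\).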
The following lemma is akin to [1, Lemma 2.4] (see also [14]) in the setting of nonnegative integer matrices. This lemma shows that the dual graph is in some sense the "largest" in-split of a regular topological graph.
**Lemma 4.14**.: _Let \(E\) be a regular topological graph and let \(I=(\alpha,E_{I}^{0},\psi)\) be an in-split of \(E\). Let \(\alpha_{1}\colon E_{I}^{1}\to E^{1}\) be the projection onto the first factor as in Lemma 4.8. Then \(I^{\prime}=(\psi,E^{1},\alpha_{1})\) is an in-split of \(E_{I}\) with the property that \((E_{I})_{I^{\prime}}\) is graph isomorphic to the dual graph \(\hat{E}\)._
Proof.: Since \(E\) is regular, it follows from Remark 4.6 that \(\psi\) is proper and surjective, and Lemma 4.8 implies that \(\alpha_{1}\) is proper and surjective. Therefore, \(I^{\prime}=(\psi,E^{1},\alpha_{1})\) is an in-split of \(E_{I}\). Let \(F=(E_{I})_{I^{\prime}}\) be the resulting in-split graph and observe that \(F^{0}=E^{1}\). Moreover, we have
\[F^{1}=E_{I}^{1}\times_{s_{I},\psi}E^{1}=\{(e^{\prime},x,e)\in E^{1}\times E_{ I}^{0}\times E^{1}\mid s(e^{\prime})=\alpha(x)\text{ and }x=\psi(e)\}\]
with \(r_{F}(e^{\prime},x,e)=\alpha_{1}(e^{\prime},x)=e^{\prime}\) and \(s_{F}(e^{\prime},x,e)=e\) for all \((e^{\prime},x,e)\in F^{1}\).
The map \(F^{1}\to\hat{E}^{1}\) sending \((e^{\prime},x,e)\mapsto(e^{\prime},e)\) is a homeomorphism which intertwines the range and source maps. It is injective because \(x\in E_{I}^{0}\) is uniquely determined by \(e\) via \(x=\psi(e)\), and it is surjective since if \((e^{\prime},e)\in\hat{E}^{1}\) are composable edges, then \(x=\psi(e)\) satisfies \(\alpha(x)=\alpha\circ\psi(e)=r(e)=s(e^{\prime})\), so \((e^{\prime},x,e)\) is mapped to \((e^{\prime},e)\).
A simple class of examples comes from "topologically fattening" the class of directed graphs.
**Example 4.15**.: Let \(E=(E^{0},E^{1},r,s)\) be a regular directed graph and fix a locally compact Hausdorff space \(X\). Let \(F^{0}\coloneqq E^{0}\times X\) and \(F^{1}\coloneqq E^{1}\times X\) with the respective product topologies and define \(r_{F}(e,x)=(r(e),x)\) and \(s_{F}(e,x)=(s(e),x)\). Then \(F=(F^{0},F^{1},r_{F},s_{F})\) is a topological graph.
If \(I=(\alpha,E^{0}_{I},\psi)\) is an in-split of \(E\), then \(I_{X}\coloneqq(\alpha\times\operatorname{Id}_{X},E^{0}_{I}\times X,\psi\times \operatorname{Id}_{X})\) is an in-split of \(F\). It is straightforward to check that the associated topological graph \(F_{I_{X}}\) is isomorphic to \((E^{0}_{I}\times X,E^{1}_{I}\times X,r_{I}\times\operatorname{Id}_{X},s_{I} \times\operatorname{Id}_{X})\).
In the setting of topological graphs there are also strictly more exotic examples than those obtained via fattening directed graphs.
**Example 4.16**.: Fix \(m,n\in\mathbb{Z}\setminus\{0\}\) and let \(E^{0}\coloneqq\mathbb{T}\) and \(E^{1}\coloneqq\mathbb{T}\). Define \(r,s\colon E^{1}\to E^{0}\) by \(r(z)=z^{m}\) and \(s(z)=z^{n}\). Then \(E=(E^{0},E^{1},r,s)\) is a topological graph. Suppose \(a,b\in\mathbb{Z}\) satisfy \(m=ab\). Define \(\psi\colon E^{1}\to\mathbb{T}\) by \(\psi(z)=z^{a}\) and \(\alpha\colon\mathbb{T}\to E^{0}\) by \(\alpha(z)=z^{b}\). Since \(r(z)=z^{m}=(z^{a})^{b}=\alpha\circ\psi(z)\), it follows that \(I=(\alpha,\mathbb{T},\psi)\) is an in-split of \(E\). The new edge set is
\[E^{1}_{I}=\{(z_{1},z_{2})\in\mathbb{T}^{2}\mid z_{1}^{n}=z_{2}^{b}\}.\]
We claim that \(E^{1}_{I}\) is homeomorphic to a disjoint union of \(\gcd(n,b)\) copies of \(\mathbb{T}\).
Let \(q_{b},q_{n}\) be the unique integers such that \(n=q_{n}\gcd(n,b)\) and \(b=q_{b}\gcd(n,b)\), and note that \(q_{n}\) and \(q_{b}\) have no common factors. We also record that \(q_{n}b=q_{n}q_{b}\gcd(n,b)=q_{b}n\). For each \(|b|\)-th root of unity \(\omega\) define \(\pi_{\omega}\colon\mathbb{T}\to E^{1}_{I}\) by
\[\pi_{\omega}(z)=(z^{q_{b}},\omega z^{q_{n}}).\]
Suppose that \((z_{1},z_{2})\in E^{1}_{I}\) and let \(z\) be a \(|q_{b}|\)-th root of \(z_{1}\). Then \((z^{q_{n}})^{b}=(z^{q_{b}})^{n}=z_{1}^{n}=z_{2}^{b}\), so \((z_{2}/z^{q_{n}})^{b}=1\). Hence, there is some \(|b|\)-th root of unity \(\omega\) such that \(z_{2}=\omega z^{q_{n}}\). In particular, every \((z_{1},z_{2})\in E^{1}_{I}\) can be written in the form \((z^{q_{b}},\omega z^{q_{n}})=\pi_{\omega}(z)\) for some \(z\in\mathbb{T}\) and some \(|b|\)-th root of unity \(\omega\).
We claim that each \(\pi_{\omega}\) is injective. Suppose that \(\pi_{\omega}(z)=\pi_{\omega}(v)\) for some \(z,v\in\mathbb{T}\). Then \(z^{q_{b}}=v^{q_{b}}\) and \(z^{q_{n}}=v^{q_{n}}\). Consequently \(z=\omega_{0}v\) for some \(\omega_{0}\in\mathbb{T}\) that is simultaneously a \(|q_{b}|\)-th and a \(|q_{n}|\)-th root of unity. Since \(q_{b}\) and \(q_{n}\) are coprime, we must have \(\omega_{0}=1\), so \(\pi_{\omega}\) is injective. Since each \(\pi_{\omega}\) is a continuous injection from a compact space to a Hausdorff space, it follows that each \(\pi_{\omega}\) is a homeomorphism onto its image.
Fix a primitive \(|b|\)-th root of unity \(\lambda\). We claim that \(\pi_{\lambda^{c}}\) and \(\pi_{\lambda^{d}}\) have the same image if and only if \(c\equiv kn+d\pmod{|b|}\) for some \(0\leq k<|q_{b}|\). If \(c\equiv kn+d\pmod{|b|}\), then \(\lambda^{c}=\lambda^{kn+d}\). For all \(z\in\mathbb{T}\) we compute
\[\pi_{\lambda^{c}}(z^{\gcd(n,b)}) =((z^{\gcd(n,b)})^{q_{b}},\lambda^{c}(z^{\gcd(n,b)})^{q_{n}})=(z^ {b},\lambda^{kn+d}z^{n})\] \[=((\lambda^{k\gcd(n,b)}z)^{q_{b}},\lambda^{d}(\lambda^{k\gcd(n,b )}z)^{q_{n}})=\pi_{\lambda^{d}}((\lambda^{k}z)^{\gcd(n,b)}).\]
Since \(z\mapsto z^{\gcd(n,b)}\) and \(z\mapsto(\lambda^{k}z)^{\gcd(n,b)}\) both surject onto \(\mathbb{T}\), it follows that \(\pi_{\lambda^{c}}\) and \(\pi_{\lambda^{d}}\) have the same image.
Conversely, suppose that \(\pi_{\lambda^{c}}(z)=\pi_{\lambda^{d}}(v)\) for some \(z,v\in\mathbb{T}\). Then \(z^{q_{b}}=v^{q_{b}}\) and \(\lambda^{c}z^{q_{n}}=\lambda^{d}v^{q_{n}}\). Since \(z^{q_{b}}=v^{q_{b}}\), there is an \(|q_{b}|\)-th root of unity \(\lambda_{0}\) such that \(z=\lambda_{0}v\). Since \(b=q_{b}\gcd(n,b)\)
there exists \(0\leq k<|q_{b}|\) such that \(\lambda_{0}=\lambda^{k\gcd(n,b)}\). It follows that
\[\lambda^{c}z^{q_{n}}=\lambda^{d}v^{q_{n}}=\lambda^{d}(\lambda^{k\gcd(n,b)}z)^{q_ {n}}=\lambda^{kn+d}z^{q_{n}}.\]
Therefore, \(\lambda^{c}=\lambda^{kn+d}\) so \(c\equiv kn+d\pmod{|b|}\). It follows that \(E^{1}_{I}\) is a disjoint union of circles, but what remains is to count how many distinct circles it is composed of.
Since the maps \(\pi_{\lambda^{c}}\) and \(\pi_{\lambda^{d}}\) have the same image if and only if \(c\equiv kn+d\pmod{|b|}\) for some \(k\), the number of circles is in bijection with the cosets of \(\mathbb{Z}_{|b|}/n\mathbb{Z}_{|b|}\). To determine the number of cosets it suffices to determine the cardinality of \(n\mathbb{Z}_{|b|}\). Using Bezout's Lemma at the last equality we observe that
\[n\mathbb{Z}_{|b|}=\{nc\in\mathbb{Z}_{|b|}\mid c\in\mathbb{Z}\}=\{nc+bd\in \mathbb{Z}_{|b|}\mid c,d\in\mathbb{Z}_{|b|}\}=\{k\gcd(n,b)\in\mathbb{Z}_{|b|} \mid k\in\mathbb{Z}_{|b|}\}.\]
It follows that \(\mathbb{Z}_{|b|}/n\mathbb{Z}_{|b|}\) contains \(\gcd(n,b)\) cosets.
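For a concrete instance of this count, take \(n=4\) and \(b=6\): then \(n\mathbb{Z}_{6}=\{0,2,4\}=\gcd(4,6)\mathbb{Z}_{6}\), so \(\mathbb{Z}_{6}/n\mathbb{Z}_{6}\) has exactly \(2=\gcd(4,6)\) cosets.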
For an explicit identification of \(E^{1}_{I}\) with the disjoint union of \(\gcd(n,b)\) circles, fix a primitive \(|b|\)-th root of unity \(\lambda\) and let \(\pi\colon\{1,\ldots,\gcd(n,b)\}\times\mathbb{T}\to E^{1}_{I}\) be the homeomorphism defined by \(\pi(k,z)=\pi_{\lambda^{k}}(z)=(z^{q_{b}},\lambda^{k}z^{q_{n}})\). Under this identification,
\[r_{I}(k,z)=\psi(z^{q_{b}})=z^{q_{b}a}=z^{m/\gcd(n,b)}\quad\text{and}\quad s_{ I}(k,z)=\lambda^{k}z^{q_{n}}=\lambda^{k}z^{n/\gcd(n,b)}.\]
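For a concrete instance, take \(m=4\), \(n=2\), and \(a=b=2\), so that \(\gcd(n,b)=2\), \(q_{b}=q_{n}=1\), and \(\lambda=-1\). Then \(E^{1}_{I}\) consists of two circles, with

\[r_{I}(k,z)=z^{2}\quad\text{and}\quad s_{I}(k,z)=(-1)^{k}z,\qquad k\in\{1,2\},\ z\in\mathbb{T}.\]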
Remarkably, the quite different topological graphs \(E\) and \(E_{I}\) induce topologically conjugate dynamics on their respective path spaces by Theorem4.10. This is far from obvious.
By swapping the role of \(b\) and \(n\) above, we could alternatively let \(\gamma\in\mathbb{T}\) be a primitive \(|n|\)-th root of unity to see that \(\pi^{\prime}\colon\{1,\ldots,\gcd(n,b)\}\times\mathbb{T}\to E^{1}_{I}\) defined by \(\pi^{\prime}(k,z)=(\gamma^{k}z^{q_{b}},z^{q_{n}})\) is a homeomorphism. Identifying \(E^{1}_{I}\) with the disjoint union of circles via \(\pi^{\prime}\), the range and source maps for \(E_{I}\) satisfy
\[r_{I}(k,z)=\psi(\gamma^{k}z^{q_{b}})=\gamma^{ka}z^{q_{b}a}=\gamma^{ka}z^{m/ \gcd(n,b)}\quad\text{and}\quad s_{I}(k,z)=z^{q_{n}}=z^{n/\gcd(n,b)}.\]
In general, the naive composition of in-splits cannot be realised as a single in-split. If one pays the penalty of passing to paths, then the following result provides a notion of composition of in-splits, highlighting the role of the projective limit decomposition of path spaces from (4.1).
**Proposition 4.17**.: _Suppose that \(E\) is a regular topological graph and that there is a finite sequence of in-splits \(I_{k}=(\alpha_{k},E^{0}_{I_{k}},\psi_{k})\) for \(k=1,\ldots,n\) such that_
* \(I_{1}\) _is an in-split of_ \(E\)_, and_
* \(I_{k}\) _is an in-split of_ \(E_{I_{k-1}}\) _for_ \(k\geq 2\)_._
_Then \(E^{(n)}_{I_{n}}=(E^{0}_{I_{n}},E^{n}_{I_{n}},r,s)\) is isomorphic to the graph obtained by a single in-split \((\alpha,E^{0}_{I_{n}},\psi)\) of \(E^{(n)}=(E^{0},E^{n},r,s)\). Moreover, \((\sigma^{n}_{E},E^{\infty})\) is topologically conjugate to \((\sigma^{n}_{E_{I_{n}}},E^{\infty}_{I_{n}})\)._
Proof.: For each \(0\leq p,k\leq n\) let \(\alpha^{p}_{k}\colon E^{p}_{I_{k}}\to E^{p}_{I_{k-1}}\) and \(\psi^{p}_{k}\colon E^{p+1}_{I_{k-1}}\to E^{p}_{I_{k}}\) be the maps arising from the sequences defined in the proof of Theorem 4.10, where for consistency we take the
convention that \(E_{I_{0}}\coloneqq E\), \(\alpha_{k}^{0}\coloneqq\alpha_{k}\), and \(\psi_{k}^{0}\coloneqq\psi_{k}\). In particular, the analogues of (4.1) hold for each \(k\), so the resulting lattice of maps \(\alpha^{p}_{k}\) and \(\psi^{p}_{k}\) commutes (the diagram is omitted here).
Let \(\alpha=\alpha_{1}\circ\cdots\circ\alpha_{n}\) and let \(\psi=\psi_{n}^{0}\circ\psi_{n-1}^{1}\circ\cdots\circ\psi_{2}^{n-2}\circ\psi_{1}^{n-1}\). We claim that \((\alpha,E_{I_{n}}^{0},\psi)\) is an in-split of \(E^{(n)}\). Clearly \(\alpha\) is a continuous proper surjection and \(\psi\) is continuous. Moreover, \(\alpha\circ\psi=r\circ r^{1}\circ\cdots\circ r^{n-2}\circ r^{n-1}\) is the range map on the \(n\)-th power \(E^{(n)}\).
Repeatedly applying Lemma 4.9 and using the fact that each \(\alpha_{i}\) surjects, we see that
\[E_{I_{n}}^{n}\simeq E_{I_{n-1}}^{n}\times_{s,\alpha_{n}}E_{I_{n}}^ {0} \simeq(\cdots((E^{n}\times_{s,\alpha_{1}}E_{I_{1}}^{0})\times_{s, \alpha_{2}}E_{I_{2}}^{0})\times_{s,\alpha_{3}}\cdots)\times_{s,\alpha_{n}}E_{ I_{n}}^{0}\] \[\simeq E^{n}\times_{s,\alpha_{1}\circ\cdots\circ\alpha_{n}}E_{I_{ n}}^{0}=E^{n}\times_{s,\alpha}E_{I_{n}}^{0}.\]
The source maps on \(E^{n}\times_{s,\alpha}E_{I_{n}}^{0}\) arising from the single in-split and from the identification with \(E^{n}_{I_{n}}\) clearly agree, and commutativity of the preceding diagrams also implies that the range maps agree.
The final statement follows after observing that \(E^{\infty}\simeq\varprojlim(E^{k},r^{k})\simeq\varprojlim(E^{nk},r^{nk})\) and applying Theorem 4.10.
### Noncommutative in-splits
Inspired by the recasting of in-splits for directed graphs and topological graphs we introduce the following analogous notion of in-splits for \(C^{*}\)-correspondences.
**Definition 4.18**.: An _in-split_ of a nondegenerate \(A\)-\(A\)-correspondence \((\phi,{}_{A}X_{A})\) is a triple \(I=(\alpha,B,\psi)\) consisting of a \(C^{*}\)-algebra \(B\) together with a nondegenerate injective \(*\)-homomorphism \(\alpha\colon A\to B\) and a left action \(\psi\colon B\to\operatorname{End}_{A}(X)\) such that \(\psi\circ\alpha=\phi\) and, moreover,
1. \(\alpha(J_{\phi})\subseteq J_{\psi}\coloneqq\psi^{-1}(\operatorname{End}_{A}^ {0}(X))\cap\ker(\psi)^{\perp}\), and
2. the induced \(*\)-homomorphism \(\overline{\alpha}\colon A/J_{\phi}\to B/J_{\psi}\) is an isomorphism.
To an in-split \((\alpha,B,\psi)\) of \((\phi,X_{A})\) we associate the \(C^{*}\)-correspondence \((\psi\otimes\operatorname{Id}_{B},X\otimes_{\alpha}B)\) over \(B\) where the left action is given as \((\psi\otimes\operatorname{Id}_{B})(b^{\prime})(x\otimes b)=\psi(b^{\prime})x \otimes b\) for all \(x\in X_{A}\) and \(b^{\prime},b\in B\). We call this the _in-split correspondence of \((\phi,{}_{A}X_{A})\) associated to \(I\)_.
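For a simple finite-dimensional illustration of this definition (our own example, matching the dual-graph computation following Examples 4.13): take \(A=\mathbb{C}\) and \(X=\mathbb{C}^{2}\) with \(\phi(\lambda)=\lambda\operatorname{Id}\), let \(B=\mathbb{C}^{2}\) with \(\alpha(\lambda)=(\lambda,\lambda)\), and let \(\psi\colon\mathbb{C}^{2}\to M_{2}(\mathbb{C})=\operatorname{End}^{0}_{\mathbb{C}}(X)\) be the diagonal inclusion. Then \(\psi\circ\alpha=\phi\), \(J_{\phi}=\mathbb{C}\), and \(J_{\psi}=\mathbb{C}^{2}\), so both quotients in condition 2 vanish and \((\alpha,B,\psi)\) is an in-split. The in-split correspondence is \(X\otimes_{\alpha}B\cong\mathbb{C}^{4}\) over \(\mathbb{C}^{2}\), and both Cuntz-Pimsner algebras are isomorphic to \(\mathcal{O}_{2}\), as Theorem 4.24 below predicts.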
Observe that since \(\phi\) and \(\alpha\) are nondegenerate, so is \(\psi\). We identify the covariance ideal for the in-split correspondence.
**Lemma 4.19**.: _The ideal \(J_{\psi}\) of \(B\) is the covariance ideal for \((\psi\otimes\mathrm{Id}_{B},X\otimes_{\alpha}B)\). That is, \(J_{\psi}=J_{\psi\otimes\mathrm{Id}_{B}}\)._
Proof.: Lemma 3.3 implies \(J_{\psi\otimes\mathrm{Id}_{B}}\subseteq J_{\psi}\). For the other inclusion, observe that it follows from [15, Corollary 3.7] and
\[\alpha^{-1}(\mathrm{End}_{B}^{0}(B))=\alpha^{-1}(B)=A\]
that the map \(T\mapsto T\otimes\mathrm{Id}_{B}\) from \(\mathrm{End}_{A}(X)\) to \(\mathrm{End}_{B}(X\otimes_{\alpha}B)\) takes compact operators to compact operators. In particular, \(\psi(b)\otimes\mathrm{Id}_{B}\) is compact for each \(b\in\psi^{-1}(\mathrm{End}_{A}^{0}(X))\), so \(\psi^{-1}(\mathrm{End}_{A}^{0}(X))\subset(\psi\otimes\mathrm{Id}_{B})^{-1}( \mathrm{End}_{B}^{0}(X\otimes_{\alpha}B))\).
Clearly, \(\ker(\psi)\subseteq\ker(\psi\otimes\mathrm{Id}_{B})\). On the other hand, if \(b_{0}\in\ker(\psi\otimes\mathrm{Id}_{B})\), then

\[0=(\psi(b_{0})x\otimes b\mid\psi(b_{0})x\otimes b)_{B}=(b\mid\alpha((\psi(b_{0})x\mid\psi(b_{0})x)_{A})b)_{B}\]
for all \(x\otimes b\in X\otimes_{\alpha}B\). In particular, \(\alpha((\psi(b_{0})x\mid\psi(b_{0})x)_{A})=0\), so injectivity of \(\alpha\) implies \(\psi(b_{0})x=0\) for all \(x\in X_{A}\). Hence, \(\ker(\psi)=\ker(\psi\otimes\mathrm{Id}_{B})\). We conclude that \(J_{\psi\otimes\mathrm{Id}_{B}}=J_{\psi}\).
Condition 2 of Definition 4.18 allows for a useful decomposition of elements of \(B\) in the following way.
**Lemma 4.20**.: _For each \(b\in B\) there exist \(a\in A\) and \(k\in J_{\psi}\) such that \(b=\alpha(a)+k\)._

Proof.: Since \(\overline{\alpha}\colon A/J_{\phi}\to B/J_{\psi}\) is surjective, there exists \(a\in A\) with \(b-\alpha(a)\in J_{\psi}\); take \(k=b-\alpha(a)\).
If \((\phi,{}_{A}X_{A})\) is the correspondence associated to a topological graph \(E\) and \(I\) is an in-split of \(E\), then \(I\) induces an in-split of correspondences in the sense of Definition 4.18. Moreover, the new correspondence associated to the in-split of correspondences may be identified with the graph correspondence of the in-split graph \(E_{I}\). It is in this sense that Definition 4.18 generalises the topological notion of in-split from Definition 4.3.
**Proposition 4.21**.: _Let \(E\) be a topological graph and let \(I=(\alpha,E_{I}^{0},\psi)\) be an in-split of \(E\). Let \((\phi,X(E))\) and \((\phi_{I},X(E_{I}))\) be the graph correspondences of \(E\) and \(E_{I}\), respectively. Then there is an induced in-split \((\alpha^{*},C_{0}(E_{I}^{0}),\psi^{*})\) of \((\phi,X(E))\) satisfying_
\[\alpha^{*}(f)=f\circ\alpha\quad\text{and}\quad\psi^{*}(g)x(e)=g(\psi(e))x(e)\]

_for all \(f\in C_{0}(E^{0})\), \(g\in C_{0}(E^{0}_{I})\), \(x\in C_{c}(E^{1})\), and \(e\in E^{1}\)._
_Moreover, the in-split correspondence \((\psi^{*}\otimes\mathrm{Id},X(E)\otimes_{\alpha^{*}}C_{0}(E_{I}^{0}))\) is isomorphic to \((\phi_{I},X(E_{I}))\)._
Proof.: Let \(A=C_{0}(E^{0})\) and \(A_{I}=C_{0}(E_{I}^{0})\) be the coefficient algebras of \(X(E)\) and \(X(E_{I})\), respectively. Since \(\alpha\) is a proper surjection there is a well-defined nondegenerate injective \(*\)-homomorphism \(\alpha^{*}\colon A\to A_{I}\) given by \(\alpha^{*}(f)=f\circ\alpha\) for all \(f\in A\). For each \(g\in A_{I}\), define an endomorphism \(\psi^{*}(g)\) on \(C_{c}(E^{1})\) by \(\psi^{*}(g)x(e)\coloneqq g(\psi(e))x(e)\) for all \(x\in C_{c}(E^{1})\) and \(e\in E^{1}\). The computation
\[\|\psi^{*}(g)x\|^{2}=\|(\psi^{*}(g)x\mid\psi^{*}(g)x)_{A}\|_{\infty}=\sup_{v\in E ^{0}}\sum_{s(e)=v}|g(\psi(e))x(e)|^{2}\leq\|g\|_{\infty}^{2}\|x\|^{2},\]
for all \(x\in C_{c}(E^{1})\) shows that the map \(g\mapsto\psi^{*}(g)\) extends to a \(*\)-homomorphism \(\psi^{*}\colon A_{I}\to\mathrm{End}_{A}(X(E))\) satisfying \(\psi^{*}(g)^{*}=\psi^{*}(\bar{g})\).
Observe that \(J_{\phi}=C_{0}(E^{0}_{\rm reg})\) and \(J_{\psi}=C_{0}(E^{0}_{I,\psi-{\rm reg}})\), and since \(\alpha\) restricts to a homeomorphism between \(E^{0}_{\rm sing}\) and \(E^{0}_{I,\psi-{\rm sing}}\), it follows that \(\alpha\) maps \(E^{0}_{I,\psi-{\rm reg}}\) onto \(E^{0}_{\rm reg}\), so \(\alpha^{*}(J_{\phi})\subseteq J_{\psi}\), and the induced map \(\overline{\alpha}\colon C_{0}(E^{0}_{\rm sing})\cong A/J_{\phi}\to A_{I}/J_{\psi}\cong C_{0}(E^{0}_{I,\psi-{\rm sing}})\) is a \(*\)-isomorphism. Therefore, \((\alpha^{*},A_{I},\psi^{*})\) is an in-split of the graph correspondence \((\phi,X(E))\).
Next we verify that the \(C^{*}\)-correspondences \((\psi^{*}\otimes\operatorname{Id},X(E)\otimes_{\alpha^{*}}C_{0}(E^{0}_{I}))\) and \((\phi_{I},X(E_{I}))\) are isomorphic. Define \(\beta\colon C_{c}(E^{1})\otimes_{\alpha^{*}}C_{c}(E^{0}_{I})\to C_{c}(E^{1}_{I})\) by \(\beta(x\otimes g)(e,v)=x(e)g(v)\), for all \(x\in C_{c}(E^{1})\), \(g\in C_{c}(E^{0}_{I})\) and \((e,v)\in E^{1}_{I}\). The computation
\[(\beta(x\otimes f)\mid\beta(x^{\prime}\otimes f^{\prime}))_{A_{I}}(v) =\sum_{s_{I}(e,v)=v}\overline{x(e)}x^{\prime}(e)\overline{f(v)}f^{\prime}(v)=\sum_{s(e)=\alpha(v)}\overline{x(e)}x^{\prime}(e)\overline{f(v)}f^{\prime}(v)\] \[=(x\mid x^{\prime})_{A}(\alpha(v))\overline{f(v)}f^{\prime}(v)=(f\mid\alpha^{*}((x\mid x^{\prime})_{A})f^{\prime})_{A_{I}}(v)\] \[=(x\otimes f\mid x^{\prime}\otimes f^{\prime})_{A_{I}}(v),\]
shows that \(\|\beta(x\otimes f)\|=\|x\otimes f\|\). Consequently, \(\beta\) extends to an isometric linear map \(\beta\colon X(E)\otimes_{\alpha^{*}}A_{I}\to X(E_{I})\).
If \(x\in C_{c}(E^{1})\) and \(g,g^{\prime}\in A_{I}\), then \(\beta((x\otimes g)\cdot g^{\prime})=\beta(x\otimes g)\cdot g^{\prime}\) and
\[\phi_{I}(g^{\prime})\beta(x\otimes g)(e,v)= g^{\prime}(\psi(e))x(e)g(v)= \beta((\psi^{*}\otimes\operatorname{Id})(g^{\prime})x\otimes g)(e,v),\]
for all \((e,v)\in E^{1}_{I}\). This shows that \((\operatorname{Id},\beta)\colon(\psi^{*}\otimes\operatorname{Id},X(E)\otimes_ {\alpha^{*}}C_{0}(E^{0}_{I}))\to(\phi_{I},X(E_{I}))\) is a correspondence morphism.
It remains to verify that \(\beta\) is surjective. Fix \(\eta\in C_{c}(E^{1}_{I})\). Since \(s_{I}\) is a local homeomorphism, we can cover \({\rm supp}(\eta)\) by finitely many open sets \(\{U_{i}\}\) such that \(s_{I}|_{U_{i}}\) is injective. Let \(\{\rho_{i}\}\) be a partition of unity subordinate to the cover \(\{U_{i}\}\). Then \(\rho_{i}\eta\) has support in \(U_{i}\).
Define \(\xi_{i}\in C_{c}(E^{0}_{I})\) by \(\xi_{i}(v)=\rho_{i}\eta(s_{I}|_{U_{i}}^{-1}(v))\) for \(v\in s_{I}(U_{i})\), extended by zero, and use Urysohn's Lemma to find \(\zeta_{i}\in C_{c}(E^{1})\) such that \(\zeta_{i}\) is identically \(1\) on the compact set
\[\{e\in E^{1}:\,\text{there exists $v\in E^{0}_{I}$ such that $(e,v)\in{\rm supp}(\rho_{i}\eta)$}\}.\]
Then \(\rho_{i}\eta=\beta(\zeta_{i}\otimes\xi_{i})\) by construction and so
\[\eta=\sum_{i}\rho_{i}\eta=\sum_{i}\beta(\zeta_{i}\otimes\xi_{i})\]
is in the image of \(\beta\). As \(\eta\in C_{c}(E^{1}_{I})\) is arbitrary, \(\beta\) is surjective.
Every discrete directed graph is--in particular--a topological graph, so Proposition4.21 also applies to directed graphs. Since in-splits are examples of strong shift equivalences, Theorem3.5 shows that the associated Cuntz-Pimsner algebras are gauge-equivariantly Morita equivalent.
**Proposition 4.22**.: _Let \((\phi,{}_{A}X_{A})\) be a \(C^{*}\)-correspondence and let \((\alpha,B,\psi)\) be an in-split. Then \((\phi,{}_{A}X_{A})\) is strong shift equivalent to the in-split correspondence \((\psi\otimes\operatorname{Id},X\otimes_{\alpha}B)\). Hence \(\mathcal{O}_{X}\) is gauge equivariantly Morita equivalent to \(\mathcal{O}_{X\otimes_{\alpha}B}\)._
Proof.: Consider the \(C^{*}\)-correspondences \(R=(\psi,{}_{B}X_{A})\) and \(S=(\alpha,{}_{A}B_{B})\) and observe that \(S\otimes R\) is isomorphic to \((\phi,{}_{A}X_{A})\) via the map \(b\otimes x\mapsto\psi(b)x\) for all \(x\in X_{A}\) and \(b\in B\), while \(R\otimes S\) is the in-split \((\psi\otimes\operatorname{Id},X\otimes_{\alpha}B)\). This is a strong shift equivalence.
For in-splits more is true: the associated Cuntz-Pimsner algebras are gauge-equivariantly \(*\)-isomorphic (see Theorem 4.24), generalising [1, Theorem 3.2]. First, we need a lemma.
**Lemma 4.23**.: _Let \(X_{A}\) be a right Hilbert \(A\)-module and suppose that \(\alpha\colon A\to B\) is an injective \(*\)-homomorphism. Then there is a well-defined injective linear map \(\beta\colon X\to X\otimes_{\alpha}B\) satisfying \(\beta(x\cdot a)=x\otimes\alpha(a)\) for all \(x\in X\) and \(a\in A\)._
_Moreover, suppose \(\alpha\) is nondegenerate, \(X_{A}\) is countably generated, and \(A\) is \(\sigma\)-unital. Let \((x_{i})_{i}\) be a countable frame for \(X_{A}\) and let \((u_{j})_{j}\) be an increasing approximate unit for \(A\). With \(a_{j}\coloneqq(u_{j}-u_{j-1})^{1/2}\) (where \(u_{0}\coloneqq 0\)), the sequence \((x_{i}\otimes\alpha(a_{j}))_{i,j}\) is a frame for \(X_{A}\otimes_{\alpha}B\)._
Proof.: For any \(x\in X_{A}\) there is a unique \(x^{\prime}\in X_{A}\) such that \(x=x^{\prime}\cdot(x^{\prime}\mid x^{\prime})_{A}\), cf. [13, Proposition 2.31], so we may assume that any element in \(X_{A}\) is of the form \(x\cdot a\), for some \(x\in X_{A}\) and \(a\in A\). Observe that for any \(x_{1},x_{2}\in X_{A}\) and \(a_{1},a_{2}\in A\), we have
\[(x_{1}\otimes\alpha(a_{1})\mid x_{2}\otimes\alpha(a_{2}))_{B}=\alpha((x_{1} \cdot a_{1}\mid x_{2}\cdot a_{2})_{A}), \tag{4.2}\]
so
\[\|x_{1}\otimes\alpha(a_{1})-x_{2}\otimes\alpha(a_{2})\|^{2} =\|(x_{1}\otimes\alpha(a_{1})\mid x_{1}\otimes\alpha(a_{1}))_{B}- (x_{1}\otimes\alpha(a_{1})\mid x_{2}\otimes\alpha(a_{2}))_{B}\] \[\quad-(x_{2}\otimes\alpha(a_{2})\mid x_{1}\otimes\alpha(a_{1}))_{ B}+(x_{2}\otimes\alpha(a_{2})\mid x_{2}\otimes\alpha(a_{2}))_{B}\|\] \[=\|\alpha((x_{1}\cdot a_{1}\mid x_{1}\cdot a_{1})_{A}-(x_{1} \cdot a_{1}\mid x_{2}\cdot a_{2})_{A}\] \[\quad-(x_{2}\cdot a_{2}\mid x_{1}\cdot a_{1})_{A}+(x_{2}\cdot a_{ 2}\mid x_{2}\cdot a_{2})_{A})\|\] \[=\|x_{1}\cdot a_{1}-x_{2}\cdot a_{2}\|^{2}.\]
This computation shows that \(\beta\colon X\to X\otimes_{\alpha}B\) given by \(\beta(x\cdot a)=x\otimes\alpha(a)\) for all \(x\in X\) and \(a\in A\) is well-defined and isometric.
For the second statement, observe that \((a_{i})_{i\in\mathbb{N}}\) is a frame for \(A\) as a right \(A\)-module since
\[\sum_{i=1}^{j}a_{i}\cdot(a_{i}\mid a)_{A}=\sum_{i=1}^{j}a_{i}a_{i}^{*}a=u_{j}a\to a\]
as \(j\to\infty\). The result now follows from Proposition 2.16.
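For instance, if \(A\) is unital one may take \(u_{j}=1_{A}\) for all \(j\), so that \(a_{1}=1_{A}\) and \(a_{j}=0\) for \(j\geq 2\), and the frame of Lemma 4.23 reduces to \((x_{i}\otimes\alpha(1_{A}))_{i}\).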
**Theorem 4.24**.: _Let \((\phi,{}_{A}X_{A})\) be a countably generated correspondence over a \(\sigma\)-unital \(C^{*}\)-algebra \(A\), let \((\alpha,B,\psi)\) be an in-split, and let \((\psi\otimes\operatorname{Id},X\otimes_{\alpha}B)\) be the in-split correspondence. With the map \(\beta\) as in Lemma 4.23, the pair \((\alpha,\beta)\colon(\phi,X)\to(\psi\otimes\operatorname{Id},X\otimes_{ \alpha}B)\) is a covariant correspondence morphism. The induced \(*\)-homomorphism \(\alpha\times\beta\colon\mathcal{O}_{X}\to\mathcal{O}_{X\otimes_{\alpha}B}\) is a gauge-equivariant \(*\)-isomorphism._
Proof.: We first verify that \((\alpha,\beta)\colon(\phi,X)\to(\psi\otimes\operatorname{Id},X\otimes_{ \alpha}B)\) is a correspondence morphism. For the right action, we see for \(x\in X_{A}\) and \(a,a^{\prime}\in A\) that
\[\beta(x\cdot a)\cdot\alpha(a^{\prime})=x\otimes\alpha(aa^{\prime})=\beta((x \cdot a)\cdot a^{\prime}),\]
and for the left action, we apply \(\psi\circ\alpha=\phi\) to observe that
\[(\psi\otimes\operatorname{Id})(\alpha(a^{\prime}))\beta(x\cdot a)=\phi(a^{ \prime})x\otimes\alpha(a)=\beta(\phi(a^{\prime})x\cdot a),\]
for all \(x\in X_{A}\) and \(a,a^{\prime}\in A\). Together with (4.2) this shows that \((\alpha,\beta)\) is a correspondence morphism.
For covariance of \((\alpha,\beta)\) let \((x_{i}\otimes\alpha(a_{j}))\) be the frame for \(X_{A}\otimes_{\alpha}B\) as defined in Lemma 4.23. Then for \(T\in\operatorname{End}^{0}_{A}(X)\),
\[\beta^{(1)}(T)=\sum\Theta_{\beta(Tx_{i}\cdot a_{j}),\beta(x_{i}\cdot a_{j})}=( T\otimes\operatorname{Id}_{B})\sum\Theta_{x_{i}\otimes\alpha(a_{j}),x_{i} \otimes\alpha(a_{j})}=T\otimes\operatorname{Id}_{B}. \tag{4.3}\]
Let \(a\in J_{\phi}\). Then setting \(T=\phi_{X}(a)\) we have
\[\beta^{(1)}\circ\phi_{X}(a)=\phi_{X}(a)\otimes\operatorname{Id}_{B}=(\psi \circ\alpha(a))\otimes\operatorname{Id}_{B}=(\psi\otimes\operatorname{Id}_{B })\circ\alpha(a)\]
so \((\alpha,\beta)\) is covariant. Since \(\alpha\) is injective, we know from Lemma2.6 that \(\alpha\times\beta\) is injective and gauge-equivariant.
For surjectivity we first claim that \(\iota_{B}(B)\) lies in the image of \(\alpha\times\beta\). Fix \(b\in B\) and write \(b=\alpha(a)+k\) for some \(a\in A\) and \(k\in J_{\psi}=J_{\psi\otimes\operatorname{Id}_{B}}\) using Lemma 4.20. Since \(\psi(k)\) is compact, we get \(\beta^{(1)}(\psi(k))=\psi(k)\otimes\operatorname{Id}_{B}\). It then follows from covariance of \((\iota_{B},\iota_{X\otimes_{\alpha}B})\) that
\[\iota_{B}(k)=\iota_{X\otimes_{\alpha}B}^{(1)}\circ(\psi\otimes\operatorname{ Id}_{B})(k)=\iota_{X\otimes_{\alpha}B}^{(1)}\circ\beta^{(1)}(\psi(k))=( \alpha\times\beta)\circ\iota_{X}^{(1)}(\psi(k))\]
also lies in the image of \(\alpha\times\beta\). Consequently,
\[\iota_{B}(b)=\iota_{B}(\alpha(a))+\iota_{B}(k)=(\alpha\times\beta)(\iota_{A}( a))+\iota_{B}(k)\in(\alpha\times\beta)(\mathcal{O}_{X}).\]
Finally, observe that if \(x\cdot a\otimes b\in X\otimes_{\alpha}B\), then
\[\iota_{X\otimes_{\alpha}B}(x\cdot a\otimes b)=(\alpha\times\beta)(\iota_{X}( x\cdot a))\iota_{B}(b)\]
which is in the image of \(\alpha\times\beta\). This shows that \(\alpha\times\beta\) is surjective, and we conclude that it is a gauge-equivariant \(*\)-isomorphism.
**Example 4.25**.: Let \((\phi,{}_{A}X_{A})\) be a regular \(C^{*}\)-correspondence and let \((\alpha,B,\psi)\) be an in-split. Since \((\phi,X)\) is regular we have \(J_{\phi}=A\), so condition 2 of Definition 4.18 forces \(J_{\psi}=B\); in particular \(\psi\) is an injective map into \(\operatorname{End}^{0}_{A}(X)\), and we may identify \(B\) with a subalgebra of \(\operatorname{End}^{0}_{A}(X)\) that contains \(\phi(A)\). Conversely, any \(C^{*}\)-algebra \(B\) satisfying \(\phi(A)\subset B\subset\operatorname{End}^{0}_{A}(X)\) determines an in-split \((\phi,B,\psi)\), where \(\psi\colon B\to\operatorname{End}_{A}(X)\) is the inclusion. Therefore, there is a gauge-equivariant \(*\)-isomorphism \(\mathcal{O}_{X}\cong\mathcal{O}_{X\otimes_{\phi}B}\). In particular--as noted in [12, Example 6.4]--there is a gauge-equivariant \(*\)-isomorphism \(\mathcal{O}_{X}\cong\mathcal{O}_{X\otimes_{\phi}\operatorname{End}^{0}_{A}(X)}\).
Consider a regular correspondence \((\phi,{}_{A}X_{A})\) and let \(i\colon\operatorname{End}^{0}_{A}(X)\to\operatorname{End}_{A}(X)\) denote the inclusion. Then \((i\otimes\operatorname{Id}_{\operatorname{End}^{0}_{A}(X)},X\otimes \operatorname{End}^{0}_{A}(X))\) may be thought of as a "maximal" in-split of \((\phi,{}_{A}X_{A})\) in analogy to the dual graph in the setting of topological graphs. This analogy is further justified by the following noncommutative version of Lemma4.14.
**Lemma 4.26**.: _Let \((\phi,{}_{A}X_{A})\) be a regular nondegenerate \(C^{*}\)-correspondence with an in-split \(I=(\alpha,B,\psi)\). Let \(\alpha_{1}\colon\operatorname{End}^{0}_{A}(X)\to\operatorname{End}^{0}_{A}(X \otimes_{\alpha}B)\) be the map defined by \(\alpha_{1}(T)=T\otimes\operatorname{Id}_{B}\). Then \(I^{\prime}=(\psi,\operatorname{End}^{0}_{A}(X),\alpha_{1})\) is an in-split of \((\psi\otimes\operatorname{Id}_{B},X\otimes_{\alpha}B)\) and_
\[(\alpha_{1}\otimes\operatorname{Id}_{\operatorname{End}^{0}_{A}(X)},(X \otimes_{\alpha}B)\otimes_{\psi}\operatorname{End}^{0}_{A}(X))\cong\big{(}i \otimes\operatorname{Id}_{\operatorname{End}^{0}_{A}(X)},X\otimes_{\phi} \operatorname{End}^{0}_{A}(X)\big{)}\]
_as \(\operatorname{End}^{0}_{A}(X)\)-\(\operatorname{End}^{0}_{A}(X)\)-correspondences._
**Example 4.27**.: Let \(E=(E^{0},E^{1},r,s)\) be the topological graph of Example 4.16, with \(r(z)=z^{m}\) and \(s(z)=z^{n}\), and let \(E_{I}=(E_{I}^{0},E_{I}^{1},r_{I},s_{I})\) be the associated in-split graph. In particular, \(E_{I}^{0}=\mathbb{T}\), \(E_{I}^{1}=\bigsqcup_{k=0}^{\gcd(n,b)-1}\mathbb{T}\), \(r_{I}(k,z)=z^{m/\gcd(n,b)}\) and \(s_{I}(k,z)=\lambda^{k}z^{n/\gcd(n,b)}\) for some fixed primitive \(|b|\)-th root of unity \(\lambda\). Then \(\mathcal{O}_{X(E)}\) and \(\mathcal{O}_{X(E_{I})}\) are gauge equivariantly \(*\)-isomorphic.
Consider \(E\) when \(m=n=2\), and define a directed graph \(F=(F^{0},F^{1},r_{F},s_{F})\) with vertices \(F^{0}=\mathbb{T}\), edges \(F^{1}=\{0,1\}\times\mathbb{T}\), and \(r_{F}(k,z)=s_{F}(k,z)=z\). It is shown in [10, SS5] that the graph correspondence \(X(E)\) is isomorphic to the graph correspondence \(X(F)\), while \(E\) and \(F\) are not isomorphic as graphs.
Let \(a=1\) and \(b=2\), so \(\gcd(n,b)=2\). The in-split \(E_{I}\) has vertices \(E_{I}^{0}=\mathbb{T}\) and edges \(E_{I}^{1}=\{0,1\}\times\mathbb{T}\) with \(r_{I}(k,z)=(-1)^{k}z^{2}\) and \(s_{I}(k,z)=z^{2}\). Here we have used the second description of \(E_{I}\) from Example 4.16. Since the edges, vertices, and source maps are the same for both \(E_{I}\) and \(F\), it follows that \(X(E_{I})\) and \(X(F)\) are isomorphic as right \(C(\mathbb{T})\)-modules. On the other hand, since the range maps on \(E_{I}\) and \(F\) are different, the left action of \(C(\mathbb{T})\) differs between the two modules. In particular, \(X(E_{I})\) is not isomorphic to \(X(F)\cong X(E)\) as \(C^{*}\)-correspondences. We suspect that \(X(E_{I})\) is typically not isomorphic to \(X(E)\) in general.
### In-splits and diagonal-preserving isomorphism
The work of Eilers and Ruiz [1, Theorem 3.2] shows that unital graph algebras of in-splits (out-splits in their terminology) are gauge-equivariantly \(*\)-isomorphic in a way that also preserves the canonical diagonal subalgebras. In our general setting of Cuntz-Pimsner algebras, there is no obvious notion of canonical diagonal subalgebras. However, specialising to the setting of topological graphs, we can define such a diagonal. We prove in Proposition 4.33 that in-splits of correspondences over topological graphs give a diagonal-preserving and gauge-equivariant \(*\)-isomorphism of the corresponding Cuntz-Pimsner algebras.
**Lemma 4.28**.: _Let \((\phi,X_{A})\) be a nondegenerate \(C^{*}\)-correspondence over \(A\) and let \((\alpha,B,\psi)\) be an in-split. Then \((X\otimes_{\alpha}B)^{\otimes k}\cong X^{\otimes k}\otimes_{\alpha}B\) as right \(B\)-modules via the isomorphism_
\[x_{1}\otimes b_{1}\otimes\cdots\otimes x_{k}\otimes b_{k}\mapsto x_{1}\otimes \psi(b_{1})x_{2}\otimes\cdots\otimes\psi(b_{k-1})x_{k}\otimes b_{k}.\]
_In particular, \(\operatorname{Fock}(X\otimes_{\alpha}B)\cong\operatorname{Fock}(X)\otimes _{\alpha}B\)._
Proof.: Since \(\phi\) and \(\alpha\) are nondegenerate, so is \(\psi\). Hence, \(X\otimes_{\alpha}B\otimes_{\psi}X\cong X\otimes_{\psi\circ\alpha}X=X^{\otimes 2}\) via the map \(x_{1}\otimes b_{1}\otimes x_{2}\mapsto x_{1}\otimes\psi(b_{1})x_{2}\). The result now follows inductively.
We now restrict to topological graphs and show that there is a notion of diagonal subalgebra.
**Lemma 4.29**.: _Let \(E\) be a topological graph and let_
\[\mathcal{C}^{1}_{E}=\{x\in C_{c}(E^{1})\mid x\geq 0\text{ and }s|_{\operatorname{supp}(x)}\text{ is injective}\}.\]
_Then \(\mathcal{D}^{1}_{E}\coloneqq\overline{\operatorname{span}}\{\Theta_{x,x}\mid x \in\mathcal{C}^{1}_{E}\}\) is a commutative subalgebra of \(\operatorname{End}^{0}_{C_{0}(E^{0})}(X(E))\) which is isomorphic to \(C_{0}(E^{1})\)._
Proof.: Since \(E^{1}\) is paracompact and \(s\) is locally injective we may choose a locally finite open cover \(\{U_{i}\}\) of \(E^{1}\) such that \(s|_{U_{i}}\) is injective. Let \(\{\rho_{i}\}\) be a partition of unity subordinate to \(\{U_{i}\}\).
Fix a positive function \(x\in C_{0}(E^{1})\) and define
\[\Psi(x)\coloneqq\sum_{i=1}^{\infty}\Theta_{(\rho_{i}x)^{1/2},(\rho_{i}x)^{1/2}} \in\mathcal{D}^{1}_{E}.\]
The sum converges since, for \(z\in C_{c}(E^{1})\) and \(e\in E^{1}\),
\[\Psi(x)z(e)=\sum_{i=1}^{\infty}(\rho_{i}x)^{1/2}(e)\sum_{s(f)=s(e)}(\rho_{i}x)^ {1/2}(f)z(f)=\sum_{i=1}^{\infty}\rho_{i}(e)x(e)z(e)=x(e)z(e). \tag{4.4}\]
In particular, \(\Psi(x)\) acts as a multiplication operator. Moreover, (4.4) implies that \(\Psi(x)\) is independent of the choice of open cover and partition of unity. Since \(\Psi(x+y)=\Psi(x)+\Psi(y)\), and \(\Psi(xy)=\Psi(x)\Psi(y)\) for positive \(x,y\in C_{0}(E^{1})\) and positive elements span \(C_{0}(E^{1})\) we can linearly extend the formula \(\Psi(x)\) to all \(x\in C_{0}(E^{1})\) to obtain a \(*\)-homomorphism \(\Psi\colon x\mapsto\Psi(x)\) from \(C_{0}(E^{1})\) to \(\mathcal{D}^{1}_{E}\). Since \(|\operatorname{supp}(x)\cap s^{-1}(v)|\leq 1\) for all \(x\in\mathcal{C}^{1}_{E}\) and \(v\in E^{0}\), it follows that
\[\|\Psi(x)\|^{2}=\sup_{\|z\|=1}\sup_{v\in E^{0}}\sum_{s(e)=v}|\Psi(x)z(e)|^{2} =\sup_{\|z\|=1}\sup_{v\in E^{0}}\sum_{s(e)=v}|x(e)z(e)|^{2}.\]
Since \(\sup_{\|z\|=1}\sum_{s(e)=v}|x(e)z(e)|^{2}\) is the square of the operator norm of the multiplication operator \(\Psi(x)\) restricted to \(\ell^{2}(s^{-1}(v))\) it follows that \(\|\Psi(x)\|^{2}=\|x\|_{\infty}^{2}\) and so \(\Psi\) is isometric. For surjectivity observe that for each \(x\in\mathcal{C}^{1}_{E}\),
\[\Theta_{x,x}z(e)=x^{2}(e)z(e)=\Psi(x^{2})z(e)\]
so \(\Theta_{x,x}=\Psi(x^{2})\). Since the \(\Theta_{x,x}\) densely span \(\mathcal{D}^{1}_{E}\), surjectivity of \(\Psi\) follows.
**Definition 4.30**.: Let \(E\) be a topological graph. We call \(\mathcal{D}^{1}_{E}\subseteq\operatorname{End}^{0}_{C_{0}(E^{0})}(X(E))\) the _diagonal_ of \(\operatorname{End}^{0}_{C_{0}(E^{0})}(X(E))\). For \(k\geq 1\) define
\[\mathcal{C}^{k}_{E} \coloneqq\{x\in C_{c}(E^{k})\mid x\geq 0\text{ and }s|_{\operatorname{supp}(x)}\text{ is injective}\}\quad\text{ and}\] \[\mathcal{D}^{k}_{E} \coloneqq\overline{\operatorname{span}}\{\Theta_{x,x}\mid x\in \mathcal{C}^{k}_{E}\}\cong C_{0}(E^{k}).\]
Let \(\mathcal{D}^{0}_{E}=C_{0}(E^{0})\). Define the _diagonal_ of \(\mathcal{O}_{X(E)}\) to be the \(C^{*}\)-subalgebra
\[\mathcal{D}_{E} \coloneqq\sum_{k=0}^{\infty}\iota^{(k)}_{X(E)}(\mathcal{D}^{k}_{ E})=\overline{\operatorname{span}}\{\iota^{(k)}_{X(E)}(\Theta_{x,x})\mid x\in \mathcal{C}^{k}_{E},k\geq 0\}\] \[=\overline{\operatorname{span}}\{\iota^{(k)}_{X(E)}(x)\iota^{(k)} _{X(E)}(x)^{*}\mid x\in\mathcal{C}^{k}_{E},k\geq 0\},\]
where the terms of the sum are not necessarily disjoint.
_Remark 4.31_.: For each \(k\geq 0\), \(\operatorname{End}^{0}_{C_{0}(E^{0})}(X(E^{k}))\) is isomorphic to the groupoid \(C^{*}\)-algebra of the amenable étale groupoid \(\mathcal{R}_{k}\coloneqq\{(x,y)\in E^{k}\times E^{k}\mid s(x)=s(y)\}\). The isomorphism \(\Phi\colon\operatorname{End}^{0}_{C_{0}(E^{0})}(X(E^{k}))\to C^{*}(\mathcal{R}_ {k})\) satisfies \(\Phi(\Theta_{x,y})(e,f)=x(e)\overline{y(f)}\) for \(x,y\in X(E^{k})\), with the inverse satisfying
\[\Phi^{-1}(\xi)x(e)=\sum_{s(f)=s(e)}\xi(e,f)x(f)\]
for \(\xi\in C_{c}(\mathcal{R}_{k})\) and \(x\in C_{c}(E^{k})\). Details can be found in [11, Proposition 3.2.14]. The map \(\Phi\) takes \(\mathcal{D}^{k}_{E}\) onto the canonical diagonal subalgebra of \(C^{*}(\mathcal{R}_{k})\) consisting of \(C_{0}\)-functions on the unit space. Moreover, \(\mathcal{D}_{E}\) is the canonical diagonal of the Deaconu-Renault groupoid associated to \(E\) [11, Proposition 3.3.16].
Recall from Proposition 4.21 that if \((\alpha,E_{I}^{0},\psi)\) is an in-split of a topological graph \(E\), then \((\alpha^{*},C_{0}(E_{I}^{0}),\psi^{*})\) is an in-split of the graph correspondence \((\phi,X(E))\). Since Proposition 4.21 implies \(X(E_{I})\cong X(E)\otimes_{\alpha^{*}}C_{0}(E_{I}^{0})\), we may consider the \(*\)-isomorphism \(\alpha^{*}\times\beta\) of Theorem 4.24 as a map \(\alpha^{*}\times\beta\colon\mathcal{O}_{X(E)}\to\mathcal{O}_{X(E_{I})}\). We will show that \(\alpha^{*}\times\beta\) also preserves diagonals in the sense that \((\alpha^{*}\times\beta)(\mathcal{D}_{E})=\mathcal{D}_{E_{I}}\).
To this end, let \(B=C_{0}(E_{I}^{0})\). It follows from Lemma 4.28 that there are \(*\)-isomorphisms
\[X(E)^{\otimes k}\otimes_{\alpha^{*}}B\cong(X(E)\otimes_{\alpha^{*}}B)^{ \otimes k}\cong X(E_{I})^{\otimes k}\cong X(E_{I}^{k})\]
for all \(k\geq 1\). Using Lemma 4.9 to identify \(E_{I}^{k}\) with \(E^{k}\times_{s,\alpha}E_{I}^{0}\) we define
\[(x_{1}\otimes\cdots\otimes x_{k}\otimes b)(e_{1},\ldots,e_{k},v)\coloneqq x_{1 }(e_{1})\cdots x_{k}(e_{k})b(v),\]
for all \(x_{1},\ldots,x_{k}\in X(E)\), \(b\in B\), and \((e_{1},\ldots,e_{k},v)\in E^{k}\times_{s,\alpha}E_{I}^{0}\).
**Lemma 4.32**.: _Let \(x\in C_{c}(E^{k})\) and \(b\in C_{0}(E_{I}^{0})\). Then \(x\otimes b\in\mathcal{C}_{E_{I}}^{k}\) if and only if \(x\in\mathcal{C}_{E}^{k}\)._
Proof.: Fix \(x\in\mathcal{C}_{E}^{k}\) and \(b\in C_{0}(E_{I}^{0})\). If \((x\otimes b)(e,v)=x(e)b(v)\) and \((x\otimes b)(e^{\prime},v)=x(e^{\prime})b(v)\) are nonzero for \(e,e^{\prime}\in E^{k}\) with \(s(e)=s(e^{\prime})=\alpha(v)\), then \(x(e)\) and \(x(e^{\prime})\) are nonzero, so by assumption \(e=e^{\prime}\) and hence \((x\otimes b)\in\mathcal{C}_{E_{I}}^{k}\). Conversely, suppose \((x\otimes b)\in\mathcal{C}_{E_{I}}^{k}\) and \(x(e)\) and \(x(e^{\prime})\) are nonzero for some \(e,e^{\prime}\in E^{k}\) with \(s(e)=s(e^{\prime})\). Then \((x\otimes b)(e,v)\) and \((x\otimes b)(e^{\prime},v)\) are both nonzero as soon as one is nonzero. Hence, \(e=e^{\prime}\) and so \(x\in\mathcal{C}_{E}^{k}\).
**Proposition 4.33**.: _Let \(E\) be a topological graph and let \(I=(\alpha,E_{I}^{0},\psi)\) be an in-split of \(E\). Then the Cuntz-Pimsner algebras \(\mathcal{O}_{X(E)}\) and \(\mathcal{O}_{X(E_{I})}\) are gauge-equivariantly \(*\)-isomorphic in a way that also preserves the diagonal subalgebras._
Proof.: Since \(\alpha^{*}\times\beta\) is injective, it is enough to show that \((\alpha^{*}\times\beta)(\mathcal{D}_{E})=\mathcal{D}_{E_{I}}\). Let \(a_{j}=(u_{j}-u_{j-1})^{1/2}\) be as in the statement of Lemma 4.23, and recall that \((\alpha^{*}(a_{j}))_{j}\) is a frame for \(B\) as a right Hilbert \(B\)-module. Let \(\beta^{(k)}\) denote the map \((\beta^{k})^{(1)}\colon\operatorname{End}_{A}^{0}(X(E)^{\otimes k})\to \operatorname{End}_{B}^{0}(X(E_{I})^{\otimes k})\). Given \(x\in\mathcal{C}_{E}^{k}\) we may apply (4.3) to \(\Theta_{x,x}\in\operatorname{End}_{A}^{0}(X^{\otimes k})\) to see that
\[\beta^{(k)}(\Theta_{x,x})=\Theta_{x,x}\otimes\operatorname{Id}_{B}=\sum_{i=1} ^{\infty}\Theta_{x\otimes\alpha^{*}(a_{i}),x\otimes\alpha^{*}(a_{i})}.\]
It follows from Lemma 4.32 that \(x\otimes\alpha^{*}(a_{i})\in\mathcal{C}_{E_{I}}^{k}\) so \(\beta^{(k)}(\Theta_{x,x})\in\mathcal{D}_{E_{I}}^{k}\). Consequently,
\[(\alpha^{*}\times\beta)\circ\iota_{X(E)}^{(k)}(\Theta_{x,x})=\iota_{X(E_{I})}^ {(k)}\circ\beta^{(k)}(\Theta_{x,x})\in\mathcal{D}_{E_{I}}\]
and so \((\alpha^{*}\times\beta)(\mathcal{D}_{E})\subseteq\mathcal{D}_{E_{I}}\).
For surjectivity, first observe that \(X(E_{I})^{\otimes k}\cong X(E)^{\otimes k}\otimes_{\alpha^{*}}B\) is densely spanned by the set \(\{x\otimes b\mid x\in X(E)^{\otimes k},\,b\in B\}\). Since Lemma 4.32 states that \(x\otimes b\in\mathcal{C}_{E_{I}}^{k}\) if and only if \(x\in\mathcal{C}_{E}^{k}\), it follows that
\[\mathcal{D}_{E_{I}}^{k}=\overline{\operatorname{span}}\{\Theta_{x\otimes b,x \otimes b}\mid x\in\mathcal{C}_{E}^{k},\,b\in C_{0}(E_{I}^{0})\}.\]
Observe that for \(x\otimes b\in\mathcal{C}_{E_{I}}^{k}\),
\[\iota_{X\otimes_{\alpha}B}^{(k)}(\Theta_{x\otimes b,x\otimes b})=\iota_{X \otimes_{\alpha}B}^{k}(x\otimes b)\iota_{X\otimes_{\alpha}B}^{k}(x\otimes b)^{ *}=(\alpha^{*}\times\beta)(\iota_{X}^{k}(x))\iota_{B}(bb^{*})(\alpha^{*}\times \beta)(\iota_{X}^{k}(x))^{*},\]
so it suffices to show that \(\iota_{B}(b)\in(\alpha^{*}\times\beta)(\mathcal{D}_{E})\) for each \(b\in B\).
Fix \(b\in B\) and use Lemma 4.20 to write \(b=\alpha^{*}(a)+j\) for some \(a\in A\) and \(j\in J_{\psi}\). We have \(\iota_{B}(\alpha^{*}(a))=(\alpha^{*}\times\beta)(\iota_{A}(a))\in(\alpha^{*} \times\beta)(\mathcal{D}_{E})\). On the other hand, when \(j\geq 0\),
\[\psi^{*}(j)=\psi^{*}(j)^{1/2}\sum_{i=1}^{\infty}\Theta_{x_{i},x_{i}}\psi^{*}(j) ^{1/2}=\sum_{i=1}^{\infty}\Theta_{\psi^{*}(j)^{1/2}x_{i},\psi^{*}(j)^{1/2}x_{i }}.\]
Since \((\psi^{*}(j)^{1/2}x_{i})(e)=j(\psi(e))^{1/2}x_{i}(e)\) it follows that \(s\) restricted to the support of \(\psi^{*}(j)^{1/2}x_{i}\) is injective. Hence, \(\psi^{*}(j)\in\mathcal{D}_{E}^{1}\) and by linearity this is also true for general \(j\in J_{\psi}\). Covariance of \((\iota_{B},\iota_{X\otimes_{\alpha}B})\) and (4.3) imply that
\[\iota_{B}(j)=\sum_{i}\iota_{X\otimes_{\alpha}B}^{(1)}(\Theta_{ \psi^{*}(j)^{1/2}x_{i},\psi^{*}(j)^{1/2}x_{i}}\otimes\mathrm{Id}_{B}) =\sum_{i}\iota_{X\otimes_{\alpha}B}^{(1)}\circ\beta^{(1)}(\Theta_{ \psi^{*}(j)^{1/2}x_{i},\psi^{*}(j)^{1/2}x_{i}})\] \[=\sum_{i}(\alpha^{*}\times\beta)\circ\iota_{X}^{(1)}(\Theta_{ \psi^{*}(j)^{1/2}x_{i},\psi^{*}(j)^{1/2}x_{i}})\]
belongs to \((\alpha^{*}\times\beta)(\mathcal{D}_{E})\). Consequently, \(\iota_{B}(b)\in(\alpha^{*}\times\beta)(\mathcal{D}_{E})\) and so \((\alpha^{*}\times\beta)(\mathcal{D}_{E})=\mathcal{D}_{E_{I}}\).
**Example 4.34**.: The \(*\)-isomorphism between \(\mathcal{O}_{X(E)}\) and \(\mathcal{O}_{X(E_{I})}\) of Example 4.27 is also diagonal-preserving.
_Remark 4.35_.: Theorem 3.5 and Proposition 4.33 imply that a diagonal-preserving, gauge-equivariant \(*\)-isomorphism between the Cuntz-Pimsner algebras of topological graphs is _not_ sufficient to recover the original \(C^{*}\)-correspondence up to isomorphism. An analogous result for Cuntz-Pimsner algebras of graph correspondences states that diagonal-preserving, gauge-equivariant isomorphisms are _not_ sufficient to recover the graph up to conjugacy.
The final section of [1] exhibits an example of a pair of finite and strongly connected graphs that are not conjugate but whose graph \(C^{*}\)-algebras admit a \(*\)-isomorphism that is both gauge-equivariant and diagonal-preserving. The main result of [1] uses groupoid techniques to recover a topological graph up to conjugacy using \(*\)-isomorphisms that intertwine a whole family of gauge actions. For general Cuntz-Pimsner algebras there is no obvious such family of gauge actions.
A recent preprint [10] explains how to recover the graph correspondence of a compact topological graph from its Toeplitz algebra, its gauge action, and the commutative algebra of functions on the vertex space.
## 5 Out-splits
In this section, we consider the dual notion of an out-split. The non-commutative version applied to Cuntz-Pimsner algebras is not as fruitful as non-commutative in-splits. The inputs are more restrictive and the outputs less exciting, but we include this section for completeness.
For a graph, we will see that an out-split corresponds to a factorisation of the source map. We use the notation of Bates and Pask [1] as well as Eilers and Ruiz [1], but we warn the reader that our graph conventions follow Raeburn's monograph [11] and so are _opposite_ to the convention used in those papers.
### Out-splits for directed graphs
Let \(E=(E^{0},E^{1},r,s)\) be a countable discrete directed graph. We recall the notion of an out-split from [1, Section 3]. Fix a regular \(w\in E^{0}\) (i.e. \(0<|s^{-1}(w)|<\infty\)), and let \(\{\mathcal{P}^{i}\}_{i=1}^{n}\) be a partition of \(s^{-1}(w)\) into finitely many (possibly empty) sets.
The _out-split graph of \(E\) associated to \(\mathcal{P}\)_ is the graph \(E_{s}(\mathcal{P})\) given as
\[E_{s}(\mathcal{P})^{0} =\{v_{1}:v\in E^{0}\}\cup\{w_{1},\ldots,w_{n}\}\] \[E_{s}(\mathcal{P})^{1} =\{e_{1}:e\in E^{1},r(e)\neq w\}\cup\{e_{1},\ldots,e_{n}:e\in E^{1 },r(e)=w\},\] \[r_{\mathcal{P}}(e_{j}) =r(e)_{j},\] \[s_{\mathcal{P}}(e_{j}) =\begin{cases}s(e)_{1}&\text{if $s(e)\neq w$},\\ w_{i}&\text{if $s(e)=w$ and $e\in\mathcal{P}^{i}$},\end{cases}\]
for all \(e_{j}\in E^{1}_{s}(\mathcal{P})\).
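To make the combinatorics concrete, the following is a minimal Python sketch of this construction for finite graphs. All names here (`out_split`, the `(vertex, index)` encoding of \(v_{j}\) and \(e_{j}\)) are illustrative choices rather than notation from the literature; a graph is passed as vertex and edge lists together with callable range and source maps.

```python
def out_split(V, E, r, s, w, partition):
    """Out-split of a finite directed graph at a regular vertex w.

    partition: list of disjoint sets P^1, ..., P^n covering s^{-1}(w).
    Vertices v_j and edges e_j are encoded as pairs (v, j) and (e, j).
    """
    n = len(partition)
    # E_s(P)^0 = {v_1 : v in E^0, v != w} together with w_1, ..., w_n
    V2 = [(v, 1) for v in V if v != w] + [(w, i + 1) for i in range(n)]
    E2, r2, s2 = [], {}, {}
    for e in E:
        # e gets n copies when r(e) = w, otherwise the single copy e_1
        copies = range(1, n + 1) if r(e) == w else [1]
        for j in copies:
            E2.append((e, j))
            r2[(e, j)] = (r(e), j)                 # r_P(e_j) = r(e)_j
            if s(e) != w:
                s2[(e, j)] = (s(e), 1)             # s_P(e_j) = s(e)_1
            else:                                  # s_P(e_j) = w_i when e in P^i
                i = next(i for i, P in enumerate(partition) if e in P)
                s2[(e, j)] = (w, i + 1)
    return V2, E2, r2, s2
```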
**Example 5.1**.: Consider the graphs
The incoming edges to \(w\) are coloured for clarity. Then \(s^{-1}(w)=\{e,f\}\) and we consider the partition \(\mathcal{P}_{1}=\{e\}\) and \(\mathcal{P}_{2}=\{f\}\). The out-split graph--with respect to this partition--is the right-most graph above.
Note that the loop \(e\) is both an incoming and an outgoing edge. The adjacency matrices of the graphs are
\[\mathsf{A}=\begin{pmatrix}1&1\\ 2&0\end{pmatrix}\qquad\text{and}\qquad\mathsf{C}=\begin{pmatrix}1&1&0\\ 0&0&1\\ 2&2&0\end{pmatrix}\]
and the rectangular matrices
\[\mathsf{R}=\begin{pmatrix}1&0\\ 0&1\\ 2&0\end{pmatrix}\quad\text{and}\quad\mathsf{S}=\begin{pmatrix}1&1&0\\ 0&0&1\end{pmatrix}\]
satisfy \(\mathsf{C}=\mathsf{RS}\) and \(\mathsf{SR}=\mathsf{A}\). Therefore, \(\mathsf{A}\) and \(\mathsf{C}\) are (elementary) strong shift equivalent. Any out-split induces a strong shift equivalence, cf. [13, Chapter 7].
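As a quick sanity check on this elementary strong shift equivalence, the matrix identities can be verified numerically; the short sketch below uses plain Python with NumPy and only the matrices displayed above.

```python
import numpy as np

# Adjacency matrices of E and of the out-split graph from Example 5.1.
A = np.array([[1, 1],
              [2, 0]])
C = np.array([[1, 1, 0],
              [0, 0, 1],
              [2, 2, 0]])
R = np.array([[1, 0],
              [0, 1],
              [2, 0]])
S = np.array([[1, 1, 0],
              [0, 0, 1]])

assert (R @ S == C).all()  # C = RS  (3 x 3)
assert (S @ R == A).all()  # A = SR  (2 x 2)
```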
The out-split at \(w\) can be summarised as two pieces of information: there is a finite-to-one surjection \(\alpha\colon E^{0}_{s}(\mathcal{P})\to E^{0}\) given by \(\alpha(v_{j})=v\), for all \(v_{j}\in E^{0}_{s}(\mathcal{P})\), and a surjection \(\psi\colon E^{1}\to E^{0}_{s}(\mathcal{P})\) given by
\[\psi(e)=\begin{cases}s(e)_{1}&\text{if $s(e)\neq w$},\\ w_{i}&\text{if $s(e)=w,e\in\mathcal{P}^{i}$},\end{cases}\]
for all \(e\in E^{1}\). Observe that \(s=\alpha\circ\psi\), so we interpret an out-split as a factorisation of the source map (in contrast to an in-split which we saw was a factorisation of the range map).
We may now form the graph \((E^{0}_{s}(\mathcal{P}),E^{0}_{s}(\mathcal{P})\times_{\alpha,r}E^{1},r,s)\) where the edge set is the fibred product
\[E^{0}_{s}(\mathcal{P})\times_{\alpha,r}E^{1}=\{(v_{j},e)\in E^{0}_{s}( \mathcal{P})\times E^{1}:v=r(e)\}\]
and \(r(v_{j},e)=v_{j}\) and \(s(v_{j},e)=\psi(e)\) for all \((v_{j},e)\in E^{0}_{s}(\mathcal{P})\times_{\alpha,r}E^{1}\). This is graph isomorphic to the out-split graph \(E_{s}(\mathcal{P})\) via the map \(e_{j}\mapsto(v_{j},e)\), where \(v=r(e)\), for all \(e_{j}\in E^{1}_{s}(\mathcal{P})\).
We give a definition of out-splits for regular topological graphs, which includes regular directed graphs.
**Definition 5.2**.: An _out-split_ (or _source-split_) of a topological graph \(E=(E^{0},E^{1},r,s)\) is a triple \(\mathbb{O}=(\alpha,Y,\psi)\) consisting of
1. a locally compact Hausdorff space \(Y\),
2. a proper surjective local homeomorphism \(\alpha\colon Y\to E^{0}\), and
3. a proper surjective local homeomorphism \(\psi\colon E^{1}\to Y\),
such that \(\alpha\circ\psi=s\).
_Remark 5.3_.: The continuity assumptions of an out-split \(\mathbb{O}=(\alpha,E^{0}_{\mathbb{O}},\psi)\) are automatic for regular directed graphs.
We associate a new topological graph to an out-split.
**Lemma 5.4**.: _Let \(E=(E^{0},E^{1},r,s)\) be a regular topological graph and let \(\mathbb{O}=(\alpha,Y,\psi)\) be an out-split of \(E\). Then \(E_{\mathbb{O}}=(E^{0}_{\mathbb{O}},E^{1}_{\mathbb{O}},r_{\mathbb{O}},s_{ \mathbb{O}})\) is a regular topological graph, where_
1. \(E^{0}_{\mathbb{O}}\coloneqq Y\)_;_
2. \(E^{1}_{\mathbb{O}}\coloneqq E^{0}_{\mathbb{O}}\times_{\alpha,r}E^{1}=\{(v,e) \in E^{0}_{\mathbb{O}}\times E^{1}\mid\alpha(v)=r(e)\}\) _equipped with the subspace topology of the product_ \(E^{0}_{\mathbb{O}}\times E^{1}\)_; and_
3. \(r_{\mathbb{O}}(v,e)=v\) _and_ \(s_{\mathbb{O}}(v,e)=\psi(e)\)_, for all_ \(e\in E^{1}\) _and_ \(v\in E^{0}_{\mathbb{O}}\)_._
Proof.: We will be brief as the proof is similar to the in-split case. The edge space \(E^{1}_{\mathbb{O}}\) is a closed subspace of a locally compact Hausdorff space, and so is locally compact and Hausdorff. Also \(s_{\mathbb{O}}\) is a local homeomorphism since \(\psi\) and \(\alpha\) are.
The map \(r_{\mathbb{O}}\) is clearly continuous, and it is surjective since \(r\) is surjective. To see that \(r_{\mathbb{O}}\) is proper, let \(K\subset E^{0}_{\mathbb{O}}\) be compact. Then
\[r_{\mathbb{O}}^{-1}(K)=K\times_{\alpha,r}r^{-1}(\alpha(K))\]
is compact. So \(E_{\mathbb{O}}\) is a regular topological graph.
**Definition 5.5**.: We call \(E_{\mathbb{O}}=(E^{0}_{\mathbb{O}},E^{1}_{\mathbb{O}},r_{\mathbb{O}},s_{ \mathbb{O}})\) the _out-split graph of \(E\) via \(\mathbb{O}\)_.
### Noncommutative out-splits
In-splits for topological graphs correspond to factorisations of the range map. In the noncommutative setting this translates to a factorisation of the left action on the associated graph
correspondence. On the other hand, out-splits for topological graphs correspond to a factorisation of the source map, which defines the right-module structure of the graph correspondence. This makes the noncommutative analogy for out-splits more difficult to pin down than in the case of in-splits.
**Definition 5.6**.: An _out-split_ of a regular \(C^{*}\)-correspondence \((\phi_{X},{}_{A}X_{A})\) consists of:
1. an inclusion \(\alpha\colon A\to B\) with corresponding conditional expectation \(\Lambda\colon B\to A\);
2. a right \(B\)-module structure on \(X\) which is compatible with \(\alpha\) and \(\Lambda\) in the sense that \(x\cdot\alpha(a)=x\cdot a\) for all \(x\in X\) and \(a\in A\) and \(\Lambda((x_{1}\mid x_{2})_{B})=(x_{1}\mid x_{2})_{A}\) for all \(x_{1},x_{2}\in X\);
3. a left action of \(A\) on \(X_{B}\) by adjointable operators that agrees with the left action of \(A\) on \(X_{A}\). In either case, we denote the left action by \(\phi_{X}\).
Let \(B^{\Lambda}_{A}\) be the completion of \(B\) with respect to the inner product \((b_{1}\mid b_{2})_{A}=\Lambda(b_{1}^{*}b_{2})\) for all \(b_{1},b_{2}\in B\), and let \((\operatorname{Id}_{B},{}_{B}B^{\Lambda}_{A})\) be the associated \(B\)-\(A\)-correspondence with left action of \(B\) given by multiplication. We then define the _out-split correspondence_\((\phi_{\Lambda},B^{\Lambda}\otimes_{A}X_{B})\) over \(B\) where the left action is just left multiplication.
The idea behind Definition 5.6 is that by using the expectation \(\Lambda\) we are able to factor the structure of \(X_{A}\) as a right module through the algebra \(B\). The following lemma makes this more precise. We write \([b]\) for the class of \(b\in B\) in \(B^{\Lambda}\).
**Lemma 5.7**.: _The correspondence \((\phi_{X},{}_{A}X_{A})\) is isomorphic to \((\phi_{X}\otimes\operatorname{Id}_{B^{\Lambda}},{}_{A}X_{B}\otimes_{B}B^{ \Lambda}_{A})\)._
Proof.: Let \(x,x^{\prime}\in X_{B}\) and \(b,b^{\prime}\in B\). Observe that
\[(x\cdot b\mid x^{\prime}\cdot b^{\prime})_{A}=\Lambda((x\cdot b\mid x^{\prime }\cdot b^{\prime})_{B})=\Lambda(b^{*}(x\mid x^{\prime})_{B}b^{\prime})=([b]\mid[ (x\mid x^{\prime})_{B}b^{\prime}])_{A}=(x\otimes[b]\mid x^{\prime}\otimes[b^{ \prime}])_{A}.\]
In particular \(\|x\cdot b\|=0\) if and only if \(\|x\otimes[b]\|=0\). Consequently, the map \(\beta\colon X_{B}\otimes_{B}B^{\Lambda}\to X_{A}\) given by \(\beta(x\otimes[b])=x\cdot b\) for \(x\in X_{B}\) and \(b\in B\) is well-defined. The map \(\beta\) is clearly an \(A\)-\(A\)-bimodule map, and so \((\operatorname{Id}_{A},\beta)\) defines an injective correspondence morphism from \((\phi_{X}\otimes\operatorname{Id}_{B^{\Lambda}},{}_{A}X_{B}\otimes_{B}B^{ \Lambda}_{A})\) to \((\phi_{X},{}_{A}X_{A})\). For surjectivity fix \(x\in X_{A}\). Then there exists \(y\in X_{A}\) such that \(x=y\cdot(y\mid y)_{A}=\beta(y\otimes[\alpha((y\mid y)_{A})])\).
**Theorem 5.8**.: _The correspondence \((\phi_{X},{}_{A}X_{A})\) is elementary strong shift equivalent to the out-split \((\phi_{\Lambda},B^{\Lambda}\otimes_{A}X_{B})\), and so the Cuntz-Pimsner algebras \(\mathcal{O}_{X\otimes B^{\Lambda}}\) and \(\mathcal{O}_{B^{\Lambda}\otimes X}\) are gauge equivariantly Morita equivalent._
Proof.: Appealing to Lemma 5.7, it follows by definition that \((\phi_{X},{}_{A}X_{A})\) is elementary strong shift equivalent to \((\phi_{\Lambda},B^{\Lambda}\otimes_{A}X_{B})\). The Morita equivalence is the main result of [13] applied to the correspondences \(R=(\phi_{X},{}_{A}X_{B})\) and \(S=(\operatorname{Id}_{B},{}_{B}B^{\Lambda}_{A})\), and the gauge equivariance follows from Theorem 3.5.
_Remark 5.9_.: With apologies for the terminology, un-out-splitting seems more natural. That is, starting with a correspondence \((A,X_{B})\) and an expectation \(\Lambda\colon B\to A\), one can naturally construct \((A,X\otimes_{B}B^{\Lambda}_{A})\). In our previous language we would have \(X_{A}\cong X_{B}\otimes_{B}B^{\Lambda}_{A}\). The downside is that \((A,X_{B})\) is not a self-correspondence.
In the case where \(X_{A}=X(E)\) is the correspondence of a directed graph \(E\) with out-split \(\mathbb{O}\), Definition 5.6 recovers the correspondence of the associated out-split graph \(X(E_{\mathbb{O}})\).
**Proposition 5.10**.: _Let \(\mathbb{O}=(\alpha,E_{\mathbb{O}}^{0},\psi)\) be an out-split of a regular topological graph \(E\). Let \(A=C_{0}(E^{0})\) and \(B=C_{0}(E_{\mathbb{O}}^{0})\). Then:_
1. \(\alpha^{*}\colon A\to B\) _given by_ \(\alpha^{*}(a)(v)=a(\alpha(v))\) _is an injective_ \(*\)_-homomorphism;_
2. _the conditional expectation_ \(\Lambda\colon B\to A\) _given by_ \[\Lambda(b)(v)=\sum_{u\in\alpha^{-1}(v)}b(u)\] _for_ \(b\in C_{c}(E_{\mathbb{O}}^{0})\) _is compatible with_ \(\alpha^{*}\)_; and_
3. \(X(E)\) _can be equipped with the structure of a right_ \(B\)_-module via the formulae_ \[(x\cdot b)(e)=x(e)b(\psi(e))\quad\text{ and }\quad(x\mid y)_{B}(u)=\sum_{e\in\psi^{-1}(u)} \overline{x(e)}y(e)\] _for all_ \(x,y\in C_{c}(E^{1})\) _and_ \(b\in C_{0}(E_{\mathbb{O}}^{0})\)_, and the left action of_ \(A\) _on_ \(X(E)\) _also defines a left action by adjointable operators with respect to the new right_ \(B\)_-module structure._
_Moreover, the correspondences \((\phi,X(E_{\mathbb{O}}))\) and \((\phi^{\Lambda},B^{\Lambda}\otimes_{A}X(E))\) are isomorphic._
Proof.: Since \(\alpha\colon E_{\mathbb{O}}^{0}\to E^{0}\) is proper and surjective, \(\alpha^{*}\) defines an injective \(*\)-homomorphism. The expectation \(\Lambda\) is clearly compatible with \(\alpha^{*}\) in the sense that \(\Lambda(\alpha^{*}(a_{1})b\alpha^{*}(a_{2}))=a_{1}\Lambda(b)a_{2}\) for all \(a_{1},a_{2}\in A\) and \(b\in B\). It is also straightforward to verify that the formulae in (iii) define a right \(B\)-module structure on \(X(E)\).
Since \(s=\alpha\circ\psi\), it follows that \(x\cdot\alpha^{*}(a)=x\cdot a\). Moreover,
\[\Lambda((x_{1}\mid x_{2})_{B})(v) =\sum_{u\in\alpha^{-1}(v)}(x_{1}\mid x_{2})_{B}(u)=\sum_{u\in \alpha^{-1}(v)}\sum_{e\in\psi^{-1}(u)}\overline{x_{1}(e)}x_{2}(e)\] \[=\sum_{s(e)=v}\overline{x_{1}(e)}x_{2}(e)=(x_{1}\mid x_{2})_{A}( v),\]
for all \(x_{1},x_{2}\in X\) and \(v\in E^{0}\). It follows that we have an out-split (cf. Definition 5.6) on the graph module \(X(E)\) so we may form the out-split correspondence \((\phi^{\Lambda},B^{\Lambda}\otimes_{A}X(E))\).
We would like to define a map \(\Psi\colon(\phi_{\Lambda},B^{\Lambda}\otimes_{A}X(E)_{B})\to(\phi,X(E_{ \mathbb{O}}))\) by
\[\Psi([b]\otimes x)(u,e)=b(u)x(e),\]
for all \([b]\otimes x\in B^{\Lambda}\otimes_{A}X_{B}\) and \((u,e)\in E_{\mathbb{O}}^{1}\). For \(u\in E_{\mathbb{O}}^{0}\), recall that
\[s_{\mathbb{O}}^{-1}(u)=\{(w,e)\in E_{\mathbb{O}}^{0}\times E^{1}:\psi(e)=u, \alpha(w)=r(e)\}.\]
With this observation we can compute
\[(\Psi([b_{1}]\otimes x_{1})\mid\Psi([b_{2}]\otimes x_{2}))_{B}(u) =\sum_{(w,e)\in s_{\mathbb{O}}^{-1}(u)}\overline{b_{1}(w)x_{1}(e )}b_{2}(w)x_{2}(e)\] \[=\sum_{e\in\psi^{-1}(u)}\sum_{w\in\alpha^{-1}(r(e))}\overline{b_{ 1}(w)x_{1}(e)}b_{2}(w)x_{2}(e)\] \[=\sum_{e\in\psi^{-1}(u)}\overline{x_{1}(e)}\Lambda(b_{1}^{*}b_{2} )(r(e))x_{2}(e)\] \[=(x_{1}\mid\Lambda(b_{1}^{*}b_{2})x_{2})_{B}(u)\] \[=([b_{1}]\otimes x_{1}\mid[b_{2}]\otimes x_{2})_{B}(u).\]
Consequently, \(\Psi\) is well-defined and extends to an isometric linear map \(\Psi\colon B^{\Lambda}\otimes_{A}X_{B}\to X(E_{\mathbb{O}})\). The map \(\Psi\) preserves the left action since
\[\Psi(\phi_{\Lambda}(b_{1})([b_{2}]\otimes x))(v,e)=\Psi([b_{1}b_{2}]\otimes x)(v,e)=b_{1}(v)b_{2}(v)x(e)=\phi(b_{1})\Psi([b_{2}]\otimes x)(v,e),\]
for all \(b_{1},b_{2}\in B\), \(x\in X\), and \((v,e)\in E^{1}_{\mathbb{O}}\); similarly, \(\Psi\) preserves the right action as
\[\Psi([b_{1}]\otimes x\cdot b_{2})(v,e)=b_{1}(v)x(e)b_{2}(\psi(e))=(\Psi([b_{1} ]\otimes x)\cdot b_{2})(v,e),\]
for all \(b_{1},b_{2}\in B\), \(x\in X\), and \((v,e)\in E^{1}_{\mathbb{O}}\).
Since functions of the form \((v,e)\mapsto b(v)x(e)\) densely span \(C_{c}(E^{0}_{\mathbb{O}}\times_{\alpha,r}E^{1})\), it follows from the Stone-Weierstrass theorem that \(\Psi\) is surjective.
**Example 5.11**.: We give the out-split version of Example 4.16.
Fix \(m,n\in\mathbb{Z}\setminus\{0\}\) and let \(E^{0}\coloneqq\mathbb{T}\) and \(E^{1}\coloneqq\mathbb{T}\). Define \(r,s\colon E^{1}\to E^{0}\) by \(r(z)=z^{m}\) and \(s(z)=z^{n}\). Then \(E=(E^{0},E^{1},r,s)\) is a topological graph. Suppose \(a,b\in\mathbb{Z}\) satisfy \(n=ab\). Define \(\psi\colon E^{1}\to\mathbb{T}\) by \(\psi(z)=z^{a}\) and \(\alpha\colon\mathbb{T}\to E^{0}\) by \(\alpha(z)=z^{b}\). Since \(s(z)=z^{n}=(z^{a})^{b}=\alpha\circ\psi(z)\), it follows that \(\mathbb{O}=(\alpha,\mathbb{T},\psi)\) is an out-split of \(E\). Exactly as in Example 4.16, the new edge space
\[E^{1}_{\mathbb{O}}=\{(z_{1},z_{2})\in\mathbb{T}^{2}\mid z_{1}^{b}=z_{2}^{m}\}\]
is homeomorphic to a disjoint union of \(\gcd(m,b)\) copies of \(\mathbb{T}\).
An explicit identification of \(E^{1}_{\mathbb{O}}\) with the disjoint union of circles is given by fixing a primitive \(|b|\)-th root of unity \(\lambda\). Let \(\pi\colon\{1,\ldots,\gcd(m,b)\}\times\mathbb{T}\to E^{1}_{\mathbb{O}}\) be the homeomorphism defined by \(\pi(k,z)=(\lambda^{k}z^{m/\gcd(m,b)},z^{b/\gcd(m,b)})\). Under this identification,
\[r_{\mathbb{O}}(k,z)=\lambda^{k}z^{m/\gcd(m,b)}\quad\text{and}\quad s_{\mathbb{ O}}(k,z)=\psi(z^{b/\gcd(m,b)})=z^{ab/\gcd(m,b)}=z^{n/\gcd(m,b)}.\]
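As a direct check that \(\pi\) really maps into \(E^{1}_{\mathbb{O}}\): since \(\lambda\) is a \(|b|\)-th root of unity we have \(\lambda^{b}=1\), so
\[\left(\lambda^{k}z^{m/\gcd(m,b)}\right)^{b}=\lambda^{kb}\,z^{mb/\gcd(m,b)}=z^{mb/\gcd(m,b)}=\left(z^{b/\gcd(m,b)}\right)^{m},\]
that is, the two coordinates of \(\pi(k,z)\) satisfy the defining relation \(z_{1}^{b}=z_{2}^{m}\).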
By Theorem 5.8, the topological graphs \(E\) and \(E_{\mathbb{O}}\) have gauge equivariantly Morita equivalent \(C^{*}\)-algebras. This is very different from the \(*\)-isomorphism arising from the analogous in-split of the range map.
|
2304.07097 | Interpretable Weighted Siamese Network to Predict the Time to Onset of
Alzheimer's Disease from MRI Images | Alzheimer's Disease (AD) is a progressive disease preceded by Mild Cognitive
Impairment (MCI). Early detection of AD is crucial for making treatment
decisions. However, most of the literature on computer-assisted detection of AD
focuses on classifying brain images into one of three major categories:
healthy, MCI, and AD; or categorizing MCI patients into (1) progressive: those
who progress from MCI to AD at a future examination time, and (2) stable: those
who stay as MCI and never progress to AD. This misses the opportunity to
accurately identify the trajectory of progressive MCI patients. In this paper,
we revisit the brain image classification task for AD identification and
re-frame it as an ordinal classification task to predict how close a patient is
to the severe AD stage. To this end, we select progressive MCI patients from
the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and construct an
ordinal dataset with a prediction target that indicates the time to progression
to AD. We train a Siamese network model to predict the time to onset of AD
based on MRI brain images. We also propose a Weighted variety of Siamese
network and compare its performance to a baseline model. Our evaluations show
that incorporating a weighting factor to Siamese networks brings considerable
performance gain at predicting how close input brain MRI images are to
progressing to AD. Moreover, we complement our results with an interpretation
of the learned embedding space of the Siamese networks using a model
explainability technique. | Misgina Tsighe Hagos, Niamh Belton, Ronan P. Killeen, Kathleen M. Curran, Brian Mac Namee | 2023-04-14T12:36:43Z | http://arxiv.org/abs/2304.07097v2 | # Weighted Siamese Network to Predict the Time to Onset of Alzheimer's Disease from MRI Images
###### Abstract
Alzheimer's Disease (AD), which is the most common cause of dementia, is a progressive disease preceded by Mild Cognitive Impairment (MCI). Early detection of the disease is crucial for making treatment decisions. However, most of the literature on computer-assisted detection of AD focuses on classifying brain images into one of three major categories: healthy, MCI, and AD; or categorising MCI patients into one of (1) _progressive_: those who progress from MCI to AD at a future examination time during a given study period, and (2) _stable_: those who stay as MCI and never progress to AD. This misses the opportunity to accurately identify the trajectory of progressive MCI patients. In this paper, we revisit the brain image classification task for AD identification and re-frame it as an ordinal classification task to predict _how close a patient is to the severe AD stage_. To this end, we select progressive MCI patients from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and construct an ordinal dataset with a prediction target that indicates the time to progression to AD. We train a siamese network model to predict the time to onset of AD based on MRI brain images. We also propose a weighted variety of siamese networks and compare its performance to a baseline model. Our evaluations show that incorporating a weighting factor to siamese networks brings considerable performance gain at predicting how close input brain MRI images are to progressing to AD. Code is shared online1.
Footnote 1: [https://github.com/Msgun/WeightedSiamese](https://github.com/Msgun/WeightedSiamese)
Keywords: Alzheimer's Disease, Mild Cognitive Impairment, Computer Assisted Diagnosis.
## 1 Introduction
Although it has been more than a century since Alois Alzheimer first described the clinical characteristics of Alzheimer's Disease (AD) [2], the disease still eludes |
2307.03979 | Attacking (EC)DSA scheme with ephemeral keys sharing specific bits | In this paper, we present a deterministic attack on (EC)DSA signature scheme,
providing that several signatures are known such that the corresponding
ephemeral keys share a certain amount of bits without knowing their value. By
eliminating the shared blocks of bits between the ephemeral keys, we get a
lattice of dimension equal to the number of signatures having a vector
containing the private key. We compute an upper bound for the distance of this
vector from a target vector, and next, using Kannan's enumeration algorithm, we
determine it and hence the secret key. The attack can be made highly efficient
by appropriately selecting the number of shared bits and the number of
signatures. | M. Adamoudis, K. A. Draziotis, D. Poulakis | 2023-07-08T14:07:55Z | http://arxiv.org/abs/2307.03979v1 | # Attacking (EC)DSA scheme with ephemeral keys sharing specific bits
###### Abstract.
In this paper, we present a deterministic attack on (EC)DSA signature scheme, providing that several signatures are known such that the corresponding ephemeral keys share a certain amount of bits without knowing their value. By eliminating the shared blocks of bits between the ephemeral keys, we get a lattice of dimension equal to the number of signatures having a vector containing the private key. We compute an upper bound for the distance of this vector from a target vector, and next, using Kannan's enumeration algorithm, we determine it and hence the secret key. The attack can be made highly efficient by appropriately selecting the number of shared bits and the number of signatures.
Key words and phrases: Digital Signature Algorithm; Elliptic Curve Digital Signature Algorithm; Lattice; LLL; Kannan's Enumeration Algorithm. 2010 Mathematics Subject Classification: 94A60
## 1. Introduction - Statement of results
In August 1991, the U.S. government's National Institute of Standards and Technology (NIST) proposed an algorithm for digital signatures. The algorithm is known as DSA, for Digital Signature Algorithm [26, 22, 18]. It is an efficient variant of the ElGamal digital signature scheme [8] intended for use in electronic mail, electronic funds transfer, electronic data interchange, software distribution, data storage, and other applications which require data integrity assurance and data authentication. In 1998, an elliptic curve analogue called Elliptic Curve Digital Signature Algorithm (ECDSA) was proposed and standardized [16, 17, 18].
### The (EC)DSA Signature Scheme
First, we recall the DSA scheme. The signer selects a prime \(p\) of size between 1024 and 3072 bits with increments of 1024, as recommended in FIPS 186-3 [9, page 15]. Also, he selects a prime \(q\) of size 160, 224 or 256 bits, with \(q|p-1\) and a generator \(g\) of the unique order \(q\) subgroup \(G\) of the multiplicative group \(\mathbb{F}_{p}^{*}\) of the prime finite field \(\mathbb{F}_{p}\). Furthermore, he randomly selects \(a\in\{1,\ldots,q-1\}\) and computes \(R=g^{a}\) mod \(p\). The public key of the signer is \((p,q,g,R)\) and his private key \(a\). He also publishes a hash function \(h:\{0,1\}^{*}\to\{0,\ldots,q-1\}\). To sign a message \(m\in\{0,1\}^{*}\), he randomly selects the ephemeral key \(k\in\{1,\ldots,q-1\}\) and computes \(r=(g^{k}\mod\,p)\mod\,q\) and \(s=k^{-1}(h(m)+ar)\) mod \(q\). The signature of \(m\) is \((r,s)\). The signature is accepted as valid if and only if the following holds:
\[r=((g^{s^{-1}h(m)\bmod\,\,\,q}R^{s^{-1}r\bmod\,\,\,q})\mod\,p)\mod\,q.\]
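For readers who prefer code, a minimal textbook sketch of the DSA signing and verification equations above is given below in Python. It assumes valid parameters \((p,q,g)\) with \(q\mid p-1\), takes the hash value \(h(m)\) as an integer, uses Python 3.8+ (`pow(x, -1, q)` for modular inverses), and omits the parameter generation, validation, and edge-case checks a real implementation requires.

```python
def dsa_sign(p, q, g, a, k, hm):
    """Sign hash value hm with private key a and ephemeral key k."""
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (hm + a * r)) % q   # s = k^{-1}(h(m) + a r) mod q
    return r, s

def dsa_verify(p, q, g, R, hm, r, s):
    """Check r == ((g^{s^{-1}h(m)} R^{s^{-1}r}) mod p) mod q."""
    w = pow(s, -1, q)
    u1, u2 = (hm * w) % q, (r * w) % q
    return r == (pow(g, u1, p) * pow(R, u2, p) % p) % q
```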
Next, let us recall the ECDSA scheme. The signer selects an elliptic curve \(E\) over \(\mathbb{F}_{p}\), a point \(P\in E(\mathbb{F}_{p})\) with order a prime \(q\) of size at least 160 bits. Following
FIPS 186-3, the binary size of \(p\) must belong to the set \(\{160,224,256,512\}.\) Further, he randomly chooses \(a\in\{1,\ldots,q-1\}\) and computes \(Q=aP\). Finally, he publishes a hash function \(h:\{0,1\}^{*}\to\{0,\ldots,q-1\}\). The public key of the signer is \((E,p,q,P,Q)\) and his private key \(a\). To sign a message \(m\), he selects randomly \(k\in\{1,\ldots,q-1\}\) which is the ephemeral key and computes \(kP=(x,y)\) (where \(x\) and \(y\) are regarded as integers between \(0\) and \(p-1\)). He computes \(r=x\mod q\) and \(s=k^{-1}(h(m)+ar)\mod q\). The signature of \(m\) is \((r,s)\). The verifier computes
\[u_{1}=s^{-1}h(m)\mod q,\ \ u_{2}=s^{-1}r\mod q,\ \ u_{1}P+u_{2}Q=(x_{0},y_{0}).\]
He accepts the signature if and only if \(r=x_{0}\mod q\).
### Previous Results
Researchers have explored various attacks on DSA schemes by analyzing the signature equation \(s=k^{-1}(h(m)+ar)\) mod \(q\) and using lattice reduction techniques such as LLL and CVP algorithms. One study focused on the use of a linear congruential pseudorandom number generator (LCG) for generating random numbers in DSA [3], showing that combining the DSA signature equations with LCG generation equations can lead to a system of equations that provide the secret key. To recover the secret key, several heuristic attacks have been proposed [15] in another study, which assume the revelation of a small fraction of the corresponding nonce \(k\). However, these attacks are based on heuristic assumptions, making it difficult to make precise statements on their theoretical behavior.
The first rigorous lattice attack on (EC)DSA was presented in [27]. The authors successfully decreased the security of (EC)DSA to a Hidden Number Problem (HNP), which can then be further reduced to an approximation Closest Vector Problem (CVP) for a specific lattice. The signer's secret key \(a\) can be computed using this reduction in polynomial time. The attack was also adapted to the case of ECDSA, as described in [28].
The paper [4] describes an attack on DSA schemes that uses the LLL reduction method and requires one message. By computing two short vectors of a three-dimensional lattice, the attack derives two lines intersecting at \((a,k)\), provided that \(a\) and \(k\) are sufficiently small and the second shortest vector is sufficiently short. If two messages are available, the same attack can be applied to derive a linear congruence relating the corresponding ephemeral keys.
The papers [29] and [6] describe attacks on DSA schemes using the LLL algorithm and one or two messages. In [29], the combination of LLL with algorithms for finding integral points of two classes of conics gives \(a\), provided that at least one of the sets \(\{a,k^{-1}\mod q\}\), \(\{k,a^{-1}\mod q\}\), \(\{a^{-1}\mod q,k^{-1}\mod q\}\) is sufficiently small. In [6], the Lagrange Reduction algorithm is applied on a 2-dimensional lattice defined by a signed message, and provides two straight lines intersecting at \((a,k)\). Similar attacks can be applied to the pairs \((k^{-1}\mod q,k^{-1}a\mod q)\) and \((a^{-1}\mod q,a^{-1}k\mod q)\). If two signed messages are available, the above two attacks can be applied to the equation relating the two ephemeral keys.
The article [7] presents an attack using Coppersmith's method to compute the secret key \(a\). The attack works when \(a\) and \(k\) satisfy a specific inequality, and in this case, the secret key \(a\) can be efficiently computed.
The article [30] describes an attack that involves constructing a system of linear congruences using signed messages. This system has at most one unique solution below a certain bound, which can be computed efficiently. Thus, if the length of a vector containing the secret and ephemeral keys of a signed message is quite small,
the secret key can be computed using the above system. The article [1] presents an improved version of this attack.
In [24, 25], the proposed attacks take advantage of the bits in the ephemeral key and of the Fast Fourier Transform.
In [32], it is shown, using lattice reduction under some heuristic assumptions, that partial information about the nonces of multiple signatures can lead to recovery of the full private key. The original approach to doing so is based on discrete Fourier analysis techniques [5, 2].
A very important issue is the attacks on cryptosystems based on the malicious modification of memory registers. These attacks may affect the randomness of the secret parameters, and so force certain bits of the ephemeral key to be equal, without their values being known. In [19], it is discussed how such attacks could occur in a real-life scenario. Following the line of research from [19], the authors of [10] focus on an attack scenario where ephemeral keys share specific bits, such as the least significant bits (LSB) and/or most significant bits (MSB), in one or multiple blocks. By eliminating the shared blocks of bits between the ephemeral keys, a lattice of dimension equal to the number of signatures is provided, which contains a quite short vector with components that reveal the secret key. Then, the LLL algorithm is used for the computation of this vector. Note that these attacks are based on heuristic assumptions. Later, in [11], the authors further improved upon the attack proposed in [10] by providing a probabilistic attack with a success probability approaching \(1\) when the pair \((\delta,n)\) is appropriately selected, where \(n\) represents the number of signatures, and \(\delta\) represents the number of shared bits in the ephemeral keys. This attack relies on a mild assumption regarding the hash function used in (EC)DSA.
### Our Contribution
Our study builds on the research presented in [10, 11], and we present a deterministic attack that, although not always polynomial in complexity, proves to be highly efficient in practical scenarios. Instead of using methods like LLL, approximate, or exact CVP, which were employed in previous attacks, we use enumeration on a suitable lattice to find lattice vectors that are close to a specific target vector. From these solutions, we can readily extract the secret key to the system.
It is important to highlight that the attacks presented in [10] rely on heuristics assumptions that aim to force the presence of a vector containing the private key as a solution to the Shortest Vector Problem (SVP) in a relatively large lattice. In [11], the authors provide a probabilistic approach to [10], where an assumption for the hash function is made and the attack is modelled as a Closest Vector Problem (CVP). Due to the computational complexity of finding such a vector using a deterministic algorithm, an approximation algorithm can be used instead.
Our approach takes a different path. We calculate a bound for the distance between the vector of the lattice containing the private key and a target vector. Then, we leverage Kannan's enumeration algorithm to determine this vector and, consequently, extract the secret key. Our experiments demonstrate that the attack can be made highly efficient by appropriately selecting values for \(\delta\) and \(n\). Finally, we improve the results provided in [11].
### Our results
In the subsequent Theorem, we apply the framework suggested by [11, 10, 19], which presupposes that we have access to a collection of signed
messages with ephemeral keys that are shorter than \(q\). These messages have some of their most and least significant bits in common, with a total of \(\delta\) bits shared.
**Theorem 1.1**.: _Suppose we have a \((EC)DSA\) scheme with a binary length \(\ell\) prime number \(q\) and secret key \(a.\) Let \(m_{j}\)\((j=0,\ldots,n)\) be messages signed with this scheme, \((r_{j},s_{j})\) their signatures, and \(k_{j}=\sum_{i=1}^{\ell}k_{j,i}2^{\ell-i}\) (where \(k_{j,i}\in\{0,1\}\)) are the corresponding ephemeral keys, respectively. Set \(A_{j}=-r_{j}s_{j}^{-1}\ \mathrm{mod}\ q\). Suppose that \(0<k_{j}<q\)\((j=0,\ldots,n)\), and there are integers \(\delta>0\) and \(0\leq\delta_{L}\leq\delta\) such that the following conditions hold:_
1. \(k_{0,i+1}=\cdots=k_{n,i+1}\)__\((i=1,\ldots,\delta-\delta_{L},\ell-\delta_{L},\ldots,\ell-1)\)_._
2. _For_ \(i=0,\ldots,n\)_, set_ \(C_{i,j}=(A_{j-1}-A_{i})2^{-\delta_{L}}\ \ \mathrm{mod}\ q\)_,_ \((j=1,\ldots,i)\)_, and_ \(C_{i,j}=(A_{j}-A_{i})2^{-\delta_{L}}\ \ \mathrm{mod}\ q\)__\((j=i+1,\ldots,n)\)_. The shortest vector of the lattice_ \(\mathcal{L}_{i}\) _spanned by the vectors_ \[(2^{\delta+1}q,0,\ldots,0),\ldots,(0,\ldots,0,2^{\delta+1}q,0),(2^{\delta+1}C_ {i,1},\ldots,2^{\delta+1}C_{i,n},1)\] _has length_ \[>\frac{1}{2}\,(2^{\delta+1}q)^{\frac{n}{n+1}}.\]
_Then, the secret key \(a\) can be computed in_
\[\mathcal{O}(2^{\ell-\delta n+2n}\,n\,((n\ell)^{c}\,2^{\mathcal{O}(n)}+\ell^{4} 2^{n}(n+1)^{\frac{n+1}{2}}))\]
_bit operations, for some \(c>0\)._
**Remark 1.1**.: By the Gaussian heuristic [14, Section 6.5.3], the length of a shortest vector of the lattice \(\mathcal{L}\) is expected to be \(>q^{n/(n+1)}\). Thus, the hypothesis (2) of Theorem 1.1 will very often be satisfied.
**Remark 1.2**.: In the above complexity estimate, if \(\ell\leq\delta n\), then the time complexity is polynomial in \(\ell\).
**Roadmap**. The paper is structured as follows: Section 2 presents an auxiliary lemma that will prove crucial in the proof of Theorem 1.1. Section 3 is dedicated to the proof of Theorem 1.1, providing a detailed explanation and justification. In Section 4, an attack on (EC)DSA, derived from Theorem 1.1, is presented. Additionally, several experiments are conducted to illustrate the effectiveness of the attack. Finally, Section 5 concludes the paper, summarizing the main findings and discussing potential avenues for future research.
## 2. Lattices
Let \(\mathcal{B}=\{\mathbf{b}_{1},\ldots,\mathbf{b}_{n}\}\subset\mathbbm{Z}^{n}\) be a basis of \(\mathbbm{R}^{n}\). An _n-dimensional lattice_ spanned by \(\mathcal{B}\) is the set
\[\mathcal{L}=\{z_{1}\mathbf{b}_{1}+\cdots+z_{n}\mathbf{b}_{n}/\ z_{1},\ldots,z_ {n}\in\mathbbm{Z}\}.\]
Recall that the scalar product of two vectors \(\mathbf{u}=(u_{1},\ldots,u_{n})\) and \(\mathbf{v}=(v_{1},\ldots,v_{n})\) in \(\mathbbm{R}^{n}\) is the quantity \(\langle\mathbf{u},\mathbf{v}\rangle=u_{1}v_{1}+\cdots+u_{n}v_{n}\), and the _Euclidean norm_ of a vector \(\mathbf{v}=(v_{1},\ldots,v_{n})\in\mathbbm{R}^{n}\) is the quantity
\[\|\mathbf{v}\|=\langle\mathbf{v},\mathbf{v}\rangle^{1/2}=(v_{1}^{2}+\cdots+v_ {n}^{2})^{1/2}.\]
The Gram-Schmidt orthogonalisation (GSO) of the basis \(\mathcal{B}\) is the orthogonal family \(\{\mathbf{b}_{1}^{\star},\ldots,\mathbf{b}_{n}^{\star}\}\) defined as follows:
\[\mathbf{b}_{i}^{\star}=\mathbf{b}_{i}-\sum_{j=1}^{i-1}\mu_{i,j}\mathbf{b}_{j}^{ \star},\quad\text{with}\quad\mu_{i,j}=\frac{\langle\mathbf{b}_{i},\mathbf{b}_{j }^{\star}\rangle}{\|\mathbf{b}_{j}^{\star}\|^{2}}\quad(j=1,\ldots,i-1).\]
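In code, the GSO can be computed directly from this recurrence. The following short sketch (plain Python with NumPy; illustrative only, with no size reduction or re-orthogonalisation for numerical stability) returns the vectors \(\mathbf{b}_{i}^{\star}\) as rows.

```python
import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalisation of the rows of B (no normalisation)."""
    B = np.asarray(B, dtype=float)
    Bstar = np.zeros_like(B)
    for i in range(len(B)):
        Bstar[i] = B[i]
        for j in range(i):
            mu = (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j])  # mu_{i,j}
            Bstar[i] -= mu * Bstar[j]
    return Bstar
```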
Let \(L\) be a lattice. If \(K\) is a convex body in \(\mathbbm{R}^{n+1}\) symmetric about the origin, we denote by \(\lambda_{i}(K,L)\)\((i=1,\ldots,n+1)\) the \(i\)th successive minimum of \(K\) with respect to \(L\) which it is defined as follows
\[\lambda_{i}(K,L)=\inf\{\lambda>0/\ (\lambda K)\cap L\ \text{contains}\ i\ \text{ linearly independent points}\}.\]
Further, we denote by \(s(L)\) the length of a shortest vector in \(L\).
**Lemma 2.1**.: _Let \(B_{\mathbf{v}}(R)\) be the closed ball of center \(\mathbf{v}\) and radius \(R\) in \(\mathbb{R}^{n+1}\) and \(L\) a lattice. Then, we have:_
\[|B_{\mathbf{v}}(R)\cap L|<\left(\frac{2R}{s(L)}+1\right)^{n+1}.\]
Proof.: Set
\[\mathcal{D}_{\mathbf{v}}(R)=\{\mathbf{x}-\mathbf{y}/\ \mathbf{x},\mathbf{y} \in B_{\mathbf{v}}(R)\}.\]
Then, \(\mathcal{D}_{\mathbf{v}}(R)\) is a convex body, symmetric about the origin. Then [21] implies:
\[|B_{\mathbf{v}}(R)\cap L|<\prod_{i=1}^{n+1}\left(\frac{1}{\lambda_{i}( \mathcal{D}_{\mathbf{v}}(R),L)}+1\right). \tag{2.1}\]
Let \(\mathbf{x},\mathbf{y}\in B_{\mathbf{v}}(R)\). Then, we have:
\[\|\mathbf{x}-\mathbf{y}\|\leq\|\mathbf{x}-\mathbf{v}\|+\|\mathbf{v}-\mathbf{y }\|\leq 2R.\]
It follows that \(\mathcal{D}_{\mathbf{v}}(R)\subseteq B_{\mathbf{0}}(2R)\), and so we deduce
\[\lambda_{1}(B_{\mathbf{0}}(2R),L)\leq\lambda_{i}(\mathcal{D}_{\mathbf{v}}(R), L)\ \ (i=1,\ldots,n+1). \tag{2.2}\]
Further, we have
\[\lambda_{1}(B_{\mathbf{0}}(2R),L)\geq s(L)/2R. \tag{2.3}\]
Combining the inequalities (2.1), (2.2) and (2.3), we obtain:
\[|B_{\mathbf{v}}(R)\cap L|<\left(\frac{2R}{s(L)}+1\right)^{n+1}.\]
## 3. Proof of Theorem 1.1
Let \(a\) be the secret key and \(k_{j},\ j=0,\ldots,n\) the ephemeral keys. We put \(A_{j}=-r_{j}s_{j}^{-1}\bmod\ q\) and \(B_{j}=-h(m_{j})s_{j}^{-1}\bmod\ q\) for \(j=0,\ldots,n.\) The signing equation for (EC)DSA provides that,
\[k_{j}+A_{j}a+B_{j}\equiv 0\ (\bmod\ q)\quad(j=0,\ldots,n). \tag{3.1}\]
Suppose first that \(k_{0}=\min\{k_{0},\ldots,k_{n}\}\). We set \(\delta_{M}=\delta-\delta_{L}.\) From the hypothesis of the Theorem we get
\[z_{j}=k_{j}-k_{0}=\varepsilon 2^{\ell-\delta_{M}-1}+\cdots+\varepsilon^{\prime }2^{\delta_{L}},\]
for some \(\varepsilon,\ldots,\varepsilon^{\prime}\in\{-1,0,1\}\). Every vector of the lattice \(\mathcal{L}\) of hypothesis (2) has the form \((2^{\delta+1}(x_{1}q+x_{n+1}C_{1}),\ldots,2^{\delta+1}(x_{n}q+x_{n+1}C_{n}),x_{n+1})\) for some integers \(x_{1},\ldots,x_{n+1}.\) By setting \((x_{1},\ldots,x_{n+1})=(c_{1},\ldots,c_{n},-a)\) for suitable integers \(c_{1},\ldots,c_{n}\), we get the lattice vector
\[{\bf u}=(2^{\delta+1}(c_{1}q-C_{1}a),\ldots,2^{\delta+1}(c_{n}q-C_{n}a),-a).\]
Further we consider the vector in the span of \(\mathcal{L},\)
\[{\bf v}=(2^{\delta+1}D_{1}+2^{\ell},\ldots,2^{\delta+1}D_{n}+2^{\ell},0).\]
Now, we have
\[{\bf u}-{\bf v}=(2^{\delta+1}(qc_{1}-C_{1}a-D_{1})-2^{\ell},\ldots,2^{\delta+1 }(qc_{n}-C_{n}a-D_{n})-2^{\ell},-a),\]
and inequalities (3.2) yield:
\[\|{\bf u}-{\bf v}\|<2^{\ell}\sqrt{n+1}. \tag{3.3}\]
Put \(R=2^{\ell}\sqrt{n+1}\). Then \({\bf u}\in B_{\bf v}(R)\).
Next, we compute a \(LLL\)-reduced basis for \(\mathcal{L}\), say \(\mathcal{B}=\{{\bf b}_{1},\ldots,{\bf b}_{n+1}\}\). This can be done in time \(\mathcal{O}(n^{6}(\log q)^{3})\). By hypothesis (2) of Theorem, we have:
\[s(\mathcal{L})>\frac{1}{2}\,(2^{\delta+1}q)^{\frac{n}{n+1}}.\]
Let \(\{{\bf b}_{1}^{*},\ldots,{\bf b}_{n+1}^{*}\}\) be the Gram-Schmidt orthogonalisation of \(\mathcal{B}\). By [14, Theorem 6.66], we get:
\[4\|{\bf b}_{i}^{*}\|^{2}\geq 2\|{\bf b}_{i-1}^{*}\|^{2}\geq\|{\bf b}_{i-1}\|^{2 }\geq s(\mathcal{L})^{2}.\]
Thus, we obtain:
\[\frac{1}{4}\,(2^{\delta+1}q)^{\frac{n}{n+1}}\leq\|\mathbf{b}_{i}^{*}\|\quad(i=1, \ldots,n+1). \tag{3.4}\]
Next, using Kannan's enumeration algorithm [12], we compute all the elements of \(B_{\mathbf{v}}(R)\cap\mathcal{L}\). Combining [13, Theorem 5.1] with the inequality (3.4), we obtain that the bit complexity of the procedure is
\[(n\log q)^{c}\ 2^{\mathcal{O}(n)}\left(\frac{2^{\ell+2}}{(2^{\delta+1}q)^{ \frac{n}{n+1}}}\right)^{n+1},\]
where \(c\) is a constant \(>0\). Then, for each \(\mathbf{u}\in B_{\mathbf{v}}(R)\cap\mathcal{L}\), we check whether its last coefficient equals \(-a\bmod q\), i.e., whether it reveals the secret key. Every such operation needs \(\mathcal{O}((\log q)^{4})\) bit operations [33, Lemma 6.2, p.237]. If none of the elements of \(B_{\mathbf{v}}(R)\cap\mathcal{L}\) gives the secret key, then we repeat the procedure assuming that \(k_{1}=\min\{k_{0},\ldots,k_{n}\}\), and we continue until we find the secret key. By Lemma 2.1, we have:
\[|B_{\mathbf{v}}(R)\cap\mathcal{L}|<\left(\frac{2^{\ell+2}\sqrt{n+1}}{(2^{ \delta+1}q)^{\frac{n}{n+1}}}+1\right)^{n+1}.\]
Thus, the overall bit complexity of the computation of \(a\) is
\[\mathcal{O}\bigg{(}n(n\log q)^{c}\ 2^{\mathcal{O}(n)}\left(\frac{2^{\ell+2}}{(2^ {\delta+1}q)^{\frac{n}{n+1}}}\right)^{n+1}+n\left(\frac{2^{\ell+2}\sqrt{n+1}}{ (2^{\delta+1}q)^{\frac{n}{n+1}}}+1\right)^{n+1}(\log q)^{4}\bigg{)},\]
whence the result.
## 4. The attack
The proof of Theorem 1.1 yields the following attack:
**Algorithm 4.1**.: ATTACK-DSA
_Input:_ Messages \(m_{j}\) (\(j=0,\ldots,n\)), their (EC)DSA signatures \((r_{j},s_{j})\), integers \(\delta>0\) and \(0\leq\delta_{L}\leq\delta\), and the public key \((p,q,g,R)\) (resp. \((E,p,q,P,Q)\)).
_Output:_ The private key \(a\).
1. For \(j=0,\ldots,n\) compute \(A_{j}=-r_{j}s_{j}^{-1}\bmod\ q\), \(B_{j}=-h(m_{j})s_{j}^{-1}\bmod\ q\).
2. For \(i=0,\ldots,n\), 1. For \(j=1,\ldots,i\) compute \[C_{i,j}=(A_{j-1}-A_{i})2^{-\delta_{L}}\mod\ q,\ \ D_{i,j}=(B_{j-1}-B_{i})2^{-\delta_{L}}\mod\ q,\] \[\text{ and for }j=i+1,\ldots,n\text{ compute}\] \[C_{i,j}=(A_{j}-A_{i})2^{-\delta_{L}}\mod\ q,\ \ D_{i,j}=(B_{j}-B_{i})2^{-\delta_{L}}\mod\ q.\] 2. Consider the lattice \(\mathcal{L}_{i}\) spanned by the rows of the matrix \[J_{i}=\left(\begin{array}{ccccc}2^{\delta+1}q&0&0&\ldots&0&0\\ 0&2^{\delta+1}q&0&\ldots&0&0\\ 0&0&2^{\delta+1}q&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&2^{\delta+1}q&0\\ 2^{\delta+1}C_{i,1}&2^{\delta+1}C_{i,2}&2^{\delta+1}C_{i,3}&\ldots&2^{\delta+1 }C_{i,n}&1\end{array}\right)\] and compute a \(LLL\)-basis \(\mathcal{B}_{i}\) for \(\mathcal{L}_{i}\).
* Consider the vector \(\mathbf{v}_{i}=(2^{\delta+1}D_{i,1}+2^{\ell},\ldots,2^{\delta+1}D_{i,n}+2^{\ell},0)\), and using Kannan's enumeration algorithm with basis \(\mathcal{B}_{i}\), compute all \(\mathbf{u}\in\mathcal{L}_{i}\) satisfying \(\|\mathbf{u}-\mathbf{v}_{i}\|<2^{\ell}\sqrt{n+1}\).
* Check whether the last coordinate of \(\mathbf{u}\), say \(u_{n+1}\), satisfies \(g^{-u_{n+1}}\equiv R\pmod{p}\) (resp. \((-u_{n+1})P=Q\)). If so, then return the secret key \(a=-u_{n+1}\bmod q\).
**Remark 4.1**.: For the Pseudocode of Kannan's Enumeration Algorithm, one can see [13, Section 5.1, Algorithm 10].
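The data of steps 1 and 2 are straightforward to assemble; the Python sketch below builds the basis matrix \(J_{i}\) (as a list of rows) and the target vector \(\mathbf{v}_{i}\) from the signatures. The function name and interface are illustrative, not part of the authors' released code; the enumeration of step 2(c) is not reproduced here and would in practice be delegated to a library routine (the authors' experiments use Sagemath). Python 3.8+ is assumed for `pow(x, -1, q)`.

```python
def attack_data(hashes, sigs, q, delta, delta_L, i):
    """Return (basis J_i, target v_i) for Algorithm 4.1.

    hashes: the integers h(m_j); sigs: the pairs (r_j, s_j), j = 0..n.
    """
    A = [(-r * pow(s, -1, q)) % q for (r, s) in sigs]
    B = [(-h * pow(s, -1, q)) % q for h, (r, s) in zip(hashes, sigs)]
    inv = pow(2, -delta_L, q)                      # 2^{-delta_L} mod q
    idx = list(range(i)) + list(range(i + 1, len(sigs)))
    C = [((A[j] - A[i]) * inv) % q for j in idx]
    D = [((B[j] - B[i]) * inv) % q for j in idx]
    n, w = len(C), 2 ** (delta + 1)
    # first n rows of J_i: w*q on the diagonal, zeros elsewhere
    basis = [[w * q if k == j else 0 for k in range(n + 1)] for j in range(n)]
    basis.append([w * c for c in C] + [1])         # last row of J_i
    ell = q.bit_length()
    target = [w * d + 2 ** ell for d in D] + [0]   # v_i
    return basis, target
```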
**Remark 4.2**.: Supposing that condition (2) is satisfied, taking \(n\) quite small and \(n\delta\geq\ell\), Theorem 1.1 implies that the attack is polynomial in \(\ell\). Furthermore, if \(s(\mathcal{L})\) is close to the Gauss heuristic, then the upper bound for the number of points of \(B_{\mathbf{v}}(R)\cap\mathcal{L}\) will be the smallest possible, and so it is expected that the attack will be quite efficient.
**Experiments.** We conducted a thorough analysis of our experiments, and we compared our results with those presented by Gomez et al. [11]. Our findings indicate a significant improvement in almost all cases. Our experiments were conducted on a Linux machine with an i5-12400 CPU, using Sagemath 9.8 [31]. We made the assumption that we already knew the minimum ephemeral key. However, in the general case, where the minimum key is unknown, we would need to perform \(n\) executions, where \(n+1\) represents the number of signatures. This worst-case scenario would require multiplying the execution time of each experiment by \(n\). Overall, our results demonstrate a notable improvement compared to the previous findings (see the Table below). Finally, we have successfully found the secret key even when the shared bits in the ephemeral keys are only \(5\). Remarkably, in this case, we only needed a minimum of \(58\) signatures. It is worth noting that in [11], no successful attack was provided for the specific scenario where \(\delta=5\).
**Table 1.** We considered \(\ell=160\) and \(R=2^{\ell}\sqrt{n+1}.\) We found the private key in every experiment we executed. Instead of LLL we used BKZ with block size \(8\). For each row, we conducted \(10\) random experiments, and the results in the third column were computed on average in under two minutes and thirty seconds. For the cases \(\delta\in\{22,24,26,28,30\}\) we get the same number of signatures as in [11]. See [https://github.com/drazioti/dsa/](https://github.com/drazioti/dsa/).

| \(\delta\): shared bits | signatures ([11]) | signatures (this paper) |
| --- | --- | --- |
| 5 | ? | 58 |
| 6 | \(\approx 50\) | 40 |
| 8 | \(\approx 27\) | 25 |
| 10 | \(\approx 20\) | 18 |
| 12 | \(\approx 17\) | 14 |
| 14 | \(\approx 14\) | 12 |
| 16 | \(\approx 12\) | 11 |
| 18 | \(\approx 11\) | 9 |
| 20 | \(\approx 10\) | 8 |
## 5. Conclusion
Attacks based on the malicious modification of memory registers are a topic of high importance, since such attacks may affect the randomness of the secret parameters by forcing a limited number of bits to a certain value, which can be unknown to the attacker. In this context, we developed a deterministic attack on the DSA schemes, provided that several signatures are available such that the corresponding ephemeral keys share a number of bits without their values being known.
Our attack is deterministic, meaning that it always produces the same result for a given input, although it may not always be practical to execute. Deterministic attacks on (EC)DSA are relatively rare, since most known attacks rely on heuristic assumptions. Nevertheless, our attack demonstrates practical feasibility in specific scenarios, surpassing previous results (see Table 1); its practicality and effectiveness, however, depend on the specific choice of \((\delta,n)\).
**Acknowledgement**
The author Marios Adamoudis is co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme "Human Resources Development, Education and Lifelong Learning" in the context of the Act "Enhancing Human Resources Research Potential by undertaking a Doctoral Research", Sub-action 2: IKY Scholarship Programme for PhD candidates in the Greek Universities.
|
2308.04893 | Electrofreezing of Liquid Water at Ambient Conditions | Water is routinely exposed to external electric fields (EFs). Whether, e.g.,
at physiological conditions, in contact with biological systems, or at the
interface of polar surfaces in countless technological and industrial settings,
water responds to EFs on the order of a few V/{\AA} in a manner that is still
under intense investigation. Dating back to the $19^{th}$ century, the
possibility of solidifying water upon applying an EF instead of adjusting
temperature and pressure -- a process known as electrofreezing -- is an
alluring promise that has canalized major efforts since, with uncertain
outcomes. In this work, we perform long \emph{ab initio} molecular dynamics
simulations of water at ambient conditions exposed to EFs of different
intensities. While the response of single water molecules is almost
instantaneous, the cooperativity of the hydrogen bonds induces slower
reorganizations that can be captured by dividing the trajectories into disjoint
time windows and by performing analysis on each of them separately. Upon
adopting this approach, we find that EFs of $0.10\leq$EFs$\leq0.15$~V/{\AA}
induce electrofreezing occurring after $\sim150$~ps. We observe a continuous
transition to a disordered state characterized by frozen dynamical properties,
damped oscillations, lower energy, and enhanced local structural properties.
Therefore, we ascribe this state to a new ferroelectric amorphous phase, which
we term f-GW (ferroelectric glassy water). Our work represents the first
evidence of electrofreezing of liquid water at ambient conditions and therefore
impacts several fields, from fundamental chemical physics to biology and
catalysis. | Giuseppe Cassone, Fausto Martelli | 2023-08-09T11:48:58Z | http://arxiv.org/abs/2308.04893v1 | # Electrofreezing of Liquid Water at Ambient Conditions
###### Abstract
Water is routinely exposed to external electric fields (EFs). Whether, e.g., at physiological conditions, in contact with biological systems, or at the interface of polar surfaces in countless technological and industrial settings, water responds to EFs on the order of a few V/Å in a manner that is still under intense investigation. Dating back to the \(\mathbf{19^{th}}\) century, the possibility of solidifying water upon applying an EF instead of adjusting temperature and pressure - a process known as electrofreezing - is an alluring promise that has channeled major efforts since, with uncertain outcomes. In this work, we perform long _ab initio_ molecular dynamics simulations of water at ambient conditions exposed to EFs of different intensities. While the response of single water molecules is almost instantaneous, the cooperativity of the hydrogen bonds induces slower reorganizations that can be captured by dividing the trajectories into disjoint time windows and by performing analysis on each of them separately. Upon adopting this approach, we find that EFs of \(\mathbf{0.10\leq}\)EFs\(\mathbf{\leq 0.15}\) V/Å induce electrofreezing occurring after \(\mathbf{\sim 150}\) ps. We observe a continuous transition to a disordered state characterized by frozen dynamical properties, damped oscillations, lower energy, and enhanced local structural properties. Therefore, we ascribe this state to a new ferroelectric amorphous phase, which we term f-GW (ferroelectric glassy water). Our work represents the first evidence of electrofreezing of liquid water at ambient conditions and therefore impacts several fields, from fundamental chemical physics to biology and catalysis.
**Keywords: Water, Amorphous Ice, Electric Field, Density Functional Theory, Electrofreezing**
## 1 Introduction
With at least 20 known crystalline forms and counting, the baroque phase diagram of water is the most complex of any pure substance [1] and is continuously under construction. Two amorphous ices, a low-density amorphous (LDA) and a high-density amorphous (HDA) ice [2], encompass a large set of sub-classes [3]; a third, medium density amorphous ice has recently been proposed [4], while a plastic amorphous ice has been suggested to exist at high pressures [5]. Water is also routinely exposed to external electric fields (EFs). The range of strengths \(0.1-1\) V/Å is particularly relevant, as it represents the range continuously produced by molecular dipole fluctuations [6] in aqueous solutions [7, 8, 9] and to which water is exposed in countless technological/industrial settings [10]. Recent developments have shown that the reaction rates of common organic reactions can be increased by one to six orders of magnitude upon applying external EFs [11, 12, 13, 14, 15, 16], hence paving the way to the adoption of EFs as efficient catalysts. Comparable EFs, generated by charge separation, endow microdroplets with strong and surprising catalytic power [17, 18, 19, 20].
Historically, the possibility of manipulating water kinetics via EFs was first proposed by Dufour in 1862 [21]. As experimental techniques matured over the years, such an opportunity became more tangible: the role of EFs on the heterogeneous nucleation of ice in cirrus clouds was addressed in the 1960s [22], and several other investigations followed, starting a vivid scientific debate [23, 24, 25, 26, 27, 28]. Recently, Ehre et al. [29] have shown that the kinetics of electrofreezing of supercooled water on pyroelectric materials is highly heterogeneous, favoring the crystallization on positively charged surfaces. Early - and pioneering - computational investigations based on classical molecular dynamics simulations also joined forces. According to these studies, liquid water undergoes electrofreezing to crystalline ice when exposed to external static EFs in the order of \(\sim 0.5\) V/A [30, 31], and the effects of oscillating EFs have also been investigated [32]. On the other hand, _ab initio_ molecular dynamics (AIMD), which account for chemical reactions, and experiments have more recently shown that \(\sim 0.3\) V/A represents a threshold above which water molecules undergo dissociation into oxonium (H\({}_{3}\)O)\({}^{+}\) and hydroxide (OH)\({}^{-}\) ions [33, 34, 35, 36]. Seemingly, below this threshold, thermal energy and the associated large intrinsic field fluctuations taking place at the molecular scale impede the ordering of the hydrogen bond network (HBN), a necessary step for crystallization to occur. This task can instead be achieved, according to classical simulations, upon tuning the working pressure to \(\sim 5\) kPa and imposing external EFs of \(\sim 0.2\) V/A [37].
The application of EFs to liquid water induces a fast response of water molecules which align their dipole parallel to the field direction. On the other hand, the intrinsic cooperativity of HBs acts as a competing force, slowing down the relaxation and, in turn, driving the sample out of equilibrium. Therefore, in order to follow the response
of water, it is necessary to probe the system at non-overlapping time windows rather than averaging over the entire simulation, and this is the paradigm we have decided to adopt (we report, in the SI, a comparison between quantities of interest computed at disjoint time windows and averaged over entire trajectories). In this study, we perform long (\(\sim 250\) ps) AIMD simulations and show that EFs in the order of 0.10 \(\leq\)EFs\(\leq\) 0.15 V/Å induce a structural transition to a new ferroelectric glassy state that we will call f-GW (ferroelectric glassy water). This transition occurs after \(\sim 150\) ps and is signaled by the freezing of the translational degrees of freedom, the suppression of the fluctuations of the HBN, and the drop in the potential energy. Our work represents the first evidence of electrofreezing of liquid water occurring at _ambient conditions_.
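Operationally, this windowed analysis only requires slicing the stored trajectory by frame index. A minimal Python sketch is given below; the frame spacing, the window length, and the analysis callback are placeholder assumptions, not details of the production workflow.

```python
import numpy as np

def analyze_in_windows(frames, dt_fs, window_ps=50.0, analysis=np.mean):
    """Apply `analysis` separately to consecutive, non-overlapping windows.

    frames    : per-frame data, e.g. an array of shape (n_frames, ...)
    dt_fs     : spacing between stored frames, in femtoseconds
    window_ps : window length in picoseconds (50 ps in the main text)
    """
    n_win = int(round(window_ps * 1000.0 / dt_fs))   # frames per window
    return [analysis(frames[i:i + n_win])
            for i in range(0, len(frames) - n_win + 1, n_win)]
```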
## 2 Results
In Fig. 1 we report the infrared (IR) spectra for bulk water without field (violet line) and for increasingly higher applied fields (0.05 V/Å, blue; 0.10 V/Å, orange; 0.15 V/Å, red) computed over the time window \([201-250]\) ps of the respective trajectories. In the absence of applied fields, the position of the OH stretching band - located at 3220 cm\({}^{-1}\) - and that of the low-frequency libration mode - at 560 cm\({}^{-1}\) - are in good agreement with the experimental data [38]. Upon exposure of the water sample to external EFs, we observe a contraction of the frequency range ascribed to the vibrational Stark effect [39], as also reported in Ref. [40] and - on limited frequency domains - in Ref. [41]. This contraction indicates that the field imposes novel selection rules on the molecular vibrations. The largest frequency shift is associated with the OH stretching band; the corresponding red-shift is in the order of \(\sim 75\) cm\({}^{-1}\) per 0.05 V/Å, up to an EF of 0.10 V/Å. However, a milder further red-shift in the order of 35 cm\({}^{-1}\) occurs at a field of 0.15 V/Å. The red-shift of the OH stretching is generally associated with stronger hydrogen bonds (HBs) [42] and the development of more "ice-like" environments [43]. The reduced magnitude of the relative red-shift upon increasing the field from 0.10 V/Å to 0.15 V/Å, as quantified by the difference in frequencies reported in the inset of Fig. 1, suggests that the effect of the applied field becomes less intense. Moving towards lower frequencies, the weak libration+bending combination mode band at 2200 cm\({}^{-1}\) is commonly associated with the strength of the hydrogen bond network (HBN) [44]. The presence of the external EFs induces an enhancement and a concurrent slight blue-shift of this band, further suggesting that the EF causes a strengthening of the HBN. A similar effect has been reported on the IR spectra of water undergoing supercooling [45] where the strengthening of the HBN is, instead, induced by the reduction of thermal energy. A stronger effect on the vibrational spectrum occurs at lower frequencies, the signature of librational modes. The application of an external EF induces a significant blue-shift and, at 0.10 V/Å and 0.15 V/Å, the development of a clear new band peaked at \(\sim 1000\) cm\({}^{-1}\). This band has been ascribed to the breaking of the isotropy of molecular rotations and the preferential alignment with the field direction [40], as shown in Fig. S1 of the SI.
The picture emerging from the inspection of the IR spectra, therefore, indicates that the EF affects the topology of the HBN in several ways: the red-shift of the OH
stretching band occurring upon increasing the applied EF is indicative of a strengthening of the HBs, while the blue-shift of the libration+bending combination mode band and that of the librations at lower frequencies suggests some degree of ordering of the HBN. At the same time, the appearance of a new peak in the librational band indicates an alignment of the molecular dipoles along the field direction.
The strengthening of the HBs (or their stiffening, as shown in Fig. S7) and the alignment of the molecular dipoles along the field direction mirror an enhancement of spatial correlations that also persists over time. In order to test this hypothesis, we report, in Fig. 2, the \(G_{OO}(r,t)\), the Van Hove correlation function computed between oxygen atoms only and in the same time window \([201-250]\) ps on which we have computed the IR spectra shown in Fig. 1. In Fig. 2 a) we report \(G_{OO}(r,t)\) in the absence of external fields. We can observe that weak spatial correlations in the regions around \(\sim 2.8\) Å and \(\sim 4.5\) Å, corresponding to the first and second shells of neighbours, rapidly wear off on timescales of \(\sim 5-10\) ps. The application of a field of 0.05 V/Å, reported in panel b), induces an extension of spatial correlations over slightly longer timescales. Radically stronger responses are induced by more intense fields: a field of 0.10 V/Å (panel c)) and a field of 0.15 V/Å (panel d)) clearly strengthen spatial correlations between \(\sim 2-3\) Å and \(4-5\) Å and extend them to timescales above \(\sim 35\) ps.
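A schematic, unnormalized Python estimator of \(G_{OO}(r,t)\) from stored oxygen coordinates is sketched below; the orthorhombic minimum-image convention, the array layout, and the sparse choice of time origins are assumptions of the sketch rather than details of the production analysis.

```python
import numpy as np

def van_hove_oo(pos, box, r_edges, n_lags, origin_stride=10):
    """Unnormalized oxygen-oxygen Van Hove function G_OO(r, t).

    pos     : (n_frames, n_O, 3) oxygen coordinates in an orthorhombic box
    box     : (3,) box edge lengths
    r_edges : histogram bin edges for the distance r
    Returns (n_lags, n_bins): average number of O-O pairs per molecule at
    separation r after a delay of t frames (self part included).
    """
    n_frames, n_o, _ = pos.shape
    g = np.zeros((n_lags, len(r_edges) - 1))
    for t in range(n_lags):
        origins = range(0, n_frames - t, origin_stride)
        for t0 in origins:
            d = pos[t0][:, None, :] - pos[t0 + t][None, :, :]
            d -= box * np.round(d / box)            # minimum image
            r = np.linalg.norm(d, axis=-1)
            g[t] += np.histogram(r, bins=r_edges)[0]
        g[t] /= len(origins) * n_o
    return g
```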
By projecting the partial Van Hove correlation functions on the reduced domain constituted by the spatial distances only (i.e., by removing the temporal dependence), we obtain the oxygen-oxygen radial distribution functions \(g_{OO}(r)\). In Fig. 3 a) we report the \(g_{OO}(r)\) computed in the time window \([201-250]\) ps. Without any applied field (violet), the \(g_{OO}(r)\) is that of bulk liquid water with a first peak located at \(\sim 2.8\) A and a second peak at \(\sim 4.5\) A. Adding a small EF of 0.05 V/A (blue) we observe an increase in the intensity of both the first and the second peaks with a reduction of the population between the first and second peaks. Upon doubling the intensity of the field and reaching 0.10 V/A (orange) we observe an enhanced increase in the intensity of both the first and second peaks and a further depletion of water molecules populating the interstitial region. An additional increase in the field intensity to 0.15 V/A (red) does not show appreciable changes in the \(g_{OO}(r)\) with respect to the previous case, suggesting that no further major structural changes occur in the sample. Fig. S4 of the SI reports the \(g_{OO}(r)\) computed in consecutive time windows of 50 ps starting from the beginning of our simulations, hence providing a glimpse of the dynamical structural transformations. In agreement with the profiles of the Van Hove functions, it is possible to observe that the \(g_{OO}(r)\) for 0.05 V/A converges to the same profile after 50 ps (Fig. S3-a), while convergence is achieved only after 150 ps for 0.10 V/A (S3-b) and 200 ps for 0.15 V/A (S3-c). We notice, at this point, that the \(g_{OO}(r)\) at fields of 0.10 V/A and 0.15 V/A at convergence, i.e., after 200 ps of simulation, strikingly resemble the \(g_{OO}(r)\) of supercooled water or that of low-density amorphous (LDA) ice [3]. This comparison is, instead, less accurate if one does not take into account the out-of-equilibrium nature that drives the process, and computes the radial distribution functions over the entire trajectories, as reported in Fig. S10-a as well as in several previous works. In order to rule out the effect of the simulation box, we have performed longer simulations (up to \(\sim 500\) ps) for systems with 256 H\({}_{2}\)O
molecules at densities of 0.92 g/cm\({}^{3}\) and 0.95 g/cm\({}^{3}\). Our results, reported in Fig. S8 of the SI, show that the development of a glassy-like \(g_{OO}(r)\) is independent of the system size and density.
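For completeness, a minimal \(g_{OO}(r)\) estimator under the same assumptions (orthorhombic box, frames restricted to the time window of interest) could look as follows:

```python
import numpy as np

def g_oo(pos, box, r_edges):
    """Oxygen-oxygen radial distribution function averaged over frames.
    pos: (n_frames, n_O, 3); box: (3,) orthorhombic edge lengths."""
    n_frames, n_o, _ = pos.shape
    rho = n_o / np.prod(box)                        # number density
    hist = np.zeros(len(r_edges) - 1)
    iu = np.triu_indices(n_o, k=1)                  # each pair once
    for x in pos:
        d = x[:, None, :] - x[None, :, :]
        d -= box * np.round(d / box)                # minimum image
        r = np.linalg.norm(d, axis=-1)[iu]
        hist += np.histogram(r, bins=r_edges)[0]
    shell = 4.0 * np.pi / 3.0 * (r_edges[1:]**3 - r_edges[:-1]**3)
    ideal = 0.5 * n_o * rho * shell                 # ideal-gas pair counts
    return hist / (n_frames * ideal)
```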
Considering the high computational cost of performing AIMD simulations, we cannot produce an equilibrated supercooled sample or an LDA via realistic quenching rates to compare the relative radial distribution functions. Therefore, in order to understand whether our \(g_{OO}(r)\)s belong to a glassy sample or to a supercooled sample, we look at dynamical properties, namely the diffusivity measured via the mean squared displacement (MSD). Our results are reported in Fig. 3 b). We stress here that, like the \(g_{OO}(r)\), the MSD is computed on the time window \([201-250]\) ps. It is possible to appreciate how the slope of the MSD drastically drops as soon as we introduce an EF. In the presence of a weak field of 0.05 V/Å (blue) the sample is still liquid, although the mobility is strongly reduced compared to the case without field (violet). Upon increasing the field to 0.10 V/Å (orange) and to 0.15 V/Å (red) the MSD profiles indicate that water's translational degrees of freedom are confined to molecular vibration and to the rattling within the cage of the local neighbourhood. Computing
Figure 1: Infrared (IR) absorption spectra of liquid water determined at zero field (violet line) and under different field intensities as detailed in the legend. Arrows are guides for the eye qualitatively following the field-induced modifications of the bands. In the inset, we report the vibrational Stark effect of the OH stretching band. Data are computed on the time window \([201-250]\) ps.
the MSD over wider time windows implies accounting for the contribution of water molecules still in the liquid phase, hence artificially increasing the slope of the MSD, as shown in Fig. S10-b. We posit that this might be one of the reasons why the f-GW phase has been overlooked in previous studies.
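The window-restricted MSD and the Einstein-relation diffusion coefficient shown in the inset of Fig. 3 b) can be estimated as sketched below; for brevity a single time origin is used, whereas production analyses average over many origins, and unwrapped coordinates are assumed.

```python
import numpy as np

def msd(unwrapped, dt_ps):
    """MSD from unwrapped oxygen coordinates (n_frames, n_O, 3), using the
    first frame of the time window as the single time origin."""
    disp = unwrapped - unwrapped[0]
    msd_t = (disp ** 2).sum(axis=-1).mean(axis=1)
    return np.arange(len(msd_t)) * dt_ps, msd_t

def diffusion_coefficient(t, msd_t, fit_tail=0.5):
    """Einstein relation D = slope / 6 from a linear fit of the late-time
    portion (last `fit_tail` fraction) of the MSD."""
    mask = t >= (1.0 - fit_tail) * t[-1]
    slope = np.polyfit(t[mask], msd_t[mask], 1)[0]
    return slope / 6.0
```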
In Fig. 4 we report the profile of the potential energy computed performing single point calculations on 1000 configurations randomly chosen within the time window \([201-250]\) ps. Panel a) reports the profile as a function of the chosen molecular configurations. Without any applied field (violet) the potential energy fluctuates around the dashed violet line. Upon introducing a field of 0.05 V/A (blue) we observe a decrease in potential energy for almost all configurations, with an average value (dashed blue line) sitting below the case of water without field. Stronger drops in potential energy occur in the presence of EFs of 0.10 V/A (orange) and of 0.15 V/A (red). In panel b) we report the average potential energy - relative to the zero-field case in kcal/mol - as a function of the field strength. The drop in potential energy is clearly visible and shows how EFs of 0.10 V/A and 0.15 V/A drag the system into lower potential energy
Figure 2: Partial Van Hove correlation functions between the oxygen atoms (i.e., \(G_{OO}(r,t)\)) as a function of time and intermolecular distance in the absence of the field (a) and in the presence of static electric fields with intensities equal to 0.05 (b), 0.10 (c), and 0.15 V/Å (d). Data are computed on the time window \([201-250]\) ps.
basins [46; 47]. It is worth noticing that the reduction in potential energy occurs along with a reduction of \(\sim 14\%\) of the entropy, as reported in Ref. [48] for the same system and numerical setups.
The amorphization of liquid water involves a substantial change in the fluctuations and topology of the HBN [49], which can be quantitatively inspected via the ring statistics. Therefore, in order to confirm that the structural and dynamical changes induced by the EFs indeed prompt a rearrangement in the HBN, we compute \(P(n)\),
Figure 4: Potential energy computed via single point calculations on 1000 configurations randomly chosen within the time window \([201-250]\) ps of each simulation. (a) Profile of the potential energy in a.u. for water without field (violet), water in the presence of 0.05 V/Å (blue), water in the presence of 0.10 V/Å (orange), and water in the presence of 0.15 V/Å (red). Dashed lines represent the average value. (b) Average potential energy relative to the zero-field case in kcal/mol as a function of the field strength.
Figure 3: (a) Oxygen-oxygen radial distribution functions at different electric field strengths (see legend). Dashed arrows qualitatively depict field-induced modulation of the hydration shells. (b) Mean squared displacement (MSD) of the oxygen atoms at various field intensities (see legend). In the inset, a logarithmic plot of the self-diffusion coefficient of the oxygen atoms as a function of the field strength. Data are computed on the time window \([201-250]\) ps.
the normalized probability of having a ring of length \(n\in[3,10]\) in time windows of 50 ps. In hexagonal/cubic ice at 0 K and without defects, the \(P(n)\) is centered at \(n=6\), indicating that only hexagons are present. In Fig. 5 we report \(P(n)\) for strengths of 0.05 V/A, 0.10 V/A, and 0.15 V/A. Each case is reported against the \(P(n)\) determined in the absence of the EF (cyan circles). In the case of 0.05 V/A during the first 50 ps (black circles, panel a)), we can observe that the topology of the HBN overlaps almost perfectly with that of liquid water. Upon increasing the simulation time, the topology of the HBN responds to the presence of the field by increasing the number of hexagonal and heptagonal rings while reducing the number of longer rings. Overall, the response of the HBN to the presence of a weak field resembles the transformation of the HBN topology upon cooling [49, 50].
Upon doubling the field intensity to 0.10 V/A (panel b)), the topology of the HBN drastically changes even within the first 50 ps of simulation. In particular, we observe an increase in hexagonal and heptagonal rings with a corresponding decrease in longer rings. At consecutive simulation time windows, we observe a further sharpening of the \(P(n)\) with a considerable increase of hexagonal rings and a depletion of octagonal and longer rings. The topology of the HBN within the last 50 ps of our simulation is remarkably similar to that of LDA (obtained from classical simulations [49]).
A similar behaviour occurs when we apply a field of 0.15 V/A (panel c)): the HBN reacts to the presence of the field increasing the population of hexagonal and heptagonal rings while decreasing the population of longer rings. Upon increasing the simulation time, the topology of the HBN further increases the population of hexagonal rings while decreasing longer rings, including heptagonal rings.
The gradual rearrangement of the topology of the HBN described above occurs on slower timescales compared to the alignment of water's dipole moment (see Fig. S3) and clearly shows that, although single water molecules react very quickly to the presence of EFs, the overall network of bonds reorganizes itself into new steady configurations on longer times, as also partially reported in Ref. [51]. Such time-dependence, key in our investigation, can be seen in the gradual build-up of four-coordinated water molecules shown in Fig. S9. This gradual build-up in time leads to an increase in four-coordinated molecules up to 15% from the early stages of the simulation. Such an increase in the percentage of four-coordinated environments also induces a gradual enhancement of the local order. We report, in Fig. S6, \(P(I)\), the probability distribution of the local structure index \(I\) estimated on consecutive windows of 50 ps. It is possible to appreciate the development of bimodality in the later stages of our simulations for fields of 0.10 V/A (middle panel) and 0.15 V/A (lower panel). The lower panel of Fig. S6 reports a comparison between \(P(I)\) computed in the time window \([201-250]\) ps for 0.15 V/A and for LDA at \(T=200\) K obtained from classical molecular dynamics simulations. Despite the differences in simulation techniques, the local structure of liquid water under EF strongly resembles that of LDA.
The information collected so far indicates that our samples gradually readjust to the presence of external EFs. The slow evolution in time involves (i) the gradual development of four-folded configurations interacting via stronger HBs, (ii) the congruent development of more ordered local environments, (iii) the slow reduction of translational and rotational degrees of freedom, (iv) a drop in the potential energy, and (v) the
gradual rearrangement of the HBN topology towards configurations richer in hexagonal rings. Eventually, after exposing the samples of liquid water to a field of \(0.15\) V/Å for \(\sim 150\) ps, we observe a complete freezing of translational degrees of freedom, hence suggesting that our sample might be a glass. Although the definition of glassy water is precise (molecular relaxation time exceeding \(100\) s or shear viscosity reaching \(10^{13}\) poise), our simulations are too short to access these quantities. On the other hand, it has been recently shown that the transition to glass upon quenching liquid water is clearly signaled by the damping in the fluctuations of the HBN topology [49], which we here evaluate and report in Fig. 6 for the three cases in the presence of the EF and against the fluctuations computed in liquid water without EF (cyan circles). For all cases, we determine \(\sigma(n)\) in time windows of \(50\) ps. In the case of \(0.05\) V/Å, we can observe that, with respect to the case in the absence of the field, the fluctuations are strongly damped except for hexagonal and pentagonal rings, which fluctuate in a comparable measure. Upon increasing the simulation time, the fluctuations of the HBN are reduced in all cases except for the hexagonal rings, which become increasingly
Figure 5: Probability distribution \(P(n)\) of having a ring of length \(n\in[3,10]\) computed at different time windows during our simulations. The upper panel refers to the applied field \(E=0.05\) V/Å, the middle panel to the applied field \(E=0.10\) V/Å, the lower panel to the applied field \(E=0.15\) V/Å. The cyan circles refer to the zero-field case. The black squares refer to the first \(50\) ps, red diamonds to the time window \(51-100\) ps, blue upper triangles to the time window \(101-150\) ps, the left magenta triangles to the time window \(151-200\) ps, and the green lower triangles to the window \(201-250\) ps. The dashed arrows emphasize the change in \(P(n)\) at consecutive time windows.
enhanced with the simulation time. Considering that the sample is liquid (although with strongly reduced diffusion), we posit that the diffusion occurs via changes in the HBN mostly involving hexagonal rings.
At \(0.10\) V/A and \(0.15\) V/A, we observe a drastic suppression of the fluctuations of the HBN, to values well below those of the liquid. Such marked reduction of the fluctuations is responsible for the suppression of long-range density fluctuations occurring in correspondence with the transition to glassy water [49, 52], a characteristic that differentiates liquid water from glassy states [53]. Therefore, our findings are strongly indicative of a transition to a glass.
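The damping of the network fluctuations can be quantified directly from per-frame ring counts; a minimal sketch, under the assumption that \(\sigma(n)\) is taken as the standard deviation of the ring populations across the frames of a window, is:

```python
import numpy as np

def ring_fluctuations(per_frame_counts, sizes=range(3, 11)):
    """sigma(n): standard deviation over the frames of a window of the
    number of rings of each size n.
    per_frame_counts: list of dicts {ring size: count}, one per frame."""
    return {n: float(np.std([c.get(n, 0) for c in per_frame_counts]))
            for n in sizes}
```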
## 3 Discussion
In this work, we have performed long _ab initio_ simulations of bulk water at _ambient conditions_ in the presence of applied external electric fields (EFs) in the range
Figure 6: Fluctuations \(\sigma(n)\) computed on the ring statistics at different time windows during our simulations. The upper panel refers to the applied field of \(E=0.05\) V/Å, the middle panel to the applied field of \(E=0.10\) V/Å, and the lower panel to the applied field of \(E=0.15\) V/Å. The cyan circles refer to the case of no field. The black squares refer to the first \(50\) ps, red diamonds to the time window \(51-100\) ps, blue upper triangles to the time window \(101-150\) ps, the left magenta triangles to the time window \(151-200\) ps, and the green lower triangles to the window \(201-250\) ps. The dashed arrows emphasize the change in \(\sigma(n)\) at consecutive time windows.
\(0.05\leq\)EFs\(\leq 0.15\) V/Å. We have inspected the out-of-equilibrium process at disjoint time windows and recorded the results for each window. In the presence of an EF of \(0.05\) V/Å, the dipoles align along the direction parallel to the EF while the diffusivity becomes sluggish. Overall, the inspected quantities computed within the last \(\sim 150\) ps of simulation are stable in time, indicating that the system is genuinely a liquid.
Upon increasing the EF to \(0.10\) V/A and to \(0.15\) V/A, we observe a transition to a new ferroelectric glass that we call f-GW (ferroelectric glassy water). The amorphization occurs after \(\sim 150-200\) ps and is signaled by the freezing of the translational degrees of freedom and a drop in the potential energy, indicating that the sample has reached a metastable basin on the potential energy landscape. The evolution in time of the radial distribution functions and of other structural descriptors report an enhancement of the first and the second shells of neighbours along with a drastic depletion of the entries populating the space between them, as expected in the low-density glassy water. Similarly, the hydrogen bond network (HBN) undergoes a progressive structural reorganization favoring hexagonal motifs and a corresponding suppression of its fluctuations, as expected in the low-density glassy water state [49].
Our work represents the first evidence of electrofreezing of liquid water at _ambient conditions_, a task that has been attempted since 1862 [21]. The new f-GW phase can be unveiled only by accessing and isolating late portions of long AIMD simulations, and is a new tile in the complex phase diagram of water. Therefore, it enriches our understanding of the physics of this complex material. Nonetheless, the conditions explored in this work are ubiquitous in industrial and natural settings, fields that can potentially benefit from this work. For example, water is routinely exposed to natural EFs comparable to the ones explored in this work when at the interface with enzymes, proteins, and biological membranes, defining the biological functionality and stabilizing such complex structures.
We infer that an experimental validation of our finding and the realization of the f-GW phase might be relatively straightforward by exploiting modern experimental settings. Many laboratories are nowadays capable of quantifying the field strengths generated in the proximity of emitter tips [10, 13, 54] - such as those established by STM and AFM apparatus - which fall in the same range required to transition to the new f-GW. Nonetheless, we posit that lower fields may induce electrofreezing to f-GW on longer time scales, accessible to accurate interaction potentials such as, e.g., MB-Pol [55] or Neural Network potentials [56].
## 4 Methods
### Numerical simulations
We performed _ab initio_ molecular dynamics (AIMD) simulations using the software package CP2K [57], based on the Born-Oppenheimer approach. The external electric fields (EFs) are static, homogeneous and directional (i.e., along the \(z\)-axis). The implementation of external EFs in Density Functional Theory (DFT) codes can be achieved _via_ the modern theory of polarization and Berry's phases [58, 59, 60]. In particular, owing to the seminal work carried out by Umari and Pasquarello [61], nowadays AIMD simulations under the effect of static EFs with periodic boundary conditions
are routinely performed. The reader who is interested in the implementation of EFs in atomistic simulations can refer to the following literature: Refs. [58; 59; 61; 62; 63; 64; 65; 66]. The main simulation here presented consists of a liquid water sample containing 128 H\({}_{2}\)O molecules arranged in a cubic cell with side parameter \(a=15.82\) A, so as to reproduce a density of 0.97 g\(\cdot\)cm\({}^{-3}\). Furthermore, additional simulations were executed on bigger cubic cells composed of 256 water molecules and having edges of 20.05 A and 20.26 A. In such a case, lower densities of 0.95 g\(\cdot\)cm\({}^{-3}\) and 0.92 g\(\cdot\)cm\({}^{-3}\) were simulated, respectively. To minimize undesirable surface effects, the structures were replicated in space by employing periodic boundary conditions. We applied static and homogeneous EFs of intensities equal to 0.05 V/A, 0.10 V/A, and 0.15 V/A from a zero-field condition in parallel simulation runs. The maximum field strength of 0.15 V/A was chosen to prevent water splitting known to occur at larger field intensities [33; 34; 35; 36; 67]. In the zero-field case we performed dynamics of 50 ps whereas, for each other value of the field intensity, we ran dynamics of at least 250 ps. Besides, as for the simulations of the lower-density states only a single field intensity of 0.15 V/A was simulated - in addition to the fieldless cases - for time-scales of \(\sim 500\) ps (\(\rho=0.95\) g\(\cdot\)cm\({}^{-3}\)) and \(\sim 450\) ps (\(\rho=0.92\) g\(\cdot\)cm\({}^{-3}\)). This way, we accumulated a global simulation time approaching 2 ns, whilst a time-step of 0.5 fs has been chosen.
Wavefunctions of the atomic species have been expanded in the TZVP basis set with Goedecker-Teter-Hutter pseudopotentials using the GPW method [68]. A plane-wave cutoff of 400 Ry has been imposed. Exchange and correlation (XC) effects were treated with the gradient-corrected Becke-Lee-Yang-Parr (BLYP) [69; 70] density functional. Moreover, in order to take into account dispersion interactions, we employed the dispersion-corrected version of BLYP (i.e., BLYP+D3(BJ)) [71; 72]. The adoption of the BLYP+D3 functional has been dictated by the widespread evidence that such a functional, when dispersion corrections are taken into account, offers one of the best adherence with the experimental results among the standard GGA functionals [73]. It is well-known, indeed, that neglecting dispersion corrections leads to a severely over-structured liquid (see, e.g., Ref. [74] and references therein). Moreover, a nominal temperature slightly higher than the standard one has been simulated in the main simulations to better reproduce the liquid structure (i.e., \(T=350\) K). Furthermore, the additional simulations at lower density regimes were executed at a lower (supercooling) temperature of \(T=250\) K (see the SI for the respective results).
Although the BLYP+D3 functional represents a reasonably good choice, computationally more expensive hybrid functionals, such as revPBE0, perform remarkably well for water when combined with a quantum treatment of the nuclei, as demonstrated by Marsalek and Markland [75]. However, since sufficiently large simulation boxes are necessary to track structural transitions, the inclusion of nuclear quantum effects is beyond the scope of the present work. Moreover, IR absorption line shapes of liquid water (and ice) are overall reproduced remarkably well by standard AIMD simulations, which by their nature include the explicit quantum adiabatic response of the electrons [76]. In addition, the agreement of the IR and Raman spectra evaluated by some of us [40] under zero-field conditions with recent experimental results [38; 77] justifies _a posteriori_ the classical treatment of the nuclei. As a consequence, the dynamics of the ions was simulated classically within a constant number,
volume, and temperature (NVT) ensemble, using the Verlet algorithm whereas the canonical sampling has been executed by employing a canonical-sampling-through-velocity-rescaling thermostat [78] set with a time constant equal to 10 fs. IR spectra have been determined by means of the software TRAVIS [79] (see the SI for further information).
### Network topology
In order to probe the topology of the hydrogen bond network (HBN), we employed ring statistics, a theoretical tool that has proven to be instrumental in investigating the network topology in numerically simulated network-forming materials. The ring statistics is only one of many graph-based techniques to investigate network topologies and, in the case of water, it has helped in understanding the connections between water anomalies and thermodynamic response functions [50, 80] as well as the properties of glassy water [52]. We construct rings by starting from a tagged water molecule and recursively traversing the HBN until the starting point is reached or the path exceeds the maximal ring size considered (10 water molecules in our case). The definition of hydrogen bond follows Ref. [81]. We do not distinguish between the donor-acceptor character of the starting water molecule.
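A minimal Python sketch of this depth-first ring search, starting from a precomputed H-bond adjacency map (built, e.g., with the geometric criterion detailed in the SI), is given below; deduplication by vertex set and the absence of a primitive-ring filter are simplifications of the sketch.

```python
from collections import defaultdict

def find_rings(adjacency, max_len=10):
    """Enumerate closed H-bond paths of 3..max_len molecules.
    adjacency: dict {molecule index: set of H-bonded neighbour indices}."""
    rings = set()

    def dfs(start, current, path):
        for nxt in adjacency[current]:
            if nxt == start and len(path) >= 3:
                rings.add(frozenset(path))       # count each ring once
            elif nxt not in path and len(path) < max_len:
                dfs(start, nxt, path + [nxt])

    for start in adjacency:
        dfs(start, start, [start])
    return rings

def ring_probability(rings, sizes=range(3, 11)):
    """Normalized probability P(n) of ring sizes, as in Figs. 5 and 6."""
    counts = defaultdict(int)
    for ring in rings:
        counts[len(ring)] += 1
    total = sum(counts.values()) or 1
    return {n: counts[n] / total for n in sizes}
```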
Supplementary information. A Supporting Information (SI) file with additional analyses and results accompanies the current work.
Acknowledgments. G. C. acknowledges support from ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union - NextGenerationEU - PNRR, Missione 4 Componente 2 Investimento 1.4. G. C. is thankful to CINECA for an award under the ISCRA initiative, for the availability of high performance computing resources and support.
## References
* [1] Salzmann, C.G.: Advances in the experimental exploration of water's phase diagram. The Journal of chemical physics **150**(6), 060901 (2019)
* [2] Mishima, O., Calvert, L., Whalley, E.: An apparently first-order transition between two amorphous phases of ice induced by pressure. Nature **314**(6006), 76-78 (1985)
* [3] Amann-Winkel, K., Bellissent-Funel, M.-C., Bove, L.E., Loerting, T., Nilsson, A., Paciaroni, A., Schlesinger, D., Skinner, L.: X-ray and neutron scattering of water. Chemical reviews **116**(13), 7570-7589 (2016)
* [4] Rosu-Finsen, A., Davies, M.B., Amon, A., Wu, H., Sella, A., Michaelides, A., Salzmann, C.G.: Medium-density amorphous ice. Science **379**(6631), 474-478 (2023)
* [5] Zimon, M.J., Martelli, F.: Molecular rotations trigger a glass-to-plastic fcc heterogeneous crystallization in high-pressure water. The Journal of Chemical Physics **158**(11), 114501 (2023)
* [6] Geissler, P.L., Dellago, C., Chandler, D., Hutter, J., Parrinello, M.: Autoionization in liquid water. Science **291**(5511), 2121-2124 (2001)
* [7] Chalmet, S., Ruiz-Lopez, M.F.: The reaction field of a water molecule in liquid water: Comparison of different quantum/classical models. The Journal of Chemical Physics **115**(11), 5220-5227 (2001)
* [8] Smith, J.D., Cappa, C.D., Wilson, K.R., Cohen, R.C., Geissler, P.L., Saykally, R.J.: Unified description of temperature-dependent hydrogen-bond rearrangements in liquid water. Proceedings of the National Academy of Sciences **102**(40), 14171-14174 (2005)
* [9] Ruiz-Lopez, M.F., Martins-Costa, M.T.C., Francisco, J.S., Anglada, J.M.: Tight electrostatic regulation of the oh production rate from the photolysis of hydrogen peroxide adsorbed on surfaces. Proceedings of the National Academy of Sciences **118**(30), 2106117118 (2021)
* [10] Che, F., Gray, J.T., Ha, S., Kruse, N., Scott, S.L., McEwen, J.-S.: Elucidating the roles of electric fields in catalysis: A perspective. ACS Catalysis **8**(6), 5153-5174 (2018)
* [11] Shaik, S., Mandal, D., Ramanan, R.: Oriented electric fields as future smart reagents in chemistry. Nature Chemistry **8**(12), 1091-1098 (2016)
* [12] Cassone, G., Pietrucci, F., Saija, F., Guyot, F., Saitta, A.M.: One-step electric-field driven methane and formaldehyde synthesis from liquid methanol. Chem. Sci. **8**, 2329-2336 (2017)
* [13] Aragones, A.C., Haworth, N.L., Darwish, N., Ciampi, S., Bloomfield, N.J., Wallace, G.G., Diez-Perez, I., Coote, M.L.: Electrostatic catalysis of a diels-alder reaction. Nature **531**(7592), 88-91 (2016)
* [14] Huang, X., Tang, C., Li, J., Chen, L.-C., Zheng, J., Zhang, P., Le, J., Li, R., Li, X., Liu, J., Yang, Y., Shi, J., Chen, Z., Bai, M., Zhang, H.-L., Xia, H., Cheng, J., Tian, Z.-Q., Hong, W.: Electric field–induced selective catalysis of single-molecule reaction. Science Advances **5**(6), 3072 (2019)
* [15] Meir, R., Chen, H., Lai, W., Shaik, S.: Oriented electric fields accelerate diels-alder reactions and control the endo/exo selectivity. ChemPhysChem **11**(1), 301-310 (2010)
* [16] Hao, H., Leven, I., Head-Gordon, T.: Can electric fields drive chemistry for an aqueous microdroplet? Nature Communications **13**(1), 280 (2022)
* [17] Lee, J.K., Samanta, D., Nam, H.G., Zare, R.N.: Micrometer-sized water droplets induce spontaneous reduction. Journal of the American Chemical Society **141**(27), 10585-10589 (2019)
* [18] Xiong, H., Lee, J.K., Zare, R.N., Min, W.: Strong electric field observed at the interface of aqueous microdroplets. The Journal of Physical Chemistry Letters **11**(17), 7423-7428 (2020)
* [19] Song, X., Basheer, C., Zare, R.N.: Making ammonia from nitrogen and water microdroplets. Proceedings of the National Academy of Sciences **120**(16), 2301206120 (2023)
* [20] Martins-Costa, M.T., Ruiz-Lopez, M.F.: Electrostatics and chemical reactivity at the air-water interface. Journal of the American Chemical Society **145**, 1400-1406 (2023)
* [21] Dufour, L.: Ueber das gefrieren des wassers und uber die bildung des hagels. Annalen der Physik **190**, 530-554 (1862)
* [22] Pruppacher, H.R.: The effects of electric fields on cloud physical processes. Zeitschrift fur angewandte Mathematik und Physik ZAMP **14**(5), 590-599 (1963)
* [23] Doolittle, J.B., Vali, G.: Heterogeneous freezing nucleation in electric fields. Journal of the Atmospheric Sciences **32**(2), 375-379 (1975)
* [24] Ice nucleation of AgI-CuBr nucleants in the presence of electric field. Materials Chemistry and Physics **27**(4), 385-392 (1991)
* [25] Effects of dipole polarization of water molecules on ice formation under an electrostatic field. Cryobiology **56**(1), 93-99 (2008)
* [26] Controlled ice nucleation under high voltage dc electrostatic field conditions. Food Research International **42**(7), 879-884 (2009)
* [27] Peleg, Y., Yoffe, A., Ehre, D., Lahav, M., Lubomirsky, I.: The role of the electric field in electrofreezing. The Journal of Physical Chemistry C **123**(50), 30443-30446 (2019)
* [28] Fundamental interfacial mechanisms underlying electrofreezing. Advances in Colloid and Interface Science **251**, 26-43 (2018)
* [29] Ehre, D., Lavert, E., Lahav, M., Lubomirsky, I.: Water freezes differently on positively and negatively charged surfaces of pyroelectric materials. Science **327**(5966), 672-675 (2010)
* [30] Svishchev, I.M., Kusalik, P.G.: Crystallization of liquid water in a molecular dynamics simulation. Phys. Rev. Lett. **73**, 975-978 (1994)
* [31] Svishchev, I.M., Kusalik, P.G.: Electrofreezing of liquid water: A microscopic perspective. Journal of the American Chemical Society **118**(3), 649-654 (1996)
* [32] English, N.J.: Molecular dynamics simulations of microwave effects on water using different long-range electrostatics methodologies. Molecular Physics **104**(2), 243-253 (2006) [https://doi.org/10.1080/14733140500352322](https://doi.org/10.1080/14733140500352322)
* [33] Saitta, A.M., Saija, F., Giaquinta, P.V.: Ab initio molecular dynamics study of dissociation of water under an electric field. Phys. Rev. Lett. **108**, 207801 (2012)
* [34] Cassone, G.: Nuclear quantum effects largely influence molecular dissociation and proton transfer in liquid water under an electric field. The Journal of Physical Chemistry Letters **11**(21), 8983-8988 (2020)
* [35] Stuve, E.M.: Ionization of water in interfacial electric fields: An electrochemical view. Chemical Physics Letters **519-520**, 1-17 (2012)
* [36] Hammadi, Z., Descoins, M., Salancon, E., Morin, R.: Proton and light ion nanobeams from field ionization of water. Applied Physics Letters **101**(24), 243110 (2012)
* [37] Zhu, W., Huang, Y., Zhu, C., Wu, H.-H., Wang, L., Bai, J., Yang, J., Francisco, J.S., Zhao, J., Yuan, L.-F., Zeng, X.C.: Room temperature electrofreezing of water yields a missing dense ice phase in the phase diagram. Nature Communications **10**(1), 1925 (2019)
* [38] Bertie, J.E., Lan, Z.: Infrared intensities of liquids XX: The intensity of the OH stretching band of liquid water revisited, and the best current values of the optical constants of H\({}_{2}\)O (l) at 25 °C between 15,000 and 1 cm\({}^{-1}\). Applied Spectroscopy **50**(8), 1047-1057 (1996)
* [39] Chattopadhyay, A., Boxer, S.G.: Vibrational stark effect spectroscopy. Journal of the American Chemical Society **117**(4), 1449-1450 (1995)
* [40] Cassone, G., Sponer, J., Trusso, S., Saija, F.: Ab initio spectroscopy of water under electric fields. Phys. Chem. Chem. Phys. **21**, 21205-21212 (2019)
* [41] Futera, Z., English, N.J.: Communication: Influence of external static and alternating electric fields on water from long-time non-equilibrium ab initio molecular dynamics. The Journal of Chemical Physics **147**(3), 031102 (2017) [https://doi.org/10.1063/1.4994694](https://doi.org/10.1063/1.4994694)
* [42] Rice, S.A.: Topics in Current Chemistry. Springer, New York (1975)
* [43] Wang, Z., Pakoulev, A., Pang, Y., Dlott, D.D.: Vibrational substructure in the OH stretching transition of water and HOD. The Journal of Physical Chemistry A **108**(42), 9054-9063 (2004)
* [44] Verma, P.K., Kundu, A., Puretz, M.S., Dhoonmoon, C., Chegwidden, O.S., Londergan, C.H., Cho, M.: The bend+libration combination band is an intrinsic, collective, and strongly solute-dependent reporter on the hydrogen bonding network of liquid water. The Journal of Physical Chemistry B **122**(9), 2587-2599 (2018)
* [45] Perakis, F., Hamm, P.: Two-dimensional infrared spectroscopy of supercooled water. The Journal of Physical Chemistry B **115**(18), 5289-5293 (2011)
* [46] Debenedetti, P.G., Stillinger, F.H.: Supercooled liquids and the glass transition. Nature **410**(6825), 259-267 (2001)
* [47] Sastry, S.: The relationship between fragility, configurational entropy and the potential energy landscape of glass-forming liquids. Nature **409**(6817), 164-167 (2001)
* [48] Conti Nibali, V., Maiti, S., Saija, F., Heyden, M., Cassone, G.: Electric-field induced entropic effects in liquid water. The Journal of Chemical Physics **158**(18), 184501 (2023)
* [49] Martelli, F.: Steady-like topology of the dynamical hydrogen bond network in supercooled water. PNAS Nexus **1**(3), 090 (2022)
* [50] Formanek, M., Martelli, F.: Probing the network topology in network-forming materials: The case of water. AIP Advances **10**(5), 055205 (2020)
* [51] Jung, D.H., Yang, J.H., Jhon, M.S.: The effect of an external electric field on the structure of liquid water using molecular dynamics simulations. Chem. Phys. **244**(2-3), 331-337 (1999)
* [52] Formanek, M., Torquato, S., Car, R., Martelli, F.: Molecular rotations, multiscale order, hyperuniformity, and signatures of metastability during the compression/decompression cycles of amorphous ices. J. Phys. Chem. B **127**(17), 3946-3957 (2023)
* [53] Martelli, F., Torquato, S., Giovambattista, N., Car, R.: Large-scale structure and hyperuniformity of amorphous ices. Phys. Rev. Lett. **119**(13), 136002 (2017)
* [54] Balke, N., Jesse, S., Carmichael, B., Okatan, M.B., Kravchenko, I.I., Kalinin, S.V., Tselev, A.: **28**(6), 065704 (2017)
* [55] Zhu, X., Riera, M., Bull-Vulpe, E.F., Paesani, F.: Mb-pol(2023): Sub-chemical accuracy for water simulations from the gas to the liquid phase. Journal of Chemical Theory and Computation **19**(12), 3551-3566 (2023)
* [56] Zhang, L., Han, J., Wang, H., Car, R., Weinan, E.: Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. Phys. Rev. Lett. **120**(14), 143001 (2018)
* [57] Kühne, T.D., et al.: CP2K: An electronic structure and molecular dynamics software package - quickstep: Efficient and accurate electronic structure calculations. The Journal of Chemical Physics **152**(19), 194103 (2020)
* [58] King-Smith, R.D., Vanderbilt, D.: Theory of polarization of crystalline solids. Phys. Rev. B **47**, 1651-1654 (1993)
* [59] Resta, R.: Macroscopic polarization in crystalline dielectrics: the geometric phase approach. Rev. Mod. Phys. **66**, 899-915 (1994)
* [60] Berry, M.V.: Quantal phase factors accompanying adiabatic changes. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences **392**(1802), 45-57 (1984)
* [61] Umari, P., Pasquarello, A.: Ab initio molecular dynamics in a finite homogeneous electric field. Phys. Rev. Lett. **89**, 157602 (2002)
* [62] Nunes, R.W., Vanderbilt, D.: Real-space approach to calculation of electric polarization and dielectric constants. Phys. Rev. Lett. **73**, 712-715 (1994)
* [63] Nunes, R.W., Gonze, X.: Berry-phase treatment of the homogeneous electric field perturbation in insulators. Phys. Rev. B **63**, 155107 (2001)
* [64] Resta, R.: Quantum-mechanical position operator in extended systems. Phys. Rev. Lett. **80**, 1800-1803 (1998) [https://doi.org/10.1103/PhysRevLett.80.1800](https://doi.org/10.1103/PhysRevLett.80.1800)
* [65] Gonze, X., Ghosez, P., Godby, R.W.: Density-polarization functional theory of the response of a periodic insulating solid to an electric field. Phys. Rev. Lett. **74**, 4035-4038 (1995)
* [66] Gonze, X., Ghosez, P., Godby, R.W.: Density-functional theory of polar insulators. Phys. Rev. Lett. **78**, 294-297 (1997)
* [67] Lee, W.-K., Tsoi, S., Whitener, K.E., Stine, R., Robinson, J.T., Tobin, J.S., Weerasinghe, A., Sheehan, P.E., Lyuksyutov, S.F.: Robust reduction of graphene fluoride using an electrostatically biased scanning probe. Nano Research **6**(11), 767-774 (2013)
* [68] Krack, M.: Pseudopotentials for h to kr optimized for gradient-corrected exchange-correlation functionals. Theoretical Chemistry Accounts **114**(1), 145-152 (2005)
* [69] Becke, A.D.: Density-functional exchange-energy approximation with correct asymptotic behavior. Phys. Rev. A **38**, 3098-3100 (1988)
* [70] Lee, C., Yang, W., Parr, R.G.: Development of the colle-salvetti correlation-energy formula into a functional of the electron density. Phys. Rev. B **37**, 785-789 (1988)
* [71] Grimme, S., Antony, J., Ehrlich, S., Krieg, H.: A consistent and accurate ab initio parametrization of density functional dispersion correction (dft-d) for the 94 elements h-pu. The Journal of Chemical Physics **132**(15), 154104 (2010)
* [72] Grimme, S., Ehrlich, S., Goerigk, L.: Effect of the damping function in dispersion corrected density functional theory. Journal of Computational Chemistry **32**(7), 1456-1465 (2011)
* [73] Lin, I.-C., Seitsonen, A.P., Tavernelli, I., Rothlisberger, U.: Structure and dynamics of liquid water from ab initio molecular dynamics--comparison of blyp, pbe, and revpbe density functionals with and without van der waals corrections. Journal of Chemical Theory and Computation **8**(10), 3902-3910 (2012)
* [74] Gillan, M.J., Alfe, D., Michaelides, A.: Perspective: How good is dft for water? The Journal of Chemical Physics **144**(13), 130901 (2016)
* [75] Marsalek, O., Markland, T.E.: Quantum dynamics and spectroscopy of ab initio liquid water: The interplay of nuclear and electronic quantum effects. The Journal of Physical Chemistry Letters **8**(7), 1545-1551 (2017) [https://doi.org/10.1021/acs.jpclett.7b00391](https://doi.org/10.1021/acs.jpclett.7b00391). PMID: 28296422
* [76] Sharma, M., Resta, R., Car, R.: Intermolecular dynamical charge fluctuations in water: A signature of the h-bond network. Phys. Rev. Lett. **95**, 187401 (2005)
* [77] Pattenaude, S.R., Streacker, L.M., Ben-Amotz, D.: Temperature and polarization dependent raman spectra of liquid h2o and d2o. Journal of Raman Spectroscopy **49**(11), 1860-1866 (2018)
* [78] Bussi, G., Donadio, D., Parrinello, M.: Canonical sampling through velocity rescaling. The Journal of Chemical Physics **126**(1), 014101 (2007)
* [79] Brehm, M., Thomas, M., Gehrke, S., Kirchner, B.: Travis--a free analyzer for trajectories from molecular simulation. The Journal of Chemical Physics **152**(16), 164105 (2020)
* [80] Martelli, F.: Unravelling the contribution of local structures to the anomalies of
water: The synergistic action of several factors. The Journal of chemical physics **150**(9), 094506 (2019)
* [81] Luzar, A., Chandler, D.: Hydrogen-bond kinetics in liquid water. Nature **379**(6560), 55-57 (1996)
# Supporting Information - Electrofreezing of Liquid Water at Ambient Conditions
Giuseppe Cassone
Fausto Martelli
Institute for Chemical-Physical Processes, National Research Council, Viale F. Stagno d'Alcontres 37, Messina, 98158, Italy.

IBM Research Europe, Keckwick Lane, Daresbury, WA4 4AD, United Kingdom.

Department of Chemical Engineering, University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom.

*Corresponding author(s). E-mail(s): [email protected]; [email protected];
## 1 Additional results
Infrared spectra shown in Fig. 1 of the main text have been determined by means of the software TRAVIS [1, 2] from the centers of the Maximally Localised Wannier Functions (MLWFs) [3, 4] calculated on the fly during the _ab initio_ molecular dynamics (AIMD) simulations. Molecular dipoles from MLWFs centers can be determined as:
\[\mu=-2e\sum_{i}{\bf r}_{i}+e\sum_{j}Z_{j}{\bf R}_{j}\,, \tag{1}\]
where \(e\) is the electron charge, \({\bf r}_{i}\) is the position vector of the MLWF center \(i\), \(Z_{j}\) is the atomic number of the nuclei \(j\) whilst \({\bf R}_{j}\) is the position vector of this latter. This way, the IR spectra at the investigated field intensities were computed as the Fourier transform of the molecular dipole autocorrelation function along the last 50 ps of the respective simulation trajectories.
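A schematic Python transcription of eq. (1), followed by the dipole-autocorrelation route to the IR line shape, is given below; unit handling, quantum correction factors, and the windowing of the autocorrelation function are deliberately omitted, and all array layouts are assumptions of the sketch rather than the TRAVIS implementation.

```python
import numpy as np

def molecular_dipole(wannier_centers, nuclei_pos, nuclei_z):
    """Eq. (1) in atomic units (e = 1): mu = -2 * sum_i r_i + sum_j Z_j R_j.
    All positions are assumed to be already wrapped onto the molecule."""
    return (-2.0 * wannier_centers.sum(axis=0)
            + (nuclei_z[:, None] * nuclei_pos).sum(axis=0))

def ir_lineshape(dipoles, dt_fs):
    """Schematic IR line shape: power spectrum of the total dipole
    derivative (equivalent, up to prefactors, to the Fourier transform of
    the dipole autocorrelation function). dipoles: (n_frames, 3)."""
    mu_dot = np.gradient(dipoles, dt_fs, axis=0)
    acf = sum(np.correlate(mu_dot[:, k], mu_dot[:, k], mode="full")
              for k in range(3))[len(dipoles) - 1:]   # positive lags only
    spectrum = np.abs(np.fft.rfft(acf))
    # frequency axis in cm^-1: Hz divided by the speed of light in cm/s
    freq = np.fft.rfftfreq(len(acf), d=dt_fs * 1e-15) / 2.99792458e10
    return freq, spectrum
```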
In order to track molecular reorientations under the field action, we compute the distributions of the angle \(\theta\) formed between the instantaneous water molecular dipole vectors and the field direction (i.e., \(z\)-axis), Fig. S1. Interestingly, whilst the field is capable of reorienting a large fraction of water dipoles already at 0.05 V/A, the electrostatic potential gradient producing this field strength does not induce a net
suppression of the translational degrees of freedom of the molecules, as shown in Fig. 3 of the main text. The enhancement of the water dipoles at increasingly high fields is also visible from the dipole distributions reported in Fig. S2-a, showing a progressive shift towards larger magnitudes and a slight narrowing of the distributions.
**Fig. S2** (a) Distributions of the magnitude of the water dipoles extracted from the last 50 ps of the respective simulations at different field intensities determined from the MLWFs centers. (b) Average water dipole and associated standard deviation extracted from the distributions in (a). It is worth pointing out the interruption of the linear response regime for field strengths producing water electrofreezing.
Fig. S2-b reports the profile of the dipole moment with the field strength. It is possible to recognize a linear regime holding up to a strength of \(0.10\) V/Å. Thus, the transition from the liquid to the f-GW phase is also marked by the breakdown of the linear response regime to external electric fields (EFs). In addition to this analysis, it is worth monitoring the temporal dependence of the P(\(\theta\)) distributions at disjoint time windows, a procedure that discloses the dynamical response of the sample. As displayed in Fig. S3, the field-induced reorientation of the molecular dipoles takes place on fast timescales and achieves saturation within the first \(50\) ps of dynamics at all field intensities, with the exception of the weakest field (Fig. S3-a), where nonetheless the convergence of the dipolar response is reached in less than \(100\) ps.
Fig. S4 reports the oxygen-oxygen radial distribution functions computed at consecutive, disjoint time windows of \(50\) ps. At \(0.05\) V/Å, the \(g_{OO}(r)\) converges to a steady profile after \(50\) ps (Fig. S4-a), while convergence is achieved only after \(150\) ps for \(0.10\) V/Å (Fig. S4-b) and \(200\) ps for \(0.15\) V/Å (Fig. S4-c). The \(g_{OO}(r)\) computed within the last \(50-100\) ps for \(0.10\) V/Å and \(0.15\) V/Å resemble the \(g_{OO}(r)\) of a low-density amorphous (LDA) ice (see main text). To shed some light on the dynamical reorganization of the water structure induced by the external field, we have evaluated the oxygen-oxygen radial distribution function differences between the last \(50\) ps and the first \(50\) ps time frames of each simulation, as reported in Fig. S4-d. Whereas at \(0.05\) V/Å structural differences between the initial and the final time windows appear to be small - as also visible in Fig. S4-a -, a field of intensity equal to \(0.10\) V/Å induces much larger global reorganizations towards more structured molecular correlations in the system (Fig. S4-d, yellow curve). On the other hand, the evidence that these differences are smaller in the sample exposed to a \(0.15\) V/Å field (Fig. S4-d, red curve) has to be ascribed to a faster initial reorganization taking place already within the first \(50\) ps of dynamics, whereas longer timescales (\(\sim 200\) ps) are needed to bring the structural transition to completion in the simulated sample, as shown in Fig. S4-c.
In Fig. S5 we report \(P(q)\), the distribution of the tetrahedral order parameter \(q\) defined as
\[q=1-\frac{3}{8}\sum_{j=1}^{3}\sum_{k=j+1}^{4}\left(\cos\psi_{jk}+\frac{1}{3} \right)^{2} \tag{2}\]
where \(\psi_{jk}\) is the angle formed between the oxygen atoms of the water molecule under consideration and its nearest neighbour oxygen atoms \(j\) and \(k\). The tetrahedral order parameter \(q\) was originally proposed by Chau and Hardwick [5] and subsequently rescaled by Errington and Debenedetti [6] so that the average value of \(q\) varies from 0 for an ideal gas to 1 for a regular tetrahedron. From Fig. S5 it is possible to observe that the samples in the absence of a field and in the presence of a field of 0.05 V/A show a very similar tetrahedral character. A major change occurs at stronger fields signaling the transition to the more ordered f-GW phase.
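As an illustration of how \(q\) is evaluated in practice, the following sketch computes Eq. (2) for every oxygen from its four nearest oxygen neighbours, assuming a cubic box with the minimum-image convention; the function is a hypothetical reimplementation, not the analysis code used for Fig. S5.

```python
import numpy as np

def tetrahedral_q(pos_O, box):
    """Eq. (2): tetrahedral order parameter of each oxygen, built from the
    six angles subtended by its four nearest oxygen neighbours."""
    N = len(pos_O)
    q = np.empty(N)
    for i in range(N):
        d = pos_O - pos_O[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        nn = np.argsort(r)[1:5]                   # 4 nearest neighbours (skip self)
        v = d[nn] / r[nn, None]                   # unit vectors towards neighbours
        s = 0.0
        for j in range(3):
            for k in range(j + 1, 4):
                s += (v[j] @ v[k] + 1.0 / 3.0) ** 2   # cos(psi_jk) + 1/3
        q[i] = 1.0 - 3.0 / 8.0 * s
    return q
```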
Similar conclusions can be drawn upon inspecting the local structure index (LSI) [7; 8], an insightful order parameter that can be employed to characterize the LDL and HDL molecular environments and that is defined as the inhomogeneity in the distribution of radial distances
\[I=\frac{1}{N}\sum_{j=1}^{N}\left[\Delta_{j+1,j}-\left\langle\Delta\right\rangle \right]^{2} \tag{3}\]
where \(\Delta_{j+1,j}=r_{j+1}-r_{j}\) is the distance between particles within a cutoff distance of 3.7 A from a reference molecule and \(\left\langle\Delta\right\rangle\) is the average over all neighbours of a molecule within the given cutoff. The LSI, therefore, provides a convenient quantitative measure of the fluctuations in the distance distribution surrounding a given water molecule within a sphere defined by a radius of 3.7 A. In doing so, the index \(I\) measures the extent to which a given water molecule is surrounded by well-defined first and second coordination shells. In Fig. S6, we report the LSI computed for the three EFs here inspected at time windows of 50 ps. It is possible to observe the development of hints of a bimodal distribution in the cases of 0.10 V/A and 0.15 V/A in correspondence with the transition to f-GW. This can also be appreciated from the lower panel of Fig. S6, reporting the LSI computed in the last time window \([201-250]\) ps for 0.15 V/A and for the LDA simulated via classical molecular dynamics at \(T=200\) K. The latter has been obtained upon quenching liquid water from \(T=300\) K to \(T=200\) K at a quenching rate of 1 K/ns, as reported in Refs. [9; 10; 11; 12].
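A sketch of the corresponding evaluation of Eq. (3) is given below; it adopts the common convention of including the distance gap to the first neighbour beyond the 3.7 A cutoff, and again it is an illustrative reimplementation rather than the original script.

```python
import numpy as np

def local_structure_index(pos_O, box, rcut=3.7):
    """Eq. (3): LSI of each oxygen from the spread of the consecutive
    neighbour-distance gaps within (and just beyond) rcut, in Angstrom."""
    N = len(pos_O)
    I = np.empty(N)
    for i in range(N):
        d = pos_O - pos_O[i]
        d -= box * np.round(d / box)                 # minimum-image convention
        r = np.sort(np.linalg.norm(d, axis=1))[1:]   # drop the self distance
        n = np.searchsorted(r, rcut)                 # neighbours inside the cutoff
        gaps = np.diff(r[:n + 1])                    # include first molecule beyond rcut
        I[i] = np.mean((gaps - gaps.mean()) ** 2)
    return I
```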
Somewhat related to the local and global degree of order of the H-bond network is its kinetics. In particular, we performed a structural analysis of the H-bond network and identified an H-bond through the following geometric conditions (which must be simultaneously fulfilled): two water molecules are considered as H-bonded if \(R^{(OO)}\leq 3.5\) A and \(\angle\)O-H\(\cdot\cdot\cdot\)O\(\leq 30^{\circ}\), where \(R^{(OO)}\) is the instantaneous distance between the oxygen atoms. From this, we calculated the time autocorrelation function of H-bonds as:
\[c(t)=\frac{\sum_{\langle i,j\rangle}s_{ij}(t_{0})s_{ij}(t_{0}+t)}{\sum_{ \langle i,j\rangle}s_{ij}(t_{0})}\,, \tag{4}\]
where the indices \(i\) and \(j\) run on all pairs of first-neighbour molecules which at \(t_{0}\) were H-bonded, \(t_{0}\) being the time at which the measurement process begins; \(s_{ij}=1\) if the criterion for the presence of a H-bond is fulfilled, \(s_{ij}=0\) otherwise. The results were averaged over hundreds of initial configurations. Fig. S7 shows the continuous (Fig. S7-a) and intermittent (Fig. S7-b) autocorrelation functions \(c(t)\) of the H-bonds for different field intensities. Within the intermittent definition of \(c(t)\), a given H-bond is allowed to cleave within timescales \(\leq\) 5 fs to account for bond fluctuations. Thus, within this latter fast timescale, we always assign to \(s_{ij}\) a value equal to 1 when considering the intermittent autocorrelation function (see eq. (4)).
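The intermittent form of Eq. (4) can be evaluated directly from a boolean bond matrix, as in the sketch below (the continuous form would additionally require the bond to survive at every intermediate frame, up to the 5 fs tolerance mentioned above); the array layout is an assumption made for illustration.

```python
import numpy as np

def hbond_autocorr_intermittent(s):
    """Eq. (4), intermittent form: s[t, pair] = 1 if the geometric H-bond
    criterion holds at frame t; the average runs over all time origins t0."""
    s = s.astype(float)
    T = s.shape[0]
    tmax = T // 2
    c = np.empty(tmax)
    for t in range(tmax):
        num = (s[:T - t] * s[t:T]).sum()   # sum over origins t0 and pairs
        den = s[:T - t].sum()              # normalisation: bonds present at t0
        c[t] = num / den
    return c
```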
The application of a 0.05 V/A field induces only a relatively moderate slowdown - with respect to the zero-field case - of the dynamics of the H-bond network, recorded by means of the continuous \(c(t)\) function (Fig. S7-a). Instead, significantly more drastic effects are recorded upon applying fields of 0.10 and 0.15 V/A. Interestingly, the changes produced in the H-bond network kinetics by these field regimes qualitatively resemble those induced by a sizable (\(\sim\) 40 K) decrease of the temperature [13]. This is also visible from the intermittent H-bond autocorrelation function displayed in Fig. S7-b. Although the H-bond characteristic time recorded at zero field (violet curve) is extended by the application of a field strength of 0.05 V/A (blue curve), a visible decay of the intermolecular correlations within the timescales of our simulations is recorded at the latter regime. Instead, a field of 0.10 V/A (orange curve) and a field of 0.15 V/A (red curve) clearly strengthen the H-bond persistence over sizably longer timescales. These results are fully consistent with the picture emerging from the partial Van Hove correlation functions shown in the main text (Fig. 2).
In Fig. S8 we report the oxygen-oxygen radial distribution function computed with a larger simulation box of 256 water molecules, at densities of 0.92 g\(\cdot\)cm\({}^{-3}\) and 0.95 g\(\cdot\)cm\({}^{-3}\) and at a temperature of 250 K (panels (a) and (b), respectively) for a sample without EF and a sample with a field of 0.15 V/A. These simulations reach \(\sim\) 500 ps. It is possible to observe the development of an f-GW-like \(g_{OO}(r)\) at both
densities, indicating that the transition to f-GW reported in our work is not an artifact of small simulation boxes and that it takes place at different densities.
In Fig. S9 we report \(d_{4}\), the percentage of four-fold coordinated water molecules, at consecutive time windows. The blue stripe corresponds to the case of liquid water in the absence of EFs. In the presence of 0.05 V/A (red squares), \(d_{4}\) increases from \(\sim 50\%\) to \(\sim 53\%\) within the first 50 ps of the simulation, and keeps gradually increasing, reaching a maximum of \(\sim 56\%\) in the last two time windows. Upon increasing the field to 0.10 V/A (green diamonds), we can observe that \(d_{4}\) computed within the first 50 ps is roughly the same as in the case of 0.05 V/A computed over the same time window. On the other hand, \(d_{4}\) linearly increases by \(\sim 6\%\) in the second and in the third time windows. \(d_{4}\) then reaches a plateau in correspondence with the last two time windows. Upon increasing the field strength to 0.15 V/A, we observe a sudden increase of \(d_{4}\) to \(\sim 56\%\) within the first time window. Further increases occur at the later stages of the simulation, as for the cases previously inspected with lower field strengths. It is worth noticing that the percentage of four-coordinated water molecules for 0.10 V/A and 0.15 V/A is almost indistinguishable towards the end of the simulation.
To further stress the relevance of the sampling at disjoint time windows, we report in Fig. S10 a series of structural and dynamical observables determined at the highest field intensity here explored (i.e., 0.15 V/A) and calculated over the whole 250-ps-long trajectory and on the last 50 ps of dynamics. The \(g_{OO}(r)\) at 0.15 V/A determined over the whole trajectory exhibits smaller (higher) peaks (dips) with respect to the same quantity calculated over the last 50 ps of dynamics of the same trajectory, as displayed in Fig. S10-a. Interestingly, the importance of sampling at consecutive time frames is even more visible from dynamical rather than structural properties. In fact, to adequately evaluate the field effects on the translational degrees of freedom, the sampling at consecutive time windows here adopted appears to be necessary for the timescales
affordable by _ab initio_ simulations. As shown in Fig. S10-b, indeed, the mean squared displacement of the oxygen atoms determined over the whole trajectory at 0.15 V/A reflects the mixing of different translational regimes, a circumstance leading to an underestimation of the EF-induced damping effect. This is not only true for translational but also for rotational degrees of freedom, which are intimately related to the dynamics of the H-bond network. By direct comparison of the continuous (Fig. S10-c) and intermittent (Fig. S10-d) H-bond autocorrelation functions determined over different timescales (see legends), diverse H-bond characteristic times emerge. All these findings prove that relevant information on the effects produced by the application of external fields on liquid water can be unveiled only by accessing and isolating late portions of long _ab initio_ simulations. Only by adopting this strategy is it possible to capture the _electrofreezing_ effect induced by the field on the roto-translational degrees of freedom of water and, presumably, of other H-bonded systems. |
2306.12585 | Investigating the accelerated expansion of the Universe through updated
constraints on viable $f(R)$ models within the metric formalism | Modified theories of gravity encompass a class of $f(R)$-models that seek to
elucidate the observed late time accelerated expansion of the universe. In this
study, we examine a set of viable $f(R)$ models (Hu-Sawicki: two cases,
Starobinsky, Tsujikawa, exponential and arcTanh models) in metric formalism,
using recent cosmological data sets: type Ia supernovae data, cosmic
chronometer observations, baryonic acoustic oscillations data, data from
H\textsc{ii} starburst galaxies, and local measurements of the Hubble parameter
$H_0$. The model parameters are constrained using a Bayesian analysis with the
Monte Carlo Markov Chain method. We employ statistical tools such as the Akaike
Information Criterion, Bayesian Information Criterion, and reduced chi-square
statistics to conduct a comparative investigation of these models. We determine
the transition redshift, the evolution of total equation-of-state (EoS)
parameter, and the EoS for the component responsible for current accelerated
expansion to characterize the expansion's evolution. Taking into account the
``Hubble tension," we perform the study with and without a Gaussian prior for
$H_0$ from local measurements. Our findings are as follows: (i) in many cases
the $f(R)$ models are strongly favored over the standard $\Lambda$CDM model,
(ii) the deviation parameter ($b$) significantly deviates from zero in several
cases, (iii) the inclusion of local $H_0$ not only increases the fitted value
of $H_0$ (as expected) but also affects the gap between predictions of $f(R)$
models and the $\Lambda$CDM model, and (iv) the relevant quantities
characterizing the (accelerated) expansion of the universe obtained in our
models are consistent with those obtained in a model-independent way by others.
Our investigation and results present a compelling case for pursuing further
research on $f(R)$ models with future observations to come. | Kumar Ravi, Anirban Chatterjee, Biswajit Jana, Abhijit Bandyopadhyay | 2023-06-21T21:58:28Z | http://arxiv.org/abs/2306.12585v1 | Investigating the accelerated expansion of the Universe through updated constraints on viable \(f(R)\) models within the metric formalism
###### Abstract
Modified theories of gravity encompass a class of \(f(R)\)-models that seek to elucidate the observed late time accelerated expansion of the universe. In this study, we examine a set of viable \(f(R)\) models (Hu-Sawicki: two cases, Starobinsky, Tsujikawa, exponential and arcTanh models) in metric formalism, using recent cosmological data sets: type Ia supernovae data, cosmic chronometer observations, baryonic acoustic oscillations data, data from Hii starburst galaxies, and local measurements of the Hubble parameter (\(H_{0}\)). We re-parameterize the \(f(R)\) models using a distortion/deviation parameter (\(b\)) which is a measure of their deviation from the standard \(\Lambda\)CDM model. The model parameters are constrained using a Bayesian analysis with the Monte Carlo Markov Chain method. We employ statistical tools such as the Akaike Information Criterion, Bayesian Information Criterion, and reduced chi-square statistics to conduct a comparative investigation of these models. We determine the transition redshift, the evolution of total equation-of-state (EoS) parameter, and the EoS for the component responsible for current accelerated expansion to characterize the expansion's evolution. Taking into account the "Hubble tension," we perform the study with and without a Gaussian prior for \(H_{0}\) from local measurements. Our findings are as follows: (i) in many cases the \(f(R)\) models are strongly favored over the standard \(\Lambda\)CDM model, (ii) the deviation parameter (\(b\)) significantly deviates from zero in several cases, (iii) the inclusion of local \(H_{0}\) not only increases the fitted value of \(H_{0}\) (as expected) but also affects the gap between predictions of \(f(R)\) models and the \(\Lambda\)CDM model, and (iv) the relevant quantities characterizing the (accelerated) expansion of the universe obtained in our models are consistent with those obtained in a model-independent way by others. Our investigation and results present a compelling case for pursuing further research on \(f(R)\) models with future observations to come.
## I Introduction
In the last two decades, cosmological investigations have revealed an accelerated expansion of the current universe and have also identified a transition from a decelerating phase to the current accelerated phase occurring during the late time phase of cosmic evolution. The first empirical evidence for this phenomenon came from the interpretation of luminosity distance and redshift measurements of type Ia supernovae (SNe Ia) events [1; 2; 3; 4]. Furthermore, observation of baryon acoustic oscillations [5; 6], analysis of cosmic microwave background radiation [7; 8; 9; 10], and examination of the power spectrum of matter distributions in the universe [11; 12] have substantiated evidence of this late-time cosmic acceleration. A general label describing the origin of the observed late-time cosmic acceleration is "Dark energy", which refers to a theoretical unclustered form of energy exerting a negative pressure to counteract the gravitational attraction and thereby cause the cosmic acceleration.
Despite extensive research over many years, the true nature and origin of dark energy remain an enigma. Various theoretical perspectives have emerged, each attempting to construct models that can explain the observed cosmic acceleration. The introduction of the \(\Lambda\) term into Einstein's equation, known as the "\(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) model", is the simplest model that can explain the present accelerated expansion of the universe. However, this model encounters several challenges, such as the cosmic coincidence issue [13] and the fine-tuning problem [14] when considered in the context of particle physics. This motivates the development and exploration of alternative dark energy models from a range of perspectives. An important class comprises field theoretic models that involve the incorporation of a scalar field within the energy-momentum tensor of the Einstein equation. The scalar field plays the role of generating the necessary negative pressure to drive cosmic acceleration, either through slowly varying potentials (quintessence models [15]) or by means of their kinetic energy (k-essence models [16]).
Within the context of this study, another crucial category of models under consideration is modified gravity models, which attempt to explain the acceleration through the geometry itself without modifying the energy-momentum tensor of the Einstein equation. These models primarily involve modifications to the geometric component of Einstein's equation, which may result from higher-order corrections to the Einstein-Hilbert action. By introducing suitable modifications, it becomes feasible to induce cosmic acceleration. The most straightforward types of modifications involve the extension of the Ricci scalar \(R\) to an arbitrary function \(f(R)\). The meticulous choice, with appropriate justification, of this particular arbitrary function plays a crucial role in all modified gravity models. Theoretical considerations, like the need for a ghost-free theory with stable perturbations and the presence of Noether symmetries, impose initial constraints on the forms of the arbitrary function \(f(R)\). However, the ability to reproduce the observed features of cosmic evolution, the behavior of local (solar) systems, etc., also serves as the primary tool to further constrain the \(f(R)\)-models.
In this study, our objective is to provide updated constraints on viable \(f(R)\) gravity models using the latest cosmological data, including type Ia supernovae (SNIa) from the PantheonPlus compilation, cosmic chronometer (CC) observations, baryonic acoustic oscillation (BAO) data, Hii starburst galaxy (HiiG) data, and data from local measurements of the Hubble parameter (\(H_{0}\)). Specifically, we focus on six viable \(f(R)\) models: the Hu-Sawicki model (two cases), the Starobinsky model, the exponential model, the Tsujikawa model, and the arcTanh model. By considering these models, we obtain updated best-fit values and uncertainties at different confidence levels for the associated parameters of the models. We have chosen these particular \(f(R)\) gravity models because most of the modified gravity models either fail to explain the matter-dominated era or have already been ruled out by observational data sets. The selected models, mentioned above, represent a few remaining options to study the impact of modified gravity theory within the framework of metric or Palatini formalism. For our investigation, we adopt the metric formalism techniques in the context of a homogeneous and isotropic universe. Rather than working with the original forms of these \(f(R)\) models (which for some models give a false impression that they are non-reducible to the \(\Lambda\)CDM model), we choose to re-parameterize these models in terms of what is called the "deviation/distortion parameter (\(b\))". In this form, with \(b\to 0\), an \(f(R)\) model tends to the \(\Lambda\)CDM model.
The current state of research in this specific context, which involves constraining various \(f(R)\) models within the metric formalism using cosmologically relevant data, is experiencing a high level of activity. Here we refer to several such relevant previous works. In [17], the authors obtained constraints on the Hu-Sawicki and the Starobinsky models by utilizing approximate analytical solutions derived from Taylor series expansions for the Hubble parameter. The constraints were derived using SNIa data from Union 2.1 compilation, BAO data, cosmic microwave background (CMB) shift parameter data and growth rate data. In [18], the constraints on Hu-Sawicki, Starobinsky, exponential, and Tsujikawa models were obtained by utilizing CC data, local measurements of \(H_{0}\), SNIa data from the Joint-Lightcurve-Analysis (JLA) compilation, and BAO data. Furthermore, [19] constrained the exponential model using SNIa (Union 2.1), CC, BAO, and CMB data, while also discussing the viability of this model in describing the entire cosmological history. Exploration of various \(f(R)\) models, including the Hu-Sawicki and the exponential models, with data sets such as SNIa (JLA), CC, BAO, CMB, local \(H_{0}\), and growth rate, was conducted in [20]. Other significant works that investigated Hu-Sawicki and/or exponential models, utilizing various data sets including gravitational lensing data, are [21; 22; 23]. In [24], constraints on Hu-Sawicki, Starobinsky, exponential, and Tsujikawa models were examined in both flat and non-flat spacetimes, utilizing various cosmological data sets. Recently, [25] have advocated for the use of quasar X-ray and UV fluxes data to investigate \(f(R)\) models and have constrained Hu-Sawicki and exponential models accordingly. Moreover, the Taylor series approach from [17] was extended to include the arcTanh model in [26]. Using Gaussian process reconstruction of the Hubble diagram with CC and HiiG data, the parameters of these three \(f(R)\) models were determined.
This paper is organised as follows. In Section II we derive the modified Friedmann equations and other relevant equations from the action for \(f(R)\) gravity. Section III covers discussions of the conditions that any viable \(f(R)\) model must satisfy, along with brief introductions of the specific \(f(R)\) models investigated in this work. In Section IV, we introduce the cosmological data sets and the corresponding equations that establish the connection between theory and data. Additionally, we discuss the statistical procedures employed to obtain constraints on the model parameters. The obtained constraints on the model parameters for all the models are presented in Section V. The performance of the different models is assessed using statistical tools in Section VI. Section VII is dedicated to deriving the relevant quantities that characterize the expansion history of the universe based on the model constraints. In Section VIII we provide the concluding remarks for this work. Due to the need for careful considerations in solving the modified Friedmann equations, such as avoiding numerical instabilities and minimizing computation time, Appendix A is included to discuss the method used for numerically solving the Friedmann equations. The data for baryonic acoustic oscillations and cosmic chronometers, collected from different sources, are tabulated in the Appendix in Tables 3 and 4, respectively.
Unless otherwise mentioned we have set \(c=1\) (where \(c\) denotes the speed of light in vacuum), and the value of the Hubble parameter is expressed in the unit \(\mathrm{km\,s^{-1}Mpc^{-1}}\).
## II \(f(R)\) cosmology in metric formalism
The \(f(R)\) theories of gravity involve a generalisation of the Lagrangian by making it an arbitrary function of the Ricci scalar \(R\), where \(f(R)=R\) corresponds to the standard Einstein theory of gravity. Algebraic expressions for \(f(R)\), different from \(f(R)=R\), define different \(f(R)\)-models. The generalized Einstein-Hilbert action for \(f(R)\) theories is
\[S=\frac{1}{2\kappa}\int\,d^{4}x\,\sqrt{-g}\,f(R)+S_{\rm m}+S_{\rm r}\,, \tag{1}\]
where \(\kappa=8\pi G\), \(g\) is the determinant of the metric tensor, \(S_{\rm m}\) and \(S_{\rm r}\) are the actions for matter fields and radiation fields, respectively. The variation of this action with respect to the metric gives the corresponding field equation in metric formalism as
\[FR_{\mu\nu}-\frac{1}{2}fg_{\mu\nu}-\left(\nabla_{\mu}\nabla_{\nu}-g_{\mu\nu} \Box\right)F=\kappa T_{\mu\nu}\,, \tag{2}\]
where \(F=\partial f/\partial R\), \(\Box\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\) is the covariant d'Alembertian and \(T_{\mu\nu}\) is the energy-momentum tensor of matter and radiation. This field equation can also be expressed as
\[FG_{\mu\nu}-F^{\prime}\nabla_{\mu}\nabla_{\nu}R-F^{\prime\prime }(\nabla_{\mu}R)(\nabla_{\nu}R)\] \[+g_{\mu\nu}\left[\frac{1}{2}(RF-f)+F^{\prime}\Box R+F^{\prime \prime}(\nabla R)^{2}\right]=\kappa T_{\mu\nu}\,, \tag{3}\]
where \(G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\) is Einstein tensor and prime(\({}^{\prime}\)) denotes derivative with respect to \(R\). Taking trace of both sides of Eq. 2 we obtain
\[\Box R=\frac{1}{3F^{\prime}}\left[\kappa T-3F^{\prime\prime}(\nabla R)^{2}+2f -RF\right]\,, \tag{4}\]
using which in Eq. 3 we rewrite the field equation as
\[G_{\mu\nu} = \frac{1}{F}\bigg{[}F^{\prime}\,\nabla_{\mu}\nabla_{\nu}R+F^{ \prime\prime}(\nabla_{\mu}R)(\nabla_{\nu}R) \tag{5}\] \[-\frac{g_{\mu\nu}}{6}\left(RF+f+2\kappa T\right)+\kappa T_{\mu \nu}\bigg{]}\,,\]
where \(T\equiv T_{\mu}^{\mu}\) is the trace of energy-momentum tensor.
In this work we consider the universe to be isotropic and homogeneous at large scale and described by spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric
\[ds^{2}=-dt^{2}+a(t)^{2}\left[dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta\,d\phi^{ 2})\right]\,, \tag{6}\]
where \(a\) is the FLRW scale factor. The Hubble parameter(\(H\)) is defined as \(H\equiv\dot{a}/a\) where we use a generic notation \(\dot{X}\) to denote derivative of any quantity \(X\) with respect to cosmological time (\(t\)). In this formalism, for flat FRLW geometry, the Ricci scalar relates to the Hubble parameter by the relation
\[R=6\left(2H^{2}+\dot{H}\right)\,. \tag{7}\]
We consider the content of the universe to be a perfect fluid comprising two components: radiation and matter in the form of pressureless dust (non-relativistic). For the epochs (\(0\leq z<10^{4}\)) when any interaction between matter and radiation can be ignored, the energy-momentum tensor for the fluid can be written as \(T_{\nu}^{\mu}={\rm diag}(-\rho,p,p,p)\) with \(\rho=\rho_{\rm m}+\rho_{\rm r}\) and \(p=p_{\rm r}\) (with radiation pressure \(p_{\rm r}=\rho_{\rm r}/3\)), where the subscripts 'm' and 'r' stand for matter and radiation, respectively. Each of the non-interacting components separately follows the continuity equations
\[\dot{\rho}_{\rm m}+3H\rho_{\rm m}=0,\quad\dot{\rho}_{\rm r}+4H\rho_{\rm r}=0\,. \tag{8}\]
The solution to these conservation equations are \(\rho_{\rm m}=\rho_{\rm m0}/a^{3}\), \(\rho_{\rm r}=\rho_{\rm r0}/a^{4}\) where subscript '0' denotes values at present epoch.
In the context of the FLRW universe filled with an ideal perfect fluid characterised by \(\rho\) and \(p\), Eq. 4 reduces to
\[F^{\prime}\ddot{R}+F^{\prime\prime}\dot{R}^{2}=-\left[\frac{\kappa(3p-\rho)}{3 }+\frac{2f-RF}{3}\right]-3HF^{\prime}\dot{R}\,. \tag{9}\]
using which the '00' and '\(ii\)' components of the field Eq. 5 take the following respective forms:
\[-3H^{2} = -\frac{1}{F}\bigg{[}\kappa\rho+\frac{RF-f}{2}-3HF^{\prime}\dot{R }\bigg{]}\,, \tag{10}\] \[-2\dot{H} = \frac{1}{F}\bigg{[}\kappa(\rho+p)+F^{\prime\prime}\dot{R}^{2}+( \ddot{R}-H\dot{R})F^{\prime}\bigg{]}\,. \tag{11}\]
The Eqs. 10 and 11 are modified form of the Friedmann equations for \(f(R)\)-models. The temporal profile of the Hubble parameter \(H\) is commonly expressed in the form \(H(z)\), \(z\) being the redshift related to the FLRW scale factor by \(1+z=1/a\) (where, \(a\) is normalised to unity at the present epoch). We require this profile of \(H(z)\) from the cosmological data sets for obtaining observational constraints on \(f(R)\)-models. For this purpose one may solve either the system of: (i) Eqs. 9, 10 and 11 or, (ii) Eqs. 7, 10 and 11. We take the path (ii) to solve the system. Eq. 10 serves as a constraint equation which fixes the initial conditions and must be satisfied at every integration step during the process of finding solutions. Finding analytical solutions is almost impossible, and therefore numerical methods are employed. However, solving this system of ordinary differential equations (ODEs) using a naive approach often leads to numerical instability. Additional details on how to solve this system of ODEs are discussed in Appendix A.
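To make the structure of the system concrete, the sketch below illustrates one possible reduction: Eq. 7 supplies \(\dot{H}\) from \(R\) and \(H\), Eq. 10 is solved algebraically for \(\dot{R}\), and the pair \((H,R)\) is integrated in redshift with an implicit solver. It assumes the Hu-Sawicki form with \(n_{\rm HS}=1\) (Eq. 27), \(\Lambda\)CDM-like initial conditions at a moderate redshift, and purely illustrative parameter values; as the text warns, this naive treatment remains numerically fragile near the \(\Lambda\)CDM limit (where \(F^{\prime}\to 0\)), which is precisely what the stabilized scheme of Appendix A addresses.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only (not fitted results); H0 in km/s/Mpc.
H0, Om0, Or0, b, z_ini = 70.0, 0.30, 8.5e-5, 0.5, 3.0
Lam = 3.0 * H0**2 * (1.0 - Om0 - Or0)   # Lambda, in units where kappa*rho ~ H^2

def f(R):   # Hu-Sawicki model with n_HS = 1 in the b-parameterization, Eq. (27)
    return R - 2.0 * Lam * (1.0 - 1.0 / (1.0 + R / (b * Lam)))

def F(R):   # df/dR
    return 1.0 - (2.0 / b) / (1.0 + R / (b * Lam))**2

def Fp(R):  # d^2f/dR^2
    return (4.0 / (b**2 * Lam)) / (1.0 + R / (b * Lam))**3

def rhs(z, y):
    H, R = y
    krho = 3.0 * H0**2 * (Om0 * (1.0 + z)**3 + Or0 * (1.0 + z)**4)  # kappa*rho
    Hdot = R / 6.0 - 2.0 * H**2                                     # Eq. (7)
    Rdot = (krho + (R * F(R) - f(R)) / 2.0
            - 3.0 * H**2 * F(R)) / (3.0 * H * Fp(R))                # Eq. (10)
    dz_dt = -(1.0 + z) * H                                          # d/dt -> d/dz
    return [Hdot / dz_dt, Rdot / dz_dt]

# LambdaCDM-like initial conditions, where f(R) ~ R - 2*Lambda (Eq. 20)
H_i = H0 * np.sqrt(Om0 * (1 + z_ini)**3 + Or0 * (1 + z_ini)**4 + 1 - Om0 - Or0)
R_i = 3.0 * H0**2 * Om0 * (1 + z_ini)**3 + 4.0 * Lam

sol = solve_ivp(rhs, (z_ini, 0.0), [H_i, R_i], method="Radau",
                rtol=1e-8, atol=1e-8, dense_output=True)
print("H(z=0) =", sol.y[0, -1], "km/s/Mpc")
```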
If we compare the modified Friedmann Eqs. 10 and 11 to the usual Friedmann equations with a dark energy component characterised by energy density \(\rho_{\rm DE}\) and pressure \(p_{\rm DE}\), _i.e._ with the equations \(3H^{2}=\kappa\left(\rho_{\rm m}+\rho_{\rm r}+\rho_{\rm DE}\right)\) and \(-2\dot{H}=\kappa\left(\rho_{\rm m}+\rho_{\rm r}+\rho_{\rm DE}+p_{\rm m}+p_{\rm r }+p_{\rm DE}\right)\), we can deduce the "effective (geometric) dark energy" with density and
pressure corresponding to \(f(R)\)-theory as
\[\rho_{\rm DE}=\frac{1}{\kappa}\left[\frac{RF-f}{2}-3HF^{\prime}\dot{R}+3(1-F)H^{2 }\right]\,, \tag{12}\]
and,
\[\begin{split} p_{\rm DE}&=\frac{1}{\kappa}\bigg{[} \frac{f-RF}{2}+F^{\prime}\ddot{R}+2HF^{\prime}\dot{R}+F^{\prime\prime}\dot{R}^ {2}\\ &\qquad-(1-F)(2\dot{H}+3H^{2})\bigg{]}\,,\end{split} \tag{13}\]
with the equation-of-state parameter for this effective dark energy defined as
\[w_{\rm DE}=\frac{p_{\rm DE}}{\rho_{DE}}\,. \tag{14}\]
Using Eqs. 10 and 11, we may recast Eq. 14 into a more computationally advantageous form (which we use later) as
\[w_{\rm DE}=\frac{w_{\rm tot}-\kappa p_{\rm r}/(3H^{2})}{1-\kappa\left(\rho_{ \rm m}+\rho_{\rm r}\right)/(3H^{2})}\,, \tag{15}\]
where
\[w_{\rm tot}=-1+\frac{2(1+z)}{3H}\frac{dH}{dz}\,, \tag{16}\]
and we have taken \(p_{\rm m}=0\) for the pressureless matter.
The relevant quantities obtained from observations, depending on cosmological models or even through cosmography (a model-independent kinematical approach), indicate that the Universe has recently undergone a transition from a phase of decelerated expansion to accelerated expansion. These quantities include \(w_{\rm tot}|_{z=0}\sim-0.7\), \(w_{\rm DE}|_{z=0}\sim-1\), and a transition redshift (\(z_{\rm t}\)) \(\sim 0.5-1\). The transition redshift signifies the redshift at which the transition from decelerated to accelerated expansion occurred and is determined by the zero-crossing of the deceleration parameter (\(q(z)\)) given by
\[q(z)\equiv-\frac{\ddot{a}}{aH^{2}}=-1+\frac{(1+z)}{H}\frac{dH}{dz}\,. \tag{17}\]
In the \(\Lambda\)CDM model, the value of \(w_{\rm DE}\) is fixed at \(-1\) for all redshifts due to the presence of the cosmological constant (\(\Lambda\)) term. However, in \(f(R)\) models, the source of \(w_{\rm DE}|_{z=0}\sim-1\) arises from the underlying geometry itself. Unlike the \(\Lambda\)CDM model, there is no need to invoke the existence of "dark energy" in \(f(R)\) theories of gravity.
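Given a tabulated solution \(H(z)\) (for instance from the integration sketch above), \(w_{\rm tot}\), \(q(z)\) and the transition redshift follow directly from Eqs. 16 and 17; the helper below is a simple finite-difference illustration.

```python
import numpy as np

def expansion_diagnostics(z, H):
    """w_tot(z) (Eq. 16), q(z) (Eq. 17) and the transition redshift z_t
    located at the zero-crossing of q, from a tabulated H(z)."""
    dHdz = np.gradient(H, z)
    w_tot = -1.0 + (2.0 * (1.0 + z) / (3.0 * H)) * dHdz
    q = -1.0 + ((1.0 + z) / H) * dHdz
    z_t = None
    idx = np.where(np.diff(np.sign(q)) != 0)[0]   # first sign change of q
    if idx.size:
        i = idx[0]                                # linear interpolation for z_t
        z_t = z[i] - q[i] * (z[i + 1] - z[i]) / (q[i + 1] - q[i])
    return w_tot, q, z_t
```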
## III The specific \(f(R)\) models and their viability conditions
In this section we provide a brief introduction to the models examined in this study, both in their originally proposed forms and their subsequent transformations into more generalized representations. These transformations aim to highlight how these models can be more readily reduced to the \(\Lambda\)CDM model under appropriate conditions. However, before going into the details of each model, it is essential to discuss the viability conditions.
In the metric formalism (unlike Palatini formalism), any viable \(f(R)\) cosmological model must satisfy the following set of stringent theoretical conditions (see [27; 28] for detailed discussions):
\[F>0\,,\quad{\rm for}\;\;R\geq R_{0}>0\,, \tag{18}\] \[F^{\prime}>0\,,\quad{\rm for}\;\;R\geq R_{0}>0\,,\] (19) \[f(R)\approx R-2\Lambda\,,\quad{\rm for}\;\;R\gg R_{0}\,,\] (20) \[{\rm and,}\quad\;0<\frac{RF^{\prime}}{F}(r)<1\quad{\rm at}\;\;r=- \frac{RF}{f}=-2\,, \tag{21}\]
where \(R_{0}\) denotes the Ricci scalar at the present epoch.
These conditions arise from various considerations. Firstly, the effective gravitational constant (\(G_{\rm eff}=G/F\)) must be positive, ensuring the avoidance of anti-gravity (as expressed in Eq. 18). Besides, any acceptable \(f(R)\)-model should exhibit stability under perturbations and avoid the instability of Dolgov-Kawasaki type (Eq. 19). Similar to the \(\Lambda\)CDM model, a viable \(f(R)\) model must also be consistent with local gravity tests (Eqs. 19 and 20). Furthermore, the existence of a matter-dominated epoch in cosmological dynamics necessitates that an \(f(R)\) model satisfies the conditions given by Eqs. 19 and 20. Lastly, Eq. 21 ensures the stability of the late-time de Sitter solution, from which the late-time accelerated expansion of the Universe is usually inferred.
The viability conditions mentioned above can be more easily assessed for an \(f(R)\)-model if we can somehow transform that model into the following form:
\[f(R)=R-2\Lambda y(R,b,\Lambda)\,, \tag{22}\]
where the function \(y(R,b,\Lambda)\) serves to measure the deviation from the \(\Lambda\)CDM model, with the parameter \(b\) (referred to as the "distortion/deviation parameter") specifically quantifying the extent of the deviation [17]. For this reason, all the models examined in this study are recast into the form of Eq. 22, allowing for a clearer evaluation of their compliance with the viability conditions.
More specifically, the viability conditions in Eqs. 19 and 20 can be alternatively written as \(\lim\limits_{R\to\infty}f(R)=R+C_{0}\), where \(C_{0}\) represents a constant value. In order for a candidate \(f(R)\)-model to exhibit asymptotic behavior similar to the standard \(\Lambda\)CDM model (which is supported by observations of the cosmic microwave background), we can identify that \(C_{0}=-2\Lambda\), where \(\Lambda\) corresponds to the cosmological constant introduced by Einstein and Hilbert in their action. This amounts to writing symbolically \(\Lambda^{\Lambda}=\Lambda^{f(R)}\) where the superscripts
\(\Lambda\) and \(f(R)\) denote quantities in reference to the \(\Lambda\)CDM model and any viable \(f(R)\,\)- model, respectively. This relation may also be written as
\[\Omega^{\Lambda}_{\Lambda,0}(H^{\Lambda}_{0})^{2}=\Omega^{f(R)}_{ \Lambda,0}(H^{f(R)}_{0})^{2}\,, \tag{23}\]
where subscript '0' denotes values at present time. Furthermore, considering the definition of the energy-momentum tensor and the resulting conservation equation(Eq. 8), we can infer that both the \(\Lambda\)CDM model and any viable \(f(R)\)-model yield identical matter density and radiation density at the current epoch _i.e._
\[\Omega^{\Lambda}_{i,0}(H^{\Lambda}_{0})^{2}=\Omega^{f(R)}_{i,0}(H^{f(R)}_{0})^ {2}=\frac{8\pi G}{3}\rho_{i,0}\,, \tag{24}\]
where the subscript \(i=\) (m, r) stands for (matter, radiation). Also any viable \(f(R)\)-model is expected to exhibit deviations from the standard \(\Lambda\)CDM model predominantly during the late times, in order that the explanation for accelerated expansion at present epoch comes from vanishing \(\Lambda\). In general terms, this implies
\[\Omega^{\Lambda}_{i,0}\neq\Omega^{f(R)}_{i,0}\,,\quad H^{\Lambda}_{0}\neq H^{ f(R)}_{0}\,. \tag{25}\]
Based on Eqs. 23-25 and the initial condition requirements for solving the ODE system described in Eqs. 7, 10, and 11 (see Appendix A), we utilize parameters \((\Omega^{\Lambda}_{\rm m0},\,b,\,H^{\Lambda}_{0})\) for model-fitting to the data. However, while reporting our findings, we express the results in terms of the parameters (\(\Omega^{f(R)}_{\rm m0}\), \(b\), \(H^{f(R)}_{0}\)), which are determined using Eq. 24.
### The Hu-Sawicki Model
The Hu-Sawicki model, initially proposed in [29], is described by the following equation:
\[f(R)_{\rm HS}=R-\mu^{2}\frac{c_{1}(R/\mu^{2})^{n_{\rm HS}}}{1+c_{2}(R/\mu^{2})^{n_{\rm HS}}}\,, \tag{26}\]
where \(c_{1}\) and \(c_{2}\) are dimensionless parameters, \(n_{\rm HS}\) is a positive constant typically assumed to be an integer, and \(\mu^{2}\approx\Omega_{\rm m0}H^{2}_{0}\). By defining \(\mu^{2}c_{1}/2c_{2}\equiv\Lambda\) and \(2\left(c_{2}^{1-1/n_{\rm HS}}\right)/c_{1}\equiv b\), we can express Eq. 26 as [17]:
\[f(R)_{\rm HS}=R-2\Lambda\left[1-\left\{1+\left(\frac{R}{b\Lambda} \right)^{n_{\rm HS}}\right\}^{-1}\right]\,. \tag{27}\]
We identify the parameter \(\Lambda\) as the usual cosmological constant, and \(b\) as the deviation parameter, which indicates the model's deviation from the \(\Lambda\)CDM model. In order to satisfy the viability conditions \(F>0\) and \(F^{\prime}>0\) for \(R\geq R_{0}\), it is necessary to consider \(b>0\) (especially when \(n_{\rm HS}\) is an odd integer). Although many researchers have constrained the scenarios where \(n_{\rm HS}=1\) and/or 2 [17; 18; 21; 22; 24; 25; 26], they have acknowledged computational challenges as a hindrance to explore the case of \(n_{\rm HS}=3\) (further elaboration on this point is provided in Sec. V). In this investigation, we have also imposed constraints on the case \(n_{\rm HS}=3\).
### The Starobinsky Model
The model proposed by Starobinsky [30] is
\[f(R)_{\rm ST}=R-\lambda R_{\rm S}\left[1-\left(1+\frac{R^{2}}{R_ {\rm S}^{2}}\right)^{-n_{\rm S}}\right]\,, \tag{28}\]
where \(n_{\rm S}\) is a positive constant, \(\lambda(>0)\) and \(R_{\rm S}\approx R_{0}\) are free parameters (where \(R_{0}\) denotes the Ricci scalar at present epoch). This model too can be reformulated in a more general form as [17]
\[f(R)_{\rm ST}=R-2\Lambda\left[1-\left\{1+\left(\frac{R}{b\Lambda} \right)^{2}\right\}^{-n_{\rm S}}\right] \tag{29}\]
with \(\Lambda=\lambda R_{\rm S}/2\) and \(b=2/\lambda\). In this study, we have imposed constraints on the cases where \(n_{\rm S}=1\) and the reason for not exploring higher values of \(n_{\rm S}\) is explained later in Sec. V. Note that the Hu-Sawicki model (Eq. 27) with \(n_{\rm HS}=2\) and the Starobinsky model (Eq. 29) with \(n_{\rm S}=1\) are equivalent. Unlike the Hu-Sawicki model (with \(n_{\rm HS}=1\)), the viability condition for the Starobinsky model does not require \(b>0\). Based on the algebraic form of the Starobinsky model (Eq. 29), we can infer that regardless of the data used, the parameter \(b\) must exhibit a symmetric distribution (centered around \(b=0\)) from the MCMC fitting procedure. Since our interest lies in investigating deviations from the \(\Lambda\)CDM model supported by the data, we considered \(b>0\) without loss of generality.
### The Exponential Model
The exponential model, initially proposed as a viable \(f(R)\) model in [31], has been further investigated in [19; 25; 32; 33] (also see references therein), is given by
\[f(R)_{\rm E}=R+\alpha\left[\exp(-\beta R)-1\right]\,, \tag{30}\]
where \(\alpha\) and \(\beta\) are the parameters of this model. For large \(R\), an acceptable \(f(R)\)-model must approximately resemble the \(\Lambda\)CDM, which is achievable only when \(\alpha>0\) and \(\beta>0\). By substituting \(\Lambda=\alpha/2\) and \(b=2/(\alpha\beta)\) the exponential model can be expressed as
\[f(R)_{\rm E}=R-2\Lambda\left[1-\exp\left(-\frac{R}{b\Lambda} \right)\right]\,, \tag{31}\]
from where it becomes evident that when \(R\) becomes significantly larger than \(b\Lambda\) (\(R\gg b\Lambda\)), the function \(f(R)_{\rm E}\) approaches \(R-2\Lambda\).
### The Tsujikawa Model
Tsujikawa proposed an alternative model [34] as
\[f(R)_{\rm T}=R-\xi R_{\rm T}\tanh\left(\frac{R}{R_{\rm T}}\right)\,, \tag{32}\]
where \(\xi(>0)\) and \(R_{\rm T}(>0)\) are the model parameters. With \(\Lambda=\xi R_{\rm T}/2\) and \(b=2/\xi\), the model can be rewritten as
\[f(R)_{\rm T}=R-2\Lambda\tanh\left(\frac{R}{b\Lambda}\right)\,. \tag{33}\]
We see clearly that when the parameter \(b\to 0\) (which corresponds to \(\xi\to\infty\), \(R_{\rm T}\to 0\), while \(\xi R_{\rm T}\) remains finite), the model reduces to \(f(R)_{\rm T}=R-2\Lambda\).
### The ArcTanh Model
In this study we also examined a model proposed in [20] as
\[f(R)_{\rm aTanh}=R-\frac{2\Lambda}{1+b\,{\rm arctanh}\left(\frac{\Lambda}{R} \right)}\,, \tag{34}\]
where the parameter \(b\) is required to be positive, in order to prevent any occurrence of future singularities.
## IV Observed Cosmological Data
In this section we present a concise overview of the cosmological data sets utilized in this study. For any given \(f(R)\) model, the system of Eqs. 7, 10 and 11 is solved, primarily yielding the function \(H(z)\). So, it becomes necessary to outline the theoretical equations connecting \(H(z)\) with various observed quantities. Additionally, at the end of this section, a brief introduction is provided on the statistical techniques employed to extract model parameters from the data.
### Type Ia Supernova Data
Type Ia Supernovae (SNIa), known as standard candles [35], have significantly contributed to our comprehension of cosmology. The SNIa observations [2; 4] played a pivotal role in the discovery of the accelerated expansion of the present-day universe. In this study we used the apparent magnitude data for SNIa obtained from the recently released PantheonPlus compilation [36]. This compilation comprises 1701 distinct light curves of 1550 unique spectroscopically confirmed SNIa, sourced from 18 surveys. This compilation provides SNIa data within the range \(0.00122<z_{\rm HD}<2.26137\), where \(z_{\rm HD}\) denotes the Hubble diagram redshift. It offers a considerably larger number of low-redshift data compared to the previous Pantheon compilation [37]. In our current work, we employ the apparent magnitude at maximum brightness (\(m_{\rm b}\)), heliocentric redshift (\(z_{\rm hel}\)), cosmic microwave background corrected redshift (\(z_{\rm cmb}\)), and the total (statistical + systematic) covariance matrix from this compilation [36].
The theoretical definition of the apparent magnitude involves the luminosity distance, which is given by the integral expression:
\[d_{\rm L}(z) = c(1+z_{\rm hel})\int_{0}^{z_{\rm cmb}}\frac{dz^{\prime}}{H(z^{ \prime})}\,, \tag{35}\]
where the function \(H(z)\) is obtained by solving the system of Eqs. 7, 10 and 11. The apparent magnitude is defined as
\[m_{\rm th}=M+5\log_{10}\left(\frac{c/H_{0}}{\rm Mpc}\right)+5\log_{10}\left( D_{\rm L}(z)\right)+25\,, \tag{36}\]
where \(M\) represents the absolute magnitude, and \(D_{\rm L}(z)\equiv H_{0}d_{\rm L}(z)/c\) is the dimensionless Hubble-free luminosity distance. The computation of apparent magnitude involves a degeneracy between the absolute magnitude (\(M\)) and the Hubble parameter at the current epoch (\(H_{0}\)), as evident from the Eq. 36. Therefore, after marginalizing over these nuisance parameters (\(M\) and \(H_{0}\)), the appropriate residual that needs to be minimized for model fitting is given by [38]
\[\tilde{\chi}_{\rm sn}^{2}=A-\frac{B^{2}}{D}+\log\frac{D}{2\pi}\,, \tag{37}\]
where \(A=(m_{\rm b}-m_{\rm th})^{T}C^{-1}(m_{\rm b}-m_{\rm th})\), \(B=(m_{\rm b}-m_{\rm th})^{T}C^{-1}\mathbf{1}\), and \(D=\mathbf{1}^{T}C^{-1}\mathbf{1}\) and \(C\) represents the total covariance matrix of the data (provided in the Pantheon+ compilation). \(\mathbf{1}\) is an array of ones of length equal to the number of data points.
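A direct transcription of Eq. 37 into code is straightforward; the sketch below assumes the apparent magnitudes and the inverse of the total covariance matrix are available as NumPy arrays.

```python
import numpy as np

def chi2_sn_marginalized(m_b, m_th, Cinv):
    """Eq. (37): SNIa residual with the nuisance parameters M and H0
    analytically marginalized; Cinv is the inverse total covariance."""
    d = m_b - m_th
    one = np.ones_like(d)
    A = d @ Cinv @ d
    B = d @ Cinv @ one
    D = one @ Cinv @ one
    return A - B**2 / D + np.log(D / (2.0 * np.pi))
```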
### Cosmic Chronometers Data
By examining the differential age evolution [39; 40; 41] of old elliptical galaxies, where star formation and interactions with other galaxies have ceased, previous studies have provided 32 data points for \(H(z)\) in the redshift range of 0.07-1.965 [41; 42; 43; 44; 45; 46; 47; 48] (compiled in Table 4 of the Appendix). This so-called differential age method, employed in these studies, uses the relation
\[H(z)=-\frac{1}{1+z}\frac{dz}{dt}\simeq-\frac{1}{1+z}\frac{\Delta z}{\Delta t}\,, \tag{38}\]
to obtain \(H(z)\). The parameters of any model are estimated by minimizing the following residual:
\[\chi_{{}_{\rm CC}}^{2}=\sum_{i=1}^{32}\frac{\left(H_{\rm obs}(z_{i})-H_{\rm th }(z_{i})\right)^{2}}{\sigma_{H,i}^{2}}\,, \tag{39}\]
where \(H_{\rm obs}(z_{i})\)'s are the observed values of the Hubble parameter function at redshift \(z_{i}\), while \(\sigma_{H,i}^{2}\) denotes the corresponding uncertainties associated with the measurements of \(H_{\rm obs}(z_{i})\). The theoretical Hubble function at redshift \(z_{i}\), which is model-dependent and obtained from the solutions of Eqs. 7, 10, and 11, is denoted by \(H_{\rm th}(z_{i})\) in the above Eq. 39.
### BAO Data
In the Big Bang Model of the universe, prior to the decoupling of matter and radiation components, the contents of the Universe were evenly distributed, albeit with small fluctuations. Photons and baryons were strongly coupled through Thomson scattering. As the universe expanded and cooled, resulting in a decrease in temperature and density, the fluctuations were amplified by gravity. The gravitational pull caused the tightly bound photon-baryon mixture to condense in regions with higher densities, resulting in compressions and rarefactions in the form of acoustic waves known as Baryonic Acoustic Oscillations (BAO). Matter and radiation were then decoupled, and this epoch, which is marked by the release of baryons from the Compton drag of photons, is known as the drag epoch (\(z_{\rm d}\)); after it the photons travelled freely, whereas the acoustic waves remained frozen in the baryons. The length scale characterizing the maximum distance traveled by the acoustic waves before decoupling is known as the sound horizon at the epoch of drag (\(r_{\rm d}\)). BAO, therefore, holds the status of a standard ruler for length scales in Cosmology [49; 50].
We have compiled a collection of 30 data points representing various BAO observables from a range of surveys, as documented in the literature [51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70]. These data points, which are used in our current study, are listed in Table 3 (in the Appendix). In our calculations for the drag epoch \(z_{d}\) and the sound horizon at the epoch of drag \(r_{d}\), we employ improved fits from [71]. The BAO observables include the Hubble distance, which is defined as \(D_{\rm H}=c/H(z)\), the transverse comoving distance (\(D_{\rm M}(z)\)), the angular diameter distance (\(D_{\rm A}(z)\)), and the volume-averaged distance (\(D_{\rm V}(z)\)), which are defined as
\[D_{A}(z)=\frac{c}{1+z}\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}\,, \tag{40}\]
\[D_{M}(z)=(1+z)D_{A}(z)\,, \tag{41}\]
and,
\[D_{V}(z)=\left[(1+z)^{2}D_{A}^{2}(z)\frac{z}{H(z)}\right]^{1/3}\,. \tag{42}\]
Once again \(H(z)\) in the above relations is obtained by solving Eqs. 7, 10, and 11.
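Numerically, all of these distance measures reduce to one cumulative integral of \(c/H(z)\); a minimal sketch (with \(c\) restored in km/s, for a flat universe and \(z\) tabulated in ascending order from 0) could read:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

C_KMS = 299792.458  # speed of light in km/s

def bao_distances(z, H):
    """Eqs. (40)-(42) plus the Hubble distance D_H = c/H(z), from a
    tabulated H(z) in km/s/Mpc; distances are returned in Mpc."""
    D_M = cumulative_trapezoid(C_KMS / H, z, initial=0.0)  # flat: D_M = comoving
    D_A = D_M / (1.0 + z)                                  # Eqs. (40)/(41)
    D_H = C_KMS / H
    D_V = ((1.0 + z)**2 * D_A**2 * C_KMS * z / H)**(1.0 / 3.0)   # Eq. (42)
    return D_H, D_M, D_A, D_V
```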
For fitting any model to the uncorrelated data points of the BAO data, the following residual has been used
\[\chi^{2}_{\rm BAO-UC}=\sum_{i=1}^{20}\left[\frac{A_{\rm th}(z_{i})-A_{\rm obs} (z_{i})}{\sigma_{i}}\right]^{2}\,, \tag{43}\]
where \(A_{\rm obs}(z_{i})\) and \(\sigma_{i}\) respectively denote the observed values and their uncertainties at redshift \(z_{i}\), and \(A_{\rm th}\) denotes the theoretical prediction from the model under consideration. These quantities are given in columns 2-4 of Table 3. For correlated data points the appropriate residual to be minimized is
\[\chi^{2}_{\rm BAO-C}=\sum_{j=1}^{4}\left[\left({\bf A}_{\rm th}-{\bf A}_{\rm obs }\right)_{j}^{T}{\bf C}_{j}^{-1}\left({\bf A}_{\rm th}-{\bf A}_{\rm obs} \right)_{j}\right]\,, \tag{44}\]
where \({\bf C}_{j}\)'s are the covariance matrices of the 4 different data-sets and \(\left({\bf A}_{\rm th}-{\bf A}_{\rm obs}\right)_{j}\) denotes an array of differences between theoretical and observed values for each of the 4 data-sets. These data and the covariance matrices can be found in the corresponding source papers referred to in the last column of Table 3. For the whole BAO data-set, the \(\chi^{2}\) to be minimized is
\[\chi^{2}_{\rm BAO}=\chi^{2}_{\rm BAO-UC}+\chi^{2}_{\rm BAO-C}\,. \tag{45}\]
### H ii starburst Galaxies Data
Hii galaxies (HiiG) are massive and compact starburst structures surrounded by ionized hydrogen gas. Their optical emission spectra exhibit strong and narrow Balmer H\(\alpha\) and H\(\beta\) lines, along with a weak continuum. The cosmological significance of HiiG observation comes from the empirically established correlation between the luminosity (\(L\)) of the H\(\beta\) lines and the velocity dispersion (\(\sigma\), which is a measure of the width of the spectral lines). This correlation is attributed to the fact that an increase in the mass of the starburst component leads to a simultaneous increase in the number of ionized photons (and thus the luminosity \(L\)) and the turbulent velocity (hence, the velocity dispersion \(\sigma\)) (see [72] and references therein).
In this study, we have used a total of 181 data points from [72; 73; 74; 75; 76] corresponding to the emission of the Balmer H\(\beta\) line. These data points span a redshift range of \(0.0088<z<2.5449\). It is important to note that the redshift coverage provided by the latest SNIa data is as follows: we have 8 data points for \(z_{\rm SN}>1.4\), while for \(z_{\rm HiiG}>1.4\) we have 69 data points, and for \(z_{\rm HiiG}>z_{\rm SN,\,max}\) we have 11 observations of HiiG. Consequently, the observations of HiiG not only explore hitherto unexplored higher redshift regions compared to SNIa but also provide a denser coverage of these higher redshift regions in comparison to SNIa.
The empirical relation between the luminosity \(L\) and the dispersion of velocity \(\sigma\) is given by
\[\log\left[\frac{L}{\rm erg/s}\right]=\beta\log\left[\frac{\sigma}{\rm km/s} \right]+\alpha\,, \tag{46}\]
where we take \(\beta=5.022\pm 0.058\) and \(\alpha=33.268\pm 0.083\) from [72; 76]. From the luminosity distance of HiiG,
\(d_{\rm L}=\left[L/(4\pi F)\right]^{1/2}\), we can derive the distance modulus as
\[\mu_{o}=2.5\left(\alpha+\beta\log\left[\frac{\sigma}{\rm km/s}\right]-\log \left[\frac{F}{\rm erg/s/cm^{2}}\right]-40.08\right). \tag{47}\]
Using Eq. 47, we compute the distance moduli of HiiG from the observables \(L\), \(\sigma\), and \(F\). For any given cosmological model with theoretical distance moduli \(\mu_{\theta}\) (represented as \(m_{\rm th}-M\) in Eq. 36), the parameters associated with that model can be constrained by minimising the following \(\chi^{2}\) function
\[\chi^{2}_{\rm HiiG}=\sum_{i=1}^{181}\frac{\left[\mu_{o}-\mu_{\theta}\right]^{ 2}}{\epsilon^{2}}\,, \tag{48}\]
where
\[\epsilon^{2}=\epsilon^{2}_{\mu_{o,\rm stat}}+\epsilon^{2}_{\mu_{\theta,\rm stat }}+\epsilon^{2}_{\rm sys}\,, \tag{49}\]
with
\[\epsilon^{2}_{\mu_{o,\rm stat}}=6.25\left(\epsilon^{2}_{\log F}+\beta^{2} \epsilon^{2}_{\log\sigma}+\epsilon^{2}_{\beta}(\log\sigma)^{2}+\epsilon^{2}_{ \alpha}\right)\,, \tag{50}\]
\[\epsilon^{2}_{\mu_{\theta,\rm stat}}=\left[\frac{5}{\ln 10}\left(\frac{c(1+z) }{d_{L}(z)H(z)}+\frac{1}{1+z}\right)\sigma_{z}\right]^{2}, \tag{51}\]
and \(\epsilon_{\rm sys}=0.257\) as suggested in [72; 76; 77]. The uncertainty contribution of \(\epsilon_{\mu_{\theta,\rm stat}}\) to the distance modulus is due to the uncertainties in the redshifts (\(\sigma_{z}\sim 10^{-4}\)) of HiiG and is derived from simple error propagation theory.
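Collecting Eqs. 47-50, the HiiG contribution to the likelihood can be sketched as follows, with all inputs taken to be 1D arrays over the 181 objects; the constants are those quoted above, and the function itself is illustrative.

```python
import numpy as np

ALPHA, BETA = 33.268, 5.022            # Eq. (46) coefficients from [72; 76]
EPS_ALPHA, EPS_BETA = 0.083, 0.058
EPS_SYS = 0.257

def chi2_hiig(logF, eps_logF, logsig, eps_logsig, mu_th, eps_mu_th_stat):
    """Eqs. (47)-(50): observed HiiG distance moduli, the full error
    budget, and the chi^2 against the theoretical moduli mu_th."""
    mu_obs = 2.5 * (ALPHA + BETA * logsig - logF - 40.08)          # Eq. (47)
    eps2_obs = 6.25 * (eps_logF**2 + BETA**2 * eps_logsig**2
                       + EPS_BETA**2 * logsig**2 + EPS_ALPHA**2)   # Eq. (50)
    eps2 = eps2_obs + eps_mu_th_stat**2 + EPS_SYS**2               # Eq. (49)
    return np.sum((mu_obs - mu_th)**2 / eps2)                      # Eq. (48)
```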
### Local Measurement of \(H_{0}\)
A contentious issue remains regarding the current value of the Hubble parameter \(H_{0}\). The Planck constraints provide a value of \(H_{0}=67.4\pm 0.5\)[78], while the locally measured value by the SH0ES collaboration yields \(H_{0}=73.04\pm 1.04\)[79]. The discrepancy between these two measurements, known as the Hubble tension, is significant at the \(4-5\sigma\) level. To address this tension, we have included in our study both scenarios - with and without the SH0ES prior for \(H_{0}\). The contribution of this data point to the total \(\chi^{2}\) is given by
\[\chi^{2}_{\rm SH0ES}=\frac{(H_{0}-73.04)^{2}}{1.04^{2}}\,. \tag{52}\]
### Methodology
We employed the well-known Markov Chain Monte Carlo (MCMC) analysis for parameter estimations of the models. This involves maximizing the likelihood function, given by
\[\mathcal{L}=\exp\left(-\sum_{i}\chi^{2}_{i}/2\right)\,, \tag{53}\]
where the subscript '\(i\)', depending on the combination of different data-sets used, stands for one or more from the set: {'SN', 'CC', 'BAO', 'HiiG', 'SH0ES'}. These \(\chi^{2}_{i}\)'s are defined in Eqs. 37, 39, 45, 48 and 52. To get the posterior probability distribution of the model parameters we also need to set priors on them. For the reasons mentioned in the last section, which will be discussed further in Appendix A, for all the \(f(R)\) models in this work we do data-fitting in terms of the parameters \((\Omega^{\Lambda}_{m0},b,H^{\Lambda}_{0})\). We used the uniform priors \(\Omega^{\Lambda}_{m0}\in[0,1]\) and \(H^{\Lambda}_{0}\in[50,90]\) for all the models, whereas priors for \(b\) require further consideration. From the algebraic expressions of these six \(f(R)\)-models we worked out and established a hierarchy of similarity of these models with the standard \(\Lambda\)CDM model using some "measures-of-similarity" (_e.g._, mean-square-error, correlation, dynamic time warping, _etc._). A ballpark hierarchy of similarity we find is: the most similar \(f(R)\) models are the Tsujikawa model (TSUJI) and the Hu-Sawicki model (\(n_{\rm HS}=3\)) (HS3), followed by the Starobinsky (\(n_{\rm S}=1\)) (ST1) and the exponential (EXP) models, and the least similar ones are the Hu-Sawicki model (\(n_{\rm HS}=1\)) (HS1) and the arcTanh (aTanh) models. To allow for exploring whether the data support a model different from the \(\Lambda\)CDM model, note that an \(f(R)\) model which is more similar to the \(\Lambda\)CDM model requires a bigger change in \(b\) to become noticeably different from it. Consequently we use uniform priors \(b\in[0,b_{\rm max}]\), where \(b_{\rm max}\) is 7, 7, 5, 5, 3 and 3 for the models HS3, TSUJI, ST1, EXP, HS1 and aTanh, respectively.
We developed our own PYTHON codes using the publicly available PYTHON modules: (i) for solving stiff ODEs [80], (ii) for performing the MCMC analysis -- EMCEE [81], and (iii) for plotting the posterior probability distributions of the parameters -- GetDist [82].
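The skeleton of such an analysis, with EMCEE and the walker/step counts quoted in the next section, might look as follows; `chi2_total` is a stand-in for the sum of the residuals of Eqs. 37, 39, 45, 48 and (optionally) 52, replaced here by a toy Gaussian so that the sketch runs on its own.

```python
import numpy as np
import emcee

def chi2_total(Om0, b, H0):
    # Stand-in for the full data chi^2; replace with the sum of the
    # chi^2 terms defined in Sec. IV for a real run.
    return ((Om0 - 0.3) / 0.02)**2 + (b / 1.0)**2 + ((H0 - 68.0) / 2.0)**2

def log_prob(theta):
    Om0, b, H0 = theta
    # Uniform priors of Sec. IV F (b_max is model dependent; 5 is illustrative)
    if not (0.0 < Om0 < 1.0 and 0.0 < b < 5.0 and 50.0 < H0 < 90.0):
        return -np.inf
    return -0.5 * chi2_total(Om0, b, H0)        # log of Eq. (53)

ndim, nwalkers, nsteps = 3, 25, 35000           # 25 walkers x 35000 steps
p0 = np.array([0.3, 0.5, 68.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=True)
```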
## V Results: Observational Constraints on \(f(R)\) Models
In this section we will discuss our findings regarding the parameters of the six \(f(R)\) models and the \(\Lambda\)CDM model. We have considered five combinations of data sets: SNIa+CC (SC), SNIa+BAO+CC (SBC), SNIa+CC+HiiG (SCHii), BAO+CC+HIIG (BCHii), and SNIa+BAO+CC+HIIG (SBCHii). We also separately included a SH0ES prior for the Hubble parameter \(H_{0}\) for each of these combinations. While each data set individually can constrain any model, they may not necessarily provide tight constraints on the parameters. The choice of data combinations is also guided by considerations of goodness-of-fit.
_Acronym and Color convention:_ In addition to the above-mentioned acronyms for different data sets, when including the SH0ES prior for \(H_{0}\), we append the notations as follows: SC\(H_{0}\) to denote SNIa+CC+\(H_{0}\) and SC(\(H_{0}\)) to denote SNIa+CC or SNIa+CC+\(H_{0}\), depending on the case. Similar notations are used for the other four data sets. Unless otherwise specified, the following color and data set correspondences are used in the upcoming figures: (i) SC(\(H_{0}\))--Blue, (ii) SBC(\(H_{0}\))--Red, (iii) SCHii(\(H_{0}\))--Black, (iv) BCHii(\(H_{0}\))--Orange, and (v) SBCHii(\(H_{0}\))--Green.
We generated MCMC samples of size 875,000 (with 25 walkers, each taking 35,000 steps) for each parameter in all the cases, where a case refers to a specific combination of a given model and a data set. These raw MCMC sample chains underwent tests for convergence and independence. To ensure convergence, an initial portion of the chain was discarded to obtain a "burned chain." We removed the first 125,000 steps (i.e., 5,000 initial steps of each walker) to obtain the burned chain. In order to have independent and uncorrelated samples, the burned chains needed to be properly thinned. We thinned the burned chains by a factor of 0.75 times the integrated auto-correlation time. As a result, depending on the case, we obtained convergent and independent samples of varying sizes, approximately ranging from 15,000 to 20,000. All the statistical inferences about the model parameters were then obtained from these burned and thinned subsamples.
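Continuing from the sampler of the previous sketch, the burning and thinning described here map onto EMCEE's chain utilities roughly as follows:

```python
import numpy as np

# Discard the first 5000 steps of each walker, then thin by 0.75 times the
# integrated autocorrelation time to obtain independent samples.
tau = sampler.get_autocorr_time(tol=0)
thin = max(1, int(0.75 * np.max(tau)))
flat = sampler.get_chain(discard=5000, thin=thin, flat=True)
print(flat.shape)   # the burned, thinned samples used for all inferences
```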
The median values of model parameters and 1-sigma confidence intervals on them are presented in Tables 1 and 2. Figs. 4-17 display 2D contour plots depicting
Figure 2: This figure is helpful in getting a sense of variations of the deviation parameter \(b\) from model to model and with data for a given model. Different colors represent different data-set combinations and these are same as in any of the parameter distribution plots(e.g. Fig. 6) or see 2nd paragraph of Sec.V. The blank star and blank circle markers represent median values for the cases with and without SH0ES prior for \(H_{0}\), respectively. The thick and thin horizontal colored bars represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals for the cases with SH0ES prior. The colored continuous/dashed lines represent 1-sigma(68.26%, with smaller cap size) and 2-sigma(95.44%, with bigger cap size) confidence intervals for the cases without SH0ES prior.
Figure 1: This figure is helpful in getting a sense of variations of \(\Omega_{m0}\) from model to model and with data for a given model. Different colors represent different data-set combinations and these are same as in any of the parameter distribution plots(e.g. Fig. 6) or see 2nd paragraph of Sec.V. The blank star and blank circle markers represent median values for the cases with and without SH0ES prior for \(H_{0}\), respectively. The thick and thin horizontal colored bars represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals for the cases with SH0ES prior. The colored continuous/dashed lines represent 1-sigma(68.26%, with smaller cap size) and 2-sigma(95.44%, with bigger cap size) confidence intervals for the cases without SH0ES prior.
the posterior probability distributions of the parameters, which provide an indication of potential correlations among the parameters; 1D marginalised distributions of each parameter are also shown there.
Before getting into the detailed results for individual models, we would like to highlight some overall observations regarding the model parameters in the light of Figs. 1, 2, and 3. (i) \(\Omega_{m0}\): For all models and data-set combinations, either the median values of \(\Omega_{m0}\) fall within the 1-2 sigma interval of the Planck value (\(\Omega_{m0,\text{Planck}}=0.315\pm 0.007\)[78]) or the Planck value is within the 1-2 sigma intervals of the model's median values of \(\Omega_{m0}\) (for cases with or without the SH0ES prior for \(H_{0}\)). (ii) \(b\): There is a general trend of a shift towards lower values of \(b\) when considering the SH0ES prior for \(H_{0}\) compared to the corresponding cases without the SH0ES prior for the \(\text{SBC}(H_{0})\), \(\text{BCHii}(H_{0})\), and \(\text{SBCHii}(H_{0})\) data-sets, whereas opposite trends are observed for the \(\text{SC}(H_{0})\) and \(\text{SCHii}(H_{0})\) data-sets. (iii) \(H_{0}\): Without the SH0ES prior for \(H_{0}\), for all models and data-set combinations, either the median values of \(H_{0}\) are within a 1-2 sigma interval of the Planck value (\(H_{0,\text{Planck}}=67.4\pm 0.5\)[78]) or the Planck value is within a 1-2 sigma interval of the model's median values of \(H_{0}\). However, with the SH0ES prior for \(H_{0}\), there are slight departures towards higher values of \(H_{0}\) (as expected), but they are not close to \(H_{0,\text{SH0ES}}=73.04\pm 1.04\)[79]. It is worth noting that the cases with the SH0ES prior for \(H_{0}\) generally have tighter bounds on the model's \(H_{0}\) values (this observation does not hold true for \(\Omega_{m0}\) or \(b\)). Additionally, there is less overall tension among the model-fitted values of \(H_{0}\) compared to the tension between \(H_{0,\text{Planck}}\) and \(H_{0,\text{SH0ES}}\).
### Constraints on \(\Lambda\)CDM Model
We have also included the results for the \(\Lambda\)CDM model, which is commonly used as a benchmark for assessing \(f(R)\) models. We consider a two-parameter \(\Lambda\)CDM model, with the Hubble parameter given by \(H=H_{0}\sqrt{\Omega_{m0}(1+z)^{3}+(1-\Omega_{m0})}\), where the two parameters \(\Omega_{m0}\) and \(H_{0}\) stand for the matter density and the Hubble parameter at the present epoch, respectively. The median values along with 1-sigma confidence intervals for these parameters are presented in Tables 1 and 2. The posterior probability distribution of \(\Omega_{m0}\) and \(H_{0}\), both for the respective cases - without and with the SH0ES prior for \(H_{0}\) - are depicted in Figs 4 and
Figure 4: The \(\Lambda\)CDM Model(without SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 3: Hubble Tension: Different colors represent different data-set combinations and these are same as in any of the parameter distribution plots(e.g. Fig. 6) or see 2nd paragraph of Sec.V. The blank star and blank circle markers represent median values for the cases with and without SH0ES prior for \(H_{0}\), respectively. The thick and thin horizontal colored bars represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals for the cases with SH0ES prior. The colored continuous/dashed lines represent 1-sigma(68.26%, with smaller cap size) and 2-sigma(95.44%, with bigger cap size) confidence intervals for the cases without SH0ES prior.
5. For all data sets (except SC), the values of \(\Omega_{m0}\) are compatible with \(\Omega_{m0,\rm Planck}\) within 1-2(3) sigma. There is a tension of approximately 1.5-3 sigma between the values of \(H_{0}\) when comparing the cases with and without the SH0ES prior for \(H_{0}\). Tensions of roughly the same order exist between the fitted values of \(H_{0}\) and \(H_{0,\rm Planck}\) or \(H_{0,\rm SH0ES}\).
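For reference, the benchmark expansion history above can be coded in a few lines. This is a minimal sketch; the default parameter values are the Planck medians quoted earlier and serve only as placeholders, not as fit results.

```python
import numpy as np

def hubble_lcdm(z, H0=67.4, Om0=0.315):
    """H(z) in km/s/Mpc for the two-parameter flat LCDM benchmark."""
    z = np.asarray(z, dtype=float)
    return H0 * np.sqrt(Om0 * (1.0 + z) ** 3 + (1.0 - Om0))

# Expansion rate at the edges of the combined redshift coverage.
print(hubble_lcdm([0.0012, 2.55]))
```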
### Constraints on Hu-Sawicki Model and Starobinsky Model
Previous studies (see [18] and references therein) have established that there exists a degeneracy between \(n_{{}_{\rm HS}}/n_{{}_{\rm S}}\) and \(\Omega_{m0}\). One, therefore, often works with fixed values of \(n_{{}_{\rm HS}}\) and \(n_{{}_{\rm S}}\) to address this degeneracy. In [83], it has been worked out that \(n_{{}_{\rm HS}}\), \(n_{{}_{\rm S}}>0.9\), while
Table 1: Median values and 1-sigma confidence intervals of the model parameters (\(\Omega_{m0}\), \(b\), \(H_{0}\)), the derived quantities (\(z_{\rm t}\), \(w_{\rm DE0}\), \(w_{\rm tot0}\)), and the goodness-of-fit and model-comparison statistics (\(\chi^{2}\), \(\chi^{2}_{\rm red}\), AIC, BIC, \(\Delta\)AIC, \(\Delta\)BIC) for the \(\Lambda\)CDM and \(f(R)\) models, for all data-set combinations without the SH0ES prior for \(H_{0}\).
[84] proposed that \(n_{{}_{\rm HS}}\) and \(n_{{}_{\rm S}}\) should be integers. The difficulty in constraining higher \(n_{{}_{\rm HS}}/n_{{}_{\rm S}}\) values due to numerical instability issues while solving the modified Hubble equations is also well known. Consequently, it has become common practice to work with \(n_{{}_{\rm HS}}=1\) and \(n_{{}_{\rm S}}=1\). In our study we have also explored the case of \(n_{{}_{\rm HS}}=3\). Note that in the re-parameterized version using the deviation parameter \(b\), the Hu-Sawicki model with \(n_{{}_{\rm HS}}=2\) is equivalent to the Starobinsky model with \(n_{{}_{\rm S}}=1\).
The quantitative results of fitting the Hu-Sawicki model (Eq. 27) with \(n_{{}_{\rm HS}}=1,3\) and the Starobinsky model (Eq. 29) with \(n_{{}_{\rm S}}=1\), following the procedure discussed in Sec. IV, are presented in Tables 1 and 2. The 2D contour plots of the posterior probability distribution
Table 2: Same as Table 1, but for the data-set combinations including the SH0ES prior for \(H_{0}\).
of the model parameters and the 1D marginalised distribution of each of the parameters are shown in Figs. 6 and 7 (for \(n_{\text{\tiny HS}}=1\)), Figs. 8 and 9 (for \(n_{\text{\tiny HS}}=2\) or, equivalently, \(n_{\text{\tiny S}}=1\)), and Figs. 10 and 11 (for \(n_{\text{\tiny HS}}=3\)), for the respective cases - without and with the SH0ES prior for \(H_{0}\).
The median values of \(b\) shift towards larger values as \(n_{{}_{\rm HS}}\) increases (see Fig. 2). Also, the constraints on \(b\) become weaker. We may infer that for higher values of \(n_{{}_{\rm HS}}\) (say 4, 5,...) and \(n_{{}_{\rm S}}\) (say 2, 3,...), the constraints on \(b\) would become even weaker. This implies that with a further increase in \(n_{{}_{\rm HS}}\) or \(n_{{}_{\rm S}}\), one can obtain a model that closely resembles the \(\Lambda\)CDM model in terms of its predictions for the physical parameters \(\Omega_{m0}\) and \(H_{0}\), despite the constraints on \(b\) moving towards larger values. In a sense, this undermines the purpose of exploring \(f(R)\) models further, and there is a strategic motivation to limit the exploration to \(n_{{}_{\rm HS}}=1\), 2 only, putting aside the consideration of computational difficulties in constraining the higher values of \(n_{{}_{\rm HS}}\) or \(n_{{}_{\rm S}}\). For all three models, there are instances where \(b=0\) is only very marginally allowed (i.e., within a 2-3 sigma limit), indicating models distinguishable from the \(\Lambda\)CDM model. This finding is also an important result of the present work.
### Constraints on Exponential Model
In the Tables 1 and 2, we present the median values and 1-sigma (68.26%) confidence intervals for the parameters of the exponential model (Eq. 31). The corresponding 2D contour plots illustrating the posterior probability distribution of the parameters, along with the 1D marginal distributions for each parameter, are depicted in Figs 12 and 13 for the cases - without and with the SH0ES prior for \(H_{0}\), respectively. We observe from Figs 12, 13, and 2 that except for the BCHii(\(H_{0}\)) dataset, the parameter \(b\) clearly deviates from zero, with \(b=0\) barely allowed. Despite this, the values of the parameters \(\Omega_{m0}\) and \(H_{0}\) (as illustrated in Figs. 1 and 3) are reasonably close to the standard values derived from Planck constraints [78]. The significance of this result becomes further clear in the subsequent section on model comparisons. When considering the SH0ES prior for \(H_{0}\), the range of model values of \(H_{0}\) is \(\sim 69.7\) to \(71.5\) km s\({}^{-1}\)Mpc\({}^{-1}\).
Figure 11: The Hu-Sawicki Model(\(n_{{}_{\rm HS}}=3\), with SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 10: The Hu-Sawicki Model(\(n_{{}_{\rm HS}}=3\), without SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 9: The Starobinsky Model(\(n_{{}_{\rm S}}=1\), with SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
### Constraints on Tsujikawa Model
The constraints on the parameters of the Tsujikawa model described by Eq. 33 are presented in the Tables 1 and 2, as well as in Figs. 14 and 15. These constraints are shown separately for cases without and with the SH0ES prior on \(H_{0}\). Similar to the findings of the exponential model, here also we observe that the deviation parameter \(b\) is mostly nonzero, with only a very marginal allowance for it to be zero, across all the datasets, except BCHii\((H_{0})\). We disregard the results from BCHii\((H_{0})\) on account of very high value of the reduced \(\chi^{2}_{\rm min}/\nu\). Hence, we obtain an \(f(R)\) that is clearly distinguishable from the \(\Lambda\)CDM model. The fitted values of \(\Omega_{m0}\) and \(H_{0}\) also fall within reasonable limits compared to \(\Omega_{m0,{\rm Planck}}\) and \(H_{0,{\rm Planck}}\) (or \(H_{0,{\rm SH0ES}}\)). With SH0ES prior for \(H_{0}\), the model's median values for \(H_{0}\) range from approximately 69.7 to 71.5 km s\({}^{-1}\)Mpc\({}^{-1}\), which introduces a tension of
Figure 14: The Tsujikawa Model(without SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 13: The Exponential Model(with SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 12: The Exponential Model(without SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 15: The Tsujikawa Model(with SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
2-3 sigma with \(H_{0,\text{Planck}}\) or \(H_{0,\text{SH0ES}}\).
### Constraints on arcTanh Model
The results obtained by constraining the aTanh model are presented in the Tables 1 and 2, as well as in the Figs. 16 and 17, separately for cases without and with the SH0ES prior for \(H_{0}\). In several cases, excluding BCHii(\(H_{0}\)), the parameter \(b=0\) falls well within the 1-2 sigma range. However, there are instances where \(b=0\) lies outside the 2-sigma range. In the subsequent section on model comparison, we will observe that these latter cases are also more favored, making this outcome significant. The fitted values of \(\Omega_{m0}\) for the model fall within the 1-2 sigma range of \(\Omega_{m0,\text{Planck}}\). Furthermore, \(H_{0,\text{Planck}}\) is approximately within the 1-2 sigma range of that predicted by the model when the SH0ES prior is not considered. Upon inclusion of the SH0ES prior for \(H_{0}\), the model's predicted median values for \(H_{0}\) range from approximately 69.65 to 71.47 \(\text{km}\,\text{s}^{-1}\text{Mpc}^{-1}\), which are in 2-3 sigma tension with \(H_{0,\text{Planck}}\) or \(H_{0,\text{SH0ES}}\).
It is important to highlight that the outcomes of this model are not significantly different from those of the HS1 model. This similarity is not surprising if we examine Eqs. 27 and 34 along with the fact that \(\text{arctanh}(\Lambda/R)\approx\Lambda/R\) for \(\Lambda/R\ll 1\) (which holds true in this case since \(\Lambda\sim 0.7H_{0}^{2}\) and \(R\gtrsim 8H_{0}^{2}\)).
## VI Model Comparison
The standard statistical tools commonly used to assess model fitting (and model comparison) in cosmology include the reduced chi-square statistic (\(\chi_{\nu}^{2}\)), the Akaike Information Criterion (AIC) [85], and the Bayesian Information Criterion (BIC) [86] (also see [87, 88, 89]). The last six columns of Tables 1 and 2 contain essential quantities related to the current analysis, specifically pertaining to the utilization of these statistical tools. The reduced chi-square statistic is defined as \(\chi_{\nu}^{2}=\chi_{\text{min}}^{2}/\nu\), whereas the AIC and the BIC are respectively defined by the following equations
\[\text{AIC} = -2\ln\mathcal{L}_{\text{max}}+2k\,, \tag{54}\] \[\text{BIC} = -2\ln\mathcal{L}_{\text{max}}+k\ln N. \tag{55}\]
The number of degrees of freedom, represented by the symbol \(\nu\), is determined by subtracting the number of model parameters (\(k\)) from the total number of data points (\(N\)). The minimum value of \(\chi^{2}\) is denoted as \(\chi_{\text{min}}^{2}\) and is connected to the maximum likelihood (\(\mathcal{L}_{\text{max}}\)) through the relation \(\chi_{\text{min}}^{2}=-2\ln\mathcal{L}_{\text{max}}\). While comparing multiple competing models using a given data set, the model with the lowest values of \(\chi^{2}\), AIC, and BIC is considered to be more favoured by the data. However, it is insufficient to rely solely on \(\chi^{2}\) due to the principle of Occam's razor, which emphasizes the importance of considering the number of model parameters. Typically, for a nested model, as the number of parameters increases, the fit improves, leading to a decrease in \(\chi_{\text{min}}^{2}\) (or an increase in likelihood), regardless of the relevance of the newly included parameter(s). Both AIC and BIC incorporate a penalty term (\(2k\) and \(k\ln N\), respectively) that penalizes models with more parameters, in addition to the \(\chi_{\text{min}}^{2}\) term, taking into account any improvement
Figure 16: The arcTanh Model(without SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
Figure 17: The arcTanh Model(with SH0ES prior for \(H_{0}\)): The posterior probability distribution plots of fitted parameters. The color correspondence for different data-set combinations can be seen in the figure legends. The darker and lighter shades of colors represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals, respectively.
in fit. Thus a balance between the quality of fit and the complexity of the model is achieved through AIC and BIC. Comparing the Eqs. 54 and 55 for AIC and BIC, respectively, it can be observed that the penalty for models with a greater number of parameters is more severe in BIC than in AIC. Unfortunately, the conclusions derived from applying these two criteria may sometimes disagree. In such situations of disagreement, it is necessary to investigate whether any violations of the assumptions on which AIC and BIC are based have occurred (for details see [87, 88, 89]).
To perform a comparative analysis of two models, a useful measure \(\Delta X=X_{2}-X_{1}\), computed as the difference between the values of \(X\) (= AIC or BIC) for the two models 1 and 2, facilitates a quantitative assessment of the evidence supporting the preference of one model over the other. A rule of thumb which is commonly followed as a guideline to indicate the strength of the evidence with which model 1 (model 2) is favoured over the other is as follows: (i) \(0<\Delta X\leqslant 2\) (\(-2\leqslant\Delta X<0\)): weak evidence, (ii) \(2<\Delta X\leqslant 6\) (\(-6\leqslant\Delta X<-2\)): positive evidence, (iii) \(6<\Delta X\leqslant 10\) (\(-10\leqslant\Delta X<-6\)): strong evidence, (iv) \(\Delta X>10\) (\(\Delta X<-10\)): very strong (highly pronounced) evidence.
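A sketch of how these criteria and the rule of thumb translate into code follows; the numerical inputs in the example are round illustrative numbers, not values from the tables.

```python
import numpy as np

def aic_bic(chi2_min, k, N):
    """AIC and BIC from chi^2_min = -2 ln L_max (Eqs. 54 and 55)."""
    return chi2_min + 2 * k, chi2_min + k * np.log(N)

def evidence(delta_x):
    """Verbal strength of evidence per the rule of thumb above.

    Here delta_x = X_f(R) - X_LCDM, so negative values favour the
    f(R) model and positive values favour LCDM.
    """
    a = abs(delta_x)
    if a <= 2:
        return "weak"
    if a <= 6:
        return "positive"
    if a <= 10:
        return "strong"
    return "very strong"

# Example: a 3-parameter f(R) model against 2-parameter LCDM on
# N = 1750 data points (illustrative chi^2 values only).
aic_fr, bic_fr = aic_bic(1800.7, k=3, N=1750)
aic_lcdm, bic_lcdm = aic_bic(1815.4, k=2, N=1750)
print(evidence(aic_fr - aic_lcdm), evidence(bic_fr - bic_lcdm))
```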
As can be seen from Tables 1 and 2, the reduced \(\chi^{2}\) values for the BCHii(\(H_{0}\)) data-set do not come close to the value 1, for all the models. Consequently, these cases are not considered in our subsequent discussions related to model comparisons. The purpose of inclusion of these cases here is to illustrate that not all possible combinations of data-sets yield significantly meaningful results.
When we analyze different data-sets without considering the SH0ES prior for \(H_{0}\), we observe interesting patterns in Table 1. The \(\Delta\)AIC values indicate varying degrees of evidence, ranging from weak to very strong, in favor of \(f(R)\) models compared to the \(\Lambda\)CDM model. On the other hand, the values of \(\Delta\)BIC suggest evidence that is either weak or positive against \(f(R)\) models in certain cases, while in other cases the evidence is weak, positive, or even strong in favour of \(f(R)\) models. The strongest support for all \(f(R)\) models comes from the SBC data-set, followed by the SBCHii data-set. In both cases, the trends of \(\Delta\)AIC and \(\Delta\)BIC align, guiding us in the same direction.
When the SH0ES prior for \(H_{0}\) is included, the results, as shown in Table 2, exhibit some differences. The overall support for \(f(R)\) models weakens compared to the previous case. Based on consideration of the \(\Delta\)AICs, the maximum support for any \(f(R)\) model is now only strong (no longer very strong), and weak evidence against the \(f(R)\) models is observed in a few cases. On the other hand, analyzing the \(\Delta\)BICs for data-sets with the SH0ES prior for \(H_{0}\) reveals weak, positive, or strong evidence against the \(f(R)\) models when compared to the \(\Lambda\)CDM model. Among the data-sets with the SH0ES prior for \(H_{0}\), the strongest evidence against the \(f(R)\) models is observed in the SBCHii\(H_{0}\) data-set, followed by the SBCH\({}_{0}\) data-set. Furthermore, Table 2 highlights that when a SH0ES prior for \(H_{0}\) is employed, the evidence against models such as HS3, EXP, and TSUJI (which are more similar to the \(\Lambda\)CDM model) is weak or mildly positive according to \(\Delta\)BIC. Conversely, for models such as HS1, aTanh, and ST (which are less or least similar to the \(\Lambda\)CDM model), the evidence against them is strong when evaluated using \(\Delta\)BIC. Consequently, we can conclude that with a SH0ES prior for \(H_{0}\), the \(\Delta\)BIC criterion disfavours models that significantly deviate from the \(\Lambda\)CDM model.
A crucial observation to note is that the cases of \(f(R)\) models with strong or very strong evidence for being preferred over the \(\Lambda\)CDM model (as indicated by \(\Delta\)AIC and/or \(\Delta\)BIC) also correspond to the cases where a non-zero value of the deviation parameter \(b\) is favored, with the possibility of a very marginal acceptance of \(b=0\). This finding is an important result of this study, which can be observed from Figs. 6-17 and Tables 1 and 2.
## VII Accelerated expansion of the universe
In this section we discuss the findings regarding the relevant quantities that describe the current accelerated expansion of the universe. The behavior of the deceleration parameter (\(q(z)\)), which is defined as \(q(z)\equiv-a\ddot{a}/\dot{a}^{2}=-\ddot{a}/(H^{2}a)\), provides valuable insight into whether the cosmic expansion is accelerating (\(q<0\)) or decelerating (\(q>0\)), serving as an indicator of the respective phases. Through several model-independent measurements of \(q(z)\), it has been observed that \(q(z=0)<0\) and \(q(z>z_{\rm t})>0\)[90, 91]. The quantity \(z_{\rm t}\), called transition redshift, indicates the redshift at which the universe has undergone transition from a decelerated phase to an accelerated phase of expansion.
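For reference, using \(\dot{z}=-(1+z)H\), the deceleration parameter can be written directly in terms of the fitted \(H(z)\) (a standard kinematic identity, independent of the specific model):

\[q(z)\;=\;-\frac{\ddot{a}}{aH^{2}}\;=\;-1+(1+z)\,\frac{H^{\prime}(z)}{H(z)}\,,\]

so that the transition redshift \(z_{\rm t}\) is the root of \((1+z_{\rm t})\,H^{\prime}(z_{\rm t})=H(z_{\rm t})\).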
The total equation-of-state (EoS) parameter (\(w_{\rm eff}\)) as a function of redshift provides valuable information about the evolution of the universe through the eras of radiation domination, matter domination, and the subsequent late time accelerated phase of expansion dubbed as dark energy era. During these three phases, \(w_{\rm eff}\) takes values approximately equal to \(1/3\), \(0\), and \(-1\) (for \(\Lambda\)CDM). On the other hand, the EoS parameter (\(w_{\rm DE}\)) associated with the dark energy component remains constant at \(-1\) for all values of redshift (both in the past and future) in \(\Lambda\)CDM model. However, in the case of any viable \(f(R)\) model, value of the parameter \(w_{\rm DE}\) can cross the so called "phantom-divide" line (\(w_{\rm DE}(z)=-1\)) multiple times before finally settling at \(-1\) in the far future [92, 93].
In the Tables 1 and 2 we present the median values
and 1-sigma confidence intervals for the quantities \(z_{\rm t}\), \(w_{\rm DE,0}\), and \(w_{\rm eff,0}\) for two cases - without and with the SH0ES prior on \(H_{0}\). In Fig. 18, we display the values of \(z_{\rm t}\) for all models and data-sets. This figure shows median values of \(z_{\rm t}\sim 0.5-1\) for all the models, which is compatible with the model-independent predictions for \(z_{\rm t}\)[90, 91]. In Figs. 19 and 20 we have plotted the evolution of \(w_{\rm DE}\) with redshift based on the best constraints obtained for the models (which happen to be the constraints from the data-set SBCHii(\(H_{0}\))). In all \(f(R)\) models, the values of \(w_{\rm DE}(z)\) mark a crossover from the so-called "phantom regime" in the distant past to the so-called "quintessence regime" in the more recent past, with the present-epoch values (\(w_{\rm DE,0}\)) falling within the quintessence region. When the SH0ES prior on \(H_{0}\) is included, the deviations of \(w_{\rm DE}(z)\) for \(f(R)\) models from the \(\Lambda\)CDM model are reduced and the transition from the phantom regime to the quintessence regime happens more recently compared to the situation without the SH0ES prior.
For any viable \(f(R)\) model, the \(w_{\rm DE}(z)\) curve is expected to eventually reach a stable, constant value of \(-1\) in the distant past - a behavior which is necessary to account for the matter-dominated era in the context of the model. From Figs. 19 and 20, we can readily observe that the Tsujikawa model and the Exponential model exhibit a clear convergence to \(-1\), while for the other models this convergence takes place in a more remote past, characterized by higher values of \(z\). The trend for possible future crossing into phantom regime is clearly visible for the HS1 model, the Starobinsky model and the aTanh model, whereas this trend is not as apparent in the other models. However, theoretical studies [92, 93] have indicated that these models are also expected to cross the phantom divide in the future, albeit in an oscillatory manner. Although it would be an intriguing future project to extend the analysis of the \(w_{\rm DE}(z)\) curve up to \(z\to-1(a\to\infty)\), it is beyond the scope of the present work. This is due to the potential limitations of the employed ODE system, which may not accurately capture the oscillatory features in such a far future regime.
The behavior of \(w_{\rm eff}(z)\) is depicted in Figs. 21 and 22, revealing a transition from a matter-dominated era (represented by \(w_{\rm eff}(z)\to 0\)) to a dark energy dominated era around \(z\sim 5\) for all the \(f(R)\) models. Considering the SH0ES prior for \(H_{0}\) leads to relatively lower values of \(w_{\rm eff,0}\) for all the \(f(R)\) models compared to the cases without the prior. Furthermore, the discrepancy between the predictions of \(w_{\rm eff}(z)\) obtained from the \(\Lambda\)CDM model and those from the \(f(R)\) models reduces when the SH0ES prior is included.
## VIII Conclusions
In this study, we used the recently released Pantheon-plus SNIa data along with cosmic chronometer measurements, baryonic acoustic oscillation measurements, Hii starburst galaxies data, and the local measurement of the Hubble parameter, to constrain popularly studied \(f(R)\) models in the metric formalism. The redshift coverage of these data-sets is \(0.0012\lesssim z\lesssim 2.55\), and the sparse region beyond \(z>1.4\) is complemented by the data from Hii galaxies. The inclusion of Hii galaxies data has shown improvements in parameter constraints, even though its use in constraining \(f(R)\) models is not yet widespread. In light of the so-called "Hubble tension", we have also examined cases with, vis-à-vis without, a SH0ES prior for \(H_{0}\) -- a feature which is very often lacking in many earlier studies (see references in Sec. I). It is noteworthy that incorporating this prior for \(H_{0}\), based on local measurements, generally leads to a reduction in the differences between the predictions of \(f(R)\) models and the \(\Lambda\)CDM
Figure 18: Transition Redshift: The blank star and blank circle markers represent median values for the cases with and without SH0ES prior for \(H_{0}\), respectively. Different colors represent different data-set combinations and these are same as in any of the parameter distribution plots(e.g. Fig. 6) or see 2nd paragraph of Sec.V. The thick and thin horizontal colored bars represent 1-sigma(68.26%) and 2-sigma(95.44%) confidence intervals for the cases with SH0ES prior. The colored continuous/dashed lines represent 1-sigma(68.26%, with smaller cap size) and 2-sigma(95.44%, with bigger cap size) confidence intervals for the cases without SH0ES prior.
model.
The six viable \(f(R)\) models examined in this study are the Hu-Sawicki model with \(n_{\rm HS}=1,3\), the Starobinsky model (\(n_{\rm S}=1\)), the exponential, the Tsujikawa and the arcTanh model. Reparametrisation of these \(f(R)\) models using the deviation parameter \(b\) shows that in the limit \(b\to 0\), these models converge to the standard \(\Lambda\)CDM model. Each of these \(f(R)\) models is characterized by three parameters: the deviation parameter \(b\), and the present-epoch values of the matter density parameter \(\Omega_{m0}\) and the Hubble parameter \(H_{0}\).
Regardless of whether the SH0ES prior for \(H_{0}\) is considered, the \(\Omega_{m0}\) values for all the examined models fall within the 1-2 sigma range of the Planck constrained value (\(\Omega_{m0,{\rm Planck}}=0.315\pm 0.007\)) or, conversely, the Planck value lies within the 1-2 sigma range of the model values. When no SH0ES prior for \(H_{0}\) is considered, the fitted values of \(H_{0}\) for the models are also within the 1-2 sigma range of that from the Planck constraint (\(H_{0,{\rm Planck}}=67.4\pm 0.5\)), or alternatively, the Planck value lies within the 1-2 sigma range of the fitted model values. When incorporating the SH0ES prior on \(H_{0}\), the fitted model values of \(H_{0}\) tend to lie within the 2-3 sigma range on the lower side of the SH0ES measurement of \(H_{0,{\rm SH0ES}}=73.04\pm 1.04\). An additional impact resulting from the SH0ES prior on \(H_{0}\) is the tightening of constraints on all three model parameters for all the models and data-sets.
In our study, we have found instances of all the investigated \(f(R)\) models where the deviation parameter \(b\) significantly deviates from zero, indicating that \(b=0\) is only marginally allowed. It is worth highlighting that these cases also correspond to the situations where \(\Delta\)AIC and/or \(\Delta\)BIC based analysis suggest that the \(f(R)\) models are (very) strongly favoured over the \(\Lambda\)CDM model. Based on our knowledge of previous studies (referred in Sec I), it can be stated that up until
Figure 21: Evolution of the total EoS parameter for all \(f(R)\) models and the \(\Lambda\)CDM model obtained from the data-set SBCHii. Different colored solid lines show median values for different models as indicated by legends whereas dashed lines(or shaded area) with colors show 1-sigma confidence interval. For better contrast we choose \(\log_{10}(z)\) as independent variable.
Figure 19: Evolution of EoS parameter of the “effective geometric dark energy” for all \(f(R)\) models obtained from the data-set SBCHii. Different colored solid lines show median values for different models as indicated by legends whereas with same color shaded areas show 1-sigma confidence interval.
Figure 20: Evolution of EoS parameter of the “effective geometric dark energy” for all \(f(R)\) models obtained from the data-set SBCHii\(H_{0}\). Different colored solid lines show median values for different models as indicated by legends whereas shaded areas with colors show 1-sigma confidence interval.
now, the support for \(f(R)\) models has mainly been weak or positive (based on \(\Delta\)AIC and/or \(\Delta\)BIC). However, our current research yields (very) strong support for \(f(R)\) models, along with support for non-zero values of the deviation parameter \(b\). This indicates that the viability of \(f(R)\) models has not yet been ruled out by the available cosmological data sets.
We find that the relevant quantities characterizing the (accelerated) expansion of the universe -- \(z_{\rm t}\), \(w_{\rm eff,0}\), and \(w_{\rm DE,0}\) -- estimated in this study are compatible with their model-independent estimates from earlier works. All the models examined in this study, predict that the universe underwent a transition from a decelerated phase of expansion to an accelerated phase of expansion in the recent past, occurring at \(z_{\rm t}\sim 0.5-1\). Furthermore, the current values of \(w_{\rm DE}\) fall within the quintessential region, having recently crossed over from the phantom region (around \(z\sim 0.5-2\), depending on the specific models).
The instances of strong evidence in favor of \(f(R)\) models accompanied by a clearly non-zero \(b\) (or only a very marginal allowance for \(b=0\)), together with the agreement between the derived quantities \(z_{\rm t}\), \(w_{\rm eff,0}\), and \(w_{\rm DE,0}\) obtained here and their expected values from model-independent predictions, suggest that the analyzed cosmological data sets allow room for the consideration of the viability of \(f(R)\) models as an explanation for the observed late time cosmic acceleration. In fact, it would be worthwhile to conduct even further investigations of these models using future data sets.
## Data availability
The observed cosmological data such as SNIa, CC, BAO, Hii starburst galaxies and local measurement of \(H_{0}\) are publicly available -- the references to which are cited in the text. The simulated data generated in this work are available from the corresponding author, KR, upon reasonable request.
## Acknowledgements
K.R. would like to thank HoD, Dept. of Comp. Sc., RKMVERI, for providing computational facilities. A.C. would like to thank Indian Institute of Technology, Kanpur, for supporting this work by means of Institute Post-Doctoral Fellowship (Ref. No. DF/PDF197/2020-IITK/970).
## Appendix A The Modified Friedmann Equations as a System of First Order ODEs
The strategies employed to solve the system of Eqs. 7, 10 and 11 have been proposed and utilized in various earlier studies [94; 95; 96; 97; 98; 99; 24; 19]. Although there have been proposals for using model-dependent schemes [19; 25] too, in our current work we were able to obtain stable solutions for all the considered \(f(R)\) models using a single scheme. This scheme involves transforming the ODEs into a system of first-order ODEs using dynamical variables, which avoids numerical instability and reduces the overall computational cost. Based on the literature reviewed, we define the following dimensionless variables
\[\begin{split}& X_{1}=\frac{R}{6(\eta H_{0}^{\Lambda})^{2}}\,,\,\,X_{2}= \frac{\dot{R}F^{\prime}}{(\eta H_{0}^{\Lambda})F},\,\,X_{3}=\frac{f}{6(\eta H _{0}^{\Lambda})^{2}F}\,,\\ & O_{\rm m}=\frac{\Omega_{\rm m0}^{\Lambda}(1+z)^{3}}{\eta^{2}F},\,\,O_{\rm r}=\frac{\Omega_{\rm r,0}^{\Lambda}(1+z)^{4}}{\eta^{2}F}\,,\\ & H=\eta H_{0}^{\Lambda},\,\,\Gamma=\frac{F}{RF^{\prime}},\,\,r= \frac{R}{R_{\star}}\,,\end{split} \tag{10}\]
where \(z\) denotes redshift, \(\Omega_{\rm m0}^{\Lambda}\) and \(\Omega_{\rm r,0}^{\Lambda}\) respectively represent the matter and radiation density at the present epoch, and \(\eta=H/H_{0}^{\Lambda}\). The superscript \(\Lambda\) denotes quantities inferred in a \(\Lambda\)CDM paradigm. The parameter \(R_{\star}\) has the dimension of the Ricci scalar, and its value is fixed by the choice of a specific \(f(R)\) model (e.g. for the Hu-Sawicki model \(R_{\star}\equiv\mu^{2}\), for the Starobinsky model \(R_{\star}\equiv R_{\rm S}\), for the exponential model \(R_{\star}\equiv 1/\beta\) and for the Tsujikawa model \(R_{\star}\equiv R_{\rm T}\)). The fact that any \(f(R)\) model is expected to approach \(\Lambda\)CDM cosmology at sufficiently high redshifts (say \(z^{i}\), the initial redshift) allows us to set the necessary initial conditions for the ODE system. The determination of \(z^{i}\) will be discussed later in this section.
Figure 22: Evolution of the total EoS parameter for all \(f(R)\) models and the \(\Lambda\)CDM model obtained from the data-set SBCHii\(H_{0}\). Different colored solid lines show median values for different models as indicated by legends whereas dashed lines(or shaded area) with colors show 1-sigma confidence interval. For better contrast we choose \(\log_{10}(z)\) as independent variable.
In terms of the above dynamical/dimensionless variables defined in Eq. 14, the Eqs. 7 and 10 can be written as
\[\frac{d\eta}{dz}=\frac{\eta}{(1+z)}(2-X_{1})\,, \tag{15}\]
and,
\[1=O_{\rm m}+O_{\rm r}+X_{1}-X_{2}-X_{3}\,, \tag{16}\]
respectively. It thus becomes evident that the expression in Eq. 16 serves the purpose of setting the initial conditions and of monitoring the solutions at each step of the integration. We can also express the equation for the effective geometric dark energy (Eq. 15) in terms of the above variables as
\[w_{\rm DE}=\frac{w_{\rm tot}-O_{\rm r}F/3}{1-(O_{\rm m}+O_{\rm r})F}\,. \tag{17}\]
We further define \(N=-\log(1+z)\), which allows us to rewrite the system of first-order ODEs that need to be solved as follows:
\[\frac{dX_{1}}{dN} = X_{1}\left(X_{2}\Gamma-2X_{1}+4\right)\,, \tag{18}\] \[\frac{dX_{2}}{dN} = (O_{\rm m}+2X_{1}-X_{2}-4X_{3}-X_{1}X_{2}-X_{2}^{2})\,,\] (19) \[\frac{dX_{3}}{dN} = (X_{1}X_{2}\Gamma-X_{2}X_{3}+4X_{3}-2X_{1}X_{3})\,,\] (20) \[\frac{dO_{\rm m}}{dN} = -O_{\rm m}(-1+2X_{1}+X_{2})\,,\] (21) \[\frac{dr}{dN} = X_{2}\Gamma r\,,\] (22) and, \[\frac{dO_{\rm r}}{dN} = -O_{\rm r}(2X_{1}+X_{2})\,, \tag{23}\]
where \(\Gamma=F/(RF^{\prime})\). We choose \(\Omega_{\rm r,0}^{\Lambda}=2.9656\times 10^{-4}\Omega_{\rm m0}^{\Lambda}\), and with this choice, Eq. 23 becomes redundant as it reduces to Eq. 21. Therefore, we only need to solve the system of ODEs consisting of Eqs. 18 to 22 and not Eq. 23. The reason for considering \(\Omega_{\rm r,0}^{\Lambda}\) (and hence \(O_{r}\)) will be explained later in this section. Ultimately, by solving the above system, we obtain \(H=\sqrt{rR_{\star}/(6X_{1})}\). Using this expression for \(H(z)\), we can determine other cosmological observables as defined in Sec. IV. While we perform model-fitting with the parameters \(\Omega_{\rm m0}^{\Lambda}\), \(b\) and \(H_{0}^{\Lambda}\), we eventually obtain the relevant parameters for the \(f(R)\) models, viz. \(\Omega_{m0}^{f(R)}\), \(b\), and \(H_{0}^{f(R)}\) (using Eq. 24), where \(H_{0}^{f(R)}\) is the numerical solution of the aforementioned \(H\) at \(z=0\).
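The scheme lends itself to a compact numerical implementation. Below is a minimal sketch using scipy; the function names are hypothetical, and `gamma_of_r`, which must supply \(\Gamma=F/(RF^{\prime})\) for the chosen \(f(R)\) model as a function of \(r=R/R_{\star}\), is left unspecified. The initial conditions \(y_{0}\) come from Eq. 24, sketched further below.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(N, y, gamma_of_r):
    """Right-hand side of the first-order system, Eqs. 18-22."""
    X1, X2, X3, Om, r = y
    G = gamma_of_r(r)  # Gamma = F / (R F') for the chosen model
    return [
        X1 * (X2 * G - 2.0 * X1 + 4.0),                    # Eq. 18
        Om + 2.0 * X1 - X2 - 4.0 * X3 - X1 * X2 - X2**2,   # Eq. 19
        X1 * X2 * G - X2 * X3 + 4.0 * X3 - 2.0 * X1 * X3,  # Eq. 20
        -Om * (-1.0 + 2.0 * X1 + X2),                      # Eq. 21
        X2 * G * r,                                        # Eq. 22
    ]

def solve_fr(y0, z_i, gamma_of_r, R_star):
    """Integrate from N_i = -ln(1 + z_i) down to N = 0 (i.e. z = 0)."""
    Ni = -np.log(1.0 + z_i)
    sol = solve_ivp(rhs, (Ni, 0.0), y0, args=(gamma_of_r,),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    X1, _, _, _, r = sol.y
    return sol.t, np.sqrt(r * R_star / (6.0 * X1))  # H along the path
```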
The initial conditions needed to solve the system of Eqs. 18 to 22 are given by
\[X_{1}^{i} = \frac{\Omega_{\rm m0}^{\Lambda}(1+z^{i})^{3}+4\Omega_{\Lambda,0}^{\Lambda}}{2\eta^{i\,2}}\,,\] \[X_{2}^{i} = 0\,,\] \[X_{3}^{i} = \frac{\Omega_{\rm m0}^{\Lambda}(1+z^{i})^{3}+2\Omega_{\Lambda,0}^{\Lambda}}{2\eta^{i\,2}}\,, \tag{24}\] \[O_{\rm m}^{i} = 1-O_{\rm r}^{i}-X_{1}^{i}+X_{2}^{i}+X_{3}^{i}\,,\] \[r^{i} = \frac{3(H_{0}^{\Lambda})^{2}}{R_{\star}}\left[\Omega_{\rm m0}^{\Lambda}(1+z^{i})^{3}+4\Omega_{\Lambda,0}^{\Lambda}\right]\,,\]
where \(\eta^{i\,2}=\Omega_{\rm m0}^{\Lambda}(1+z^{i})^{3}+\Omega_{\rm r,0}^{\Lambda} (1+z^{i})^{4}+\Omega_{\Lambda,0}^{\Lambda}\), \(O_{\rm r}^{i}=\frac{\Omega_{\rm r,0}^{\Lambda}(1+z^{i})^{4}}{\eta^{i\,2}}\) and \(\Omega_{\Lambda,0}^{\Lambda}=(1-\Omega_{\rm m0}^{\Lambda}-\Omega_{\rm r,0}^{ \Lambda})\).
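Eq. 24 translates directly into code. This is a sketch; the function name and argument conventions are hypothetical, and all inputs are the \(\Lambda\)CDM-inferred quantities carrying the superscript \(\Lambda\) in the text.

```python
def initial_conditions(z_i, Om0, H0, R_star):
    """LCDM-approximated initial conditions of Eq. 24, evaluated at z_i."""
    Or0 = 2.9656e-4 * Om0   # radiation density adopted in the text
    OL0 = 1.0 - Om0 - Or0   # Omega_Lambda,0
    eta2 = Om0 * (1 + z_i)**3 + Or0 * (1 + z_i)**4 + OL0
    Or_i = Or0 * (1 + z_i)**4 / eta2
    X1 = (Om0 * (1 + z_i)**3 + 4 * OL0) / (2 * eta2)
    X2 = 0.0
    X3 = (Om0 * (1 + z_i)**3 + 2 * OL0) / (2 * eta2)
    Om = 1.0 - Or_i - X1 + X2 + X3
    r = 3 * H0**2 / R_star * (Om0 * (1 + z_i)**3 + 4 * OL0)
    return [X1, X2, X3, Om, r]
```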
In order to estimate \(z_{i}\) we set \(f(R(z_{i}))=R-2\Lambda(1-\epsilon)\) where \(\epsilon\ll 1\). Here the expressions for \(R\) and \(\Lambda\) correspond to those in the \(\Lambda\)CDM cosmology. With this one obtains the following expression for \(z_{i}\) (see [19; 24] for derivation):
\[z_{i}=\left[\frac{4\Omega_{\Lambda,0}^{\Lambda}}{\Omega_{\rm m0}^{\Lambda}} \left(\frac{b}{4\nu_{\rm f}}-1\right)\right]^{1/3}-1\,, \tag{25}\]
where \(\nu_{\rm f}\) for the different models (f) are given by the following
Figure 23: The color projection plot of the relation in Eq. 25, which shows \(z^{i}(\Omega_{\rm m0}^{\Lambda},\,b)\) below which the solutions need to be switched from \(\Lambda\)CDM to \(f(R)\) (at \(z^{i}\) we obtain the initial conditions for the ODE system Eqs. 18 to 23 from the \(\Lambda\)CDM approximations). The top panel is for the Tsujikawa model and the bottom panel for the Hu-Sawicki model (\(n_{{}_{\rm HS}}=1\)).
expressions:
\[\begin{split}\nu_{{}_{\rm HS}}&=\frac{1}{(1/\epsilon-1)^{1/n_{\rm HS}}},\ \ \nu_{{}_{\rm ST}}=\sqrt{(1/\epsilon)^{1/n_{\rm S}}-1}\,,\\ \nu_{{}_{\rm E}}&=-1/\ln\epsilon\,,\ \ \nu_{{}_{\rm T}}=1/{\rm arctanh}(1-\epsilon)\,,\ \ \nu_{{}_{\rm aTanh}}\approx\epsilon\,.\end{split} \tag{20}\]
If, during MCMC sampling, the value of \(z^{i}\) for a particular set of parameters \((\Omega_{\rm m0}^{\Lambda},\,b)\) is found to be smaller than the maximum redshift (\(z_{\rm max}\)) of the data, we use the \(\Lambda\)CDM solution within the range \([z^{i},z_{\rm max}]\) and switch to the \(f(R)\) solution within the range \([0,z^{i}]\) at \(z^{i}\). Depending on the \(f(R)\) model, for some samples of \((\Omega_{\rm m0}^{\Lambda},\,b)\) Eq. 25 may yield \(z^{i}\leq 0\). This indicates that in such situations the \(\Lambda\)CDM cosmology is the solution for all \(z>0\) (for example, see the top panel plot in Fig. 23).
Prior to running the MCMC code for a large number of samples, we conducted a preliminary run with a smaller sample size (approximately 50,000) to determine suitable values for \(\epsilon\). This avoids long computation times without compromising the accuracy of the results. Based on the findings of this pilot project, we choose \(\epsilon=10^{-10}\) for the exponential and the Tsujikawa models, \(\epsilon=10^{-8}\) for the Starobinsky model and the Hu-Sawicki model with \(n_{{}_{\rm HS}}=3\), and \(\epsilon=10^{-6}\) for the Hu-Sawicki model with \(n_{{}_{\rm HS}}=1\) and the aTanh model.
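Combining Eq. 25 with the \(\nu_{\rm f}\) expressions of Eq. 20 and the \(\epsilon\) values above gives the switching redshift directly. A sketch follows; the model keys are hypothetical labels.

```python
import numpy as np

def nu_f(model, eps, n=1):
    """nu_f of Eq. 20 for each model (keys are hypothetical labels)."""
    if model == "HS":
        return 1.0 / (1.0 / eps - 1.0) ** (1.0 / n)
    if model == "ST":
        return np.sqrt((1.0 / eps) ** (1.0 / n) - 1.0)
    if model == "EXP":
        return -1.0 / np.log(eps)
    if model == "TSUJI":
        return 1.0 / np.arctanh(1.0 - eps)
    if model == "ATANH":
        return eps
    raise ValueError(model)

def z_switch(Om0, b, model, eps, n=1):
    """Switching redshift of Eq. 25; z^i <= 0 means the LCDM solution
    holds for all z > 0 (cf. the top panel of Fig. 23)."""
    Or0 = 2.9656e-4 * Om0
    OL0 = 1.0 - Om0 - Or0
    arg = 4.0 * OL0 / Om0 * (b / (4.0 * nu_f(model, eps, n)) - 1.0)
    return np.cbrt(arg) - 1.0

# Hu-Sawicki (n_HS = 1) with epsilon = 1e-6: z^i is of order 100,
# consistent with the ranges visible in Fig. 23.
print(z_switch(0.3, 1.0, "HS", eps=1e-6))
```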
We have included a color projection plot in Fig. 23 illustrating the relationship described in Eq. 25 for the Tsujikawa model and the Hu-Sawicki model (\(n_{{}_{\rm HS}}=1\)). From this plot, it can be observed that for some samples of \((\Omega_{\rm m0}^{\Lambda},\,b)\), the value of \(z^{i}\) can reach as high as 200. In such cases of high \(z^{i}\), one must include \(\Omega_{r,0}^{\Lambda}\) in the definition of \(\eta^{i}\), even though \(\Omega_{r,0}^{\Lambda}\) is not a free parameter and the data-set comprises lower redshifts (as in our case). Through our preliminary investigations, we observed that excluding \(\Omega_{r,0}^{\Lambda}\) led to numerical instability and inaccuracies when solving the system of ODEs described by Eqs. 18 to 23. Therefore, based on both theoretical principles and our practical experience, we made the decision to include \(\Omega_{r,0}^{\Lambda}\) in our analysis.
## Appendix B BAO and CC Data
The data-sets for BAO and CC used in this work are given in Tables 3 and 4, respectively. The covariance matrices, taken from the respective cited references, for the BAO data points 21-25, 27-28 and 29-30, respectively, are given in the following equations:
\[C_{1}=\begin{bmatrix}624.707&23.729&325.332&8.34963&157.386&3.57778\\ 23.729&5.60873&11.6429&2.33996&6.39263&0.968056\\ 325.332&11.6429&905.777&29.3392&515.271&14.1013\\ 8.34963&2.33996&29.3392&5.42327&16.1422&2.85334\\ 157.386&6.39263&515.271&16.1422&1375.12&40.4327\\ 3.57778&0.968056&14.1013&2.85334&40.4327&6.25936\\ \end{bmatrix}, \tag{21}\]
\[C_{2}=\begin{bmatrix}0.0911&-0.0338\\ -0.0338&0.22\\ \end{bmatrix}, \tag{22}\]
and,
\[C_{3}=\begin{bmatrix}0.3047&0.1707\\ 0.1707&0.6373\\ \end{bmatrix}. \tag{23}\]
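For completeness, these covariance matrices enter the likelihood through \(\chi^{2}=\Delta^{T}C^{-1}\Delta\) for the corresponding data blocks. A minimal sketch using one of the \(2\times 2\) blocks quoted above follows; the residuals in the example are illustrative only.

```python
import numpy as np

# One of the 2x2 BAO covariance blocks (C_2) quoted above.
C2 = np.array([[0.0911, -0.0338],
               [-0.0338, 0.22]])

def chi2_correlated(residuals, cov):
    """chi^2 = d^T C^{-1} d for a correlated data block."""
    d = np.asarray(residuals, dtype=float)
    return float(d @ np.linalg.solve(cov, d))

# Illustrative residuals (data minus model), not actual values.
print(chi2_correlated([0.1, -0.2], C2))
```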
|
2302.00476 | Approaching Landauer's Bound In A Spin-Encoded Quantum Computer | It is commonly recognized that the Landauer bound holds in (irreversible) quantum operations. In this study, we verified this bound by extracting a single spin from a spin-spin magnetic interaction experiment to demonstrate that the Landauer bound can be approached quantitatively with an approaching rate of 79.3 percent via quantum spin tunneling. An optically manipulated spin-encoded quantum computer is designed, in which energy bound near kB T to erase a spin qubit is theoretically sensible and experimentally verified. This work may represent the last piece of the puzzle in quantum Landauer erasure in terms of a single spin being the smallest information carrier. | Frank Zhigang Wang | 2023-01-18T18:00:56Z | http://arxiv.org/abs/2302.00476v2 | # Breaking Landauer's bound in a spin-encoded quantum computer
###### Abstract
It is commonly recognized that Landauer's bound holds in (irreversible) quantum measurement. In this study, we overturn this conventional wisdom by extracting a single spin from a spin-spin magnetic interaction experiment to demonstrate that Landauer's bound can be broken quantitatively by a factor of \(10^{4}\sim 10^{10}\) via quantum spin tunneling. It is the quantum limit (\(\hbar/2\approx 10^{-34}\) J \(\cdot\) s), rather than Landauer's bound, that governs the performance of a spin qubit. An optically-manipulated spin-encoded quantum computer is designed, in which an energy bound well below \(k_{B}T\) to erase a spin qubit at the expense of a long spin relaxation time is theoretically sensible and experimentally verified. This work may represent the last piece of the puzzle in quantum Landauer erasure in terms of a single spin being the smallest and the closest to the quantum limit.
Keywords: Quantum computer · Qubit · Landauer's bound · Spin · Quantum spin tunneling
## 1 Introduction
Quantum computing expressed by unitary operations is notably reversible, whereas the projective initialization needed to prepare the system in an entangled state and the projective measurement needed to recover classical information from the computation are not. Landauer's bound [1] limits these irreversible operations, so that the growth in the number of computations per joule of dissipated energy will come to a halt around 2050 [2; 3].
Landauer's bound was proposed in 1961, when Landauer argued that information is physical and that the erasure of a bit of classical information requires a minimum
energy of \(\Delta E=k_{B}T\ln 2\). Profoundly, Landauer's principle defined the ultimate physical limit of computation [1].
In March 2012, Landauer's bound was experimentally verified by Bérut et al. in a single silica glass bead (2 \(\upmu\)m diameter) as a Brownian particle. The particle was trapped in a double-well potential. The mean dissipated heat was observed to saturate at the bound in the limit of long erasure cycles [4]. In June 2012, Orlov et al. reported the first experimental test of Landauer's principle in logically reversible operations, in which they measured energy dissipations much less than Landauer's bound (at the sub-\(k_{B}T\) level) whereas irreversible operations dissipate much more than Landauer's bound [5].
In 2014, Jun et al. verified the bound in a fluorescent particle (200 nm). They demonstrated using small particles in traps and reducing the exerted work to the Landauer limit during erasure [6].
In 2016, Hong et al. extended the principle to orientation-encoded information and measured an energy dissipation of 4.2 zeptojoules in a single-domain nanomagnet (comprising more than \(10^{4}\) spins). They used a laser probe to measure the energy dissipation when a bit was flipped from off to on [7].
In May 2018, a team led by Feng reported a single-atom demonstration of Landauer's principle in a fully quantum system (Fig. 1a), in which a trapped ultracold \({}^{40}\) Ca\({}^{+}\) ion was used as an atom qubit (comprised of its two internal states) [8]. The erasure procedure was completed with the heat reservoir (the ion's own vibrational modes) and the work involved was measured [8, 9]. In June 2018, Gaudenzi et al. also extended Landauer's principle to the quantum realm in a collective \(S_{z}=\pm 10(20\mu_{B})\) giant spin at 1 K, with a superconducting quantum interference device (SQUID) [10].
In March 2020, Saira et al. measured Landauer's bound at 500 mK [11]. In June 2020, Cetiner et al. measured Landauer's bound in ion channels, which are smaller than the fluorescent molecules [6] but larger than the spins [12].
In March 2021, Holtzman et al. showed that Landauer's bound is enforced by the contraction of the physical system's phase-space volume during the bit erasure and then suggested that, if the energy of the system is precisely known, it is possible to implement an erasable bit with no thermodynamic cost in a Hamiltonian memory [13]. However, they also pointed out that their proposal is of a purely theoretical nature and any uncertainty in the energy (i.e., the knowledge of the system's energy is limited in any realistic situation) results back in Landauer's bound [13]. In April 2021, Chiribella et al. found that even a logically reversible quantum operation (running on a physical processor operating on different energy levels) requires energy and quantified the upper and lower bounds [14]. Their bounds are present even if the evolution is entirely reversible [14]. Remarkably, their bounds can be compared quantitatively with the classical Landauer bound, which is present when the evolution is irreversible [14]. In November 2021, Georgescu reviewed 60 years of Landauer's principle and summarized that this principle imposes a fundamental energy bound on both irreversible bit operations in classical systems (which is the traditional domain of Landauer's principle) and even the reversible operations of quantum computation in spite of the distinction between these two operations [15].
Here, Landauer's bound will be studied in a single spin as the smallest among various information carriers, as shown in Fig. 1b. In the era of quantum computing,
one naturally wonders if there is a way around the bound considering that quantum and classical bits are fundamentally different [9]. We will attempt to answer this question in this article.
## 2 Position-encoded information
The statistical-mechanical formula for the free energy \(F\) is: \(F=-k_{B}T\ln Z\), where \(k_{B}\) is the Boltzmann constant and \(Z\) is the partition function [16]. In one-dimensional Brownian motion, the position-encoded system (a solid particle as an information
Figure 1: **a** A quantum computer acting on few spin qubits, whose state can be represented by a point on the so-called Bloch sphere. At the start of an erasure, the spin is equally likely to be in either of the up/down states (corresponding to the center of the Bloch sphere) and thus has a maximal entropy \(\mathrm{S}=k_{B}\ln 2\). The qubit ends up in a pure quantum state (a point on the Bloch sphere’s surface), in which it has a zero entropy \(S=0\)[9]. This information erasure is an irreversible manipulation of the created information, i.e., the “Maxwell demon” [4] or the observer that “created” the information loses the ability to extract work from the system if the information is already “burnt”. This energy bound is achievable even if a computation is carried out by a complex quantum circuit with many individual unitary gates [13]. **b** Various experimental verifications of Landauer’s bound in different information carriers at their respective operating temperatures. This study on a single spin may represent the last piece of the puzzle in quantum Landauer erasure
carrier trapped in a chamber with impenetrable walls as shown in Fig. 2a) can be approximated as one in internal thermodynamic equilibrium at each given value of the coordinate \(x\) of the particle. The subsystem formula is: \(F(x)=-k_{B}T\,\ln Z(x)\), where \(F(x)\) is the subsystem free energy at \(x\), and \(Z(x)\) is obtained by summation of the microstates at \(x\)[16, 17].
For a bit of position-encoded (classical) information in Fig. 2a, the information carrier for "random data" is equally likely to be in the \(L\) or \(R\) chamber, i.e., the probabilities are \(P(L)=P(R)=1/2\). After erasure, the carrier is assuredly reset to a fixed reference state (the \(L\) chamber in this case): \(P(L)=1\); \(P(R)=0\).
The work to push the information carrier (with two possible positions) to the desired half (\(L\)) is:
\[W\geq F(x)=k_{B}T\,\ln 2, \tag{1}\]
where \(Z(x)=\frac{1}{2}\) since the information carrier has only two possible positions in the chamber.
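As a quick numerical orientation, a minimal Python sketch (the Boltzmann constant is the CODATA value; only the temperatures are taken from the text) evaluates this bound at the temperatures used later in this article:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(T):
    """Minimum work k_B * T * ln(2) needed to erase one bit at temperature T (in K)."""
    return k_B * T * math.log(2)

print(landauer_bound(300))   # ~2.87e-21 J, i.e. the ~3e-21 J quoted at room temperature
print(landauer_bound(1e-3))  # ~9.57e-27 J, the lower end of the (0.96~2.87)e-26 J range at mK temperatures
```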
Interestingly, the above operation could be performed by a "Maxwell's demon" that consumes energy to observe the position of the carrier and insert the partition, where the consumed energy still equals the work exerted for erasure.
## 3 Orientation-encoded information
A single-domain nanomagnet, comprising more than \(10^{4}\) spins [7] and being large enough to be treated as classical [10], was used to represent a bit of datum by encoding its (magnetization) orientation, as shown in Fig. 2b. Due to thermal agitation, the orientation (\(x\)) of a magnetic moment fluctuates and may therefore take an arbitrary direction. The probability [that is proportional to \(Z(x)\)] to find \(x\) at thermal equilibrium can be deduced from Eq. 1. Hence, we have:
\[F(x)=-k_{B}T\,\ln Z(x)=k_{B}T\,\ln 2, \tag{2}\]
where \(Z(x)=\frac{1}{2}\) since the direction of a magnetic moment is either "up" or "down" along the easy axis. The two possible orientations of a magnetic moment are analogous to the two possible positions of a Brownian particle in the position-encoded information system. In other words, these two information systems share the same thermodynamics.
As a quantum computing paradigm, a single or giant spin can be used as a bit of quantum spin information by encoding its spin orientation, as shown in Fig. 2c. The spin angular momentum is quantized with only two possible \(z\)-components. At sufficiently low temperatures, direct tunneling via the ground state becomes relevant and often provides the dominant spin relaxation channel [10]. As illustrated in Fig. 2c, quantum spin tunneling through the barrier from "1" to "0" is combined with excitation absorbing resonant phonons to reach the (tunneling) state and de-excitation emitting a phonon to the ground state [10].
Figure 2: Comparison of the three erasure protocols. For a bit of position-encoded (classical) information (e.g., a silica bead [4], and a fluorescent particle [6]) in (**a**), the erasure (\(L\)) state is reached from the random data state via the free state (the carrier can move freely between the two chambers) by removing the partition. As an isothermal contraction, the erasure costs \(k_{B}T\ln 2\) (Landauer’s bound) by introducing a frictionless piston and pushing it toward the \(L\) direction. For a bit of orientation-encoded (classical) information (e.g., a single-domain nanomagnet comprising more than \(10^{4}\) spins [7] and being large enough to be treated as classical [10]) in (**b**), the erasure (_Up_) state is reached from the random data state by applying a magnetic field \(B\) along \(z\) to overcome the barrier \(k_{B}T\ln 2\) (this field also tilts the potential). For a bit of quantum spin information (e.g., a \(S_{z}=\pm\) 10 giant spin [10] and a single spin in this work) in (**c**), the erasure (_Up_) state is reached from the random data state by applying a small magnetic field \(B\). In (**c**), the position of a wavefunction (in blue) represents the (lower-lying) quantum energy level in contrast to the classical double-well potential landscape (in red) that is needed for the Landauer erasure (Color figure online)
The energy of flipping a spin undergoing a magnetic field \(B\) is:
\[\Delta E_{\uparrow\downarrow}=\vec{\mu}_{B}\cdot\vec{B}=\vec{\mu}_{B}\cdot B\hat{ z}=\gamma\,\hat{S}_{z}B=\gamma\,B\frac{\hbar}{2}(\mid\uparrow\rangle\ \langle\uparrow\mid-\mid\downarrow\rangle\ \langle \downarrow\mid), \tag{3}\]
where \(\mu_{B}\) is the Bohr magneton, \(\gamma\) is the gyromagnetic ratio of an isolated electron, \(\hat{S}_{z}=\frac{\hbar}{2}\!\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]\) is the quantum-mechanical operator associated with spin-\(\frac{1}{2}\) observable in the \(z\) axis, and \(\hbar\) is the reduced Planck constant.
Superficially, this new energy bound of flipping a spin is decoupled from the environmental temperature \(T\), but, taking the giant spin as an example, (phonon-mediated/assisted) quantum spin tunneling is still coupled to the 'surrounding world', including the environmental temperature \(T\) [10]. Namely, the spin relaxation time \(\tau_{\rm rel}\) approximately follows Arrhenius's law: \(\tau_{\rm rel}=\tau_{0}\exp\!\left(\frac{U}{k_{B}T}\right)\), where \(\tau_{0}=10^{-8}\) s, \(U\) is the activation energy determined by the tunneling channel, and \(\tau_{\rm rel}\geq 100\) s as \(T\) decreases to 1 K [10].
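To get a feel for these numbers, a minimal sketch (assuming the quoted attempt time \(\tau_{0}=10^{-8}\) s and treating the activation energy \(U\) as the unknown) inverts Arrhenius's law to show how large a barrier is implied by \(\tau_{\rm rel}\geq 100\) s at 1 K:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
tau_0 = 1e-8        # attempt time quoted in the text, s

def activation_energy(tau_rel, T):
    """Invert tau_rel = tau_0 * exp(U / (k_B * T)) for the activation energy U."""
    return k_B * T * math.log(tau_rel / tau_0)

U = activation_energy(100.0, 1.0)
print(U / (k_B * 1.0))  # ~23: a barrier of roughly 23 k_B*T keeps the spin for >= 100 s at 1 K
```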
## 4 Experiment of spin-spin magnetic interaction with quantum spin tunneling
Extremely weak magnetic interactions between the two ground-state spin-1/2 (1\(\mu_{B}\)) valence electrons of two \({}^{88}\)Sr\({}^{+}\) ions across a separation (\(d=2.18\sim 2.76\)\(\mu\)m) were reportedly measured as shown in Fig. 3 [18]. The two ions were trapped in a linear _rf_ Paul trap with a radial trap frequency (\(\Gamma=2\pi\times 2.5\) MHz) and laser-cooled to 1 mK [18, 19, 20]. The measurement takes full advantage of the quantum lock-in method [19] to spectrally separate weak signals from noise.
In this experiment, it was found that the spin-spin magnetic interaction obeys the inverse-cube law and spin entanglement was observed [18]. As the smallest magnet (the Bohr magneton), a spin (\(\vec{\mu}_{B}\)) applies a magnetic field to another spin. When the two spins are aligned along the line connecting the two ions [18], the strength of this magnetic field is:
\[B=\frac{\mu_{0}}{4\pi}\frac{2\mu_{B}}{d^{3}}=(0.88\sim 1.79)\times 10^{-13}\ { \rm T}, \tag{4}\]
where \(\mu_{0}\) is the vacuum permeability constant.
According to Eq. 3, we have the energy of flipping a spin:
\[\Delta E_{\uparrow\downarrow}=\mu_{B}B=\mu_{B}\frac{\mu_{0}}{4\pi}\frac{2\mu _{B}}{d^{3}}=(0.82\sim 1.66)\times 10^{-36}\ {\rm J}. \tag{5}\]
This energy, expressed as the frequency \(\frac{2\Delta E_{\uparrow\downarrow}}{h}=\frac{2\times(0.82\sim 1.66)\times 10^{-36}\ {\rm J}}{6.63\times 10^{-34}\ {\rm J}\cdot{\rm s}}=(2.47\sim 5.01)\) mHz, matches the measured frequency range (2-5 mHz) in the spin-spin magnetic interaction experiment [18].
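These figures can be reproduced with a short Python check (CODATA values for the constants; only the ion separations are taken from the experiment [18]):

```python
import math

mu_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
mu_B = 9.2740100783e-24    # Bohr magneton, J/T
h = 6.62607015e-34         # Planck constant, J*s

for d in (2.18e-6, 2.76e-6):                      # ion separations, m
    B = (mu_0 / (4 * math.pi)) * 2 * mu_B / d**3  # Eq. 4
    dE = mu_B * B                                 # Eq. 5
    f = 2 * dE / h                                # splitting frequency, Hz
    print(f"d = {d:.2e} m: B = {B:.2e} T, dE = {dE:.2e} J, f = {1e3 * f:.2f} mHz")
# Output spans B ~ (0.88-1.79)e-13 T, dE ~ (0.82-1.66)e-36 J, f ~ 2.47-5.01 mHz
```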
A fault-tolerant quantum computer with imperfect quantum logic gates in practice needs to perform long computations without succumbing to some inevitable errors and noise, which raises a legitimate concern about reliability (error probability). We stress that a single spin can be switched reliably (with a typical detection fidelity of 98% in the presence of magnetic noise that is six orders of magnitude greater than the applied magnetic field) [18]. The spin evolution was restricted to a decoherence-free subspace (DFS) that is immune to collective magnetic field noise [18] since the two qubits "see" the same (time-dependent) magnetic noise from the environment, whose wavelength is much larger than the separation \(d\) (Fig. 3). Since a DFS encodes information through its sets of states, it can be viewed as a quantum error-correcting (QEC) code that protects the two entangled spins against errors (decohering processes).
As a popular technique to raise (or "pump") electrons from a lower energy level in an atom or molecule to a higher one, optical pumping was also used here to pump electrons bound within the ions to a well-defined quantum state \(|\uparrow\downarrow\rangle\) or \(|\downarrow\uparrow\rangle\), as shown in Fig. 4. The frequency and polarization of the pump laser determined the sublevel in which the electron was oriented. This experiment displayed the ability
Figure 3: Extremely weak magnetic interaction between the two ground-state spin-1/2 (\(1\mu_{B}\)) valence electrons of two \({}^{88}\)Sr\({}^{+}\) ions was reportedly measured [18]. This magnetic interaction can then impose a change in their orientation. The two ions were laser-cooled to their ground state and entangled across a separation (\(d=2.18\sim 2.76\)\(\mu\)m). An ion will absorb more photons if it moves toward the light source, and the net result is a reduced speed of the ion, which is equivalent to cooling the ion since the temperature is a measure of the random internal kinetic energy. Although \(|\uparrow\downarrow\rangle\) and \(|\downarrow\uparrow\rangle\) are indistinguishable, the measured energy splitting (in mHz) between the two entangled Bell states \(|\psi\pm\rangle=(|\uparrow\downarrow\rangle\pm|\downarrow\uparrow\rangle)/\sqrt {2}\) in this experiment can be used to verify the calculated energy \(\Delta E_{\uparrow\downarrow}\) of flipping a spin in Eq. 5. This experiment is such an excellent example in (high sensitivity) quantum metrology with the aid of DFS (decoherence-free subspace) that it would be almost impossible to directly measure a magnetic field of this strength since it is six orders of magnitude smaller than magnetic noise [18]. A single spin can be switched reliably with a typical detection fidelity of 98% [18]. A redraw courtesy of Shlomi Kotler (the Hebrew University of Jerusalem)
of coherent electromagnetic radiation (having wavelengths below one millimeter) to effectively pump and unpump these electrons. The infrared 1092 nm and 1033 nm lasers acted as repump lasers [18]. Generation of entangled Bell states of the form \(|\psi\pm\rangle=(|\uparrow\downarrow\rangle\pm|\downarrow\uparrow\rangle)/\sqrt{2}\) was done using a Sørensen-Mølmer entangling gate [18]. A pure quantum state represented by the Bloch vector can be located by measuring its projection on an equal superposition of \(|\uparrow\downarrow\rangle\) and \(|\downarrow\uparrow\rangle\) (i.e., the \(y\) basis) while rotating it around \(x\), via a parity observable. These collective rotations do not change the relative orientation of the spins, leaving the spin-spin interaction invariant [18]. The parity observable measures the coherence between \(|\uparrow\downarrow\rangle\) and \(|\downarrow\uparrow\rangle\): it is \(+1\) if the spins are aligned and \(-1\) if they are anti-aligned.
In magneto-optical traps (MOTs), the actual temperature is \((10\sim 30)T_{\rm Doppler}\)[21]. The minimum Doppler temperature is:
\[T_{\rm Doppler}=\frac{\hbar\Gamma}{2k_{B}}=\frac{1.05\times 10^{-34}\;{\rm J} \cdot{\rm s}\times 2\pi\times 2.5\times 10^{6}\;{\rm s}^{-1}}{2\times 1.38 \times 10^{-23}\;{\rm J}\cdot{\rm K}^{-1}}=5.07\times 10^{-5}\;{\rm K}, \tag{6}\]
where \(\Gamma\) is the (broad) natural linewidth (measured in radians per second); hence the calculated temperature is \(T=(10\sim 30)\times 5.07\times 10^{-5}\;{\rm K}=(0.51\sim 1.52)\;{\rm mK}\), which agrees reasonably with the measured temperature of 1 mK. Landauer's bound can be expressed by the Doppler temperature as \(k_{B}T\;{\rm ln}\;2=k_{B}(10\sim 30)\times T_{\rm Doppler}\;{\rm ln}\;2=(0.96 \sim 2.87)\times 10^{-26}\;{\rm J}\) at 1 mK.
Therefore, Landauer's bound near 0 K was quantified by the Doppler temperature. Landauer's bound \(\left[(0.96\sim 2.87)\times 10^{-26}\;{\rm J}\right]\) at 1 mK is \(10^{-5}\) times Landauer's bound (3 \(\times\) 10\({}^{-21}\;{\rm J}\)) at room temperature (300 K) as it is proportional to the temperature.
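The bookkeeping in this section can again be checked numerically; the following sketch (our own, with CODATA constants) compares the bound at 1 mK and 300 K with the spin-flip energy of Eq. 5:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

Gamma = 2 * math.pi * 2.5e6      # frequency used in Eq. 6, rad/s
print(hbar * Gamma / (2 * k_B))  # ~6.0e-5 K, the same order as the 5.07e-5 K quoted in Eq. 6

landauer_1mK = k_B * 1e-3 * math.log(2)
landauer_300K = k_B * 300 * math.log(2)
print(landauer_1mK, landauer_300K)  # ~9.6e-27 J vs ~2.9e-21 J: the bound scales linearly with T

print(0.82e-36 / landauer_1mK)      # ~8.6e-11: Eq. 5 is ~10 orders of magnitude below the bound at 1 mK
```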
Noticeably, according to Eq. 5, the input energy [\((0.82\sim 1.66)\times 10^{-36}\;{\rm J}\)] to erase a spin quantum datum is \(10^{-10}\) times Landauer's bound \(\left[(0.96\sim 2.87)\times 10^{-26}\;{\rm J}\right]\) at 1 mK. This verification simply makes full use of the measured data from this spin-spin
Figure 4: Optical pumping was used to create and manipulate the spin qubits. A circularly polarized on-resonant 422 nm laser cyclically pumped the two electrons bound within the two ions to a well-defined quantum state \(|\uparrow\downarrow\rangle\) or \(|\downarrow\uparrow\rangle\), followed by a spin rotation [18]
experiment dating back to 2014 in a completely different (magnetic-interaction) context [18], whose authors wrote to us "It's exciting to hear that our work is useful in new areas of research that we were not aware of when doing the experiment." after we shared this manuscript with them.
Although the spin-spin experiment [18] is in a completely different (magnetic-interaction with the inverse-cube law) context, it equivalently includes a complete erasure protocol and gives the measurement of the work involved, as shown in Fig. 5. This equivalence is based on \(\left|\downarrow\uparrow\right\rangle=\frac{1}{\sqrt{2}}(\left|\psi+\right\rangle-\left|\psi-\right\rangle)=\frac{1}{\sqrt{2}}\left(\left[\frac{\left|\uparrow\downarrow\right\rangle+\left|\downarrow\uparrow\right\rangle}{\sqrt{2}}\right]-\left[\frac{\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle}{\sqrt{2}}\right]\right)\). This complete erasure protocol has all the necessary steps as defined in [8]: at the start step of erasure, a circularly polarized on-resonant 422 nm laser was used to cyclically pump the two electrons bound within the two ions to a maximally mixed quantum state (see Fig. 4 for details): the spin is equally likely to be in either of the up/down states (corresponding to the center of the Bloch sphere) and thus has a maximal entropy \(S=k_{B}\ln 2\); at the intermediate step of erasure, the (optically) created qubit is then erased by the (tiny) magnetic field produced by another spin; at the end step of erasure, the qubit ends up in a (ground) quantum state \(\left|\uparrow\right\rangle\) (a point on the Bloch sphere's surface), in which it has a zero entropy \(S=0\). The measurement of the applied magnetic field, whose strength is six orders of magnitude smaller than magnetic noise, was conducted with the aid of DFS (see Fig. 3 for details) [18].
To verify the completeness (start/erasure/end) of the erasure protocol, one may compare the spin-spin experiment [18] and the single-atom demonstration that completed
Figure 5: Although the spin–spin experiment [18] is in a completely different (magnetic-interaction) context, it equivalently includes a complete erasure protocol and gives the measurement of the work involved. In information erasure, the work one needs to do in order to reset a bit register irrespective of its initial state has to compensate for the illustrated entropy drop \(\Delta S\) [8, 9]. Rabi flopping between the two levels illuminated with light exactly resonant with the transition occurs at the Rabi frequency [22]
the erasure protocol to support the quantum Landauer principle [8]. The essential difference is that, in this study of a single spin as the smallest information carrier, we attempt to overturn the common belief that Landauer's bound holds in quantum measurement.
To identify the dominant factors in our problem, we rewrote (the spin part of) the two-ion Hamiltonian in the spin-spin experiment [18] as:
\[H=\underbrace{0.5\hbar\big{(}\omega_{A,1}\sigma_{z,1}+\omega_{A,2}\sigma_{z,2} \big{)}}_{\text{magnetic noise (kHz)}}\underbrace{+2\hbar\zeta\sigma_{z,1} \sigma_{z,2}}_{\text{spin-spin (mHz)}}\underbrace{-\hbar\zeta\big{(}\sigma_{x,1} \sigma_{x,2}+\sigma_{y,1}\sigma_{y,2}\big{)}}_{\text{Rabi flopping (kHz)}}. \tag{7}\]
Here \(\sigma_{j,i}\) is the \(j\in\{x,\,y,\,z\}\) Pauli spin operator of the \(i\)th spin, within which \(\sigma_{z,1}\,\sigma_{z,2}\) does not cause any spin-flips and acts as a phase gate in quantum computing, whereas \(\sigma_{x,1}\sigma_{x,2}\) and \(\sigma_{y,1}\sigma_{y,2}\) lead to Rabi flopping of \(|\uparrow\downarrow\rangle\leftrightarrow|\downarrow\uparrow\rangle\); \(\omega_{A,i}=2\mu_{B}B_{i}/2\hbar\), where \(B_{i}\) is the external magnetic field. The spin-spin interaction strength is \(\zeta=\mu_{0}\mu_{B}^{2}/4\pi\,\hbar d^{3}\), which is consistent with Eq. 5. The first term on the right-hand side of Eq. 7 describes the Zeeman shift of the spins' energy due to the magnetic field fluctuations, which is equivalent to kHz in the spin Larmor frequency \(\omega_{A,i}\) (\(i=1,\,2\)) [18] that characterizes the precession of a transverse magnetization about a static magnetic field. The second term describes the spin-spin magnetic interaction, which is equivalent to 2-5 mHz, as shown in Fig. 3 [18]. The third term results in a collective spin flip, in which spin rotation is performed by pulsing a resonant oscillating magnetic field, resulting in a Rabi frequency in kHz, as shown in Fig. 5 [18]. Both the spin Larmor frequency in the first term and the Rabi frequency in the third term are of the same kHz order; they largely cancel each other out due to their opposite signs. Owing to this cancelation, it is possible to single out the tiny magnetic field (the two spins apply to each other) for the measurement in the presence of magnetic noise that is six orders (resulting from \(\frac{\text{kHz}}{\text{mHz}}\)) of magnitude greater than it [18]. It is the second term (the spin-spin magnetic interaction at 2-5 mHz) that is the focus of our study.
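The operator structure behind this term-by-term discussion can be made concrete with a minimal numpy sketch (unit couplings; only the operator content of Eq. 7, not the experimental frequencies, is used here):

```python
import numpy as np

# Pauli matrices; two-spin operators via the Kronecker product (spin 1 tensor spin 2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

zz = np.kron(sz, sz)                      # second term of Eq. 7 (phase gate)
flip = np.kron(sx, sx) + np.kron(sy, sy)  # third term of Eq. 7 (Rabi flopping)

up_down = np.kron([1, 0], [0, 1])         # |up, down>
down_up = np.kron([0, 1], [1, 0])         # |down, up>
up_up = np.kron([1, 0], [1, 0])           # |up, up>

print(np.allclose(zz, np.diag(np.diag(zz))))     # True: sigma_z sigma_z causes no spin flips
print(np.allclose(flip @ up_down, 2 * down_up))  # True: |up,down> <-> |down,up> flopping
print(np.allclose(flip @ up_up, 0 * up_up))      # True: the aligned states are left alone
```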
Even if we conservatively use the Rabi frequency on the kHz order (rather than 2-5 mHz we used above) to calculate the energy of flipping a spin, we can still safely say that the input energy to erase a spin quantum datum is \(10^{-4}\) times Landauer's bound \(\big{[}(0.96\sim 2.87)\times 10^{-26}\text{ J}\big{]}\) at 1 mK.
An analytical model to explain the above experimental verifications follows.
## 5 Spinor wavefunction of an isolated electron
The Schrödinger-Pauli equation for an isolated electron (the smallest magnet serving as an information carrier, shown in Fig. 6a) is:
\[i\hbar\frac{d|\Psi\rangle}{dt}=\hat{H}|\Psi\rangle\,, \tag{8}\]
where the spinor wavefunction is \(|\Psi(t)\rangle=C^{+}(t)\ket{\uparrow}+C^{-}(t)\ket{\downarrow}\), and the Hamiltonian is \(\hat{H}=-\gamma\,B\frac{\hbar}{2}(\ket{\uparrow}\bra{\uparrow}-\ket{\downarrow}\bra{\downarrow})\) according to Eq. 3. Substitutions into Eq. 8 give:
\[i\hbar\big{(}\dot{C}^{+}\ket{\uparrow}+\dot{C}^{-}\ket{\downarrow} \big{)}= -\gamma\,B\frac{\hbar}{2}(\ket{\uparrow}\bra{\uparrow}-\ket{ \downarrow}\bra{\downarrow})\big{(}C^{+}\ket{\uparrow}+C^{-}\ket{\downarrow} \big{)}\] \[= -\gamma\,B\frac{\hbar}{2}\big{(}C^{+}\ket{\uparrow}-C^{-}\ket{ \downarrow}\big{)}, \tag{9}\]
\[\bigg{[}\dot{C}^{+}\atop\dot{C}^{-}\bigg{]}=\frac{i}{2}\gamma\,B\bigg{[} \begin{matrix}1&0\\ 0&-1\end{matrix}\bigg{]}\bigg{[}\begin{matrix}C^{+}\\ C^{-}\end{matrix}\bigg{]}=\frac{i}{2}\gamma\,B\bigg{[}\begin{matrix}C^{+}\\ -C^{-}\end{matrix}\bigg{]}. \tag{10}\]
Figure 6: **a** Quantum spin tunneling penetrates the thermal energy barrier (Landauer’s bound) and provides a “shortcut” for spin reversal, which is different from classical information manipulations. The cost in erasing a bit does not come from “climbing a barrier”, but rather from compressing phase space with dissipative dynamics. **b** Heisenberg’s time-energy uncertainty relation (TEUR) [23, 24] is used to define information quantitatively from a measuring perspective: the smallest error in measurement is 1 bit. The higher the input energy is, the shorter the time needed to write/erase a bit of information, and vice versa. At the origin (\(t=0\)), there is a very high, but narrow energy barrier. This new definition of information is an important part of our theory in the sense that it is the quantum limit (\(\hbar/2\approx 10^{-34}\) J\(\cdot\)s), rather than Landauer’s bound, that governs the performance of a spin qubit in terms of the energy-time product being a constant, as vividly illustrated here. That is, an energy bound well below \(k_{B}T\) to erase a spin qubit at the expense of a long spin relaxation time is theoretically sensible and experimentally verified due to this unchanged product (the shaded areas). Our new definition of information based on Heisenberg’s principle allows us to determine the trade-off between energy and speed of manipulating a spin qubit
The WKB (Wentzel-Kramers-Brillouin) approximation rewrites the (complex-valued) spinor wavefunction as:
\[|\Psi(t)\rangle=\left[\begin{array}{c}C^{+}(t)\\ C^{-}(t)\end{array}\right]=\left[\begin{array}{c}C^{+}(0)e^{\Phi(t)}\\ C^{-}(0)e^{-\Phi(t)}\end{array}\right]. \tag{11}\]
The time evolution takes place under \(\Delta E_{\uparrow\downarrow}=\mu_{B}\,B=\frac{1}{2}\gamma\,\hbar B\) in the presence of Landauer's bound \(L_{B}=k_{B}\,T\ln 2\), so that \(B=B_{z}+\delta B\), where \(B_{z}\) is the (real) magnetic field (along the \(z\) axis) and \(\delta B\) is an imaginary magnetic field [to which the thermal perturbation (Landauer's bound \(L_{B}\)) translates itself in the electron's rest frame]. As far as the imaginary perturbation magnetic field \(\delta B\) is concerned, it acts in a direction opposite to the (real) magnetic field \(B_{z}\) and thus is mathematically assigned a negative value. Without loss of generality, \(\Delta E_{\uparrow\downarrow}(t)\) is assumed to be a positive constant \(E\) during \(-t_{E}/2\leq t\leq t_{E}/2\). Then, we obtain:
\[\Phi\left(t=\frac{t_{E}}{2}\right)= i\,\frac{1}{\hbar}\,\int\limits_{-\infty}^{t}\left(-\gamma\,B\,\frac{ \hbar}{2}\right)\mathrm{d}t\Bigg{|}_{t=t_{E}/2}\] \[= i\,\frac{1}{\hbar}\,\int\limits_{-\infty}^{t}\left[\Delta E_{ \uparrow\downarrow}(t)-L_{B}\right]\mathrm{d}t\Bigg{|}_{t=t_{E}/2}\] \[= i\,\frac{1}{\hbar}(Et_{E}-L_{B}t_{L}). \tag{12}\]
Then, Eq. 11 simplifies to:
\[|\Psi(t=t_{E}/2)\rangle=\left[\begin{array}{c}C^{+}(0)e^{\Phi(t)}\\ C^{-}(0)e^{-\Phi(t)}\end{array}\right]=\left[\begin{array}{c}C^{+}(0)e^{i\, \frac{1}{\hbar}(Et_{E}-L_{B}t_{L})}\\ C^{-}(0)e^{-i\,\frac{1}{\hbar}(Et_{E}-L_{B}t_{L})}\end{array}\right]. \tag{13}\]
Equation 13 shows that, underneath the potential hill (\(E<L_{B}\)), behaving like a free and oscillating wave, the single spin with less energy tunnels through the energy hill and appears on the other side with a probability \(|\Psi|^{2}\) to complete a reversal in the spin-spin magnetic interaction experiment [18]. A similar (quantum spin tunneling) phenomenon was observed in a collective \(S_{z}=\pm 10(20\mu_{B})\) giant spin [10].
In Eq. 13, it is (\(Et_{E}-L_{B}t_{L}\)) that defines the wavefunction. In other words, although \(L_{B}\gg E\), it is the energy-time product, rather than any of these four parameters (\(E\), \(t_{E}\), \(L_{B}\), or \(t_{L}\)) individually, that determines the behavior of the spin datum. We see that the probability of tunneling is affected more by (\(Et_{E}-L_{B}t_{L}\)) than by \(C^{+/-}(0)\). It seems that the quantum erasure differs dramatically from its classical counterpart.
In dissipative dynamics, erasing a bit of information requires probability concentration in phase space, which leads to Landauer's bound. In Hamiltonian dynamics, it is possible to take a particle from, say, the left well to the right one at zero cost (or as low as you want) [13]. Therefore, the problem in a Hamiltonian memory may be that, at the same time, the particle in the right well goes to the left well (or somewhere else; in any case it does not stay in the same well). Fortunately, the tunneling in the spin-spin magnetic interaction experiment [18] is irreversible since the energy is
input by applying a magnetic field that favors one orientation and flips only a spin pointing in the opposite direction, and a single spin can be switched reliably with a typical detection fidelity of 98% [18]. As mentioned above, a similar phenomenon (the spins can tunnel to the opposite side of the potential barrier, thus leading to an effectively lower activation energy for the spin reversal) was also observed in a collective \(S_{z}=\pm 10(20\mu_{B})\) giant spin [10]. That is, the erasure of the spin datum in the giant spin experiment [10] and the spin-spin experiment [18] is not pure Hamiltonian dynamics, and the probability concentration in phase space can still be seen.
## 6 Using Heisenberg's principle to define information
To further interchange information with energy over time, we used Heisenberg's time-energy uncertainty relation (TEUR) in 1927 [23] to define information, as illustrated in Fig. 6b. From a measuring perspective, one bit of information is the smallest error in physical measurement. A bit of information is quantitatively defined as follows:
\[1(\text{bit})=\frac{1}{\hbar}\Delta E\,\Delta t, \tag{14}\]
where we embraced a new interpretation of the TEUR: a quantum state with spread in energy \(\Delta E\) takes time at least \(\Delta t\) to evolve to an orthogonal (distinguishable) state [24].
Note that the above mentioned "one bit of information as the smallest error in physical measurement" should not be interpreted as "the smallest error one makes is one bit when mapping the measured analog value to a discrete sequence of digits". Here one bit is physically a quantum as the minimum amount of a conjugate pair of observables (energy/time) involved in an interaction. According to Heisenberg's TEUR, this amount corresponds to Planck's reduced constant (\(\hbar=1.054571817\times 10^{-34}\) J\(\cdot\)s), which defines the quantum nature of energy and relates the energy of a photon to its frequency. Ergo, this new definition of information reflects the essence of quantum physics: the magnitude of the physical property can take on only discrete values consisting of integer multiples of one quantum (a multiple of Planck's reduced constant). Also note that \(\Delta E\,\Delta t/\hbar\) is dimensionless, so defining information through it raises no issue of units.
This energy-time product is ultimate for a bit of information no matter what kind of information carrier (a bead, an atom, an ion, a nanomagnet, a giant spin, a single spin, or a photon) is used and what mechanism [classical physics (electrical, magnetic, optical, chemical or even mechanical), or quantum physics] is used to encode/manipulate it.
If Landauer's bound at room temperature is used, the time needed to write/erase a bit of information (that is physically equivalent to the duration of the energy measurement in the TEUR [24] since energy is consumed throughout the write/erase protocol) is:
\[\Delta t=\frac{\hbar}{\Delta E}=\frac{1.05\times 10^{-34}\text{ J }\cdot\text{s}}{3\times 10^{-21}\text{ J}}=3.50\times 10^{-14}\text{ s}. \tag{15}\]
This calculation result agrees reasonably with Brillouin's principle [25].
If we use the calculated energy of (\(0.82\sim 1.66\)) \(\times\)\(10^{-36}\) J of flipping a quantum spin at the Doppler temperature (Sect. 4), the corresponding timescale is:
\[\Delta t_{\uparrow\downarrow}=\frac{\hbar}{\Delta E_{\uparrow\downarrow}}= \frac{1.05\times 10^{-34}\;\mathrm{J}\cdot\mathrm{s}}{0.82\times 10^{-36}\; \mathrm{J}}=128\;\mathrm{s}, \tag{16}\]
which is surprisingly long but still agrees reasonably with the measured interrogation time (total tunneling time) (\(2T_{\mathrm{Bell}}=67\;\mathrm{s}\times 2=134\;\mathrm{s}\)) for the rotation of the Bloch vector from the south pole toward the north pole through the equatorial plane (Fig. 3) [18]. It is also consistent with a very long spin relaxation time (\(\tau_{\mathrm{rel}}\geq 100\;\mathrm{s}\)) at 1 K in the aforementioned giant spin experiment [10].
Historically, more than one definition of information existed [26; 27; 28], which implies that information can be studied from different angles and its definition may not be unique. In this study, our new definition of information agrees reasonably with the above experiment [18]:
\[1(\mathrm{bit})=\Delta E\,\Delta t/\hbar=\frac{8.2\times 10^{-37}\;\mathrm{J} \cdot 134\;\mathrm{s}}{1.05\times 10^{-34}\;\mathrm{J}\cdot\mathrm{s}}\approx 1 (\mathrm{bit}) \tag{17}\]
which unveils that the reduced Planck constant (\(\hbar=1.05\times 10^{-34}\;\mathrm{J}\cdot\mathrm{s}\)) is the limit for a spin qubit, below which quantum computing does not make sense.
The above analysis clearly shows that Landauer's bound can be broken quantitatively in such a single spin [at the cost of a long spin relaxation time (tens of seconds)] in terms of the bound being defined as the smallest amount of the energy used to erase a bit of information. One can use the work of \(\mu_{B}\,B=0.82\times 10^{-36}\;\mathrm{J}\) to erase a bit of quantum spin datum in the presence of Landauer's bound [\(k_{B}\,T\,\ln 2=(0.96\sim 2.87)\times 10^{-26}\;\mathrm{J}\)] at 1 mK. The former is 10 orders of magnitude smaller than the latter.
The energy-time cost of flipping a spin according to Heisenberg's TEUR agrees with the spinor wavefunction analysis in Eq. 13. This cost is: \(\Delta E_{\uparrow\downarrow}\,\Delta t_{\uparrow\downarrow}=0.82\times 10^{-36}\;\mathrm{J}\times 67\;\mathrm{s}\times 2=1.10\times 10^{-34}\;\mathrm{J}\cdot \mathrm{s}\), which is very close to the Heisenberg limit (a.k.a. the quantum limit): \(\hbar/2\approx 10^{-34}\;\mathrm{J}\cdot\mathrm{s}\) [23]. Among various information carriers, a spin is the closest to the quantum limit (Fig. 7a).
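The whole chain of this section fits in a few lines of Python (a sketch using the figures quoted above, not an independent measurement):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s

dE_300K = 3e-21          # Landauer's bound at room temperature, J
print(hbar / dE_300K)    # ~3.5e-14 s, the timescale of Eq. 15

dE = 0.82e-36            # spin-flip energy, lower value of Eq. 5, J
print(hbar / dE)         # ~128 s (Eq. 16), vs the measured 2*T_Bell = 134 s

t_meas = 134.0           # measured interrogation time, s
print(dE * t_meas / hbar)     # ~1.05, i.e. ~1 bit (Eq. 17)
print(dE * t_meas, hbar / 2)  # ~1.1e-34 J*s vs the quantum limit ~0.53e-34 J*s
```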
Noticeably, in the giant spin experiment, the work required for the erasure of each bit is still equivalent to the theoretical Landauer bound at the experimental temperature of 1 K [10]. In spite of quantum spin tunneling, Landauer's bound still holds in a giant spin, which is evidently still too large to break Landauer's bound since each nanomagnetic bit is composed of eight spin-5/2 Fe\({}^{3+}\) ions coupled to each other by competing antiferromagnetic interactions to form a collective \(S_{z}=\pm\) 10 (20 \(\mu_{B}\)) giant spin. The distance to the (energy-time) quantum limit is indicated in Fig. 7a, so we can easily identify the two best-performing carriers: a single spin [18] and a giant spin [10]. Remarkably, the energy-time cost of the former is orders of magnitude better than that of the latter. This result indicates that the size of an information carrier still matters in terms of using a certain quantum effect (e.g., quantum spin tunneling) to improve the performance of a classical computing machine.
This section is a necessary part of our theory in the sense that it is the quantum limit (\(\hbar/2\approx 10^{-34}\) J\(\cdot\)s), rather than Landauer's bound, that governs the performance of a spin qubit in terms of the energy-time product being a constant, as vividly illustrated in Fig. 6b. That is, an energy bound well below \(k_{B}T\) to erase a spin qubit at the expense of slow operation is theoretically sensible and experimentally verified due to this unchanged product (the shaded areas).
Heisenberg's TEUR is relevant for the spin dynamics here and we used it to set the maximum speed at which a spin can modify its energy by a given amount (in this case, the splitting induced by the magnetic field). It is not for the thermodynamic energy balance, which is solely related to the need of compensating for the entropy decrease required to erase the bit no matter how quickly the process actually takes place.
## 7 Conclusion & discussions
This study depicts an optically-manipulated spin-encoded quantum computer (Fig. 1) that is not bound by Landauer's bound any longer and may represent the last piece of the puzzle in quantum Landauer erasure.
Without any circular reasoning, a chain of evidence for a single spin is shown in Fig. 7b in terms of the energy-time cost being a constant (closest to the quantum limit). Landauer's bound exists at the Doppler temperature of 1 mK in the spin-spin experiment [18], which is indispensable since \(\Delta E\propto T\) (\(T>0\)). All experimental data match their theoretically estimated counterparts at room temperature (300 K) and the Doppler temperature (1 mK), respectively.
Figure 7: **a** The energy–time cost of various information carriers. A single spin in this study is the smallest and the closest to the quantum limit. **b** A chain of evidence to support Landauer’s bound in a spin in terms of the energy-time product being a constant (closest to the quantum limit)
In classical computing, the energy of erasing a bit of classical information remains the same (Landauer's bound) regardless of whether we are dealing with a bit of position-encoded information, orientation-encoded information, or anything else. In quantum computing, Landauer's bound can be broken quantitatively although we still need to "anchor" or "trap" an isolated electron (as a quantum spin information carrier since an electron's charge and a spin are inseparable) against thermal fluctuation with an energy barrier greater than the classical Landauer bound. It is the quantum spin tunneling phenomenon whereby a wavefunction can propagate through a potential barrier (the classical Landauer bound) in such a quantum computer.
Today's few-qubit quantum computers require large cooling machinery external to the actual quantum processors whereas the fundamental energy requirement as given by Eq. 5 merely represents a minor part of the overall energy bill. However, with the progress of the quantum technology, the cooling energy is likely to scale less than linearly with the number of qubits, hence its proportion may become less dominant [14]. Nonetheless, such a spin-encoded quantum computer may be slow although it can operate at the ultimate (energy) limit to computation set by physics, as mentioned in Sect. 6.
In future work, we will keep track of the new spin-spin magnetic interaction experiment (as well as others similar to it), since the authors of Ref. [18] proposed a redesign of the ion trap at high voltage (higher than 400 V) to facilitate weaker magnetic interaction with larger inter-ion separations (currently \(2.18\sim 2.76\)\(\mu\)m) [18]. The challenge is that larger separations result in a diminishing signal-to-noise ratio, and the measurement accuracy needs to be further improved. Only in theory is it possible to use an arbitrarily small magnetic field to flip a spin whereas, in practice, this magnetic field must have a lower bound [in analogy to the world record of the coldest temperature (3.8 pK): although, in theory, one can get as close as possible to absolute zero], and the current world record (B = 8.8 pT) was set in this experiment [18]. Landauer's bound may be further broken by a new factor (currently \(10^{4}\sim 10^{10}\) in this study).
Landauer's bound is widely accepted as one of the fundamental limits in computer science and physics, but it has still been challenged for using circular reasoning and faulty assumptions [28]. In 2000, Shenker argued that Landauer's dissipation thesis (logically irreversible operations are dissipative by \(k_{B}\)ln2 per bit of lost information) is plainly wrong since logical irreversibility has nothing to do with dissipation [29]. In 2003, Bennett suggested that a no-erasure demon is subject to an extended form of Landauer's principle to refute Shenker's argument and claimed that, although in a sense it is indeed a straightforward consequence or restatement of the second law of thermodynamics, it still has considerable pedagogic and explanatory power [30]. In 2005, Norton pointed out that, due to the illicit formation, Bennett's extension in order to exorcise the no-erasure demon failed [31]. In 2007, Ladyman et al. defended the qualitative form of Landauer's Principle, and clarified its quantitative consequences (assuming the second law of thermodynamics) [32]. In 2008, Sagawa and Ueda showed that Landauer's principle is a consequence of the second law of thermodynamics with discrete quantum feedback control [33]. In 2009, Cao and Feito illustrated some consequences by computing the entropy reduction in feedback controlled systems [34]. In 2011, Norton showed that the previous proofs selectively neglect thermal fluctuations that may fatally disrupt their intended operation [35]. In 2019, Jordan and Manikandan disagreed with Norton and found the principle to be easily derivable from basic principles
of thermodynamics and statistical physics [36]. In 2019, Norton argued that Jordan and Manikandan were mistaken in their claim (that dissipation is only necessitated when logically irreversible processes are required) since the existence of thermal fluctuations and the high thermodynamic cost of suppressing them are still unavoidable [37].
In light of the above research, we will further investigate those direct/indirect proofs [35] of Landauer's principle to see whether it is just a direct consequence or restatement of the second law of thermodynamics (the information erasure results in a decreased entropy). This investigation is important and necessary no matter whether we still want to regard Landauer's principle as fundamental as the second law of thermodynamics. We will also study whether it is possible to implement an erasable bit without thermodynamic cost by compressing phase space with dissipative dynamics [13, 38, 39].
In spite of plenty of mysteries with Landauer's bound, we may have to presume its demise based on our study (the bound is no longer the smallest amount of energy needed to erase a spin qubit) and the concerns (something is fundamentally awry in the literature based on unsound, incoherent foundations/principles/methods/frameworks) expressed by other researchers [28, 29, 31, 33, 34, 35, 37]. As well as having significant practical importance, understanding the fundamental limits on what we can achieve with our computing machines [24] is nothing less than understanding the limits of the world in which we live and preparing for revolutions, such as post-quantum-computing paradigm shifts.
###### Acknowledgements.
We thank Dr. Shlomi Kotler (the Hebrew University of Jerusalem) for discussions on their magnetic spin-spin interaction experiment and his kind permission for us to redraw the experimental setup. We also thank Dr. Sai Vinjanampathy (Indian Institute of Technology Bombay) for commenting on the first draft of this paper. This research was partially funded by an EC grant, PIIFGA2012332059, Marie Curie Fellow: Prof. Leon Chua (UC Berkeley), Scientist-in-charge: Prof. Frank Wang (University of Kent).
## Author contributions
Frank Wang conceived the research idea, analyzed all the experiments, developed the new theory to explain the experimental verifications, and wrote the manuscript.
## Data availability
All data generated and analyzed during this study are included in this published article.
## Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/).
|
2310.10474 | Optimal transport for some symmetric, multidimensional integer
partitions | A result of Hohloch links the theory of integer partitions with the Monge
formulation of the optimal transport problem, giving the optimal transport map
between (Young diagrams of) integer partitions and their corresponding
symmetric partitions. Our aim is to extend Hohloch's result to the higher
dimensional case. In doing so, we show the Kantorovich formulation of the
optimal transport problem provides the tool to study the matching of higher
dimensional partitions with their corresponding symmetric partitions. | Daniel Owusu Adu, Daniel Keliher | 2023-10-16T14:55:57Z | http://arxiv.org/abs/2310.10474v1 | # Optimal transport for some symmetric, multidimensional integer partitions
###### Abstract.
A result of Hohloch links the theory of integer partitions with the Monge formulation of the optimal transport problem, giving the optimal transport map between (Young diagrams of) integer partitions and their corresponding symmetric partitions. Our aim is to extend Hohloch's result to the higher dimensional case. In doing so, we show the Kantorovich formulation of the optimal transport problem provides the tool to study the matching of higher dimensional partitions with their corresponding symmetric partitions.
Key words and phrases: Optimal transport, integer partitions
## 1. Introduction
This paper concerns the intersection of the theory of integer partitions and of optimal transport. Hohloch has made this connection in [H] for one-dimensional integer partitions, where the Monge formulation of the optimal transport problem [M] was used as a tool to describe and relate some bijections coming from the theory of integer partitions (e.g. self-symmetric partitions and partitions associated via Euler's identity). While Hohloch, in [H], does not provide a specific practical scenario for exploring the connection between two seemingly unrelated fields, the theory of integer partitions and of optimal transport, one potential application of the link between optimal transport and integer partitions could be in data analysis. Optimal transport can be used to compare probability distributions, and integer partitions can be used to represent data in a structured way. By linking these two fields, it may be possible to develop new methods for analyzing and comparing data sets that are represented as integer partitions.
To state the one-dimensional result in [H] more precisely, we begin with the following notations and definitions; given an integer \(n\in\mathbb{N}\), let \(\mathcal{P}(n)\) be the set of partitions of \(n\) and \(\pi\in\mathcal{P}(n)\) represent a partition of \(n\). For any \(\pi\in\mathcal{P}(n)\), one can associate a unique diagram called a Young diagram, \(Y(\pi)\) (see Definition 2.3). Given \(\pi\) and the corresponding Young diagram \(Y(\pi)\), by reflecting the Young diagram \(Y(\pi)\) across the line \(y=x\) we obtain another Young diagram. We denote the reflected Young diagram by \(Y(\text{sym}(\pi))\), where \(\text{sym}(\pi)\) is called the symmetric partition of \(\pi\) and is the corresponding partition for \(Y(\text{sym}(\pi))\) (see Figure 1). Given \(Y(\pi)\) and \(Y(\text{sym}(\pi))\), one can construct probability measures \(\delta_{\pi}\) and \(\delta_{\text{sym}(\pi)}\). Hohloch, in [H], constructed such measures using Dirac measures concentrated on the corners of each square of a Young diagram closest to the origin. This raises two natural
questions: what is the optimal way to match \(\pi\) to \(\operatorname{sym}(\pi)\), and what properties of \(\operatorname{sym}(\pi)\) can we infer from \(\pi\)? We summarize one result from [H] as follows.
1. If the cost function in the Monge problem [M] is the Euclidean distance, then the function which is the identity map on \(\operatorname{spt}(\delta_{\pi})\cap\operatorname{spt}(\delta_{\operatorname{ sym}(\pi)})\) and is otherwise reflection across \(y=x\), is optimal for \(\delta_{\pi}\) and \(\delta_{\operatorname{sym}(\pi)}\), where \(\operatorname{spt}(-)\) denotes the support of the measure.
2. We have \(\pi=\operatorname{sym}(\pi)\) if and only if \(\delta_{\pi}=\delta_{\operatorname{sym}(\pi)}\), i.e. the identity map is optimal.
For instance, in Figure 1, the map which is optimal between the left-hand and right-hand diagrams is the one which leaves the four common squares (i.e. the intersection of the supports of the two corresponding measures) fixed, and moves the squares with lower left corners \((2,0)\) and \((3,0)\) in the left-hand diagram to the ones with lower left corners \((0,2)\) and \((0,3)\), respectively, in the right-hand diagram.
In [H, Conjecture 4.2], Hohloch conjectures that the results in (1) and (2) above can be extended to higher dimensional integer partitions. The main contribution of this note is to prove the conjecture: see Theorem 4.1 and Theorem 4.2.
### Outline
In Section 2, we provide formal definitions related to integer partitions and their higher dimensional analogues, and describe how we interpret \(m\)-dimensional partitions as appropriate probability measures, which will allow us to compare different partitions using optimal transport. For this reason, we review some results from optimal transport in Section 3. We state and provide a proof of our main result in Section 4. Finally, Section 5 includes concluding remarks and some possible directions of future investigation.
## 2. Integer Partitions
In this section we briefly recall some basic definitions related to integer partitions and their higher dimensional counterparts. The study of integer partitions has a rich history in number theory and combinatorics; see e.g. [vLW].
**Definition 2.1**.: _Let \(n\in\mathbb{N}\). A partition of \(n\) is an ordered tuple of integers \((n_{1},\ldots,n_{k})\), where \(n_{1}\geq n_{2}\geq\ldots\geq n_{k}\geq 1\), \(n_{i}\in\mathbb{N}\) for all \(i\in\{1,\ldots,k\}\), such that \(\sum_{i=1}^{k}n_{i}=n\)._
Given \(n\in\mathbb{N}\), we denote by \(\mathcal{P}(n)\) the collection of all the possible partitions on \(n\) and set \(p(n)=\#\mathcal{P}(n)\). For example,
\[\mathcal{P}(4)=\{(4),(3,1),(2,2),(2,1,1),(1,1,1,1)\}\]
and \(p(4)=5\).
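For readers who want to experiment, a short Python sketch (a standard recursion, not taken from [H] or [vLW]) reproduces these counts:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, largest):
    """Number of partitions of n whose parts are all <= largest."""
    if n == 0:
        return 1
    if n < 0 or largest == 0:
        return 0
    # Either use at least one part equal to `largest`, or use no part of that size.
    return count(n - largest, largest) + count(n, largest - 1)

def p(n):
    return count(n, n)

print([p(n) for n in range(1, 8)])  # [1, 2, 3, 5, 7, 11, 15]; in particular p(4) = 5
```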
Integer partitions have a natural higher dimensional analogue, which we now define following [H, Definition 3.4].
**Definition 2.2**.: _Let \(n\in\mathbb{N}\). An \(m\)-dimensional partition of \(n\) is an array of integers \(n_{i_{1},...,i_{m}}\in\mathbb{N}\) where \(1\leq i_{j}\leq k_{j}\) for some integers \(1\leq k_{j}\leq n\), \(j=1,...,m\), such that for each index \(i_{j}=1,...,k_{j}\) the integers \(n_{i_{1},...,i_{m}}\) form a monotone decreasing sequence with \(n\geq\max_{i_{j}\in\{1,...,k_{j}\}}n_{i_{1},...,i_{m}}\) and \(\min_{i_{j}\in\{1,...,k_{j}\}}n_{i_{1},...,i_{m}}\geq 1\), and \(\sum_{i_{1}=1}^{k_{1}}\ldots\sum_{i_{m}=1}^{k_{m}}n_{i_{1},\cdots,i_{m}}=n\)._
We write \(\mathcal{P}_{m}(n)\) for the set of all \(m\)-dimensional partitions of \(n\), and set \(p_{m}(n)=\#\mathcal{P}_{m}(n)\).
For example,
\[\left[\begin{array}{cc}1&\\ 2&1\end{array}\right]\text{ and }\left[\begin{array}{cc}1&\\ 2&1&\\ 3&1&1\end{array}\right] \tag{2.1}\]
are 2-dimensional partitions of 4 and 9, respectively.
To represent a partition, we have the convenient notion of a Young diagram1. In the one dimensional case, the Young diagram of a partition \(\lambda=(\lambda_{1},\lambda_{2},...\lambda_{k})\in\mathcal{P}(n)\) is \(n\) squares arranged in left-justified rows where the bottom row has \(\lambda_{1}\) squares, the second row has \(\lambda_{2}\) squares, and so on. Figure 2 shows the Young diagram for two partitions of (2.1) from above. We can think of a Young diagram of a partition \(\pi\in\mathcal{P}_{m}(n)\) as a finite collection of \(n\) unit cubes in \(\mathbb{R}^{m+1}\) with positions regulated by the choice of partition, \(\pi\).
Footnote 1: NB multiple conventions for Young diagrams appear in the literature.
**Definition 2.3**.: _If \(\pi=(n_{i_{1},...,i_{m}})_{\begin{subarray}{c}1\leq n_{j}\leq k_{j}\\ j=1,...,m\end{subarray}}\in\mathcal{P}_{m}(n)\) as in Definition 2.2, the Young diagram of \(\pi\), denoted \(Y(\pi)\), is the following union of unit cubes in \(\mathbb{R}^{m+1}\):_
\[Y(\pi):=\bigcup_{1\leq i_{1},...,i_{m}\leq k_{1},...,k_{m}}\bigcup_{\alpha=1}^ {n_{i_{1},...,i_{m}}}\left([\alpha-1,\alpha]\times\prod_{j=1}^{m}[i_{j}-1,i_{ j}]\right). \tag{2.2}\]
In a similar fashion, we can ascribe to each partition \(\pi\) a probability measure \(\delta_{\pi}\), which is a sum of point masses as follows:
\[\delta_{\pi}:=\frac{1}{n}\sum_{1\leq i_{1},...,i_{m}\leq k_{1},...,k_{m}}\sum_ {\alpha=1}^{n_{i_{1},...,i_{m}}}\delta(i_{1},...,i_{m},\alpha) \tag{2.3}\]
where \(\delta(x_{1},...,x_{m+1})\) is a Dirac delta at the point \((x_{1},...,x_{m+1})\). Observe that \(\delta_{\pi}(\mathbb{R}^{m+1})=1\) for any partition \(\pi\in\mathcal{P}_{m}(n)\).
The intuition for (2.3) can be thought of roughly as follows: we can imagine \(\delta_{\pi}\) as assigning a point mass of \(1/n\) to each unit cube in \(Y(\pi)\), concentrated on the corner of each such cube with minimal Euclidean distance to the origin, and \(0\) everywhere else.
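A minimal sketch of this construction for \(m=2\) (the ragged-list encoding of the array \(n_{i_{1},i_{2}}\) is our own convention) lists the support points of \(\delta_{\pi}\), each carrying mass \(1/n\):

```python
import numpy as np

def support_points(pi):
    """Support of delta_pi in (2.3) for a 2-dimensional partition:
    one point (i1, i2, alpha) per unit cube of the Young diagram Y(pi)."""
    return np.array([(i1, i2, alpha)
                     for i1, row in enumerate(pi, start=1)
                     for i2, height in enumerate(row, start=1)
                     for alpha in range(1, height + 1)], dtype=float)

pi = [[2, 1], [1]]          # the 2-dimensional partition of n = 4 from (2.1)
print(support_points(pi))   # 4 points, each of mass 1/4
```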
Given a permutation \(\sigma\in S_{m+1}\), the symmetric group on \(m+1\) letters, one can associate to any \(m\)-dimensional partition a new partition as follows.
**Definition 2.4** ([H, Definition 4.5]).: _Given \(\sigma\in S_{m+1}\), an element of the symmetric group on \(m+1\) elements, let \(T_{\sigma}:\mathbb{R}^{m+1}\rightarrow\mathbb{R}^{m+1}\) be the linear map defined by \(e_{i}\mapsto e_{\sigma(i)}\) where \(e_{i}\), \(i=1,...,m+1\), is the standard basis of \(\mathbb{R}^{m+1}\). For any \(\pi\in\mathcal{P}_{m}(n)\),_
* _the_ \(\sigma\)_-symmetric partition of_ \(\pi\)_, denoted by_ \(\text{sym}_{\sigma}(\pi)\)_, is the partition whose Young diagram satisfies_ \(Y(\text{sym}_{\sigma}(\pi))=T_{\sigma}(Y(\pi))\)_;_
* _if_ \(\pi=\text{sym}_{\sigma}(\pi)\)_, then we call_ \(\pi\)__\(\sigma\)_-self-symmetric._
Figure 2. Young diagrams of a partition in \(\mathcal{P}_{2}(4)\) (left) and a partition in \(\mathcal{P}_{2}(9)\) (right) from (2.1)
This definition generalizes the concept of self-symmetric partitions in one-dimension, which are partitions whose Young diagrams are invariant under reflection across the \(y=x\) line. The \(\sigma\)-self-symmetric partitions are invariant under a more general type of reflection, determined by the permutation \(\sigma\). Figure 3 gives an example of a partition \(\pi\in\mathcal{P}_{2}(6)\) alongside \(\operatorname{sym}_{(23)}(\pi)\), i.e. partitions which are \((2\ 3)\)-symmetric.
Notice that if \(\tau\in S_{2}\) is not the identity permutation, then any partition \(\pi\in\mathcal{P}_{1}(n)\) has a \(\tau\)-symmetric partition which is just the partition obtained by reflecting the Young diagram of \(\pi\), now in \(\mathbb{R}^{2}\), across the line \(y=x\). In this restricted case, \(\pi\) is called _self-symmetric_ if its Young diagram is invariant under reflection across \(y=x\).
## 3. Optimal Transport
Our goal is to investigate patterns relating an \(m\)-dimensional partition to its corresponding symmetric partition. The framework that enables us to establish these patterns is optimal transport. Therefore, we state the problem and an important preliminary result from the theory of optimal transport [V, G]. Readers who are familiar can skip this section and refer to it when needed. In order to state the problem more precisely, we introduce some mathematical notions. Let \(x_{1},x_{2}\in\mathbb{R}_{+}^{m+1}\) be \((m+1)\)-tuples of positive real numbers such that \(\sum_{j=1}^{m+1}x_{1,j}=\sum_{j=1}^{m+1}x_{2,j}=1\), where \(x_{i,j}\), with \(i=1,2\) and \(j=1,...,m+1\), denotes the \(j\)th coordinate of \(x_{i}\), and consider two measures
\[\delta_{x_{1}}=\sum_{j=1}^{m+1}x_{1,j}\delta_{x_{1,j}}\quad\text{ and }\quad \delta_{x_{2}}=\sum_{j=1}^{m+1}x_{2,j}\delta_{x_{2,j}}.\]
\(\delta_{x_{i,j}}\) is the Dirac delta measure on \(x_{i,j}\). Let
\[X:=\{(x_{1,i},x_{2,j})\mid 1\leq i,j\leq m+1\}\]
and let \(c:X\to\mathbb{R}_{+}\cup\{\infty\}\) be a given cost function; we consider the discrete version of the Kantorovich problem [K]:
\[\inf_{\gamma\in\Pi(\delta_{x_{1}},\delta_{x_{2}})}\sum_{1\leq i,j\leq m+1}c_{i,j} \gamma_{i,j}, \tag{3.1}\]
where \(c_{ij}=c(x_{1,i},x_{2,j})\),
\[\Pi(\delta_{x_{1}},\delta_{x_{2}}):=\{\gamma\in\mathbb{R}^{(m+1)\times(m+1)} \mid\gamma\mathbb{1}_{m+1}=\delta_{x_{1}}\text{ and }\gamma^{\mathrm{T}}\mathbb{1}_{m+1}= \delta_{x_{2}}\} \tag{3.2}\]
and \(\mathbb{1}_{m+1}\in\mathbb{R}^{m+1}\) is the vector of ones. The matrices \(\gamma\in\Pi(\delta_{x_{1}},\delta_{x_{2}})\) are called _transport plans_. Note that the set (3.2) is the set of doubly stochastic matrices, which is a compact set (see [G, Chapter 3]), and hence the existence of optimizers \(\gamma^{*}\) depends on the cost function \(c\). In the continuous case, problem (3.1) is related to the classical Monge problem [M]. In particular, for the case where the cost is \(c(x_{1,i},x_{2,j})=|x_{1,i}-x_{2,j}|^{2}\), it is well-known (see for instance [ACB+, KS, RR]) that the solution of the Monge problem is obtained from the continuous version of problem (3.1). In general, the Monge problem does not always admit a solution even if the cost function is very regular. We note that optimal transport theory has become a useful tool for other fields (see for instance [A, ABG, CGP, PC, AC]).
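Problem (3.1) is a finite linear program and can be solved directly; the following scipy sketch (the helper name and the toy data are ours) sets up the marginal constraints of (3.2) explicitly:

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, C):
    """Solve the discrete Kantorovich problem (3.1): minimize <C, gamma>
    over couplings gamma with row sums mu and column sums nu."""
    m, n = C.shape
    A_eq = []
    for i in range(m):                      # gamma @ 1 = mu
        row = np.zeros((m, n)); row[i, :] = 1
        A_eq.append(row.ravel())
    for j in range(n):                      # gamma.T @ 1 = nu
        col = np.zeros((m, n)); col[:, j] = 1
        A_eq.append(col.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

cost, gamma = kantorovich(np.array([0.5, 0.5]), np.array([0.5, 0.5]),
                          np.array([[0.0, 1.0], [1.0, 0.0]]))
print(cost)   # 0.0: the diagonal (identity) coupling is optimal here
```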
The characterization of the support of optimal transport plans will be useful in establishing our results. To state this result more precisely, we begin with the following definition.
**Definition 3.1**.: _We say that a set \(\Gamma\subset X\) is \(c\)-cyclically monotone if, for any \(k\in\mathbb{N}\), any permutation \(\sigma\in S_{k}\) and any finite family of points \((x_{1,1},x_{2,1}),\ldots,(x_{1,k},x_{2,k})\in\Gamma\), we have that_
\[\sum_{i=1}^{k}c(x_{1,i},x_{2,i})\leq\sum_{i=1}^{k}c(x_{1,i},x_{2,\sigma(i)}).\]
The following result will be useful; see [V, G].
**Theorem 3.1**.: _If \(\gamma^{*}\) is optimal for the cost \(c\) and \(c\) is continuous, then the support of \(\gamma^{*}\) denoted as \(\mathrm{spt}(\gamma^{*})\subset X\) is a \(c\)-cyclical monotone set._
Note that in this discrete setting, since all the mass of \(\delta_{x_{i}}\), \(i=1,2\), is concentrated on isolated points, the \(c\)-cyclically monotone set can be used to define a linear map which describes the optimal pairings of the \(x_{1,i}\) with the \(x_{2,j}\). Most importantly, if the cost \(c\) is convex, then this linear map is unique.
## 4. Main Results and Proofs
This section is dedicated to providing a proof of the conjectures stated in [H]. We will demonstrate here that, unlike in [H], the Kantorovich formulation of optimal transport (3.1)-(3.2) offers an alternative, more concise approach for handling the higher-dimensional case. We now state our main results. Recall that for a partition \(\pi\in\mathcal{P}_{m}(n)\) and its \(\sigma\)-symmetric
partition \(\operatorname{sym}_{\sigma}(\pi)\), we associate Young diagrams as in Definition 2.3, and to those we associate measures \(\delta_{\pi}\) and \(\delta_{\operatorname{sym}_{\sigma}(\pi)}\) as in (2.3), and define the Wasserstein distance between \(\delta_{\pi}\) and \(\delta_{\operatorname{sym}_{\sigma}(\pi)}\) as
\[W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)}):=\min_{\gamma\in\Pi( \delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})}\sum_{i,j=1}^{m+1}c_{ ij}\gamma_{ij}. \tag{4.1}\]
where \(c=(c_{ij})\in\mathbb{R}^{(m+1)\times(m+1)}\), \(c_{ij}=|i-j|^{2}\) and \(\Pi(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})\) is defined in (3.2).
**Theorem 4.1**.: _Let \(\pi\in\mathcal{P}_{m}(n)\) and \(\sigma\in S_{m+1}\). The matrix \(T_{\sigma}=(e_{\sigma(1)},\ldots,e_{\sigma(m+1)})\), where \(e_{1},\ldots,e_{m+1}\) is the standard basis of \(\mathbb{R}^{m+1}\), induces the optimal matrix in (4.1). In particular, the map which is the identity on \(\operatorname{spt}(\delta_{\pi})\cap\operatorname{spt}(\delta_{\operatorname {sym}_{\sigma}(\pi)})\) and is \(T_{\sigma}\) otherwise, is optimal for \(\delta_{\pi}\) and \(\delta_{\operatorname{sym}_{\sigma}(\pi)}\)._
We note here that an optimal matrix attaining \(W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})\) exists in \(\Pi(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})\), since the cost \(c_{ij}\) is continuous (a squared Euclidean distance) and the constraint set is compact.
**Theorem 4.2**.: _A partition \(\pi\in\mathcal{P}_{m}(n)\) is \(\sigma\)-self-symmetric if and only if \(W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})=0\), where \(W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})\) is defined in (4.1)._
Proof of Theorem 4.1.: Consider measures \(\mu_{\sigma},\nu_{\sigma},\omega_{\sigma}\in\mathcal{P}(\mathbb{R}^{m+1})\) such that
\[\operatorname{spt}(\omega_{\sigma})= \operatorname{spt}(\delta_{\pi})\cap\operatorname{spt}(\delta_{ \operatorname{sym}_{\sigma}(\pi)}),\] \[\operatorname{spt}(\mu_{\sigma})= \operatorname{spt}(\delta_{\pi})\backslash\left(\operatorname{spt }(\delta_{\pi})\cap\operatorname{spt}(\delta_{\operatorname{sym}_{\sigma}( \pi)})\right),\] \[\operatorname{spt}(\nu_{\sigma})= \operatorname{spt}(\delta_{\operatorname{sym}_{\sigma}(\pi)}) \backslash\left(\operatorname{spt}(\delta_{\pi})\cap\operatorname{spt}( \delta_{\operatorname{sym}_{\sigma}(\pi)})\right).\]
Then we decouple \(\Pi(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})\) as the disjoint union
\[\Pi(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})=\Pi(\omega_{ \sigma},\omega_{\sigma})\cup\Pi(\mu_{\sigma},\nu_{\sigma}).\]
where \(\Pi(\omega_{\sigma},\omega_{\sigma})\) is the set of matrices concentrated on entries corresponding to \(\operatorname{spt}(\omega_{\sigma})\times\operatorname{spt}(\omega_{\sigma})\) and \(\Pi(\mu_{\sigma},\nu_{\sigma})\) is the set of matrices concentrated on entries corresponding to the complement of \(\operatorname{spt}(\omega_{\sigma})\times\operatorname{spt}(\omega_{\sigma})\). Therefore, we have that
\[W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})=\min_{\hat{\gamma}\in\Pi(\omega_{\sigma},\omega_{\sigma})}\sum_{i,j=1}^{m+1}c_{ij}\hat{\gamma}_{ij}+\min_{\hat{\gamma}\in\Pi(\mu_{\sigma},\nu_{\sigma})}\sum_{i,j=1}^{m+1}c_{ij}\hat{\gamma}_{ij}.\]
However, since \(c_{ij}=|i-j|^{2}\), we have that
\[\min_{\hat{\gamma}\in\Pi(\omega_{\sigma},\omega_{\sigma})}\sum_{i,j=1}^{m+1}c_{ij}\hat{\gamma}_{ij}=0,\]
where the minimizer \(\hat{\gamma}^{*}\in\Pi(\omega_{\sigma},\omega_{\sigma})\) is the unique diagonal matrix. Therefore,
\[W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})=\min_{\hat{\gamma}\in\Pi(\mu_{\sigma},\nu_{\sigma})}\sum_{i,j=1}^{m+1}c_{ij}\hat{\gamma}_{ij}. \tag{4.2}\]
Furthermore, from Theorem 3.1, since the support \(\operatorname{spt}(\tilde{\gamma}^{*})\subset\operatorname{spt}(\mu_{\sigma})\times\operatorname{spt}(\nu_{\sigma})\) of the minimizer \(\tilde{\gamma}^{*}\) of (4.2) is a \(c\)-cyclical monotone set in \(\operatorname{spt}(\mu_{\sigma})\times\operatorname{spt}(\nu_{\sigma})\) that depends on \(\sigma\in S_{m+1}\), we have that the optimal transport plan is induced by the matrix \(T_{\sigma}=(e_{\sigma(1)},\ldots,e_{\sigma(m+1)})\), where \(e_{1},\ldots,e_{m+1}\) is the standard basis in \(\mathbb{R}^{m+1}\).
We proceed to the proof of the next result.
Proof of Theorem 4.2.: Suppose \(\pi\in\mathcal{P}_{m}(n)\) is a \(\sigma\)-self-symmetric partition. Then, from Definition 2.4, we have that \(\pi=\operatorname{sym}_{\sigma}(\pi)\) and there exists \(T_{\sigma}:\mathbb{R}^{m+1}\to\mathbb{R}^{m+1}\) such that
\[Y(\operatorname{sym}_{\sigma}(\pi))=T_{\sigma}(Y(\pi)).\]
Then, since \(\pi\in\mathcal{P}_{m}(n)\) is a \(\sigma\)-self-symmetric partition, we have that \(Y(\pi)=T_{\sigma}(Y(\pi))\). This implies that \(W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})=0\); in this case the optimal transport map and plan are simply the identity map and the corresponding diagonal plan.
Conversely, suppose \(W(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})=0\). Then there exists an optimal matrix \(\gamma^{*}\in\Pi(\delta_{\pi},\delta_{\operatorname{sym}_{\sigma}(\pi)})\) such that
\[\sum_{i,j=1}^{m+1}c_{ij}\gamma^{*}_{ij}=0.\]
Now, since \(c_{ij},\gamma^{*}_{ij}\geq 0\), the non-zero entries of \(\gamma^{*}\) must be assigned to the entries where \(c_{ij}=0\). Therefore, from Theorem 3.1, the set
\[\{(i,j)\in\operatorname{spt}(\delta_{\pi})\times\operatorname{spt}(\delta_{ \operatorname{sym}_{\sigma}(\pi)}):c_{ij}=0\},\]
is the \(c\)-cyclical monotone set for \(\gamma^{*}\). Since \(c_{ij}=|i-j|^{2}\), this implies that \(i=j\), and hence the \(c\)-cyclical monotone set is a diagonal set and the Young diagrams are the same. This implies that \(\operatorname{sym}_{\sigma}(\pi)=\pi\), and hence from Definition 2.4 we conclude that \(\pi\in\mathcal{P}_{m}(n)\) is a \(\sigma\)-self-symmetric partition, which completes the proof.
**Example 4.1**.: _Figure 4 gives an example of the optimal transport map for some \(\pi\in\mathcal{P}_{2}(6)\) and \(\operatorname{sym}_{(23)}\pi\)._
## 5. Conclusion and future work
We have studied a class of \(n\)-dimensional partitions using tools from optimal transport. More precisely, we have shown that if the Wasserstein distance between the two measures associated with a partition and its \(\sigma\)-symmetric partition is zero, then their Young diagrams are the same and hence the partition must be \(\sigma\)-self-symmetric. We believe the Kantorovich formulation can also be adapted to study matching between even and odd partitions as addressed in [H] in the case of partitions matched by Euler's identity.
In the future, one can study matching between different partitions and potentially a multi-partition version. In particular, given \(m\)-dimensional partitions \(\pi_{1},\ldots,\pi_{k}\in\mathcal{P}_{m}(n)\), what is the closest partition to all of them? We believe this problem is related to multi-marginal optimal transport (see [P] for a survey on this topic).
|
2301.06557 | Finite Dimensional Koopman Form of Polynomial Nonlinear Systems | The Koopman framework is a popular approach to transform a finite dimensional
nonlinear system into an infinite dimensional, but linear model through a
lifting process, using so-called observable functions. While there is an
extensive theory on infinite dimensional representations in the operator sense,
there are few constructive results on how to select the observables to realize
them. When it comes to the possibility of finite Koopman representations, which
are highly important from a practical point of view, there is no constructive
theory. Hence, in practice, often a data-based method and ad-hoc choice of the
observable functions is used. When truncating to a finite number of basis functions,
there is also no clear indication of the introduced approximation error. In
this paper, we propose a systematic method to compute the finite dimensional
Koopman embedding of a specific class of polynomial nonlinear systems in
continuous-time such that the embedding, without approximation, can fully
represent the dynamics of the nonlinear system. | Lucian Cristian Iacob, Maarten Schoukens, Roland Tóth | 2023-01-16T18:53:08Z | http://arxiv.org/abs/2301.06557v1 | # Finite Dimensional Koopman Form of Polynomial Nonlinear Systems
###### Abstract
The Koopman framework is a popular approach to transform a finite dimensional nonlinear system into an infinite dimensional, but linear model through a lifting process, using so-called observable functions. While there is an extensive theory on infinite dimensional representations in the operator sense, there are few constructive results on how to select the observables to realize them. When it comes to the possibility of finite Koopman representations, which are highly important from a practical point of view, there is no constructive theory. Hence, in practice, often a data-based method and ad-hoc choice of the observable functions is used. When truncating to a finite number of basis functions, there is also no clear indication of the introduced approximation error. In this paper, we propose a systematic method to compute the finite dimensional Koopman embedding of a specific class of polynomial nonlinear systems in continuous-time such that the embedding, without approximation, can fully represent the dynamics of the nonlinear system.
Keywords: Nonlinear systems, Koopman operator, Linear embedding

Footnote †: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement nr. 714663) and from the European Union within the framework of the National Laboratory for Autonomous Systems (RRF-2.3.1-21-2022-00002).
## 1 Introduction
In most engineering fields, due to increasing performance demands, tackling the nonlinear behaviour becomes more and more important. However, the available methods in the field of nonlinear control (e.g. feedback linearization, backstepping, sliding mode control (Khalil, 2002)) are generally complex to design and often only offer stability guarantees, while performance shaping of the closed loop has yet to be achieved. This is in contrast to the systematic and powerful tools available for _linear time invariant_ (LTI) systems. However, using LTI control tools on linearized models offers limited performance when the system evolves away from the operating region. Hence, there is an increasing need to extend the powerful LTI control design and modelling framework to address nonlinear systems. As such, there is a significant interest in finding globally linear surrogate models of nonlinear systems.
One of the more promising approaches to achieve this is given by the Koopman framework (Brunton et al., 2022), (Bevanda et al., 2021), (Mauroy et al., 2020), where the concept is to project the original nonlinear state space representation to a higher dimensional (possibly infinite) but linear space, through observable functions. The Koopman operator is a linear operator and governs the dynamics of the observables. The Koopman framework shows promising results in its application to real-world analysis and control applications (e.g. mechatronic systems (Abraham and Murphey, 2019), (Cisneros et al., 2020), distributed parameter systems (Klus et al., 2020)). For practical use, a finite number of observables needs to be selected. These are then used to construct time shifted data matrices, to compute via least-squares the matrix representation of the Koopman operator. This technique is known as _extended dynamic mode decomposition_ (EDMD) (Williams et al., 2015). However, the main problem is that the choice of the observables is heuristic and there are no guarantees on the quality of the resulting model. To tackle this, one solution is to use data-driven techniques to learn the lifting from data, in order to circumvent the manual selection of observables (Lusch et al., 2018), (Iacob et al., 2021). Nevertheless, this is still an approximation and the questions on how to embed the nonlinear system into an exact linear finite dimensional lifted representation and when this is possible at all are still open. This is an important aspect, because, for control purposes, having an exact finite dimensional embedding allows for the application of the available control tools for linear systems. Moreover, if there exist approximation errors in the model that cannot be quantified, the expected performance will not be achieved. To tackle this, there have been attempts to connect the Koopman framework to immersion (Wang and Jungers, 2020) and Carleman linearization, in order to obtain a clear way of computing the observables. However, in the immersion approach, the existence of a finite dimensional fully linear lifting depends heavily on the observability property of the system and, in general, the resulting embedding contains a nonlinear output injection (Krener and Isidori, 1983), (Jouan, 2003). For the Carleman linearization (Kowalski and Steeb, 1991), while it offers a systematic way of computing the lifting functions, the resulting embedding is still an infinite dimensional model that needs to be trimmed.
The present paper discusses a novel method to systematically convert a polynomial nonlinear system to an exact finite dimensional linear embedding. Starting from the idea of the simple 2-dimensional example shown in (Brunton et al., 2022), we introduce a state-space model where the state equation is described by a lower triangular polynomial form. We prove that there always exists an exact finite dimensional Koopman representation and we show how to systematically compute it. Furthermore, we also show that, once the autonomous part of the nonlinear system is fully embedded, the extension to systems with inputs is trivial and can be performed in a separate step. Using an example system, we demonstrate that the lifted Koopman model can fully capture the original dynamics, both in autonomous operation and in the presence of inputs.
The paper is structured as follows. Section 2 describes the Koopman framework and details the proof and steps needed to obtain the finite embedding. In Section 3, we discuss the example and showcase the simulation results. In Section 4, conclusions on the presented results are given together with outlooks on future research.
## 2 Finite Dimensional Embedding
The present section details the Koopman framework and showcases the proposed method to compute an exact finite dimensional embedding. Additionally, we discuss the extension to systems with inputs.
### Koopman framework
Consider the autonomous nonlinear system:
\[\dot{x}=f(x), \tag{1}\]
with \(x:=x(t)\) denoting the state, \(t\in\mathbb{R}\) the time, and \(f:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{x}}\) the nonlinear vector field, which we consider to be a Lipschitz continuous function. Given an initial condition \(x(0)\in\mathbb{X}\subseteq\mathbb{R}^{n_{x}}\), the solution \(x(t)\) can be described as:
\[x(t)=F(t,x(0)):=x(0)+\int_{0}^{t}f(x(\tau))\,\mathrm{d}\tau. \tag{2}\]
It is assumed that \(\mathbb{X}\) is compact and forward invariant under the flow \(F(t,\cdot)\), such that \(F(t,\mathbb{X})\subseteq\mathbb{X},\forall t\geq 0\). Introduce the family of Koopman operators \(\{\mathcal{K}^{t}\}_{t\geq 0}\) associated to the flow \(F(t,\cdot)\) as:
\[\mathcal{K}^{t}\phi(x(0))=\phi\circ F(t,x(0)),\quad\phi\in\mathcal{F}, \tag{3}\]
where \(\mathcal{F}\subseteq\mathcal{C}^{1}\) is a Banach function space of continuously differentiable functions and \(\phi:\mathbb{X}\rightarrow\mathbb{R}\) is a scalar observable function. As the flow \(F\) is uniformly Lipschitz and \(\mathbb{X}\) a compact forward-invariant set, the Koopman semigroup \(\{\mathcal{K}^{t}\}_{t\geq 0}\) is strongly continuous on \(\mathcal{F}\) (Mauroy et al., 2020). Thus, we can describe the infinitesimal generator \(\mathcal{L}:\mathcal{D}_{\mathcal{L}}\rightarrow\mathcal{F}\) associated to the Koopman semigroup of operators (Lasota and Mackey, 1994), (Mauroy et al., 2020) as:
\[\mathcal{L}\phi(x(0))=\lim_{t\downarrow 0}\frac{\mathcal{K}^{t}\phi(x(0))-\phi(x(0))}{t},\quad\phi\in\mathcal{D}_{\mathcal{L}}, \tag{4}\]
where \(\mathcal{D}_{\mathcal{L}}\) is a dense set in \(\mathcal{F}\). Note that, as described in (Lasota and Mackey, 1994), the generator \(\mathcal{L}\) is a linear operator. Through the infinitesimal generator we can thus describe the dynamics of observables as follows:
\[\dot{\phi}=\frac{\partial\phi}{\partial x}f=\mathcal{L}\phi, \tag{5}\]
which is a linear infinite dimensional representation of the nonlinear system (1). If there exists a finite dimensional Koopman subspace \(\mathcal{F}_{n_{l}}\subseteq\mathcal{D}_{\mathcal{L}}\), such that the image of \(\mathcal{L}\) is in \(\mathcal{F}_{n_{l}}\), then, given the set of lifting functions \(\Phi\) as a basis of \(\mathcal{F}_{n_{l}}\), we have \(\mathcal{L}\phi\in\mathrm{span}\{\Phi\}\) for all \(\phi\in\Phi\). Thus, the following relation holds:
\[\dot{\phi}_{j}=\mathcal{L}\phi_{j}=\sum_{i=1}^{n_{l}}L_{ij}\phi_{i}, \tag{6}\]
where \(L\) denotes the matrix representation of \(\mathcal{L}\) and the coordinates of \(\mathcal{L}\phi_{j}\) in the basis \(\Phi\) are contained in the column \(L_{\cdot j}\). Let \(A=L^{\top}\in\mathbb{R}^{n_{l}\times n_{l}}\); then, based on (5), the lifted representation of (1) is given by:
\[\dot{\Phi}(x)=\frac{\partial\Phi}{\partial x}(x)f(x)=A\Phi(x). \tag{7}\]
Thus, one can formulate conditions for the existence of a finite dimensional embedding of (1) as:
\[\dot{\Phi}\in\mathrm{span}\{\Phi\}, \tag{8a}\]
which is equivalent to
\[\frac{\partial\Phi}{\partial x}f\in\mathrm{span}\{\Phi\}. \tag{8b}\]
However, the major question is how to compute \(\Phi\) such that the conditions (8) are true. In the Koopman framework, to recover the original states of (1), the existence of a back transformation \(\Phi^{\dagger}(\Phi(x))=x\) is often assumed. For simplicity, this is achieved by adding an extra condition to (8), namely that the original states are contained in \(\Phi\), i.e., the identity function is part of \(\Phi\). Next, in order to explicitly write the LTI dynamics given by the Koopman form, let \(z(t)=\Phi(x(t))\). Then, an associated Koopman representation of (1) is:
\[\dot{z}=Az,\quad\text{with }z(0)=\Phi(x(0)). \tag{9}\]
It is important to note that, by the existing theory, in general one cannot guarantee the existence of a finite dimensional Koopman invariant subspace \(\mathcal{F}_{n_{l}}\). In the sequel we show that, in case of systems described by a state-space representation where the state equation can be written in a lower triangular polynomial form, there always exists an exact finite dimensional Koopman representation of the system in the form of (9) and this representation can be systematically computed.
### Exact finite embedding procedure
Consider the nonlinear system (1) to have the following structure:
\[\begin{split}\dot{x}_{1}&=a_{1}x_{1}\\ \dot{x}_{2}&=a_{2}x_{2}+f_{2}(x_{1})\\ \dot{x}_{3}&=a_{3}x_{3}+f_{3}(x_{1},x_{2})\\ &\vdots\\ \dot{x}_{n}&=a_{n}x_{n}+f_{n}(x_{1},\ldots,x_{n-1})\end{split} \tag{10}\]
where \(f_{n}\) is given by:
\[f_{n}(x_{1},\ldots,x_{n-1})=\sum_{j_{1}=0}^{d_{n}}\cdots\sum_{j_{n-1}=0}^{d_{n}} \alpha_{j_{1}\ldots j_{n-1}}^{n}\prod_{i=1}^{n-1}x_{i}^{j_{i}}, \tag{11}\]
with polynomial terms of the form \(x_{1}^{j_{1}}\ldots x_{n-1}^{j_{n-1}}\). It is assumed, for ease of derivation, that the powers go up to \(d_{n}\), but there is no restriction and each power can be arbitrarily large (but finite); in other words, \(d_{n}\) can be viewed as the maximum power appearing in the polynomial terms. Under these considerations, we can give the following theorem.
**Theorem 1**: _For an autonomous continuous-time nonlinear system that has a polynomial state-space representation in the form of (10), there exists an exact finite-dimensional lifting \(\Phi:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{l}}\), containing the states \(x_{i}\), with \(i\in\{1,\ldots,n\}\), such that (8a) holds true._
The theorem is proven by induction. First, we consider the cases \(n=1,2,3,4\), and then we show that if the statement of Theorem 1 holds for \(n\) states, then it also holds for \(n+1\).
* \(n=1\) (first order system): \[\dot{x}_{1}=a_{1}x_{1}\] (12) Let \(W_{1}=\{x_{1}\}\) and \(\Phi=\operatorname{vec}(W_{1})\), i.e., \(\Phi(x)=x_{1}\). It is trivial to see that condition (8a) holds true as \(\dot{x}_{1}=a_{1}x_{1}\in\operatorname{span}\{\Phi\}\).
* \(n=2\) (second order system): Notice that the dynamics defined by the \(2^{\text{nd}}\)-order system are described by (12), together with \[\dot{x}_{2}=a_{2}x_{2}+\sum_{j_{1}=0}^{d_{2}}\alpha_{j_{1}}^{2}x_{1}^{j_{1}}.\] (13) Here, superscript \(2\) of the coefficient \(\alpha_{j_{1}}^{2}\) denotes that it belongs to the \(2^{\text{nd}}\) state equation and not that the coefficient is raised to power \(2\). Let \(V_{2}=\{x_{1}^{0},\ldots,x_{1}^{d_{2}}\}\) and \(W_{2}=\{x_{2}\}\cup V_{2}\), while \(\Phi=\operatorname{vec}(W_{1}\cup W_{2})\). By calculating \(\dot{\Phi}\), we get the terms associated with \(W_{1}\) and the terms \[\frac{\mathrm{d}}{\mathrm{d}t}\left(x_{1}^{j_{1}}\right)=j_{1}x_{1}^{j_{1}-1} \dot{x}_{1}=j_{1}a_{1}x_{1}^{j_{1}}\] (14) originating from \(V_{2}\). It is easy to observe that all terms in (14) are already contained in \(\Phi\) and \(\dot{x}_{2}\in\operatorname{span}\{\Phi\}\), hence condition (8a) holds true.
* \(n=3\) (third order system): The dynamics of the \(3^{\text{rd}}\)-order system are described by (12), (13) and the following equation: \[\dot{x}_{3}=a_{3}x_{3}+\sum_{j_{1}=0}^{d_{3}}\sum_{j_{2}=0}^{d_{3}}\alpha_{j_{1},j_{2}}^{3}x_{1}^{j_{1}}x_{2}^{j_{2}}.\] (15) As performed previously, we take the nonlinear terms \(x_{1}^{j_{1}}x_{2}^{j_{2}}\) and add them to the set of lifting functions \(V_{3}=\{x_{1}^{0}x_{2}^{0},\ldots,x_{1}^{d_{3}}x_{2}^{d_{3}}\}\) and \(W_{3}=\{x_{3}\}\cup V_{3}\), while \(\Phi=\operatorname{vec}(W_{1}\cup W_{2}\cup W_{3})\). By calculating \(\dot{\Phi}\), we get the terms associated with \(W_{1}\), \(W_{2}\) as before and \[\frac{\mathrm{d}}{\mathrm{d}t}\left(x_{1}^{j_{1}}x_{2}^{j_{2}}\right)=j_{1}x_{1}^{j_{1}-1}x_{2}^{j_{2}}\dot{x}_{1}+j_{2}x_{1}^{j_{1}}x_{2}^{j_{2}-1}\dot{x}_{2}\] (16) \[\qquad=(j_{1}a_{1}+j_{2}a_{2})\underbrace{x_{1}^{j_{1}}x_{2}^{j_{2}}}_{a}+j_{2}\sum_{\tilde{j}_{1}=0}^{d_{2}}\alpha_{\tilde{j}_{1}}^{2}\underbrace{x_{1}^{j_{1}+\tilde{j}_{1}}x_{2}^{j_{2}-1}}_{b}\] originating from \(V_{3}\). The following observations can be made:
* The terms \(a\) are already contained in \(V_{3}\).
* For the terms \(b\), we can observe that the power \(j_{2}\) decreases by \(1\) and \(j_{1}\) increases by at most \(d_{2}\). Introduce the operator \(\mathfrak{D}_{\mathrm{b}}\) such that \(\mathfrak{D}_{\mathrm{b}}(x_{1}^{j_{1}}x_{2}^{j_{2}})=\{x_{1}^{j_{1}+\tilde{j}_{1}}x_{2}^{j_{2}-1}\}_{\tilde{j}_{1}=0}^{d_{2}}\), i.e., it gives the \(b\) terms of (16). Then let \(V_{3}\gets V_{3}\cup\mathfrak{D}_{\mathrm{b}}(V_{3})\). Repeating the process, i.e., applying the time derivative again to \(x_{1}^{j_{1}+\tilde{j}_{1}}x_{2}^{j_{2}-1}\), further decreases the power of \(x_{2}\) and increases the power of \(x_{1}\), and at each step only terms of the form \(a\) and \(b\) are generated. Repeating the process for a finite number of steps gives that \(\mathfrak{D}_{\mathrm{b}}(V_{3})\backslash V_{3}\subseteq\{x_{1}^{0},\ldots,x_{1}^{n_{1}}\}\). Hence, based on case \(n=2\), we know that for \(V_{3}\gets V_{3}\cup\mathfrak{D}_{\mathrm{b}}(V_{3})\), taking \(W_{3}=\{x_{3}\}\cup V_{3}\) and \(\Phi=\operatorname{vec}(W_{1}\cup W_{2}\cup W_{3})\) will ensure that condition (8a) holds true.
* \(n=4\) (fourth order system): The dynamics of the \(4^{\text{th}}\)-order system is described by (12), (13), (15), together with: \[\dot{x}_{4}=a_{4}x_{4}+\sum_{j_{1}=0}^{d_{4}}\sum_{j_{2}=0}^{d_{4}}\sum_{j_{3}=0}^{d_{4}}\alpha_{j_{1},j_{2},j_{3}}^{4}x_{1}^{j_{1}}x_{2}^{j_{2}}x_{3}^{j_{3}}.\] (17) To ease readability, let \(\zeta_{j}=x_{1}^{j_{1}}x_{2}^{j_{2}}\) with \(j=j_{1}+(d_{4}+1)j_{2}+1\). This means that \(j=1\) corresponds to \(j_{1}=0,j_{2}=0\), \(j=2\) corresponds to \(j_{1}=1,j_{2}=0\), up until \(j=P=(d_{4}+1)^{2}\), which corresponds to \(j_{1}=d_{4},j_{2}=d_{4}\). Then, (17) can be written as \[\dot{x}_{4}=a_{4}x_{4}+\sum_{j=1}^{P}\sum_{j_{3}=0}^{d_{4}}\tilde{\alpha}_{j,j_{3}}^{4}\zeta_{j}x_{3}^{j_{3}}.\] (18) Let \(V_{4}=\{\zeta_{1}x_{3}^{0},\ldots,\zeta_{P}x_{3}^{d_{4}}\}\) and \(W_{4}=\{x_{4}\}\cup V_{4}\), while \(\Phi=\operatorname{vec}(\bigcup_{i=1}^{4}W_{i})\). By calculating \(\dot{\Phi}\), we get the terms associated with \(W_{1},W_{2},W_{3}\) as before and \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\zeta_{j}x_{3}^{j_{3}}\right)=\dot{\zeta}_{j}x_{3}^{j_{3}}+j_{3}\zeta_{j}x_{3}^{j_{3}-1}\dot{x}_{3}\] (19) \[\qquad=j_{3}a_{3}\underbrace{\zeta_{j}x_{3}^{j_{3}}}_{a}+j_{3}\sum_{j=1}^{P}\tilde{\alpha}_{j}^{3}\underbrace{\zeta_{j+n_{j}}x_{3}^{j_{3}-1}}_{b}+\underbrace{\dot{\zeta}_{j}x_{3}^{j_{3}}}_{c}.\]
* The terms \(a\) are already contained in \(V_{4}\).
* For the terms \(b\), we can observe that the power \(j_{3}\) decreases by \(1\) and the powers of \(x_{1}\) and \(x_{2}\) within \(\zeta\) increase by at most \(d_{3}\) (which is finite), encoded in terms of \(n_{j}\). Applying the same iterations as in case \(n=3\), we can construct a \(V_{4}\) such that \(\mathfrak{D}_{\mathrm{b}}(V_{4})\setminus V_{4}\subseteq\{\zeta_{1},\ldots,\zeta_{P^{\prime}}\}\) for some finite \(P^{\prime}\).
* For the terms \(c\), taking \(\frac{\mathrm{d}}{\mathrm{d}t}\zeta_{j}\) leads to a decrease of the orders of \(x_{1}\) and \(x_{2}\) in the terms \(\zeta\). Using \(V_{4}\gets V_{4}\cup\mathfrak{D}_{\mathrm{c}}(V_{4})\), a finite number of steps leads to \(\mathfrak{D}_{\mathrm{c}}(V_{4})\setminus V_{4}\subseteq\{x_{3}^{0},\ldots,x_{3}^{n_{3}}\}\). Hence, based on case \(n=3\), taking \(W_{4}=\{x_{4}\}\cup V_{4}\) and \(\Phi=\operatorname{vec}(\bigcup_{i=1}^{4}W_{i})\) will ensure that condition (8a) holds true.
* \(n+1\) states (\(n+1\) order system): Assume that for \(\Phi=\mathrm{vec}(\bigcup_{i=1}^{n}W_{i})\), condition (8a) holds true in the \(n^{\mathrm{th}}\)-order case. The dynamics of the \(n+1\) order system is described by (10), together with: \[\dot{x}_{n+1}=a_{n+1}x_{n+1}+\sum_{j_{1}=0}^{d_{n+1}}\cdots\sum_{j_{n}=0}^{d_{n+1}}\alpha_{j_{1}\ldots j_{n}}^{n+1}x_{1}^{j_{1}}\ldots x_{n}^{j_{n}}.\] (20) Similar to the \(n=4\) case, introduce \(\zeta_{j}=x_{1}^{j_{1}}\ldots x_{n-1}^{j_{n-1}}\), with \(j=1+\sum_{k=1}^{n-1}j_{k}(d_{n+1}+1)^{k-1}\) and \(P=(d_{n+1}+1)^{n-1}\). With this notation, (20) is equivalent to: \[\dot{x}_{n+1}=a_{n+1}x_{n+1}+\sum_{j=1}^{P}\sum_{j_{n}=0}^{d_{n+1}}\tilde{\alpha}_{j,j_{n}}^{n+1}\zeta_{j}x_{n}^{j_{n}}.\] (21) Let \(V_{n+1}=\{\zeta_{1}x_{n}^{0},\ldots,\zeta_{P}x_{n}^{d_{n+1}}\}\) and \(W_{n+1}=\{x_{n+1}\}\cup V_{n+1}\), while \(\Phi=\mathrm{vec}(\bigcup_{i=1}^{n+1}W_{i})\). By calculating \(\dot{\Phi}\), we get the terms associated with \(W_{1},\ldots,W_{n}\) as before and \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\zeta_{j}x_{n}^{j_{n}}\right)=\dot{\zeta}_{j}x_{n}^{j_{n}}+j_{n}\zeta_{j}x_{n}^{j_{n}-1}\dot{x}_{n}\] (22) \[\qquad=j_{n}a_{n}\underbrace{\zeta_{j}x_{n}^{j_{n}}}_{a}+j_{n}\sum_{j=1}^{P}\tilde{\alpha}_{j}^{n}\underbrace{\zeta_{j+n_{j}}x_{n}^{j_{n}-1}}_{b}+\underbrace{\dot{\zeta}_{j}x_{n}^{j_{n}}}_{c}\]
* The terms \(a\) are already contained in \(W_{n}\).
* We can observe that the power \(j_{n}\) decreases by \(1\) and the powers of \(x_{i}\) (\(i\in\{1,\ldots,n-1\}\)) within \(\zeta\) increase by at most \(d_{n}\) (which is finite), encoded in terms of \(n_{j}\). Applying the same iterations as in case \(n=4\) recursively leads to \(\mathfrak{D}_{\mathrm{b}}(V_{n+1})\setminus V_{n+1}\subseteq\{x_{1}^{0},\ldots,x_{1}^{n_{1}}\}\) in a finite number of steps.
* As seen in \(n=4\), taking \(\frac{\mathrm{d}}{\mathrm{d}t}\zeta_{j}\) for the terms \(c\), leads to a decrease of the orders of \(x_{1}^{j_{1}},\ldots,x_{n}^{j_{n}}\) in the terms \(\zeta\). By using \(V_{n+1}\gets V_{n+1}\cup\mathfrak{D}_{\mathrm{c}}(V_{n+1})\) in a finite number of steps leads to \(\mathfrak{D}_{c}(V_{n+1})\setminus V_{n+1}\subseteq\{x_{n}^{0},\ldots,x_{n}^{n _{n}}\}\). As noted before, the empty set is also a subset and the terms \(b\) and \(c\) are iterated together. Hence, based on case \(n\), we know that for \(V_{n+1}\gets V_{n+1}\cup\mathfrak{D}_{\mathrm{c}}(V_{n+1})\) taking \(W_{n+1}=\{x_{n+1}\}\cup V_{n+1}\) and \(\Phi=\mathrm{vec}(\bigcup_{i=1}^{n+1}W_{i})\) will ensure that condition (8a) holds true. This completes the proof.
This shows that for an autonomous polynomial nonlinear system with the dynamics described by (10), there exists a finite dimensional lifting \(\Phi\), containing the states and polynomial terms, satisfying \(\dot{\Phi}=\frac{\partial\Phi}{\partial x}f\in\mathrm{span}\{\Phi\}\). This implies that there exists a square real matrix \(A\) such that \(\dot{\Phi}(x)=A\Phi(x)\).
### Systems with input
Consider the following control affine nonlinear system:
\[\dot{x}=f(x)+g(x)u, \tag{23}\]
with the autonomous part given by (10) and \(g:\mathbb{R}^{n_{\mathrm{x}}}\rightarrow\mathbb{R}^{n_{\mathrm{x}}\times n_{\mathrm{u}}}\) and \(u\in\mathbb{U}\subseteq\mathbb{R}^{n_{\mathrm{u}}}\). To obtain the lifted representation, one can use the sequential method described in (Iacob et al., 2022). First, an exact lifting of the autonomous part is assumed to exist, i.e. conditions (8) hold. Next, the Koopman embedding is computed using the properties of the differential operator. Applying the lifting \(\Phi\) and taking the time derivative, one obtains:
\[\dot{\Phi} =\frac{\partial\Phi}{\partial x}(x)\dot{x} \tag{24}\] \[=\frac{\partial\Phi}{\partial x}(x)f(x)+\frac{\partial\Phi}{ \partial x}(x)g(x)u.\]
Using the equivalence of conditions (8b) and (8a), an associated Koopman embedding of (23) is:
\[\dot{\Phi}(x)=A\Phi(x)+B(x)u, \tag{25}\]
with \(B(x)=\frac{\partial\Phi}{\partial x}(x)g(x)\). As described in (Iacob et al., 2022), one can further express (25) as a _linear parameter varying_ (LPV) Koopman representation by introducing a scheduling map \(p=\mu(z)\), where \(z=\Phi(x)\) and defining \(B_{\mathrm{z}}\circ z=B\). Then, the LPV Koopman model is described by:
\[\dot{z}=Az+B_{\mathrm{z}}(p)u, \tag{26}\]
with \(z(0)=\Phi(x(0))\).
## 3 Example
This section presents the embedding of an example 4-dimensional system and shows simulation results for both autonomous and input-driven operation.
Figure 1: State trajectories of the original nonlinear system representation (27)-(30) and the Koopman embedding (31).
Figure 2: Error between the state trajectories of the original nonlinear system representation (27)-(30) and the Koopman embedding (31).
### Autonomous case
Consider the following \(4^{\text{th}}\) order system:
\[\dot{x}_{1} =a_{1}x_{1} \tag{27}\] \[\dot{x}_{2} =a_{2}x_{2}+\alpha_{3}^{2}x_{1}^{3}\] (28) \[\dot{x}_{3} =a_{3}x_{3}+\alpha_{11}^{3}x_{1}x_{2}+\alpha_{02}^{3}x_{2}^{2}\] (29) \[\dot{x}_{4} =a_{4}x_{4}+\alpha_{111}^{4}x_{1}x_{2}x_{3}. \tag{30}\]
We can apply the procedure discussed in Section 2 per state equation to find the observable functions. The resulting lifting functions are as follows: \(W_{1}=\{x_{1}\}\), \(W_{2}=\left\{x_{2},x_{1}^{3}\right\}\), \(W_{3}=\left\{x_{3},x_{1}x_{2},x_{2}^{2},x_{1}^{4},x_{1}^{3}x_{2},x_{1}^{6}\right\}\), and \(W_{4}=\{x_{4},x_{1}x_{2}x_{3},x_{1}^{4}x_{3},x_{1}^{2}x_{2}^{2},x_{1}x_{2}^{3},x_{1}^{4}x_{2}^{2},x_{1}^{5}x_{2},x_{1}^{7}x_{2},x_{1}^{8},x_{1}^{10}\}\).
Then, the entire lifting set is \(\Phi=\text{vec}(W_{1},W_{2},W_{3},W_{4})\). For easier interpretability, we can write the observables such that: \(\Phi(x)=[x_{1}\ x_{2}\ x_{3}\ x_{4}\ \bar{\Phi}_{1}^{\top}\ \bar{\Phi}_{2}^{\top}\ \bar{\Phi}_{3}^{\top}\ \bar{\Phi}_{4}^{\top}]^{\top}\) and \(\bar{\Phi}_{i}\) contains the elements of \(W_{i}\), in order, without the state \(x_{i}\). Performing the derivations as described in the proof, we obtain a finite dimensional Koopman representation of the form:
\[\dot{z} =Az \tag{31}\] \[x =Cz,\]
with \(z(t)=\Phi(x(t))\), \(A\in\mathbb{R}^{19\times 19}\) and \(C=[I_{4}\ 0_{4\times 15}]\). The structure of the state matrix \(A\) is detailed in the Appendix. To compare the obtained Koopman representation and the original system description, consider \(a_{1}=a_{2}=a_{3}=a_{4}=-0.5\), \(\alpha_{3}^{2}=\alpha_{11}^{3}=\alpha_{02}^{3}=\alpha_{111}^{4}=-0.2\) and \(x_{0}=[1\ 1\ 1\ 1]^{\top}\). We can obtain solution trajectories of these two representations by a Runge-Kutta 4th order solver. Furthermore, once the initial condition is lifted, i.e. \(z(0)=\Phi(x(0))\), the dynamics of the Koopman model are driven forward linearly, as described by (31). The simulation results and solution trajectories are depicted in Fig. 1. As can be observed, there is an exact overlap between the state trajectories of the original system description and the state trajectories obtained from the lifted model (\(z_{1\to 4}\) correspond to \(x_{1\to 4}\)). Fig. 2 shows that the obtained error is in the order of magnitude of \(10^{-15}\), which can be attributed to numerical artifacts.
### Input-driven case
Consider a control affine nonlinear system (23), with the autonomous part given by the equations (27)-(30) and \(g(x)=\left[1\ x_{1}\ x_{2}^{2}\ \sin(x_{3})\right]^{\top}\). Applying the lifting procedure described in Section 2.3, we can derive an exact LPV Koopman model:
\[\dot{z} =Az+B_{\mathrm{z}}(p)u \tag{32}\] \[x =Cz,\]
with \(C=[I_{4}\ 0_{4\times 15}]\), \(z(t)=\Phi(x(t))\) and \(p=z\). Note that the state matrix \(A\) coincides with the autonomous case. The explicit form of \(B(x)\) (and, in turn, \(B_{\mathrm{z}}\)) is omitted due to space constraints, but it can be easily computed by multiplying \(\frac{\partial\Phi}{\partial x}\) with \(g(x)\). The structure of \(\frac{\partial\Phi}{\partial x}(x)\) is given in the Appendix. We use the same coefficient values as in the autonomous case and consider a step input. After lifting the initial state \(z(0)=\Phi(x(0))\), the dynamics of the Koopman representation are simulated forward in time by (32). Fig. 3 shows the solution trajectories of both the original and the lifted system representations. As in the autonomous case, there is an exact overlap, with the error between the state trajectories being in the order of magnitude of \(10^{-15}\), only due to numerical integration errors. This is depicted in Fig. 4.
## 4 Conclusion
The present paper shows that a finite, exact Koopman embedding exists for a specific system class and an approach is provided to obtain this embedding. Furthermore, as shown, the step to embed nonlinear systems with input is easily achieved once the autonomous part is lifted. Future work will focus on extending the current system description to a more general class of nonlinear systems.
|
2302.07319 | Frustratingly Simple but Effective Zero-shot Detection and Segmentation:
Analysis and a Strong Baseline | Methods for object detection and segmentation often require abundant
instance-level annotations for training, which are time-consuming and expensive
to collect. To address this, the task of zero-shot object detection (or
segmentation) aims at learning effective methods for identifying and localizing
object instances for the categories that have no supervision available.
Constructing architectures for these tasks requires choosing from a myriad of
design options, ranging from the form of the class encoding used to transfer
information from seen to unseen categories, to the nature of the function being
optimized for learning. In this work, we extensively study these design
choices, and carefully construct a simple yet extremely effective zero-shot
recognition method. Through extensive experiments on the MSCOCO dataset on
object detection and segmentation, we highlight that our proposed method
outperforms existing, considerably more complex, architectures. Our findings
and method, which we propose as a competitive future baseline, point towards
the need to revisit some of the recent design trends in zero-shot detection /
segmentation. | Siddhesh Khandelwal, Anirudth Nambirajan, Behjat Siddiquie, Jayan Eledath, Leonid Sigal | 2023-02-14T20:00:30Z | http://arxiv.org/abs/2302.07319v1 | # Frustratingly Simple but Effective Zero-shot Detection and Segmentation: Analysis and a Strong Baseline
###### Abstract
Methods for object detection and segmentation often require abundant instance-level annotations for training, which are time-consuming and expensive to collect. To address this, the task of zero-shot object detection (or segmentation) aims at learning effective methods for identifying and localizing object instances for the categories that have no supervision available. Constructing architectures for these tasks requires choosing from a myriad of design options, ranging from the form of the class encoding used to transfer information from seen to unseen categories, to the nature of the function being optimized for learning. In this work, we extensively study these design choices, and carefully construct a simple yet extremely effective zero-shot recognition method. Through extensive experiments on the MSCOCO [25] dataset on object detection and segmentation, we highlight that our proposed method outperforms existing, considerably more complex, architectures. Our findings and method, which we propose as a competitive future baseline, point towards the need to revisit some of the recent design trends in zero-shot detection / segmentation.
## 1 Introduction
Advancements in CNN based deep learning architectures over the years have led to significant improvements in computer vision recognition tasks such as object detection [26, 35, 37] and segmentation [5, 13], both in terms of recognition quality and speed. However, traditional CNN-based approaches often rely on the availability of abundant supervision for learning, which is both time-consuming and expensive to gather [15, 23]. This effect is more pronounced for instance-level annotations like bounding boxes and segmentation masks [2], thus making scaling of object detection and segmentation to new categories challenging.
As a consequence, research into learning methods that generalize to categories with no available supervision - referred to as _zero-shot learning_ - has gained significant traction, wherein the focus is to develop techniques for information transfer from the supervision abundant _seen_ categories to the supervision-absent _unseen_ categories. Although there has been considerable work towards zero-shot learning on image-level tasks such as image classification [4, 22, 42, 44, 45], more granular recognition problems such as zero-shot detection and segmentation are relatively unexplored [12, 18, 31, 47, 48]. These instance-level
tasks are naturally more challenging as, in addition to accurately identifying the objects present in an image, methods are required to precisely localize them - either via bounding boxes or segmentation masks.
Constructing architectures for the zero-shot detection (or segmentation) requires making certain critical design choices that directly impact performance. These include decisions on - (i) model characteristics like capacity, and mechanisms for information transfer from seen to unseen categories, (ii) learning dynamics like the choice of loss function, and (iii) inference procedure like selecting the appropriate trade-off between seen and unseen category performance. Existing approaches often explore only one of these critical options, leading to sub-optimal selection for the remaining choices. Concretely, the recent focus of zero-shot detection (or segmentation) has been towards creating complex models, both in terms of the number of parameters [12, 18] and the use of intricate modules aimed at better information transfer from seen to unseen categories [11, 12, 18, 47, 48].
In this work, we argue that this complexity is unnecessary, and propose a simple solution to zero-shot detection (and instance segmentation) by extensively exploring the aforementioned design decisions. More specifically, our model specifications are dictated by a set of carefully constructed ablation studies, one for each possible choice. Our approach adopts a two-step training scheme, wherein the first step involves training an object detector (or segmentor) like Faster R-CNN [37] (or Masked R-CNN [13]) on the _seen_ categories with abundant instance-level annotations. The second stage _fine-tunes_ a projection layer, trained on the _seen_ categories to learn a transformation from image features to a semantically meaningful space. Information transfer from the _seen_ to _unseen_ categories, for the classifiers, detectors, and segmentors, is achieved by leveraging _normalized_ category-name semantic embeddings obtained from unsupervised approaches like GloVe [29] or ConceptNet [39]. This straightforward fine-tuning approach, when trained using the cross-entropy loss function, outperforms most existing methods that are significantly more complex in design.
The ablations and positive performance of our proposed simple approach motivate the need to more broadly revisit the research direction in the field of zero-shot detection (and segmentation). For example, although a large amount of existing work focuses on improving performance through complex architectures [12, 18], we find that the choice of semantic embeddings (like GloVe [29]) has the largest impact on performance. Despite its untapped potential, this direction is seldom explored in the literature.
**Contributions.** Our foremost contribution is a simple yet extremely effective architecture for zero shot detection and segmentation. The characteristics for our proposed method are carefully curated via extensive exploration over various critical design decisions, and is trained with a straightforward two-step process. We demonstrate the efficacy of our approach by thorough evaluations on the MSCOCO [25] dataset, which show that our proposed solution outperforms existing, considerably more complex, architectures. On the basis of these results, we argue for the need to revisit some of the recent design trends in the field of zero shot detection and segmentation, wherein our proposed method serves as a competitive baseline.
## 2 Related Work
**Zero-shot Detection.** Introduced in [1, 6], the task of zero-shot detection (ZSD) poses the challenge of localizing unseen objects within an image. The majority of existing work in this field has been towards modifying model construction for improved performance [11, 12, 18, 47, 48, 49]. Gupta [11] learn a multi-head network to disentangle visual and semantic spaces, each separately identifying object categories, which are subsequently ensembled. [47, 48, 1] propose methods to better learn semantic embeddings for the non-object (or background) category in an attempt to better distinguish them from the unseen categories. Bansal [1] employ an iterative expectation maximization (EM) like procedure to generate a background embedding vector. [47, 48] both modify the region proposal network (RPN) to learn embeddings to accurately perform the foreground-background binary classification task. Authors in [12, 18, 49] utilize generative model-based methods, wherein the aim is to synthesize features for unseen categories which can be used downstream to learn better classifiers. The focus of [31, 32, 34] is on improving learning procedures for ZSD. Experimenting with loss functions, [34] propose a max-margin and clustering based loss, whereas [31] suggest the use of a novel polarity loss to encourage better visual-semantic alignment. Rahman [32], on the other hand, study the transductive learning paradigm for ZSD, and pseudo-label unseen category images to generate additional training data. Note that ZSD, which is the setting used in this work, assumes _no_ unseen category supervision. On the other hand, works in the open vocabulary detection literature [9, 10, 43] assume unseen category information through pretrained models, or visuo-lingual annotations, and are therefore not directly comparable.
**Zero-shot Segmentation.** Zero-shot segmentation is a relatively unexplored sub-field [3, 8, 16, 17, 19, 20, 46, 48]. Existing work has largely focused on zero-shot semantic segmentation [3, 8, 16, 17, 20, 46], wherein the aim is to accurately label each pixel in an image. Zhao [46] utilize WordNet [28] hypernym/hyponym relations to segment images. [3, 20, 8] focus on changing model construction to improve the information transfer from seen to unseen categories. Bucher [3] leverage synthetic features from a generative model to train a classifier for unseen categories, whereas Kato [20] learn a variational mapping over category-name embeddings from the semantic to visual space. Ding [8] instead decompose the zero-shot semantic segmentation problem into sub-tasks of class-agnostic grouping of pixels and subsequent classification over these groupings. Hu [16] emphasize improving learning dynamics by proposing uncertainty aware losses to mitigate the adverse effect of noisy seen category information on model performance. Works in [19, 21, 48] study the task of zero-shot instance segmentation, where the goal is to identify and generate accurate masks for individual object instances. [19, 21] both assume the availability of image-level annotations for unseen categories, either in the form of captions or labels. The work in [48], that learns a separate background category embedding, makes no such assumption, and resembles the setting used in this work.
**Semantic Embeddings.** Most approaches in ZSL use class-label semantic embeddings as the building block for efficient information transfer from seen to unseen categories [3, 8, 11, 12, 16, 17, 18, 19, 20, 21, 46, 47, 48, 49]. Methods to generate these embeddings have been widely explored in NLP and computer vision literature [7, 27, 29, 30, 36, 39]. These embeddings effectively capture semantic and syntactic similarities between words (or sentences). Therefore, in this work, we also utilize these embeddings to project images and category labels into a common feature space, thus allowing detection (or segmentation) of unseen categories.
## 3 Problem Formulation
In this section we formally introduce the zero-shot detection (ZSD) / instance segmentation (ZSI) setup. The tasks assume two disjoint sets of categories - namely _seen_ \(\mathcal{C}^{s}\) and _unseen_ \(\mathcal{C}^{u}\), where \(\mathcal{C}^{s}\cap\mathcal{C}^{u}=\varnothing\). For the seen categories, consistent with existing work, we assume the availability of abundant instance-level annotations \(\mathcal{D}^{s}=\{(\mathbf{x}_{i},\mathbf{c}_{i},\mathbf{y}_{i})\}\), where \(\mathbf{x}_{i}\) is an input image, \(\mathbf{c}_{i}=\{c_{i,j}\}\) are seen category labels, and \(\mathbf{y}_{i}=\{\mathbf{bbox}_{i,j}\}\) or \(\mathbf{y}_{i}=\{\mathbf{mask}_{i,j}\}\) are the corresponding bounding boxes and/or masks for each instance \(j\) in image \(i\). Note that, for the unseen categories, no supervision is provided, \(\mathcal{D}^{u}=\{\mathbf{x}_{i}\}\). However, it is assumed that semantic embeddings \(\mathbf{E}^{s}\in\mathbb{R}^{|\mathcal{C}^{s}|\times d}\) and \(\mathbf{E}^{u}\in\mathbb{R}^{|\mathcal{C}^{u}|\times d}\) for the seen and unseen categories respectively are known, where \(d\) is the embedding dimensionality. We define \(\mathbf{E}=\{\mathbf{E}^{s},\mathbf{E}^{u}\}\) as the set of all available embeddings. At inference, the goal is to generate category-labels, bounding boxes, and possibly segmentation masks for the unseen categories. Depending on the scenario, the test set may contain only unseen objects (ZSD / ZSI setup), or both seen and unseen objects (GZSD1 / GZSI setup).
Footnote 1: The G stands for _generalized_.
## 4 Approach
Constructing an effective architecture for the task of zero-shot detection (or segmentation) necessitates making certain critical design decisions. By extensively exploring these choices and carefully selecting model components, we propose a simple solution for ZSD and ZSI.
For the task of zero-shot detection, our proposed approach, illustrated in Figure 2, adopts the popular two-stage detector Faster-RCNN [37]. The Faster-RCNN architecture consists of four learnable components, namely - (i) the backbone (like ResNet [14] or VGG [38]), (ii) the region proposal network (RPN), (iii) the proposal-level feature extractor, and (iv) the classifier and class-aware regressor heads. The first stage of the aforementioned two-stage detector uses the backbone to extract features \(\mathbf{\bar{x}}_{i}\) from an input image \(\mathbf{x}_{i}\), which are subsequently utilized by the RPN to generate class-agnostic object region proposals \(\{\mathbf{pbox}_{i,j}\}\). The second stage involves a detection pipeline wherein the proposal-level feature extractor \(f_{\mathbf{W}^{prop}}\) performs region-of-interest (RoI) pooling, and generates proposal features \(\mathbf{z}_{i,j}=f_{\mathbf{W}^{prop}}\left(\texttt{RoIPool}\left(\mathbf{\bar{x}}_{i},\mathbf{pbox}_{i,j}\right)\right)\) for each proposal \(j\). The classifier head learns to label the proposal feature \(\mathbf{z}_{i,j}\) into one of the seen categories, and the class-aware regressor head utilizes the proposal feature \(\mathbf{z}_{i,j}\) to refine the bounding box proposals \(\{\mathbf{pbox}_{i,j}\}\). Note that, for zero-shot instance segmentation, our method builds on the Mask-RCNN [13] model that additionally learns a segmentation head to generate masks for the detected objects given \(\mathbf{z}_{i,j}\). Our approach disentangles the learning of feature representations and the transfer of information from seen to unseen categories, and is therefore trained in two steps.
Figure 2: **Proposed Approach with Two-step Training. The first step trains the learnable parameters of a Faster [37] / Mask [13] RCNN architecture using the seen category annotations. The second step freezes these parameters, and learns the projection matrices which, along with the semantic embeddings, generates embedding-aware classifiers, regressors, and segmentors for both seen and unseen categories.**
**Feature Representation Learning.** The first step entails training the aforementioned learnable components of the Faster/Mask RCNN architecture on seen category instance-level data \(\mathcal{D}^{s}\), guided by the losses described in [13, 37].
**Information Transfer from Seen to Unseen Categories.** In the second step, we obtain the classification, regression, and segmentation heads for the unseen categories by leveraging the relationships between the seen and unseen semantic embeddings \(\mathbf{E}^{s}\) and \(\mathbf{E}^{u}\). This is achieved via learning a joint visual-semantic feature space that facilitates interactions between image features and class embeddings.
**Classifier.** The embedding-aware classifier \(f_{\mathbf{W}^{cls}_{seen}}\) for the seen categories uses a projection matrix \(\mathbf{W}^{cls}\) that projects the proposal features \(\mathbf{z}_{i,j}\) to the embedding feature space,
\[f_{\mathbf{W}^{cls}_{seen}}(\mathbf{z}_{i,j})=\left(\mathbf{W}^{cls}\mathbf{ z}_{i,j}\right)\left(\frac{\mathbf{\overline{E}}^{s}}{\|\mathbf{\overline{E}}^{s} \|_{2}}\right)^{T} \tag{1}\]
where \(\|.\|_{2}\) is the L2 norm. \(\mathbf{\overline{E}}^{s}=[\mathbf{E}^{s},\mathbf{b}]\in\mathbb{R}^{(|\mathcal{C}^{s}|+1)\times d}\) is constructed by augmenting the background embedding \(\mathbf{b}\in\mathbb{R}^{d}\) that is learned alongside the projection matrices. We leverage the same projection matrix \(\mathbf{W}^{cls}\) to define the embedding-aware classifier for the unseen categories
\[f_{\mathbf{W}^{cls}_{unseen}}(\mathbf{z}_{i,j})=\left(\mathbf{W}^{cls} \mathbf{z}_{i,j}\right)\left(\frac{\mathbf{E}^{u}}{\|\mathbf{E}^{u}\|_{2}} \right)^{T} \tag{2}\]
During inference, the probabilities \(\mathbf{p}_{i,j}\) over all categories are expressed as,
\[\mathbf{p}_{i,j}=\sigma\left(\left[f_{\mathbf{W}^{cls}_{seen}}(\mathbf{z}_{i,j}),f_{\mathbf{W}^{cls}_{unseen}}(\mathbf{z}_{i,j})\right]\right) \tag{3}\]
where \(\sigma\) is the softmax function, and \([.,.]\) represents the concatenation operation.
**Regressor.** A valid bounding box can be defined by the top-left and bottom-right \((x,y)\) coordinates. The embedding-aware regressor \(f_{\mathbf{W}^{reg}_{seen}}\) for the seen categories therefore uses four projection matrices \(\mathbf{W}^{reg}_{r},r\in[1,4]\), where a pair of matrices generate one of the two required coordinates,
\[f_{\mathbf{W}^{reg}_{seen}}\left(\mathbf{z}_{i,j}\right)=\left\{\left(\mathbf{ W}^{reg}_{r}\mathbf{z}_{i,j}\right)\left(\frac{\mathbf{E}^{s}}{\|\mathbf{E}^{s} \|_{2}}\right)^{T}\right\};\ r\in[1,4]. \tag{4}\]
The regressor for the unseen categories \(f_{\mathbf{W}^{reg}_{unseen}}\) is analogously defined as,
\[f_{\mathbf{W}^{reg}_{unseen}}(\mathbf{z}_{i,j})=\left\{\left( \mathbf{W}^{reg}_{r}\mathbf{z}_{i,j}\right)\left(\frac{\mathbf{E}^{u}}{\| \mathbf{E}^{u}\|_{2}}\right)^{T}\right\};\ r\in[1,4]. \tag{5}\]
**Segmentor.** The segmentation head within the Mask-RCNN [13] architecture employs a separate proposal-level feature extractor \(f_{\mathbf{W}^{mask}}\) to extract relevant spatial features \(\mathbf{z}^{m}_{i,j}\in\mathbb{R}^{n\times n\times t}\) for segmentation mask generation, where \(\mathbf{z}^{m}_{i,j}=f_{\mathbf{W}^{mask}}\left(\texttt{RoIPool}\left(\mathbf{\bar{x}}_{i},\mathbf{pbox}_{i,j}\right)\right)\). Here \(n\) and \(t\) represent the spatial and feature dimension respectively. For a particular spatial coordinate \((x,y)\) feature \(\mathbf{z}^{m}_{i,j}[x,y]\), the embedding-aware segmentor \(f_{\mathbf{W}^{seg}_{seen}}\) for the seen categories is implemented as follows,
\[f_{\mathbf{W}^{seg}_{seen}}(\mathbf{z}^{m}_{i,j}[x,y])=\left\{ \left(\mathbf{W}^{seg}\mathbf{z}^{m}_{i,j}[x,y]\right)\left(\frac{\mathbf{E}^ {s}}{\|\mathbf{E}^{s}\|_{2}}\right)^{T}\right\} \tag{6}\]
The segmentor for the unseen categories \(f_{\mathbf{W}^{seg}_{unseen}}\) for the spatial coordinate \((x,y)\) follows a similar formulation,
\[f_{\mathbf{W}^{seg}_{unseen}}(\mathbf{z}^{m}_{i,j}[x,y])=\left\{ \left(\mathbf{W}^{seg}\mathbf{z}^{m}_{i,j}[x,y]\right)\left(\frac{\mathbf{E}^ {u}}{\|\mathbf{E}^{u}\|_{2}}\right)^{T}\right\} \tag{7}\]
The training in the second step involves _freezing_ all the learnable parameters for the Faster/Mask RCNN architecture, and only learning the matrices (\(\mathbf{W}^{cls}\), \(\mathbf{W}^{reg}_{r}\), \(\mathbf{W}^{seg}\)). For the classifier this is done via a cross-entropy loss, the detector utilizes a smooth-L1 loss, and the segmentation head uses a pixel-level binary cross-entropy loss.
During inference, we utilize the aforementioned classifiers, regressors, and segmentors to generate predictions for proposals obtained from the RPN. A category-wise non-maximum suppression (NMS) [37] is applied over these predictions to remove overlapping outputs. We additionally define a threshold \(\beta\) to allow the model to flexibly bias itself towards the unseen categories without the need for retraining. Specifically, for the seen categories, we remove any prediction with a classifier confidence less than \(\beta\), i.e., \(\mathbf{p}^{s}_{i,j}<\beta;s\in\mathcal{C}^{s}\), where \(\mathbf{p}^{s}_{i,j}\) refers to the probability for the seen category \(s\) (Equation 3). Therefore, as we generate a fixed number of predictions (e.g., \(100\)), increasing \(\beta\) biases the model towards unseen categories. Such thresholding has been empirically used in literature [31, 47], and serves as a measure to counter-balance the inherent bias against unseen categories due to the lack of training examples.
In the next section we justify our model architecture by exploring certain critical choices through ablations.
### Design Decisions
The design decisions to construct the aforementioned model can be segregated into three groups, depending on whether they impact model characteristics, learning dynamics, or the inference procedure.
**Model Characteristics.** We explore the impact of - (i) the capacity of the backbone, (ii) the source of category embeddings \(\mathbf{E}\), (iii) the information transfer mechanism used to obtain the unseen category regressor and segmentor, and (iv) the formulation of the background semantic embeddings.
**Learning Dynamics.** We analyze the effects of - (i) the type of loss used to train the classifier, and (ii) fine-tuning the learnable parameters within the Faster-RCNN [37] (or Mask-RCNN [13]) framework.
**Inference Procedure.** We study the seen-unseen category performance trade-off. This trade-off can be easily achieved by varying the parameter \(\beta\) during inference. This affords the model the flexibility to operate on a spectrum of performance values without requiring any re-training.
To better understand the impact of a particular design decision, we ablate our proposed model by changing only the corresponding element, while keeping other components the same. Unless otherwise specified, the ablations are done using the Faster-RCNN [37] architecture with a ResNet-50 [14] backbone, the Word2Vec [27] semantic embeddings, and a threshold parameter \(\beta=0.05\). The results are shown on the MSCOCO dataset [25] using the seen-unseen split proposed in [1], with \(48\) seen and \(17\) unseen categories. Further details on the dataset are provided in Section 5. We generate \(100\) predictions per image from the model, and report the mean average precision (mAP) and Recall\(@100\) measured at IoU\(=0.5\), as well as the harmonic mean (HM) over the seen and unseen category performance. The results are reported on both the ZSD and GZSD setups (see Section 3). Note that, due to the similarities between the Faster-RCNN [37] and Mask-RCNN [13] architectures, the observations from these ablations are directly applicable to the ZSI/GZSI tasks as well.
#### 4.1.1 Backbone Capacity
The capacity of the backbone can be varied by changing the depth of the ResNet [14] architecture. The choice of backbone is often constrained to ResNet-50 [31, 47, 32] or ResNet-101 [12, 18, 41] in existing literature. The table below analyzes the benefits of a deeper backbone architecture.
A deeper backbone with higher capacity performs better on seen categories, which directly translates to improved performance on the unseen objects. This is largely due to the deeper model learning superior projection matrices as the backbone is able to provide richer feature representations.
**Takeaway.** Backbone capacity directly correlates with improved performance.
#### 4.1.2 Semantic Embedding Source
Semantic embeddings \(\mathbf{E}\) for object categories can be derived from different sources such as lingual data (GloVe [29], Word2Vec [27], ConceptNet [39], and SBERT [36]), and visuo-lingual information (CLIP [30]). Owing to the difference in embedding quality and characteristics, the performance of zero-shot methods is directly impacted by this choice. We train our model with different semantic embeddings to further examine the effects of this decision.
\begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Metric} & \multirow{2}{*}{Embedding} & \multirow{2}{*}{ZSD} & \multicolumn{3}{c}{GZSD} \\ & & & Seen & Unseen & HM \\ \hline \multirow{5}{*}{mAP} & GloVe [29] & \(13.8\) & \(47.0\) & \(8.9\) & \(15.0\) \\ & Word2Vec [27] & \(13.9\) & \(47.3\) & \(9.4\) & \(15.7\) \\ & SBERT [36] & \(13.7\) & \(46.8\) & \(9.8\) & \(16.2\) \\ & ConceptNet [39] & \(15.0\) & \(\mathbf{47.5}\) & \(10.4\) & \(17.1\) \\ & CLIP [30] & \(\mathbf{17.0}\) & \(47.0\) & \(\mathbf{12.7}\) & \(\mathbf{20.0}\) \\ \hline \multirow{5}{*}{Recall\(@100\)} & GloVe [29] & \(59.9\) & \(68.2\) & \(55.4\) & \(61.1\) \\ & Word2Vec [27] & \(59.7\) & \(\mathbf{68.5}\) & \(55.1\) & \(61.1\) \\ & SBERT [36] & \(59.1\) & \(67.5\) & \(54.9\) & \(60.6\) \\ & ConceptNet [39] & \(\mathbf{59.9}\) & \(68.3\) & \(\mathbf{55.7}\) & \(\mathbf{61.4}\) \\ & CLIP [30] & \(59.2\) & \(66.9\) & \(55.1\) & \(60.4\) \\ \hline \hline \end{tabular}
The experiment highlights the considerable impact of embedding choice on model performance, wherein encodings obtained from richer sources like ConceptNet [39] that leverages knowledge graphs, or CLIP [30] that is trained on both visual and lingual data, provide better performance.
**Takeaway.** Richer category embeddings are better able to facilitate transfer - identify and localize unseen categories.
#### 4.1.3 Formulation of Background Embeddings
As the task of ZSD requires models to accurately localize unseen objects, being able to distinguish them from no-category (background) objects is of paramount importance. Existing works either assign a static embedding to the background category [1, 34], or learn a background embedding using seen category annotations [47, 48]. To analyze this further, we experiment with three kinds of background embeddings - (i) a fixed embedding \([1,...,0]\) as in [1], (ii) the average over the seen category embeddings \(\frac{1}{|\mathcal{C}^{s}|}\sum_{\mathbf{e}_{s}\in\mathbf{E}^{s}}\mathbf{e}_{s}\) as in [34], and (iii) an embedding \(\mathbf{b}\) learned alongside the projection matrices described in Section 4.
Compared to learning a background embedding from the seen class information, having a static background embedding (fixed vector or mean) provides inferior performance on the unseen categories. This can primarily be attributed to the static embeddings not being adapted to the training data, and therefore failing to accurately distinguish background from unseen category objects.
**Takeaway.** Learning a background embedding is preferable to using a static background embedding.
#### 4.1.4 Formulation of Regressor
Effective localization of unseen objects is heavily conditioned on the quality of unseen category regressors. Existing works have looked at heuristically utilizing the seen category regressors as a proxy for their unseen counterparts [12, 21], or leveraging a semantic space projection from image features to the embedding space [31, 48]. Here we explore the impact of the type of transfer used, by comparing four variants of our proposed model with different formulations for regressor transfer: (i) using no transfer, and directly using the bounding box predicted by the RPN without any refinement, (ii) using the most similar seen category regressor as a proxy for its unseen counterpart, (iii) using a linear combination of seen category regressor outputs based on embedding similarity between \(\mathbf{E}^{s}\) and \(\mathbf{E}^{u}\), and (iv) using our proposed transfer described in Section 4.
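As an illustration of variant (iii), the sketch below synthesizes unseen-category regressors as a softmax-weighted combination of the seen-category regressors; the cosine similarity and temperature \(\tau\) are expository assumptions, and variant (ii) corresponds to keeping only the arg-max weight.

```python
import torch
import torch.nn.functional as F

def transfer_regressors(W_seen, E_seen, E_unseen, tau=1.0):
    """Variant (iii): unseen regressors as a similarity-weighted mix of seen ones.

    W_seen: (num_seen, k) stacked seen regressor weights; since the regressors
    are linear, combining weights is equivalent to combining their outputs.
    E_seen: (num_seen, d) and E_unseen: (num_unseen, d) category embeddings.
    """
    sim = F.normalize(E_unseen, dim=1) @ F.normalize(E_seen, dim=1).T
    weights = torch.softmax(sim / tau, dim=1)   # (num_unseen, num_seen)
    return weights @ W_seen                     # (num_unseen, k)
```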
#### 4.1.6 Formulation of Classifier

It is evident that using a max-margin or L2-error based loss to train the classifier provides inferior identification of unseen category objects. The cross-entropy loss is consistent with the formulation used in Faster-RCNN [37] (or Mask-RCNN [13]) to train the classifier, and therefore provides better performance. Additionally, unlike the max-margin loss that relies on the selection of an appropriate margin, the cross-entropy loss has no such hyperparameter.
**Takeaway.** Cross-entropy based formulation is easier to train and provides better performance.
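The cross-entropy formulation simply treats the similarities between projected region features and the category embeddings as logits. A minimal sketch, with all shapes assumed for illustration:

```python
import torch
import torch.nn.functional as F

def classifier_loss(feats, labels, P, E):
    """Cross-entropy over semantic-space similarity scores.

    feats: (B, f) RoI features; P: (f, d) learned projection;
    E: (C, d) category embeddings (background embedding included as a row).
    """
    logits = (feats @ P) @ E.T   # (B, C) similarity scores
    return F.cross_entropy(logits, labels)
```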
#### 4.1.7 Seen-Unseen Performance Trade-off
We use a threshold \(\beta\) to bias the model towards unseen categories while simultaneously forgoing the need for re-training. We further explore the impact of this biasing by evaluating our trained model on the GZSD setup with different \(\beta\) values, and visualize the results in the figure below (mAP on the left, recall on the right). Note that the ZSD setup is not affected by the choice of \(\beta\) as the seen category predictions are simply ignored (\(\beta>1\)).
**Takeaway.** An appropriate \(\beta\) greatly boosts unseen category performance without the need for re-training.
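A calibrated-stacking-style reading of this biasing is sketched below. The exact rule (subtracting \(\beta\) from seen-category scores, with scores in \([0,1]\)) is an assumption, but it reproduces the stated behavior that \(\beta>1\) effectively discards all seen predictions.

```python
import torch

def bias_towards_unseen(scores, seen_mask, beta=0.05):
    """Bias GZSD scores toward unseen categories without re-training.

    scores: (B, C) per-category confidences in [0, 1];
    seen_mask: (C,) bool tensor, True at seen-category indices.
    """
    biased = scores.clone()
    biased[:, seen_mask] -= beta   # beta > 1 removes seen predictions entirely
    return biased
```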
## 5 Experiments
We compare our proposed model, described in Section 4, which has been carefully constructed using the best-performing design components (Section 4.1), against existing methods.
**Dataset.** The evaluation is done on the MSCOCO 2014 [25] dataset, which contains \(82,783\) training images and \(40,504\) validation images with \(80\) categories.
**Seen-Unseen Splits.** For the task of ZSD, consistent with existing work in [1, 31], we report performance on two seen-unseen category splits: (i) the \(48/17\) split [1], and (ii) the \(65/15\) split [31]. For the task of ZSI, we adopt the \(48/17\) and \(65/15\) splits proposed in [48]. Following the setup in [1, 31, 48], for each task and split, we remove _all_ images containing unseen categories from the training set to guarantee that unseen objects do not influence model training.
**Evaluation.** Following existing work, we report performance on the standard MSCOCO metrics, namely mean average precision (mAP) at IoU\(=\)\(0.5\) and recall@100 at three different IoU thresholds \([0.4,0.5,0.6]\). For the GZSD/GZSI tasks, we also compute the harmonic mean (HM) between the seen and unseen category performance.
**Implementation Details.** To enable fair comparison with recent methods [12, 18, 48], we train our proposed approach on the ResNet-101 [14] backbone.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Metric} & \multirow{2}{*}{Classifier Loss} & \multirow{2}{*}{ZSD} & \multicolumn{3}{c}{GZSD} \\ & & & Seen & Unseen & HM \\ \hline \multirow{3}{*}{mAP} & Max Margin [1] & \(12.7\) & \(46.3\) & \(6.7\) & \(11.7\) \\ & L2 Error [11] & \(12.0\) & \(39.8\) & \(5.6\) & \(9.8\) \\ & Cross Entropy (Ours) & \(\mathbf{13.9}\) & \(\mathbf{47.3}\) & \(\mathbf{9.4}\) & \(\mathbf{15.7}\) \\ \hline \multirow{3}{*}{Recall\(@100\)} & Max Margin [1] & \(54.3\) & \(67.3\) & \(49.1\) & \(60.1\) \\ & L2 Error [11] & \(48.6\) & \(62.1\) & \(34.2\) & \(44.1\) \\ & Cross Entropy (Ours) & \(\mathbf{59.7}\) & \(\mathbf{68.5}\) & \(\mathbf{55.1}\) & \(\mathbf{61.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Zero-Shot Detection (ZSD).** mAP at IoU\(=\)\(0.5\), Recall@\(100\) at IoU\(=\)\([0.4,0.5,0.6]\) reported for unseen categories. Best result highlighted in red, second best in blue.
### Comparison to Existing Methods
We report performance using: (i) a lingual embedding, Word2Vec [27], denoted as "W2V", and (ii) a visuo-lingual embedding, CLIP [30], denoted as "CLIP". The Faster-RCNN [37] architecture, denoted as "F", is used for the ZSD/GZSD tasks. Similarly, the Mask-RCNN [13] architecture, denoted as "M", is used for the ZSI/GZSI tasks. We differentiate variants of our method by the architecture and embedding choice. For example, "F \(+\) W2V" represents the use of the Faster-RCNN [37] architecture with Word2Vec [27] embeddings.
**Zero-Shot Detection.** Comparisons to existing methods on the ZSD setup are shown in Table 1. For the \(48/17\) split, our "F\(+\)W2V" variant provides \(10.4\%\) higher mAP, and on average \(13.7\%\) higher recall across the three thresholds when compared with the most competitive method in [18]. The difference on mAP is more pronounced with the use of a richer embedding source, wherein our "F\(+\)CLIP" variant achieves a \(37.3\%\) improvement on mAP and a \(13.1\%\) increase on recall over the method in [18]. A similar observation holds for the \(65/15\) split, where the "F\(+\)CLIP" variant outperforms the closest baseline [18] by \(23.7\%\) and \(19.6\%\) on mAP and recall respectively. Under the more challenging GZSD setup, as highlighted in Table 2, our variants on average provide \(5\%\) and \(12.9\%\) higher HM recall on the \(48/17\) and \(65/15\) splits respectively. Although our "F\(+\)W2V" variant has a slightly worse performance when compared to RRFS [18] on HM mAP, the "F\(+\)CLIP" variant has an average improvement of \(11\%\) on HM mAP, demonstrating the ability of our simple approach to effectively detect both seen and unseen objects simultaneously.
**Zero-Shot Segmentation.** Comparisons to the baseline in [48] on the ZSI and GZSI tasks are presented in Tables 3 and 4 respectively. For the ZSI task, irrespective of the embedding choice, we outperform the closest baseline in [48] by \(84.9\%\) and \(31.3\%\) on mAP and recall respectively, on average, across the two splits. Similar improvements are seen on the GZSI task, wherein our model variants provide an average increase of \(117.4\%\) and \(12.8\%\) on HM mAP and HM Recall respectively over [48] across the two splits, highlighting the superior performance of our approach on both seen and unseen category segmentation.
**Additional Results.** Qualitative visualisations and per-category results are shown in the **appendix**.
## 6 Conclusion
In this work we present a simple approach to zero-shot detection and segmentation that is carefully constructed through extensive ablations over critical design choices. Through extensive experimentation we highlight the superior performance of our method when compared to more complex architectures, and suggest the need to revisit some of the recent design trends in the ZSD/ZSI field, wherein our method can act as a strong baseline.
\begin{table}
\begin{tabular}{c|c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Split} & \multicolumn{2}{c|}{Seen} & \multicolumn{2}{c|}{Unseen} & \multicolumn{2}{c}{HM} \\ & & mAP & Recall & mAP & Recall & mAP & Recall \\ \hline PL [31] & \(48/17\) & \(35.9\) & \(38.2\) & \(4.1\) & \(26.3\) & \(7.4\) & \(31.2\) \\ BLC [47] & \(48/17\) & \(42.1\) & \(57.6\) & \(4.5\) & \(46.4\) & \(8.2\) & \(51.4\) \\ RRFS [18] & \(48/17\) & \(42.3\) & \(59.7\) & \(13.4\) & \(58.8\) & \(20.4\) & \(59.2\) \\ Ours\({}_{\text{F}+\text{W2V}}\) & \(48/17\) & \(48.9\) & \(69.2\) & \(10.2\) & \(56.7\) & \(16.9\) & \(62.3\) \\ Ours\({}_{\text{F}+\text{CLIP}}\) & \(48/17\) & \(48.6\) & \(69.2\) & \(13.9\) & \(56.4\) & \(21.6\) & \(62.1\) \\ ZSI [48] & \(48/17\) & \(46.5\) & \(70.8\) & \(4.8\) & \(53.9\) & \(8.8\) & \(61.2\) \\ Ours\({}_{\text{M}+\text{W2V}}\) & \(48/17\) & \(49.5\) & \(70.7\) & \(10.6\) & \(58.0\) & \(17.5\) & \(63.7\) \\ Ours\({}_{\text{M}+\text{CLIP}}\) & \(48/17\) & \(49.4\) & \(69.8\) & \(13.6\) & \(58.3\) & \(21.3\) & \(63.5\) \\ \hline PL [31] & \(65/15\) & \(34.1\) & \(36.4\) & \(12.4\) & \(37.2\) & \(18.2\) & \(36.8\) \\ BLC [47] & \(65/15\) & \(36.0\) & \(56.4\) & \(13.1\) & \(51.7\) & \(19.2\) & \(53.9\) \\ SU [12] & \(65/15\) & \(36.9\) & \(57.7\) & \(19.0\) & \(53.9\) & \(25.1\) & \(55.8\) \\ RRFS [18] & \(65/15\) & \(37.4\) & \(58.6\) & \(19.8\) & \(61.8\) & \(26.0\) & \(60.2\) \\ Ours\({}_{\text{F}+\text{W2V}}\) & \(65/15\) & \(40.2\) & \(70.8\) & \(19.3\) & \(64.2\) & \(26.1\) & \(67.3\) \\ Ours\({}_{\text{F}+\text{CLIP}}\) & \(65/15\) & \(40.3\) & \(70.9\) & \(24.2\) & \(66.6\) & \(30.2\) & \(68.7\) \\ ZSI [48] & \(65/15\) & \(38.7\) & \(67.1\) & \(13.6\) & \(58.9\) & \(20.1\) & \(62.8\) \\ Ours\({}_{\text{M}+\text{W2V}}\) & \(65/15\) & \(40.7\) & \(70.0\) & \(18.8\) & \(64.6\) & \(25.7\) & \(67.2\) \\ Ours\({}_{\text{M}+\text{CLIP}}\) & \(65/15\) & \(40.9\) & \(70.0\) & \(24.9\) & \(66.5\) & \(30.9\) & \(68.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Generalized Zero-Shot Detection (GZSD).** mAP, \(\text{Recall}@100\), and the harmonic mean (HM) between seen and unseen category performance is reported at IoU=\(0.5\). Best result highlighted in red, second best in blue.
\begin{table}
\begin{tabular}{c|c|c c c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Split} & \multicolumn{3}{c|}{Recall\(@100\)} & mAP \\ & & IoU\(=0.4\) & IoU\(=0.5\) & IoU\(=0.6\) & IoU\(=0.5\) \\ \hline ZSI [48] & \(48/17\) & \(50.3\) & \(44.9\) & \(38.7\) & \(9.0\) \\ Ours\({}_{\text{M}+\text{W2V}}\) & \(48/17\) & \(62.1\) & \(56.7\) & \(49.1\) & \(14.0\) \\ \hline \hline \end{tabular}
\end{table}
|
2303.08303 | SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep
Models for Kidney Stone Classification | Recently, deep learning has produced encouraging results for kidney stone
classification using endoscope images. However, the shortage of annotated
training data poses a severe problem in improving the performance and
generalization ability of the trained model. It is thus crucial to fully
exploit the limited data at hand. In this paper, we propose SegPrompt to
alleviate the data shortage problems by exploiting segmentation maps from two
aspects. First, SegPrompt integrates segmentation maps to facilitate
classification training so that the classification model is aware of the
regions of interest. The proposed method allows the image and segmentation
tokens to interact with each other to fully utilize the segmentation map
information. Second, we use the segmentation maps as prompts to tune the
pretrained deep model, resulting in much fewer trainable parameters than
vanilla finetuning. We perform extensive experiments on the collected kidney
stone dataset. The results show that SegPrompt can achieve an advantageous
balance between the model fitting ability and the generalization ability,
eventually leading to an effective model with limited training data. | Wei Zhu, Runtao Zhou, Yao Yuan, Campbell Timothy, Rajat Jain, Jiebo Luo | 2023-03-15T01:30:48Z | http://arxiv.org/abs/2303.08303v1 | [
###### Abstract
Recently, deep learning has produced encouraging results for kidney stone classification using endoscope images. However, the shortage of annotated training data poses a severe problem in improving the performance and generalization ability of the trained model. It is thus crucial to fully exploit the limited data at hand. In this paper, we propose SegPrompt to alleviate the data shortage problems by exploiting segmentation maps from two aspects. First, SegPrompt integrates segmentation maps to facilitate classification training so that the classification model is aware of the regions of interest. The proposed method allows the image and segmentation tokens to interact with each other to fully utilize the segmentation map information. Second, we use the segmentation maps as prompts to tune the pretrained deep model, resulting in much fewer trainable parameters than vanilla finetuning. We perform extensive experiments on the collected kidney stone dataset. The results show that SegPrompt can achieve an advantageous balance between the model fitting ability and the generalization ability, eventually leading to an effective model with limited training data.
SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep Models for Kidney Stone Classification
Wei Zhu\({}^{1}\), Runtao Zhou\({}^{1}\), Yuan Yao\({}^{1}\), Campbell Timothy\({}^{2}\), Rajat Jain\({}^{2}\), Jiebo Luo\({}^{1}\)
\({}^{1}\) University of Rochester \({}^{2}\) University of Rochester Medical Center
_Keywords:_ Kidney Stone Classification, Prompt Tuning, Parameter Efficient Finetuning
## 1 Introduction
Kidney stone disease (KSD) affects 10% of the US population during their lifetime and results in billions of dollars in annual costs to society (Chewcharat and Curhan, 2021). The recurrent nature of KSD often causes multiple emergency room visits, hospital admissions, and surgical procedures for the patient. Over the last ten years, laser technology has evolved significantly (Kronenberg and Somani, 2018; Elhilali et al., 2017; Ibrahim et al., 2020). The most common procedure for KSD is ureteroscopy with laser lithotripsy (Heers and Turney, 2016). In this procedure, a semi-rigid or flexible 2-3mm ureteroscope is navigated into the urinary tract to the stone. It is then fragmented using a holmium:YAG laser. This type of laser has been the mainstay of KSD procedures for over 30 years (Zarrabi and Gross, 2011). In the past, it has been standard to fragment the stone into small pieces, which are then removed from the body using a small basket that is passed through the scope. This method allows the urologist to collect small pieces, which can then be sent to a laboratory for chemical analysis. However, this approach typically takes 1-2 months to get the classification result back to the physician, even though the patients can be in a critical condition and suffer from great pain (Ochoa-Ruiz et al., 2022). In this paper, we focus on real-time stone-type prediction with deep neural networks (Ochoa-Ruiz et al., 2022).
Recently, deep learning-based methods have been developed to perform efficient diagnosis using endoscope images, and these methods often directly fine-tune the whole model (Ochoa-Ruiz et al., 2022; Estrade et al., 2022). However, similar to most medically related tasks (Zhu et al., 2020), the limited training data makes it hard to obtain a robust deep model that generalizes to unseen cases with vanilla finetuning (Zang et al., 2022). In this paper, inspired by the recent progress on Visual Prompt Tuning (VPT) (Jia et al., 2022), we propose SegPrompt for kidney stone classification by taking segmentation maps as prompts to tune the pretrained model. On the one hand, SegPrompt integrates the segmentation map into the training process to make the model aware of the regions of interest, which intuitively benefits the classification training process (Khan et al., 2019). The segmentation map is obtained with a pretrained Unet (Ronneberger et al., 2015). On the other hand, as a prompt tuning-based method, SegPrompt does not update the backbone model and thus has much fewer trainable parameters than finetuning. In this way, we can avoid the overfitting problem that often comes with small-scale training data. Moreover, our model allows the image and segmentation tokens to interact with each other so that the model can make full use of the segmentation map.
We highlight our contributions as follows:
1. We propose SegPrompt, which regards segmentation maps as prompts to tune the pretrained deep model for kidney stone classification with limited training data.
2. SegPrompt incorporates the segmentation maps to facilitate the classification training process and only prompt-tunes a small part of the model, thus alleviating the overfitting problem and improving the classification performance.
3. We conduct thorough experiments on the collected dataset to validate the effectiveness of SegPrompt.
## 2 Related Work
### Kidney Stone Classification
Both traditional machine learning and deep learning approaches have been used for kidney stone classification. Serrat _et al._ adopt random forest to classify kidney stone images with hand-crafted texture and color feature vectors (Serrat et al., 2017). Motivated by the encouraging results of deep medical image analysis (Ronneberger et al., 2015), Amado and Alejandro (Torrell Amado, 2018) exploit deep metric learning approaches such as Siamese Networks and Triplet Networks to learn the embedding for kidney stone images and use the k-nearest neighbor algorithm to classify testing images. Black _et al._(Black et al., 2020) incorporate ex-vivo kidney stones images to finetune a pre-trained deep neural network and obtain reasonable results. Estrade _et al._ leverage transfer learning to classify mixed stones using surface and cross-section images of kidney stones (Estrade et al., 2022). In Manoj _et al._'s work (Manoj et al., 2022), they present the visualization analysis of the well-trained kidney stone classifier with Grad-CAM. Finally, Ochoa-Ruiz _et al._(Ochoa-Ruiz et al., 2022) use deep neural networks to classify kidney stones with in-vivo images from medical endoscopes. However, most of these methods finetune the whole model, potentially leading to an overfitting problem. In contrast, SegPrompt has fewer trainable parameters
to enhance the generalization ability and incorporates the segmentation map to facilitate the training.
### Visual Prompt Tuning
Visual Prompt Tuning (VPT) was recently proposed to adjust a pretrained vision transformer for specific tasks with few trainable parameters and shows advantages in generalization ability over vanilla fine-tuning, particularly with limited training data (Jia et al., 2022). VPT simply adds learnable tokens to the input of vision transformers (Jia et al., 2022). Visual Prompting pads the original images with learnable pixels (Bahng et al., 2022). NOAH performs a neural architecture search to learn optimal prompt structure (Zhang et al., 2022). Unified vision and language prompt tuning is proposed to jointly tune the VL models (Zang et al., 2022). S-Prompt is proposed to handle domain incremental learning with prompt tuning (Wang et al., 2022). The prompts used by these methods are simply trainable parameters. In contrast, SegPrompt learns to generate prompts based on the segmentation map, and achieves a better balance between fitting and generalization ability.
## 3 Methodology
In this section, we present the proposed segmentation map-based prompt tuning framework for kidney stone classification. It is designed to improve the model's performance and generalization ability by better exploiting the knowledge of segmentation maps with fewer trainable parameters.
Figure 1: Block diagram of SegPrompt. We first extract the segmentation map with a pretrained Unet. The segmentation map is encoded into segmentation embeddings by the first two blocks of a pretrained ResNet18. We add the position embedding and segmentation indicator to the segmentation embedding to obtain segmentation tokens. Finally, we concatenate the image tokens, segmentation tokens, and extra learnable tokens and feed all tokens to the transformer backbone. We only update the segmentation map encoder and the last classifier during training.
### Overview
Our method contains a frozen ViT backbone (Dosovitskiy et al., 2020), a segmentation map encoder, and a linear classifier. Similar to most medical image tasks, we also suffer from the scarcity of annotated kidney stone images, and it is expensive to collect more samples (Zhu et al., 2020). The shortage of training data makes it hard to obtain an effective model that generalizes to unseen cases. To alleviate the problem, on the one hand, we involve the segmentation map in the classification training so that the model is aware of the regions of interest. On the other hand, we tokenize the segmentation map into prompts to finetune the backbone model and only update the segmentation map encoder and the last linear classifier. Consequently, the much smaller number of trainable parameters empowers our model with better generalization ability and helps avoid the overfitting problem (Jia et al., 2022). Moreover, since we perform self-attention on both image and segmentation tokens (Vaswani et al., 2017), our method allows the model to exploit the knowledge of the segmentation map more comprehensively and flexibly. We show the block diagram of our method in Fig. (1).
### Tokenize the Segmentation Map
We first show how to convert the segmentation map into prompt tokens with the proposed segmentation map encoder \(h\). The encoder \(h\) consists of the first two blocks of an ImageNet pre-trained ResNet18 followed by a projector, where the projector is composed of a 1x1 convolutional layer and an adaptive pooling layer (He et al., 2016). The convolutional layer of the projector is used to match the dimension of the ResNet output to that of the ViT, while the pooling layer reduces the segmentation tokens to a desirable length. Moreover, the encoder \(h\) also contains learnable tokens \(P_{s}\), \(Z_{e}\), and \(r\), which will be introduced later. Given a training image \(X\), we first obtain the binarized pixel-wise segmentation map \(O\in\{-1,1\}\) from a pretrained Unet (Ronneberger et al., 2015), where 1 denotes foreground regions with kidney stones, and \(-1\) denotes background regions. The segmentation map embeddings \(M=\{m^{i}\}_{i=1}^{l_{m}}\in R^{l_{m}\times d}\) can then be obtained by flattening the output of the segmentation map encoder as
\[M=flatten(h(O)), \tag{1}\]
where \(d\) is the dimension of the backbone model. Then, to convert the embedding \(M\) to tokens, we first add the learnable position embedding \(P_{s}=\{p_{s}^{i}\}_{i=1}^{l_{m}}\in R^{l_{m}\times d}\) to retain the position information and then add a learnable indicator token \(r\in R^{d}\) to enable the model to distinguish the segmentation tokens from the image tokens. Specifically, we obtain the segmentation tokens \(Z_{s}=\{z_{s}^{i}\}_{i=1}^{l_{m}}\in R^{l_{m}\times d}\) by
\[z_{s}^{i}=m^{i}+p_{s}^{i}+r\ \ for\ \ i=1,\ldots,l_{m}. \tag{2}\]
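A PyTorch sketch of this tokenization is given below. The 7x7 pooling grid for \(l_{m}=49\) tokens and the replication of the single-channel map to three channels for the ResNet stem are assumptions that the text does not pin down.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SegMapEncoder(nn.Module):
    """Segmentation map encoder h: two ResNet18 stages, a 1x1-conv projector
    with adaptive pooling, plus position embeddings P_s and indicator token r."""

    def __init__(self, d=768, l_m=49):
        super().__init__()
        r18 = resnet18(weights="IMAGENET1K_V1")
        self.blocks = nn.Sequential(r18.conv1, r18.bn1, r18.relu,
                                    r18.maxpool, r18.layer1, r18.layer2)
        side = int(l_m ** 0.5)  # 49 tokens -> a 7x7 grid (assumed)
        self.proj = nn.Sequential(nn.Conv2d(128, d, kernel_size=1),
                                  nn.AdaptiveAvgPool2d(side))
        self.pos = nn.Parameter(torch.zeros(l_m, d))    # P_s
        self.indicator = nn.Parameter(torch.zeros(d))   # r

    def forward(self, seg_map):                 # (B, 1, H, W), values in {-1, 1}
        x = seg_map.repeat(1, 3, 1, 1)          # replicate to 3 channels (assumed)
        m = self.proj(self.blocks(x))           # (B, d, 7, 7)
        m = m.flatten(2).transpose(1, 2)        # Eq. (1): (B, l_m, d)
        return m + self.pos + self.indicator    # Eq. (2): segmentation tokens
```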
### Prompt Tuning with Segmentation Tokens
To enable the model to better interact with and exploit the segmentation map, we propose to prompt-tune the model with the segmentation tokens (Jia et al., 2022). In particular, we
concatenate the classification token \(z_{cls}\), image tokens \(Z_{x}=\{z_{x}^{i}\}_{i=1}^{l}\in R^{l\times d}\), the segmentation tokens \(Z_{s}\), and some extra learnable tokens \(Z_{e}=\{z_{e}^{i}\}_{i=1}^{l_{e}}\in R^{l_{e}\times d}\) as input \(Z\) to the transformer backbone
\[Z=[z_{cls},z_{x}^{0},\dots,z_{x}^{l},z_{s}^{0},\dots,z_{s}^{l_{s}},z_{e}^{0},\dots,z_{e}^{l_{e}}] \tag{3}\]
The extra learnable tokens make the pretrained model better adapt to kidney stone classification. The classification token \(z_{cls}\) and image tokens \(Z_{x}\) are frozen during training (Dosovitskiy et al., 2020). We perform multi-head self-attention on the input tokens \(Z\) and take the \(z_{cls}\) from the last layer as the output, which will be further processed by the classifier to get the final prediction. We adopt cross-entropy loss to train the model. During training, we keep the transformer backbone frozen and only update the segmentation map encoder (including the corresponding position embedding \(P_{s}\) and the indicator token \(r\)), the last classifier, and the introduced extra tokens \(Z_{e}\).
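The forward pass can then be sketched as follows, assuming a timm-style ViT interface (patch_embed, cls_token, pos_embed, blocks, norm); freezing is assumed to be done beforehand via requires_grad_(False) on the backbone, so gradients still reach the segmentation and extra tokens through the attention layers.

```python
import torch

def segprompt_forward(vit, x_img, seg_tokens, extra_tokens):
    """Eq. (3): concatenate [z_cls, image, segmentation, extra] tokens and run
    the (frozen) transformer; return the final classification token."""
    B = x_img.size(0)
    z = vit.patch_embed(x_img)                          # (B, 196, d) image tokens
    cls = vit.cls_token.expand(B, -1, -1)               # frozen z_cls
    z = torch.cat([cls, z], dim=1) + vit.pos_embed      # standard ViT tokens
    z = torch.cat([z, seg_tokens,                       # append Z_s and Z_e
                   extra_tokens.expand(B, -1, -1)], dim=1)
    for blk in vit.blocks:                              # joint self-attention
        z = blk(z)
    return vit.norm(z)[:, 0]                            # z_cls fed to the classifier
```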
We discuss several important features of SegPrompt as follows. Existing tuning methods strive to balance the generalization and fitting abilities to adapt a pretrained model for small-scale tasks. For example, finetuning suffers from the overfitting problem with over-abundant learnable parameters, while VPT may underfit a target dataset that deviates significantly from the pretrained dataset (Wortsman et al., 2022). Moreover, it is not trivial to integrate additional knowledge (e.g., segmentation map) into the training process for these methods. We empirically find that SegPrompt leads to a powerful model without severe overfitting and can effectively utilize extra knowledge. Specifically, compared with vanilla fine-tuning, the much fewer trainable parameters of SegPrompt significantly alleviate the overfitting problem. Compared with VPT, SegPrompt, equipped with the learnable segmentation map encoder, has a more powerful learning capacity and also does not severely distort the pretrained model because the number of newly introduced tokens (default set to 51) is much smaller than that of the original tokens (197 for ViT-B/16) (Dosovitskiy et al., 2020). To make use of additional knowledge, i.e., the segmentation map, SegPrompt allows image tokens and segmentation tokens to interact with and extract information from each other to improve classification performance. Compared with a joint classification and segmentation model, our framework is more flexible and can directly use human annotations when a good segmentation model cannot be obtained with the current data. Last but not least, one can simply extend SegPrompt to integrate other kinds of knowledge, such as patient demographic information (Daneshjou et al., 2021), device/scanner information (Ji et al., 2022), text descriptions of the symptoms (Qin et al., 2022), and medical records. We leave these as future directions.
## 4 Experiments
### Dataset
We collect 1496 kidney stone images from 5 different videos. We filter out low-quality and background images and obtain 867 images from 3 videos with COM (calcium oxalate monohydrate) stones and 629 images from 2 videos with CAP (calcium phosphate) stones. We split the dataset into training and validation sets in a video-wise fashion to prevent any data leaks. We perform 6-fold cross-validation, and the averaged accuracy, precision,
recall, and F1 scores with standard deviation are reported on the validation sets composed of images from two hold-out videos.
### Implementation of Segmentation Model
The kidney stone regions are labeled by an undergraduate student advised by a specialist, and we train a Unet (Ronneberger et al., 2015) implemented by the MMsegmentation framework (Contributors, 2020) to perform the segmentation. Besides the training data from different folds, we also leverage some extra images without stone-type labels to facilitate the segmentation model training. In total, there are eight additional kidney stone videos without stone-type labels, which provide 1860 additional images. We manually annotate the segmentation maps for these images and only include them in the segmentation model training. Finally, the obtained segmentation models achieve an average pixel-level accuracy of 96.7% and a Dice score of 92.93% across folds. The outputs are binarized to obtain final segmentation maps. We visualize segmentation results from the validation set in Fig. (2).
### Baseline Methods and Implementation Details
Three finetuning-based models are included as baselines to validate the superiority of the proposed SegPrompt for the kidney stone classification task: FT (Finetuning), FT-crop (Finetuning-crop), and FT-concat (Finetuning-concat). FT uses raw images without segmentation maps. FT-crop takes the cropped regions of interest as input. As for FT-concat, we channel-wise concatenate the segmentation maps to the images as the input. The concatenated images are then passed through a 1x1 convolution layer to reduce their channel size to 3 before feeding them into the ViT backbone. We also implement FT-based ResNet variants for comparison. Moreover, we compare our method with Visual Prompt Tuning (VPT) and VPT-Deep (Jia et al., 2022). VPT adds the learnable tokens to the input, while VPT-Deep adds different tokens for each layer.
Figure 2: Illustration of segmentation results. Each column represents one image sample. The images come from the validation set of different folds.
Similar to VPT and VPT-Deep, we also implement two variants of our method: SegPrompt and SegPrompt-Deep.
For all methods, the standard image preprocessing steps, such as resizing and normalization, have been applied to the training images. The batch size, number of epochs, and learning rate are 16, 20, and 0.001, respectively. We adopt an ImageNet pretrained ViT-B/16 as the backbone, which contains 196 image tokens and one classification token (Dosovitskiy et al., 2020). As for the proposed SegPrompt (Deep), we set the number of segmentation tokens to \(l_{s}=49\), and the extra learnable tokens to \(l_{e}=2\). The number of learnable tokens for VPT (Deep) is searched from \(\{8,16,32,51\}\).
### Experimental Results
We present the experimental results in Table 1, and draw several interesting conclusions from the results. First, FT outperforms FT-crop, suggesting that the cropped clean kidney stone images do not benefit the classification performance, which in turn suggests that the surrounding background regions contain critical information. Second, FT-concat utilizes the segmentation map without removing the background regions and slightly outperforms FT by 0.8% in terms of F1 score. This shows that the classification performance could be boosted by properly exploiting the information of the segmentation map. The FT-based ResNet variants show consistent results. Third, the overfitting problem is crucial in adapting the pre-trained models to the kidney stone classification with limited training data. VPT and VPT-Deep outperform all FT-based methods with much fewer trainable parameters (Jia et al., 2022). In particular, VPT obtains a 1.54% improvement over FT-concat in terms of the F1 score and also surpasses its direct counterpart VPT-Deep, which has much more trainable parameters. Finally, the proposed SegPrompt makes better use of the segmentation map with few trainable parameters and achieves the best overall performance. For example, SegPrompt improves the F1 score from 96.07% to 99.45% compared with the second-best method VPT. We also note that SegPrompt-Deep only slightly degrades the performance compared with SegPrompt, which shows the possibility of applying our method to large-scale tasks that require more learnable parameters to fit the training set.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Methods & Accuracy & Precision & Recall & F1 & AUC \\ \hline FT & \(95.07\pm 3.2\) & \(94.21\pm 4.7\) & \(95.46\pm 2.7\) & \(93.73\pm 5.9\) & \(95.37\pm 3.4\) \\ FT-crop & \(94.34\pm 4.0\) & \(94.14\pm 4.2\) & \(94.19\pm 3.9\) & \(92.96\pm 5.6\) & \(94.17\pm 3.4\) \\ FT-concat & \(95.18\pm 2.5\) & \(94.91\pm 2.5\) & \(95.13\pm 2.3\) & \(94.53\pm 3.0\) & \(95.10\pm 2.9\) \\ \hline ResNet50 & \(94.75\pm 2.7\) & \(94.06\pm 4.1\) & \(95.49\pm 2.0\) & \(93.55\pm 5.1\) & \(95.18\pm 1.6\) \\ ResNet50-crop & \(95.28\pm 3.8\) & \(95.71\pm 3.3\) & \(94.66\pm 3.7\) & \(94.32\pm 4.4\) & \(94.58\pm 3.7\) \\ ResNet50-concat & \(96.72\pm 2.6\) & \(96.44\pm 3.3\) & \(96.93\pm 2.5\) & \(96.44\pm 2.8\) & \(96.84\pm 2.5\) \\ \hline VPT & \(96.87\pm 1.1\) & \(96.60\pm 1.7\) & \(96.65\pm 1.3\) & \(96.07\pm 2.8\) & \(96.52\pm 1.8\) \\ VPT-Deep & \(95.85\pm 0.8\) & \(95.49\pm 1.2\) & \(95.60\pm 1.2\) & \(95.13\pm 2.4\) & \(95.32\pm 1.7\) \\ \hline SegPrompt & \(\mathbf{99.56\pm 0.3}\) & \(\mathbf{99.45\pm 0.4}\) & \(\mathbf{99.60\pm 0.3}\) & \(\mathbf{99.45\pm 0.5}\) & \(\mathbf{99.57\pm 0.4}\) \\ SegPrompt-Deep & \(99.19\pm 0.3\) & \(99.06\pm 0.3\) & \(99.24\pm 0.3\) & \(99.26\pm 0.2\) & \(99.23\pm 0.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Kidney Stone Classification Results averaged over 6 Folds (corresponding to all possible combinations of videos). The best results are highlighted in bold. (%)
### Ablation Studies
Extensive ablation studies have been performed to verify the effectiveness of different components of SegPrompt. All variants are developed based on SegPrompt instead of SegPrompt-Deep due to its superior performance and simplicity. We first study the importance of the indicator token \(r\) and the extra learnable tokens \(z_{e}\). The indicator token \(r\) enables the model to distinguish the segmentation map tokens from other tokens, and the extra learnable tokens could make the pretrained model better adapt to our task. The two variants are denoted as SegPrompt w/o \(r\) and SegPrompt w/o \(z_{e}\), respectively. The experimental results are shown in Table 2. According to the results, we find that the indicator token is essential for SegPrompt while the extra learnable tokens further slightly improve the performance. We also conduct experiments to study the influence of different numbers of segmentation tokens \(l_{s}\), and select \(l_{s}\in\{25,36,49,64,81\}\). Based on the results shown in Table 3, increasing the number of segmentation tokens benefits the final performance, while overly large values may lead to a slight decrease. We set \(l_{s}=49\) by default.
## 5 Conclusions and Future Work
In this paper, we present a novel segmentation map-based prompt tuning method for kidney stone classification with limited data, named SegPrompt. We first employ a well-trained Unet to extract the segmentation maps, which are further converted to segmentation tokens by a segmentation map encoder. Then SegPrompt takes the concatenation of image, segmentation, and some extra tokens as input to the transformer. During training, we only prompt-tune the segmentation map encoder and the linear classifier. SegPrompt can better exploit the knowledge of segmentation maps with few trainable parameters and significantly outperforms existing methods for kidney stone classification. The main limitation of our work is the small-scale training dataset, and we will collect more data to improve our model for more types of kidney stones. Moreover, we plan to extend our work to other medical tasks to further validate the effectiveness of SegPrompt (Daneshjou et al., 2022).
**Acknowledgments** This work is supported in part by NSF #2050842, NIH 1P50NS108676-01, and NIH 1R21DE030251-01.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Methods & Accuracy & Precision & Recall & F1 & AUC \\ \hline SegPrompt & \(99.56\pm 0.3\) & \(99.45\pm 0.4\) & \(99.60\pm 0.3\) & \(99.45\pm 0.5\) & \(99.57\pm 0.4\) \\ \hline SegPrompt w/o \(r\) & \(99.07\pm 0.6\) & \(98.90\pm 0.7\) & \(99.11\pm 0.5\) & \(98.87\pm 0.8\) & \(99.08\pm 0.4\) \\ SegPrompt w/o \(z_{e}\) & \(99.38\pm 0.5\) & \(99.34\pm 0.5\) & \(99.36\pm 0.8\) & \(99.30\pm 0.3\) & \(99.42\pm 0.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation Studies. (%)
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \# \(l_{s}\) & Accuracy & Precision & Recall & F1 & AUC \\ \hline
25 & \(98.81\pm 1.0\) & \(98.75\pm 0.9\) & \(98.73\pm 1.0\) & \(98.69\pm 0.9\) & \(98.70\pm 0.8\) \\
36 & \(99.13\pm 0.6\) & \(99.16\pm 0.4\) & \(98.87\pm 1.2\) & \(98.78\pm 1.4\) & \(98.92\pm 0.6\) \\
49 & \(\mathbf{99.56\pm 0.3}\) & \(\mathbf{99.45\pm 0.4}\) & \(\mathbf{99.60\pm 0.3}\) & \(\mathbf{99.45\pm 0.5}\) & \(\mathbf{99.57\pm 0.4}\) \\
64 & \(99.28\pm 0.2\) & \(99.32\pm 0.1\) & \(99.22\pm 0.3\) & \(99.18\pm 0.4\) & \(99.24\pm 0.3\) \\
81 & \(99.22\pm 0.6\) & \(99.14\pm 0.7\) & \(99.25\pm 0.6\) & \(99.13\pm 0.7\) & \(99.23\pm 0.8\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance with different numbers of segmentation tokens. (%) |
2302.10665 | LoS sensing-based superimposed CSI feedback for UAV-Assisted mmWave
systems | In unmanned aerial vehicle (UAV)-assisted millimeter wave (mmWave) systems,
channel state information (CSI) feedback is critical for the selection of
modulation schemes, resource management, beamforming, etc. However, traditional
CSI feedback methods lead to significant feedback overhead and energy
consumption of the UAV transmitter, therefore shortening the system operation
time. To tackle these issues, inspired by superimposed feedback and integrated
sensing and communications (ISAC), a line of sight (LoS) sensing-based
superimposed CSI feedback scheme is proposed. Specifically, on the UAV
transmitter side, the ground-to-UAV (G2U) CSI is superimposed on the
UAV-to-ground (U2G) data to feed back to the ground base station (gBS). At the
gBS, the dedicated LoS sensing network (LoS-SenNet) is designed to sense the U2G
CSI in LoS and NLoS scenarios. With the sensed result of LoS-SenNet, the
determined G2U CSI from the initial feature extraction will work as a priori
information to guide the subsequent operation. Specifically, for the G2U CSI in
NLoS, a CSI recovery network (CSI-RecNet) and superimposed interference
cancellation are developed to recover the G2U CSI and U2G data. As for the LoS
scenario, a dedicated LoS aid network (LoS-AidNet) is embedded before the
CSI-RecNet and the block of superimposed interference cancellation to highlight
the feature of the G2U CSI. Compared with other methods of superimposed CSI
feedback, simulation results demonstrate that the proposed feedback scheme
effectively improves the recovery accuracy of the G2U CSI and U2G data.
Besides, against parameter variations, the proposed feedback scheme presents
its robustness. | Chaojin Qing, Qing Ye, Wenhui Liu, Zilong Wanga, Jiafan Wang, Jinliang Chen | 2023-02-21T13:25:58Z | http://arxiv.org/abs/2302.10665v1 | # LoS sensing-based superimposed CSI feedback for UAV-Assisted mmWave systems
###### Abstract
In unmanned aerial vehicle (UAV)-assisted millimeter wave (mmWave) systems, channel state information (CSI) feedback is critical for the selection of modulation schemes, resource management, beamforming, etc. However, traditional CSI feedback methods lead to significant feedback overhead and energy consumption of the UAV transmitter, therefore shortening the system operation time. To tackle these issues, inspired by superimposed feedback and integrated sensing and communications (ISAC), a line of sight (LoS) sensing-based superimposed CSI feedback scheme is proposed. Specifically, on the UAV transmitter side, the ground-to-UAV (G2U) CSI is superimposed on the UAV-to-ground (U2G) data to feed back to the ground base station (gBS). At the gBS, the dedicated LoS sensing network (LoS-SenNet) is designed to sense the U2G CSI in LoS and NLoS scenarios. With the sensed result of LoS-SenNet, the determined G2U CSI from the initial feature extraction will work as a priori information to guide the subsequent operation. Specifically, for the G2U CSI in NLoS, a CSI recovery network (CSI-RecNet) and superimposed interference cancellation are developed to recover the G2U CSI and U2G data. As for the LoS scenario, a dedicated LoS aid network (LoS-AidNet) is embedded before the CSI-RecNet and the block of superimposed interference cancellation to highlight the feature of the G2U CSI. Compared with other methods of superimposed CSI feedback, simulation results demonstrate that the proposed feedback scheme effectively improves the recovery accuracy of the G2U CSI and U2G data. Besides, against parameter variations, the proposed feedback scheme demonstrates its robustness.
_Keywords:_ Channel state information (CSI); superimposed CSI feedback; line of sight (LoS) sensing; integrated sensing and communications (ISAC); unmanned aerial vehicle (UAV)-assisted millimeter wave (mmWave) systems
## 1 Introduction
Unmanned aerial vehicle (UAV)-assisted millimeter wave (mmWave) systems have attracted wide research interest due to their high reliability, excellent flexibility, and large bandwidth availability [1]. In UAV-assisted mmWave systems, the modulation scheme selection, resource management, beamforming, etc., require sufficient channel state information (CSI) [2]. To this end, the CSI from ground-to-UAV (G2U) links needs to be estimated by the UAV and then fed back to the ground base station (gBS) in frequency division duplex (FDD) mode. Usually, the gBS in UAV-assisted mmWave systems is equipped with massive antennas to remedy the significant path attenuation of mmWave propagation [3]. This inevitably causes huge feedback overhead at UAVs. Besides, prolonging the battery life of a UAV has always been an important challenge. For instance, the limited battery life constrains the operating time of small autonomous rotorcraft and makes it challenging to complete complex exploration and extensive communication missions [4]. Unfortunately, the necessary CSI feedback substantially increases the energy consumption of the UAV transmitter, which shortens its battery life. However, there is little literature focusing on G2U CSI feedback for UAV-assisted mmWave systems with low energy consumption, and the applicability of existing neural network (NN)-based CSI feedback methods (e.g., [5, 6, 7]) has not been verified. Thus, it is vital to develop a CSI feedback scheme to reduce energy consumption (and thus prolong battery life).
As an alternative, the mode of superimposed CSI feedback saves the battery consumption of transmitting G2U CSI and avoids extra bandwidth occupation for CSI feedback [8][9]. However, superimposed CSI feedback inevitably causes superimposed interference [9]. In [8], an iterative interference cancellation method was proposed, yet with extremely high computational complexity. In [9], an extreme learning machine (ELM)-based method was investigated for the superimposed CSI feedback, which improves the accuracy of CSI recovery and data detection with reduced complexity. Nevertheless, the CSI feedback methods in [8][9] are not dedicated developments for UAV-assisted mmWave systems, so their applicability needs to be further validated. In particular, the inherent properties of UAV-assisted mmWave systems have not been exploited. For UAV-to-ground (U2G) links, the transmission in line of sight (LoS) scenario (LoS transmission) is usually observed, and the LoS path may be over 20 dB stronger than the non-LoS (NLoS) paths [10]. That is, LoS is a usual scenario (with a high probability) in UAV-assisted mmWave systems [11], which inspires us to exploit LoS features to alleviate the superimposed interference in superimposed G2U CSI feedback.
Recently, integrated sensing and communications (ISAC)-based techniques have attracted great attention, in which sensing information is derived from received signals to assist communication [12]. Many applications have been promoted, e.g., sensing-assisted beam training [13], sensing-assisted beam tracking and prediction [14], and sensing-assisted resource allocation [15]. For UAV-assisted mmWave systems, ISAC techniques have been developed for the UAV's deployment [16], the flight trajectory of the UAV [17], and the transmit beamforming [18]. Even so, using the idea of ISAC to sense LoS transmission for assisting CSI feedback in UAV-assisted mmWave systems has not been investigated. Inspired by ISAC, LoS sensing is employed to alleviate the superimposed interference of superimposed CSI feedback, and thus a LoS sensing-assisted superimposed G2U CSI feedback scheme for UAV-assisted mmWave systems is developed in this paper.
### _Challenge and motivation summary_
The challenges faced in the G2U CSI feedback of UAV-assisted mmWave systems are as follows. 1) Due to the significant feedback overhead, the energy consumption of UAV transmitters is significantly increased, which hinders the UAV from prolonging its battery life. 2) For UAV-assisted mmWave systems, the inherent property that data transmission experiences LoS scenarios with a high probability has not yet been exploited.
Motivated by the challenges mentioned above, this paper jointly considers the following factors: 1) It is vital to save energy consumption for UAV-assisted mmWave systems, which motivates us to develop a superimposed mode for its CSI feedback. 2) Since the LoS path is long-lived in UAV scenarios, it should be sensed and exploited to alleviate the superimposed interference of superimposed CSI feedback and thus improve feedback performance, e.g., the accuracy of the G2U CSI recovery and U2G data detection. Therefore, we propose a LoS sensing-assisted superimposed CSI feedback scheme by taking full advantage of LoS sensing.
### _Contributions_
To the best of our knowledge, the solution of applying LoS sensing to aid superimposed CSI feedback has not been well studied in UAV-assisted mmWave systems. The main contributions of this paper are summarized as follows:
1. We propose a superimposed CSI feedback scheme for UAV-assisted mmWave systems. As far as we know, the superimposed mode for CSI feedback in UAV-assisted mmWave systems has not been well investigated. In the proposed scheme, the G2U CSI is superimposed on the U2G data at the UAV transmitter to feed back to the gBS. To this end, the energy consumption of the UAV transmitter for CSI feedback is significantly reduced, and the battery life of the UAV is prolonged. Besides, the occupation of U2G bandwidth resources is avoided, which improves the spectral efficiency during the CSI feedback phase of UAV-assisted mmWave systems. Our work jointly reduces the energy consumption of the UAV transmitter and the bandwidth occupation of the UAV-assisted mmWave system, which addresses the practical difficulties of CSI feedback for UAV-assisted mmWave systems.
2. We develop an ISAC-inspired LoS sensing network to alleviate the superimposed interference of the superimposed CSI feedback scheme. To the best of our knowledge, LoS sensing-based superimposed CSI feedback has not been investigated. As the inherent property of U2G links, the LoS scenario occurs with a high probability in UAV-assisted mmWave systems [11], which is exploited via a sensing approach to suppress the superimposed interference in this paper. Unlike conventional LoS sensing (e.g., [19] and [20]), the developed LoS sensing network, named LoS-SenNet, is inspired by the idea of ISAC. That is, the same received signal, which serves communication functions in conventional systems, is employed for LoS transmission sensing, G2U CSI reconstruction, and U2G data detection at the gBS. Thus, the LoS sensing-based superimposed CSI feedback scheme is formed without extra hardware/equipment.
3. We construct lightweight neural networks to reduce the processing latency and computational complexity for the gBS receiver. Due to the single sensing task for LoS transmission, LoS-SenNet is constructed with a lightweight network architecture. Nevertheless, the sensing results from LoS-SenNet are particularly effective in guiding the lightweight design of the subsequent networks. With the assistance of LoS-SenNet, the LoS aid network (LoS-AidNet) and the CSI recovery network (CSI-RecNet) are also constructed with lightweight network architectures. In particular, LoS-AidNet and CSI-RecNet employ the same network architecture, which is beneficial for hardware reuse and thus saves hardware costs. Besides, compared with [8], the proposed scheme improves the performance of G2U CSI recovery and U2G data detection with reduced processing latency and computational complexity of the gBS receiver.
_Notations_: Boldface upper case and lower case letters denote matrix and vector, respectively. \({(\cdot)}^{T}\) denotes transpose; \({\bf I}_{P}\) is the identity matrix of size \(P\times P\); \(\|\cdot\|\) is the Euclidean norm; \({\rm Re}(\cdot)\) and \({\rm Im}(\cdot)\) represent the operation of taking the
real and imaginary parts of a complex value, respectively; \(E[\cdot]\) represents the expectation operation. \(\mathrm{vec}(\cdot)\) denotes the vectorizing of a matrix.
## 2 System model
The system model is given in Fig. 1, in which one gBS employs a uniform linear array (ULA) with \(N\) antennas and \(U\) single-antenna UAVs are deployed [21]. In this section, the channel model and the G2U CSI feedback process are elaborated, respectively.
### Channel model
In a UAV-assisted mmWave system, the wireless channel generally exhibits spatial characteristics [21]. For G2U links, the channel vector of the \(l\)th cluster \(\mathbf{h}_{l}\in\mathbb{C}^{1\times N}\) in the spatial domain can be expressed as [21]
\[\mathbf{h}_{l}=\sum_{k=1}^{K}\xi_{k}\mathbf{a}_{\text{R}}\left(\theta_{\text{rx},k}\right)\mathbf{a}_{\text{T}}^{H}\left(\theta_{\text{tx},k}\right), \tag{1}\]
where \(K\) is the number of multipaths in the \(l\)th cluster, and \(\xi_{k}\) is the complex gain of the \(k\)th multipath in the \(l\)th cluster. \(\mathbf{a}_{\text{R}}\left(\theta_{\text{rx},k}\right)\in\mathbb{C}^{N_{r}\times 1}\) and \(\mathbf{a}_{\text{T}}\left(\theta_{\text{tx},k}\right)\in\mathbb{C}^{N_{t}\times 1}\) are the array response vectors for \(\theta_{\text{rx},k}\) and \(\theta_{\text{tx},k}\) along the \(k\)th multipath, respectively. \(N_{r}\) and \(N_{t}\) are the numbers of receiving antennas and transmitting antennas, respectively. Due to the antenna deployment, we have \(N_{r}=1\) and \(N_{t}=N\). Thus, \(\mathbf{a}_{\text{T}}\left(\theta_{\text{tx},k}\right)\) can be written as \(\mathbf{a}_{\text{T}}\left(\theta_{\text{tx},k}\right)=\left[1,e^{-j2\pi\frac{d}{\lambda}\sin(\theta_{\text{tx},k})},\dots,e^{-j2\pi\frac{(N-1)d}{\lambda}\sin(\theta_{\text{tx},k})}\right]^{T}\) with \(\lambda\) and \(d\) being the G2U link wavelength and the distance between adjacent antennas [22], respectively. Then, on the UAV-\(u\) side with \(u=1,2,\cdots,U\), the G2U CSI in the time-spatial domain is denoted as
\[\mathbf{H}_{u}=\left[\mathbf{h}_{1}^{T},\mathbf{h}_{2}^{T},\dots,\mathbf{h}_{L}^{T}\right]^{T}\in\mathbb{C}^{L\times N}, \tag{2}\]
where \(L\) is the number of clusters [23]. Subsequently, \(\mathbf{H}_{u}\) is transformed into the time-angular domain by using the inverse discrete Fourier transform, which is expressed as
\[\widetilde{\mathbf{H}}_{u}=\mathbf{H}_{u}\mathbf{F}_{N}^{H}, \tag{3}\]
where \(\mathbf{F}_{N}\) is an \(N\times N\) discrete Fourier transform matrix [24]. In the time-angular domain, \(\widetilde{\mathbf{H}}_{u}\in\mathbb{C}^{L\times N}\) is typically sparse, where almost all non-zero entries concentrate in its first \(L_{a}\) rows [25]. To reduce the feedback overhead, the first \(L_{a}\) rows of \(\widetilde{\mathbf{H}}_{u}\) are extracted and denoted as \(\bar{\mathbf{H}}_{u}\in\mathbb{C}^{L_{a}\times N}\)[5]. Finally, the vectorized G2U CSI, denoted as \(\mathbf{h}_{u}\in\mathbb{C}^{1\times L_{a}N}\), is expressed as
\[\mathbf{h}_{u}=\left(\mathrm{vec}(\bar{\mathbf{H}}_{u})\right)^{T}. \tag{4}\]
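A NumPy sketch of this pipeline (array response, angular-domain transform, truncation, and vectorization) is given below; the unitary DFT scaling and half-wavelength antenna spacing are assumptions made for illustration.

```python
import numpy as np

def a_T(theta, N, d_over_lambda=0.5):
    """ULA array response in Eq. (1), assuming half-wavelength spacing."""
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(N) * np.sin(theta))

def g2u_csi_vector(H_u, L_a):
    """Eqs. (3)-(4): spatial -> angular domain, keep the first L_a rows, vectorize."""
    N = H_u.shape[1]
    F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)   # DFT matrix (unitary scaling assumed)
    H_ang = H_u @ F_N.conj().T                 # Eq. (3)
    # vec(.) stacks columns; Fortran order reproduces column stacking
    return H_ang[:L_a].flatten(order="F")[None, :]
```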
### G2U CSI feedback
As shown in Fig. 1, on the UAV transmitter side, the G2U CSI is first compressed by using \(\mathbf{h}_{u}\) and a random compression matrix \(\mathbf{\Phi}\in\mathbb{C}^{L_{a}N\times N}\) to save bandwidth resources and the energy consumption of the UAV transmitter [26]. Then, the compressed CSI is spread by the pseudo-random codes (e.g., the Walsh codes [26]) to alleviate the superimposed interference caused by the subsequent process of superimposition, i.e.,
\[\left\{\begin{array}{l}\mathbf{z}_{u}=\mathbf{h}_{u}\mathbf{\Phi}\\ \mathbf{s}_{u}=\mathbf{z}_{u}\mathbf{Q}^{T}\end{array}\right., \tag{5}\]
where \(\mathbf{z}_{u}\in\mathbb{C}^{1\times N}\) is the compressed G2U CSI, \(\mathbf{s}_{u}\in\mathbb{C}^{1\times M}\) is the spread CSI, and \(\mathbf{Q}\in\mathbb{R}^{M\times N}\) is the spreading matrix satisfying \(\mathbf{Q}^{T}\mathbf{Q}=M\mathbf{I}_{N}\). Here, by superimposing the spread CSI \(\mathbf{s}_{u}\) onto the modulated U2G data \(\mathbf{d}_{u}\in\mathbb{C}^{1\times M}\), the transmitted superimposed signal \(\mathbf{x}_{u}\in\mathbb{C}^{1\times M}\) of the \(u\)th UAV is given by [8]
\[\mathbf{x}_{u}=\sqrt{\rho E_{u}}\mathbf{s}_{u}+\sqrt{(1-\rho)E_{u}}\mathbf{d}_{u}, \tag{6}\]
where \(\rho\in[0,1]\) stands for the power proportional coefficient of the G2U CSI, and \(E_{u}\) represents the transmitted power of UAV-\(u\). Without loss of generality, since U2G data transmission is the main service, the U2G data is longer than the compressed G2U CSI, i.e., \(M>N\).
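The transmitter-side processing of Eqs. (5)-(6) then takes only a few lines; the sketch below uses a truncated Hadamard matrix as an illustrative Walsh-style spreading code, which satisfies \(\mathbf{Q}^{T}\mathbf{Q}=M\mathbf{I}_{N}\) (and requires \(M\) to be a power of two).

```python
import numpy as np
from scipy.linalg import hadamard

def superimpose(h_u, d_u, Phi, rho, E_u):
    """Eqs. (5)-(6): compress, spread, and superimpose the G2U CSI on U2G data.

    h_u: (1, L_a*N) CSI; d_u: (1, M) modulated data; Phi: (L_a*N, N).
    """
    M, N = d_u.shape[1], Phi.shape[1]
    Q = hadamard(M)[:, :N]          # Walsh-style spreading matrix, Q^T Q = M I_N
    z_u = h_u @ Phi                 # compressed CSI, (1, N)
    s_u = z_u @ Q.T                 # spread CSI, (1, M)
    return np.sqrt(rho * E_u) * s_u + np.sqrt((1 - rho) * E_u) * d_u
```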
At the gBS, after matched filtering, the received signal \(\mathbf{Y}_{u}\) of the UAV-\(u\) is given by
\[\mathbf{Y}_{u}= \mathbf{g}_{u}\mathbf{x}_{u}+\mathbf{N}_{u}, \tag{7}\]
where \(\mathbf{N}_{u}\in\mathbb{C}^{N\times M}\) represents the circularly symmetric complex Gaussian (CSCG) noise with zero-mean and variance \(\sigma_{u}^{2}\) for each U2G feedback link, and \(\mathbf{g}_{u}\in\mathbb{C}^{N\times 1}\) denotes the U2G channel vector from the UAV-\(u\) to the gBS.
To avoid additional hardware devices or reception overhead, inspired by the idea of ISAC, the proposed scheme obtains the signal to be sensed (i.e., the U2G channel) and the transmitted superimposed signal (i.e., the U2G data and G2U CSI) from the same received signal \(\mathbf{Y}_{u}\). Specifically, with the received signal \(\mathbf{Y}_{u}\), the lightweight LoS-SenNet is developed to sense the existence of LoS paths in U2G channels, thereby obtaining the sensed result \(\chi_{u}\). Then, by employing the conventional method of superimposed interference cancellation, the initial feature of the compressed G2U CSI, denoted by \(\widehat{\mathbf{z}}_{u}\), is extracted from \(\mathbf{Y}_{u}\). With the sensed \(\chi_{u}\) and extracted \(\widehat{\mathbf{z}}_{u}\), the recovery accuracy of the U2G data and G2U CSI is enhanced.
## 3 LoS sensing-based superimposed CSI feedback
In this section, we present the proposed LoS sensing-based superimposed CSI feedback scheme. In Section 3.1, the LoS-SenNet is developed to sense the existence of the LoS path. With the sensed result, the recovery of the U2G data and G2U CSI is elaborated in Section 3.2.
### LoS-SenNet
The LoS-SenNet is developed to exploit the inherent property of UAV-assisted mmWave systems that LoS scenarios appear with a high probability for U2G links. Inspired by the idea of ISAC [12], we use the same received signal \(\mathbf{Y}_{u}\) to obtain the U2G channel matrix, thereby sensing the existence of LoS paths in U2G channels via LoS-SenNet.
_Network Design:_ According to the CNN network structure in [19], the LoS-SenNet consists of a convolutional layer, a maximum pooling layer, a flattening layer, and a fully connected layer. The network architecture of LoS-SenNet is summarized in TABLE I, and detailed descriptions are given as follows.
Compared with the existing sensing networks (e.g., [19], [20]), the proposed LoS-SenNet is designed to have a lightweight structure. The input and output sizes of the convolutional layer are both \(L_{a}\times 2N\times 1\). The size of the convolution kernel is \(3\times 3\), and the number of convolution kernels is 1. After the maximum pooling layer and the flattening layer, the output size becomes \(\lfloor L_{a}/3\rfloor\lfloor N/3\rfloor\times 1\). Specifically, the number of neurons of the output layer is \(1\) to obtain the sensed result more intuitively. For the activation functions of the convolutional layer and output layer, the rectified linear unit (ReLU) [19] and sigmoid functions are employed, respectively.
_Remark 1_: The lightweight design of LoS-SenNet is mainly embodied in two aspects: 1) only one convolution kernel is designed to capture the features of the LoS path, and 2) only one neuron is employed in the output layer. The main consideration is that the features of the LoS path are easy to observe and capture due to its high strength [10], and LoS-SenNet is only used to sense the existence of the LoS path.
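A minimal Keras sketch of this architecture is given below. The \((3,6)\) pooling window is our assumption, chosen so that an \(L_{a}\times 2N\) input is reduced to \(\lfloor L_{a}/3\rfloor\times\lfloor N/3\rfloor\) as listed in TABLE I; it is a sketch, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

L_a, N = 5, 64

los_sennet = models.Sequential([
    layers.Input(shape=(L_a, 2 * N, 1)),
    layers.Conv2D(filters=1, kernel_size=(3, 3), padding="same", activation="relu"),  # one kernel only
    layers.MaxPooling2D(pool_size=(3, 6)),   # -> floor(L_a/3) x floor(N/3) x 1 (assumed window)
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),   # single output neuron o_u
])
los_sennet.compile(optimizer="adam", loss="mse")   # MSE objective, cf. Section 3.3

def hard_decision(o_u, thr=0.5):
    """Threshold the network output at 0.5 to obtain the binary sensed result."""
    return (o_u >= thr).astype(int)
```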
For training the LoS-SenNet, we employ the least-squares (LS) channel estimation for estimating the U2G channel \(\mathbf{g}_{u}\) to form the U2G channel matrix \(\widehat{\mathbf{G}}_{u}\in\mathbb{C}^{L_{a}\times N}\)[27]. To match the convolutional layer input size and input data type, the complex-valued \(\widehat{\mathbf{G}}_{u}\) is first transformed to a real-valued matrix. Then, the real-valued matrix is reshaped to \(\widetilde{\mathbf{G}}_{u}\in\mathbb{R}^{L_{a}\times 2N\times 1}\), which is expressed as
\[\widetilde{\mathbf{G}}_{u}=f_{\text{res}}\left(\left[\text{Re}( \widehat{\mathbf{G}}_{u}),\text{Im}(\widehat{\mathbf{G}}_{u})\right]\right), \tag{8}\]
where we use \(f_{\text{res}}\left(\cdot\right)\) to denote the reshaping operation. Using \(\widetilde{\mathbf{G}}_{u}\) as the input of the LoS-SenNet, the sensed result \(\chi_{u}\) of whether \(\widetilde{\mathbf{G}}_{u}\) contains the LoS path is obtained by
\[\begin{cases}o_{u}=f_{\text{LoS-SenNet}}(\widetilde{\mathbf{G}}_{u},\mathbf{ \Theta}_{\text{LoS-SenNet}})\\ \chi_{u}=f_{\text{dec}}(o_{u})\end{cases}, \tag{9}\]
where \(o_{u}\) is the output of the LoS-SenNet, \(f_{\text{LoS-SenNet}}\left(\cdot\right)\) denotes the mapping of LoS sensing operation, and \(\mathbf{\Theta}_{\text{LoS-SenNet}}\) is the network parameter. The \(f_{\text{dec}}(\cdot)\) is the hard decision operation with a threshold of 0.5, yielding two types of \(\chi_{u}\), i.e., 0 and 1. \(\chi_{u}=1\) indicates the U2G channel is with LoS path, otherwise, \(\chi_{u}=0\).
### 3.2 Superimposed CSI recovery
During the sensing phase, we utilize the superimposed interference cancellation method in [8] to obtain the initial feature of the compressed G2U CSI in the LoS or NLoS scenario. Subsequently, with the sensed result \(\chi_{u}\) and the initial feature of the compressed G2U CSI, the recovery of the G2U CSI and U2G data is performed.
#### 3.2.1 Initial feature extraction
To obtain the initial feature of compressed G2U CSI from the received signal \(\mathbf{Y}_{u}\), we adopt the superimposed interference cancellation method according to [8]. Specifically, as in [8], we first perform a despread operation on the received signal \(\mathbf{Y}_{u}\) to obtain a despread signal \(\mathbf{V}_{u}\in\mathbb{C}^{N\times N}\), i.e.,
\[\mathbf{V}_{u}=\mathbf{Y}_{u}\mathbf{Q}/M. \tag{10}\]
Then, the minimum mean squared error (MMSE) estimation of G2U CSI is obtained according to \(\mathbf{V}_{u}\)[8], which is expressed by
\[\widetilde{\mathbf{z}}_{u}=f_{\text{MMSE}}(\mathbf{V}_{u}),\]
where \(\widetilde{\mathbf{z}}_{u}\in\mathbb{C}^{1\times N}\) is the estimated G2U CSI and \(f_{\text{MMSE}}(\cdot)\) denotes the mapping function of the MMSE estimator. With \(\widetilde{\mathbf{z}}_{u}\), the interference cancellation technique is utilized to eliminate the impact of G2U CSI on the detection of U2G data [8], i.e.,
\[\widetilde{\mathbf{Y}}_{u}=\mathbf{Y}_{u}-\sqrt{\rho E_{u}/N}\mathbf{g}_{u} \widetilde{\mathbf{z}}_{u}\mathbf{Q}^{T}. \tag{11}\]
Subsequently, the MMSE detection is used to obtain the initial detected U2G data \(\widetilde{\mathbf{d}}_{u}\in\mathbb{C}^{1\times M}\), i.e., \(\widetilde{\mathbf{d}}_{u}=\mathcal{D}_{\text{MMSE}}(\widetilde{\mathbf{Y}}_{u})\) with \(\mathcal{D}_{\text{MMSE}}(\cdot)\) denoting the mapping function of the MMSE detector. With the initial detected U2G data \(\widetilde{\mathbf{d}}_{u}\), the impact of the U2G data on the estimated G2U CSI is eliminated by utilizing the interference cancellation technique, i.e., \(\bar{\mathbf{Y}}_{u}=\mathbf{Y}_{u}-\sqrt{(1-\rho)E_{u}}\,\mathbf{g}_{u}\widetilde{\mathbf{d}}_{u}\). Similar to (10), we utilize \(\bar{\mathbf{Y}}_{u}\) to obtain an improved despread signal \(\bar{\mathbf{V}}_{u}\), i.e., \(\bar{\mathbf{V}}_{u}=\bar{\mathbf{Y}}_{u}\mathbf{Q}/M\). Finally, we perform MMSE estimation on \(\bar{\mathbf{V}}_{u}\) to obtain the initial feature of compressed G2U CSI \(\widehat{\mathbf{z}}_{u}\in\mathbb{C}^{1\times N}\), i.e.,

\[\widehat{\mathbf{z}}_{u}=f_{\text{MMSE}}(\bar{\mathbf{V}}_{u}). \tag{12}\]
**TABLE I: Architecture of LoS-SenNet.**

| Layer | Size of output | Size of kernel | Number of kernels | Activation function |
| --- | --- | --- | --- | --- |
| Input | \(L_{a}\times 2N\times 1\) | - | - | - |
| Conv | \(L_{a}\times 2N\times 1\) | \(3\times 3\) | 1 | ReLU |
| Maxpool | \(\lfloor L_{a}/3\rfloor\times\lfloor N/3\rfloor\times 1\) | - | - | - |
| Flattening | \(\lfloor L_{a}/3\rfloor\lfloor N/3\rfloor\times 1\) | - | - | - |
| Full connection | \(1\times 1\) | - | - | sigmoid |
It should be noted that the MMSE detection is employed in the initial feature extraction to keep the same detection method as [8]. Other detection methods, e.g., zero forcing (ZF) detection, can also be adopted.
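Putting (10)-(12) together, a self-contained numpy sketch of the initial feature extraction is shown below. The `ls_csi` and `ls_detect` helpers are simple matched-filter stand-ins for the MMSE estimator and detector of [8], and the signal generation repeats the transmit-side sketch above; all names are ours.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
M, N, rho, E_u, sigma2 = 512, 64, 0.15, 1.0, 0.01

Q = hadamard(M)[:, :N].astype(float)
z = (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))) / np.sqrt(2)
d = (np.sign(rng.standard_normal((1, M))) + 1j * np.sign(rng.standard_normal((1, M)))) / np.sqrt(2)
g = (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))) / np.sqrt(2)

x = np.sqrt(rho * E_u) * (z @ Q.T / np.sqrt(N)) + np.sqrt((1 - rho) * E_u) * d
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
Y = g @ x + noise                                        # received signal, eq. (7)

def ls_csi(V, g):
    # matched-filter stand-in for the MMSE CSI estimator f_MMSE of [8]
    return (g.conj().T @ V) / (np.linalg.norm(g) ** 2 * np.sqrt(rho * E_u / N))

def ls_detect(Yd, g):
    # matched-filter stand-in for the MMSE data detector of [8]
    return (g.conj().T @ Yd) / (np.linalg.norm(g) ** 2 * np.sqrt((1 - rho) * E_u))

V = Y @ Q / M                                            # despreading, eq. (10)
z_tilde = ls_csi(V, g)                                   # first CSI estimate
Y_data = Y - np.sqrt(rho * E_u / N) * g @ z_tilde @ Q.T  # CSI cancellation, eq. (11)
d_tilde = ls_detect(Y_data, g)                           # initial detected U2G data
Y_csi = Y - np.sqrt((1 - rho) * E_u) * g @ d_tilde       # data cancellation
z_hat = ls_csi(Y_csi @ Q / M, g)                         # initial feature, eq. (12)
```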
#### 3.2.2 G2U CSI and U2G data recovery
After the initial feature extraction, we perform the recovery of the G2U CSI and U2G data. Specifically, with the sensed result \(\chi_{u}\), whether the initial feature of compressed G2U CSI \(\widehat{\mathbf{z}}_{u}\) corresponds to a LoS or NLoS scenario is determined, due to the same propagation environment [28]. For convenience of description, we denote the compressed G2U CSI with LoS as \(\widehat{\mathbf{z}}_{u,\text{LoS}}\) and the compressed G2U CSI without LoS as \(\widehat{\mathbf{z}}_{u,\text{NLoS}}\). Then, for \(\widehat{\mathbf{z}}_{u,\text{NLoS}}\), we develop the CSI recovery network (CSI-RecNet) and superimposed interference cancellation to recover the G2U CSI and U2G data. For \(\widehat{\mathbf{z}}_{u,\text{LoS}}\), unlike the NLoS case, a dedicated LoS-aid network (LoS-AidNet) is embedded (between the CSI-RecNet and the block of superimposed interference cancellation) to highlight the feature of the compressed G2U CSI.
Due to the significant feature of the G2U CSI in LoS scenarios [20], a lightweight network architecture is adopted for LoS-AidNet. The CSI-RecNet is assisted by the features extracted by the initial feature extraction or by LoS-AidNet; learning from \(\widehat{\mathbf{z}}_{u,\text{NLoS}}\) or the output of LoS-AidNet, it can likewise be designed as a lightweight network. To this end, both LoS-AidNet and CSI-RecNet are designed with the same neural network structure containing only a single hidden layer, which is beneficial for hardware reuse and thus saves costs. The network architectures of LoS-AidNet and CSI-RecNet are summarized in TABLE II, and the detailed descriptions are given as follows.
In LoS-AidNet, the numbers of neurons in the input layer, hidden layer, and output layer are set as \(2N\), \(4N\), and \(2N\), respectively. For CSI-RecNet, \(2N\), \(4L_{a}N\), and \(2L_{a}N\) neurons are adopted for the input layer, hidden layer, and output layer, respectively. The leaky rectified linear unit (LReLU) and tanh [25] are employed as the activation functions for the hidden layers of LoS-AidNet and CSI-RecNet, respectively, and linear activation is adopted for their output layers. For training the LoS-AidNet, we first convert the complex-valued \(\widehat{\mathbf{z}}_{u,\text{LoS}}\in\mathbb{C}^{1\times N}\) to the real-valued \(\widetilde{\mathbf{z}}_{u,\text{LoS}}\in\mathbb{R}^{1\times 2N}\) according to
\[\widetilde{\mathbf{z}}_{u,\text{LoS}}=[\mathrm{Re}(\widehat{\mathbf{z}}_{u, \text{LoS}}),\mathrm{Im}(\widehat{\mathbf{z}}_{u,\text{LoS}})]. \tag{13}\]
Then, the LoS-AidNet is used to refine the compressed G2U CSI by exploiting the sensed LoS scenario, which is expressed as
\[\breve{\mathbf{z}}_{u,\text{LoS}}=f_{\text{LoS-AidNet}}(\widetilde{\mathbf{z}}_{u,\text{LoS}},\mathbf{\Theta}_{\text{LoS-AidNet}}), \tag{14}\]

where \(\breve{\mathbf{z}}_{u,\text{LoS}}\) is the refined compressed G2U CSI, and \(f_{\text{LoS-AidNet}}(\cdot)\) and \(\mathbf{\Theta}_{\text{LoS-AidNet}}\) denote the mapping of the LoS learning operation and its network parameter, respectively. According to (11) along with \(\breve{\mathbf{z}}_{u,\text{LoS}}\), the U2G data \(\widehat{\mathbf{d}}_{u}\) is detected.
The sensed LoS and NLoS scenarios share the same network CSI-RecNet. By denoting the real-valued form of \(\widehat{\mathbf{z}}_{u,\text{NLoS}}\) as \(\widetilde{\mathbf{z}}_{u,\text{NLoS}}\in\mathbb{R}^{1\times 2N}\), we have
\[\widetilde{\mathbf{z}}_{u,\text{NLoS}}=[\mathrm{Re}\left(\widehat{\mathbf{z}}_ {u,\text{NLoS}}\right),\mathrm{Im}\left(\widehat{\mathbf{z}}_{u,\text{NLoS}} \right)]. \tag{15}\]
Then, using the CSI-RecNet, the recovered G2U CSI \(\widehat{\mathbf{h}}_{u}\) is given by
\[\widehat{\mathbf{h}}_{u}=f_{\text{CSI-RecNet}}(\widetilde{\mathbf{z}}_{u}, \mathbf{\Theta}_{\text{CSI-RecNet}}). \tag{16}\]
where \(\widetilde{\mathbf{z}}_{u}\) denotes the input of CSI-RecNet, i.e., \(\widetilde{\mathbf{z}}_{u,\text{NLoS}}\) or \(\breve{\mathbf{z}}_{u,\text{LoS}}\), and \(f_{\text{CSI-RecNet}}(\cdot)\) and \(\mathbf{\Theta}_{\text{CSI-RecNet}}\) denote the mapping of the G2U CSI recovery operation and its network parameter, respectively.
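Given TABLE II, the two single-hidden-layer networks could be sketched in Keras as follows; this is a sketch under the stated layer sizes, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

L_a, N = 5, 64

los_aidnet = models.Sequential([
    layers.Input(shape=(2 * N,)),
    layers.Dense(4 * N, activation=tf.nn.leaky_relu),   # LReLU hidden layer
    layers.Dense(2 * N, activation="linear"),
])
los_aidnet.compile(optimizer="adam", loss="mse")        # MSE objective, cf. (18)

csi_recnet = models.Sequential([
    layers.Input(shape=(2 * N,)),
    layers.Dense(4 * L_a * N, activation="tanh"),
    layers.Dense(2 * L_a * N, activation="linear"),
])
csi_recnet.compile(optimizer="adam", loss="mse")        # MSE objective, cf. (19)
```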
### 3.3 Off-line training and online deployment
The details of the dataset collection are given as follows. Due to the effects in the UAV scenarios (geographical and geomorphic differences, weather effects, etc.), the completeness of the collected real data cannot be guaranteed. Besides, the complete collection of real data is costly and time-consuming [29]. Thus, we employ the Clustered-Delay-Line (CDL) channel model of 5G standard [23] to capture the spatial characteristics in the UAV-assisted mmWave system. According to 3GPP TS 38.901 [23], the CDL channel model is widely used for link level evaluation, which is considered to be very close to the real scenarios [30]. Specifically, the CDL model is often implemented by phase initialization along four different polarizations and generating coefficients for each cluster [31]. We employ the CDL-A and CDL-D models to generate the G2U and U2G channels. The CDL-A is used to form NLoS transmission channels, and the CDL-D is used to generate LoS transmission channels [23]. The proposed LoS sensing-based superimposed CSI feedback scheme is summarized in Algorithm 1.
#### 3.3.1 Off-line training
The training set \(\{\widetilde{\mathbf{G}}_{u}^{(tr)},e_{u}\}\) is used to train the LoS-SenNet, where \(e_{u}\in\{0,1\}\) is the training label of LoS-SenNet. \(e_{u}=1\) and \(e_{u}=0\) are respectively employed to label the U2G channel with or without the LoS path. The training data \(\widetilde{\mathbf{G}}_{u}^{(tr)}\) is formed according to (8). For the LoS-AidNet, the training data is formed according to (13), and thus forms the training set \(\{\widetilde{\mathbf{z}}_{u,\text{LoS}}^{(tr)},\mathbf{z}_{u}\}\). According to (14) and (15), we collect \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(tr)}\) and \(\widetilde{\mathbf{z}}_{u,\text{NLoS}}^{(tr)}\) to form the training set \(\{\widetilde{\mathbf{z}}_{u}^{(tr)},\mathbf{h}_{u}\}\) for training the CSI-RecNet. The training sets of LoS-SenNet, LoS-AidNet, and CSI-RecNet have 100,000, 60,000, and 30,000 samples, respectively, while their validation sets have 10,000, 6,000, and 3,000 samples, respectively.

**TABLE II: Architecture of LoS-AidNet and CSI-RecNet (the input layers use no activation).**

| Network | Input neurons | Hidden neurons | Hidden activation | Output neurons | Output activation |
| --- | --- | --- | --- | --- | --- |
| LoS-AidNet | \(2N\) | \(4N\) | LReLU | \(2N\) | Linear |
| CSI-RecNet | \(2N\) | \(4L_{a}N\) | tanh | \(2L_{a}N\) | Linear |
In addition, the equivalent signal-to-noise ratio (SNR) in decibel (dB) is defined as \(\text{SNR}=10\text{log}_{10}(E_{u}/\sigma_{u}^{2})\) according to [26]. The LoS-SenNet is trained in a noise-free setting. The training SNR of LoS-AidNet is set as 10dB. When training the CSI-RecNet, the training SNR is set as 20dB. The Adam optimizer is used to update the network parameters of each network [32]. We utilize \(\alpha\), \(\gamma\), and \(\xi\) to represent the training epochs of LoS-SenNet, LoS-AidNet, and CSI-RecNet, respectively, where \(\alpha=20\), \(\gamma=50\), and \(\xi=50\).
The optimization goal of LoS-SenNet is to minimize the mean squared error (MSE) between \(o_{u}\) and \(e_{u}\), which is expressed as
\[\min_{\mathbf{\Theta}_{\text{LoS-SenNet}}}E[\left\|f_{\text{LoS-SenNet}}(\widetilde{\mathbf{G}}_{u}^{(tr)},\mathbf{\Theta}_{\text{LoS-SenNet}})-e_{u}\right\|^{2}]. \tag{17}\]
For training LoS-AidNet, the MSE of the refined compressed G2U CSI with the LoS path, i.e., \(E[\left\|\breve{\mathbf{z}}_{u,\text{LoS}}^{(tr)}-\mathbf{z}_{u}\right\|^{2}]\), is minimized, which is given by
\[\min_{\mathbf{\Theta}_{\text{LoS-AidNet}}}E[\left\|f_{\text{LoS-AidNet}}( \widetilde{\mathbf{z}}_{u,\text{LoS}}^{(tr)},\mathbf{\Theta}_{\text{LoS-AidNet }})-\mathbf{z}_{u}\right\|^{2}]. \tag{18}\]
Similarly, the CSI-RecNet minimizes the MSE of the recovered G2U CSI, i.e., \(E[\left\|\widehat{\mathbf{h}}_{u}^{(tr)}-\mathbf{h}_{u}\right\|^{2}]\), which is expressed as

\[\min_{\mathbf{\Theta}_{\text{CSI-RecNet}}}E[\left\|f_{\text{CSI-RecNet}}(\widetilde{\mathbf{z}}_{u}^{(tr)},\mathbf{\Theta}_{\text{CSI-RecNet}})-\mathbf{h}_{u}\right\|^{2}]. \tag{19}\]
We perform the training once for LoS-SenNet, LoS-AidNet, and CSI-RecNet, and save the trained network parameters for the online running.
```
[Off-line training stage]:
Input: Training sets \(\{\widetilde{\mathbf{G}}_{u}^{(tr)},e_{u}\}\), \(\{\widetilde{\mathbf{z}}_{u,\text{LoS}}^{(tr)},\mathbf{z}_{u}\}\), \(\{\widetilde{\mathbf{z}}_{u}^{(tr)},\mathbf{h}_{u}\}\); the training epochs of the LoS-SenNet, LoS-AidNet, and CSI-RecNet: \(\alpha\), \(\gamma\), \(\xi\).
Output: The network parameters \(\mathbf{\Theta}_{\text{LoS-SenNet}}\), \(\mathbf{\Theta}_{\text{LoS-AidNet}}\), \(\mathbf{\Theta}_{\text{CSI-RecNet}}\).
LoS-SenNet:
  Randomly initialize the parameter of LoS-SenNet as \(\mathbf{\Theta}_{\text{LoS-SenNet}}\).
  for \(i=1,2,\ldots,\alpha\) do
    Update \(\mathbf{\Theta}_{\text{LoS-SenNet}}\) by using the Adam optimizer and (17) with the current dataset \(\{\widetilde{\mathbf{G}}_{u}^{(tr)},e_{u}\}\).
  end for
  Save the network parameter \(\mathbf{\Theta}_{\text{LoS-SenNet}}\).
LoS-AidNet:
  Randomly initialize the parameter of LoS-AidNet as \(\mathbf{\Theta}_{\text{LoS-AidNet}}\).
  for \(i=1,2,\ldots,\gamma\) do
    Update \(\mathbf{\Theta}_{\text{LoS-AidNet}}\) by using the Adam optimizer and (18) with the current dataset \(\{\widetilde{\mathbf{z}}_{u,\text{LoS}}^{(tr)},\mathbf{z}_{u}\}\).
  end for
  Save the network parameter \(\mathbf{\Theta}_{\text{LoS-AidNet}}\).
CSI-RecNet:
  Randomly initialize the parameter of CSI-RecNet as \(\mathbf{\Theta}_{\text{CSI-RecNet}}\).
  for \(i=1,2,\ldots,\xi\) do
    Update \(\mathbf{\Theta}_{\text{CSI-RecNet}}\) by using the Adam optimizer and (19) with the current dataset \(\{\widetilde{\mathbf{z}}_{u}^{(tr)},\mathbf{h}_{u}\}\).
  end for
  Save the network parameter \(\mathbf{\Theta}_{\text{CSI-RecNet}}\).

[Online running stage]:
Input: The received signal \(\widetilde{\mathbf{Y}}_{u}\).
Output: The detected U2G data \(\widehat{\mathbf{d}}_{u}^{(te)}\) and the recovered G2U CSI \(\widehat{\mathbf{h}}_{u}^{(te)}\).
Perform LS channel estimation on \(\widetilde{\mathbf{Y}}_{u}\) to obtain the U2G channel matrix \(\widetilde{\mathbf{G}}_{u}^{(te)}\).
Obtain the sensed result \(\chi_{u}^{(te)}\) by loading \(\widetilde{\mathbf{G}}_{u}^{(te)}\) and \(\mathbf{\Theta}_{\text{LoS-SenNet}}\) into LoS-SenNet and then performing the hard decision operation.
Perform the initial feature extraction to obtain the initial feature of compressed G2U CSI \(\widehat{\mathbf{z}}_{u}^{(te)}\) from \(\widetilde{\mathbf{Y}}_{u}\).
if \(\chi_{u}^{(te)}=1\) then
  \(\widehat{\mathbf{z}}_{u}^{(te)}=\widehat{\mathbf{z}}_{u,\text{LoS}}^{(te)}\).
  Obtain \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(te)}\) by loading \(\widehat{\mathbf{z}}_{u,\text{LoS}}^{(te)}\) and \(\mathbf{\Theta}_{\text{LoS-AidNet}}\) into LoS-AidNet.
  Obtain the detected U2G data \(\widehat{\mathbf{d}}_{u}^{(te)}\) according to (11) along with \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(te)}\).
  Obtain the recovered G2U CSI \(\widehat{\mathbf{h}}_{u}^{(te)}\) by loading \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(te)}\) and \(\mathbf{\Theta}_{\text{CSI-RecNet}}\) into CSI-RecNet.
else
  \(\widehat{\mathbf{z}}_{u}^{(te)}=\widehat{\mathbf{z}}_{u,\text{NLoS}}^{(te)}\).
  Obtain the detected U2G data \(\widehat{\mathbf{d}}_{u}^{(te)}\) according to (11) along with \(\widehat{\mathbf{z}}_{u,\text{NLoS}}^{(te)}\).
  Obtain the recovered G2U CSI \(\widehat{\mathbf{h}}_{u}^{(te)}\) by loading \(\widehat{\mathbf{z}}_{u,\text{NLoS}}^{(te)}\) and \(\mathbf{\Theta}_{\text{CSI-RecNet}}\) into CSI-RecNet.
end if
```
**Algorithm 1** The algorithm of the proposed LoS sensing-based superimposed CSI feedback scheme.
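For concreteness, the off-line stage of Algorithm 1 might be scripted as below, reusing the Keras model objects from the earlier sketches. The random arrays are mere placeholders for the CDL-generated training sets described above, and all names are ours.

```python
import numpy as np

L_a, N = 5, 64
alpha, gamma, xi = 20, 50, 50                     # epochs from Section 3.3.1

# Placeholder training sets; the real ones are drawn from CDL-A/CDL-D channels.
G_tr = np.random.randn(1000, L_a, 2 * N, 1).astype("float32")
e_tr = np.random.randint(0, 2, size=(1000, 1)).astype("float32")
zL_tr, z_tr = np.random.randn(600, 2 * N), np.random.randn(600, 2 * N)
zc_tr, h_tr = np.random.randn(300, 2 * N), np.random.randn(300, 2 * L_a * N)

los_sennet.fit(G_tr, e_tr, epochs=alpha)          # minimizes (17)
los_aidnet.fit(zL_tr, z_tr, epochs=gamma)         # minimizes (18)
csi_recnet.fit(zc_tr, h_tr, epochs=xi)            # minimizes (19)
```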
#### 3.3.2 Online deployment
At the online running stage, with the received signal \(\widetilde{\mathbf{Y}}_{u}\) (i.e., \(\mathbf{Y}_{u}\) of the system model at the online operation stage), we employ the LS channel estimation to obtain the U2G channel matrix \(\widetilde{\mathbf{G}}_{u}^{(te)}\). Meanwhile, we utilize the initial feature extraction to obtain the compressed G2U CSI initial feature \(\widehat{\mathbf{z}}_{u}^{(te)}\) from the received signal \(\widetilde{\mathbf{Y}}_{u}\). Then,
with \(\widetilde{\mathbf{G}}_{u}^{(te)}\) and \(\mathbf{\Theta}_{\text{LoS-SenNet}}\), we obtain the sensed result \(\chi_{u}^{(te)}\) via LoS-SenNet and the hard decision operation. If \(\chi_{u}^{(te)}=1\), the G2U channel includes the LoS path, i.e., \(\widehat{\mathbf{z}}_{u}^{(te)}=\widehat{\mathbf{z}}_{u,\text{LoS}}^{(te)}\). If \(\chi_{u}^{(te)}=0\), the G2U channel excludes the LoS path, i.e., \(\widehat{\mathbf{z}}_{u}^{(te)}=\widehat{\mathbf{z}}_{u,\text{NLoS}}^{(te)}\). When \(\widehat{\mathbf{z}}_{u}^{(te)}=\widehat{\mathbf{z}}_{u,\text{LoS}}^{(te)}\), we load \(\widehat{\mathbf{z}}_{u,\text{LoS}}^{(te)}\) and \(\mathbf{\Theta}_{\text{LoS-AidNet}}\) into LoS-AidNet to get \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(te)}\). Then, according to (11) along with \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(te)}\), the detected U2G data \(\widehat{\mathbf{d}}_{u}^{(te)}\) is obtained. Simultaneously, the recovered G2U CSI \(\widehat{\mathbf{h}}_{u}^{(te)}\) is obtained by CSI-RecNet with \(\mathbf{\Theta}_{\text{CSI-RecNet}}\) and \(\breve{\mathbf{z}}_{u,\text{LoS}}^{(te)}\). When \(\widehat{\mathbf{z}}_{u}^{(te)}=\widehat{\mathbf{z}}_{u,\text{NLoS}}^{(te)}\), according to (11), we directly utilize the superimposed interference cancellation method to obtain the detected U2G data \(\widehat{\mathbf{d}}_{u}^{(te)}\). Meanwhile, the recovered G2U CSI \(\widehat{\mathbf{h}}_{u}^{(te)}\) is obtained via loading \(\widehat{\mathbf{z}}_{u,\text{NLoS}}^{(te)}\) and \(\mathbf{\Theta}_{\text{CSI-RecNet}}\) into CSI-RecNet.
## 4 Energy consumption and computational complexity
### 4.1 Energy consumption analysis
Compared with the non-superimposed scheme (i.e., the mode of time division multiplexing) with the same transmitted power and data rate, the proposed scheme saves the energy consumption of the UAV transmitter due to its superimposed mode, and thus prolongs the UAV's battery life. For the non-superimposed scheme, \(M+N\) symbols are transmitted from the UAV to the gBS, where \(M\) is the symbol number of the U2G data and \(N\) is the symbol number of the G2U CSI. By contrast, the number of transmitted symbols in the proposed scheme is reduced to \(M\). With transmitted power \(E_{u}\) and symbol period \(T_{\text{sym}}\), the energy consumption of the non-superimposed CSI feedback is \((M+N)E_{u}T_{\text{sym}}\), while the energy consumed by the proposed scheme is \(ME_{u}T_{\text{sym}}\). To this end, the energy \(NE_{u}T_{\text{sym}}\) is saved for the UAV by utilizing the proposed scheme. Usually, to remedy the significant path attenuation, the gBS in UAV-assisted mmWave systems is equipped with massive antennas [33], resulting in the dimension of the G2U CSI (i.e., \(N\)) being prohibitively large [34]. Thus, the saved energy, i.e., \(NE_{u}T_{\text{sym}}\), is substantially effective in prolonging the battery life of a UAV.
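The bookkeeping behind this saving is simple; with \(E_{u}\) and \(T_{\text{sym}}\) normalized to 1, the short sketch below reproduces the \(NE_{u}T_{\text{sym}}\) gap (illustrative values only).

```python
M, N, E_u, T_sym = 512, 64, 1.0, 1.0             # illustrative, normalized values

energy_nonsuperimposed = (M + N) * E_u * T_sym   # time-division feedback
energy_proposed = M * E_u * T_sym                # superimposed feedback
print(energy_nonsuperimposed - energy_proposed)  # saved energy: N * E_u * T_sym = 64.0
```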
### 4.2 Computational complexity analysis
The comparison of computational complexity is given in TABLE III, where the number of floating-point operations (FLOPs) is considered as the metric of computational complexity. The number of FLOPs is obtained by counting how many computations the model performs, which determines the computational time of the model [35]. In this paper, a complex multiplication is counted as a single FLOP. For convenience of description, "Proposed" denotes the proposed LoS sensing-assisted superimposed CSI feedback; "Ref [8]" represents the conventional superimposed CSI feedback in [8]; "Ref [9]" stands for the ELM-based CSI feedback in [9].
In the proposed scheme, the computational complexity of LoS-SenNet is equivalent to \((36L_{a}N+\lfloor{L_{a}/3}\rfloor\lfloor{N/3}\rfloor)/4\), the initial feature extraction has the computational complexity of \(3N+3N^{2}+3N^{2}M+2NM\), the computational complexity of LoS-AidNet is equivalent to \((16N^{2})/4\), and the computational complexity of CSI-RecNet is equivalent to \((8L_{a}N^{2}+8L_{a}^{2}N^{2})/4\). Thus, the total complexity of the proposed scheme (including LoS-SenNet, initial feature extraction, LoS-AidNet, and CSI-RecNet) is \(3N+7N^{2}+3N^{2}M+2NM+9L_{a}N+(\lfloor{L_{a}/3}\rfloor\lfloor{N/3}\rfloor)/4 +2L_{a}N^{2}+2L_{a}^{2}N^{2}\). In addition, the total complexity of the proposed scheme without LoS-SenNet and LoS-AidNet is \(3N+3N^{2}+3N^{2}M+2NM+2L_{a}N^{2}+2L_{a}^{2}N^{2}\). The conventional superimposed CSI feedback in [8] has the computational complexity of \(6N+6NM+6L_{a}N^{2}+6L_{a}N^{2}M\). In [9], the computational complexity is \(4L_{a}NM+32ML_{a}N+32M^{2}\). For the case where \(N=64\), \(M=512\), and \(L_{a}=5\), the computational complexities in "Proposed", "Ref [8]", and "Ref [9]" are \(6,634,507\), \(63,234,432\), and \(14,286,848\), respectively. Therefore, the proposed scheme has a lower computational complexity than those of [8] and [9].
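The quoted totals can be checked directly from the closed-form expressions above; the sketch below does so for \(N=64\), \(M=512\), and \(L_{a}=5\) (the "Proposed" figure agrees up to small rounding of the floor term).

```python
from math import floor

N, M, La = 64, 512, 5

proposed = (3*N + 7*N**2 + 3*N**2*M + 2*N*M + 9*La*N
            + floor(La / 3) * floor(N / 3) / 4 + 2*La*N**2 + 2*La**2*N**2)
ref8 = 6*N + 6*N*M + 6*La*N**2 + 6*La*N**2*M
ref9 = 4*La*N*M + 32*M*La*N + 32*M**2

print(round(proposed), ref8, ref9)   # ~6.63e6, 63234432, 14286848
```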
On the whole, the proposed scheme saves the energy consumption of the UAV transmitter and reduces the computational complexity of the gBS receiver. With the saved UAV transmitter energy and the reduced gBS receiver complexity, we further validate in Section 5 that the proposed scheme can improve the normalized mean squared error (NMSE) and bit error rate (BER) at the gBS.
## 5 Experiment results
With the benefits for the UAV transmitter and gBS receiver given in Section 4, in this section, we validate that the proposed scheme can further improve the NMSE and BER at the gBS receiver. The CDL channel model in the 5G standard [23][36], which is widely used and verified to be effective [30], is employed to verify the effectiveness of the proposed scheme. In Section 5.1, the definitions and basic parameters involved in the simulations are first given. Then, to verify the effectiveness of the proposed scheme, the NMSE of the G2U CSI and the BER of the U2G data are given in Section 5.2. Finally, the robustness of the proposed scheme is presented in Section 5.3.
### 5.1 Parameters setting
Basic parameters and definitions involved in the simulations are given as follows. In the simulation setup, \(L_{a}=5\), \(N=64\), and \(M=512\) are considered. We adopt the Walsh matrix as the spreading matrix \(\mathbf{Q}\) [26]. The modulated U2G data \(\mathbf{d}_{u}\) is formed by using QPSK modulation. For LoS-SenNet, LoS-AidNet, and CSI-RecNet, the testing sets are generated in the same way as the training sets (given in Section 3.3). The definitions of NMSE and BER follow [26],
where the NMSE is defined as
\[\text{NMSE}=\ \frac{\left\|\mathbf{h}_{u}-\widehat{\mathbf{h}}_{u}\right\|_{2}^{2 }}{\left\|\mathbf{h}_{u}\right\|_{2}^{2}}. \tag{20}\]
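In code, the two metrics amount to the following helpers (a minimal sketch; names are ours):

```python
import numpy as np

def nmse(h, h_hat):
    """NMSE of eq. (20), averaged over a batch of channel vectors."""
    num = np.sum(np.abs(h - h_hat) ** 2, axis=-1)
    den = np.sum(np.abs(h) ** 2, axis=-1)
    return float(np.mean(num / den))

def ber(bits, bits_hat):
    """Bit error rate of the detected U2G data."""
    return float(np.mean(np.asarray(bits) != np.asarray(bits_hat)))
```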
In this paper, the NMSE and BER of the proposed scheme are compared with those of [8] and [9]. The testing sets of LoS-SenNet, LoS-AidNet, and CSI-RecNet contain 30,000, 18,000, and 9,000 samples, respectively. We stop the BER testing when at least 1,000 bit errors are observed. For the effectiveness validation, the power proportional coefficient is set to \(\rho=0.15\). Furthermore, the ratio of the number of G2U CSI with LoS to the total number of G2U CSI is denoted by \(\beta\), which is set as 0.7 for the effectiveness validation. In addition, to verify the effectiveness and robustness of LoS-SenNet and LoS-AidNet, the proposed scheme without LoS-SenNet and LoS-AidNet, denoted as "Proposed (without LoS-Sen & LoS-Aid)", is also simulated.
### 5.2 NMSE and BER
In our work, the performance tradeoff between sensing and transmission is achieved by balancing the computational complexity and the performance of NMSE and BER. To validate the effectiveness of the proposed scheme, the NMSE of the G2U CSI and the BER of the U2G data are given in Fig. 2 and Fig. 3, respectively.
#### 5.2.1 NMSE performance analysis
As shown in Fig. 2, the NMSE curve of the "Proposed" is lower than those of "Ref [8]" and "Ref [9]" for each given SNR. For example, when \(\text{SNR}=10\text{dB}\), the NMSEs of "Ref [8]" and "Ref [9]" are larger than \(3.6\times 10^{-1}\), while the NMSE of "Proposed" is less than \(4.4\times 10^{-2}\). The "Proposed" achieves a better NMSE performance for the following reasons. On the one hand, at the UAV transmitter, the "Proposed" compresses its G2U CSI and then employs spread spectrum technology to capture a larger spread spectrum gain than those of "Ref [8]" and "Ref [9]", which effectively suppresses the superimposed interference between the G2U CSI and U2G data. Without the compression, the spread spectrum gains obtained by "Ref [8]" and "Ref [9]" are insufficient, and thus they encounter poor NMSE performance. On the other hand, at the gBS receiver, the "Proposed" exploits the LoS features in the G2U CSI to assist the recovery of G2U CSI by using LoS-SenNet and LoS-AidNet. Furthermore, "Proposed (without LoS-Sen & LoS-Aid)" is equivalent to a scheme that only considers NLoS transmission scenarios. Meanwhile, for each given SNR, the NMSE curve of "Proposed (without LoS-Sen & LoS-Aid)" is lower than those of "Ref [8]" and "Ref [9]", indicating the role of CSI-RecNet in CSI reconstruction. In addition, from Fig. 2, it can be observed that the G2U CSI's NMSE of the "Proposed" is smaller than that of the "Proposed (without LoS-Sen & LoS-Aid)", which confirms that the LoS-SenNet and LoS-AidNet play an important role in the recovery of G2U CSI. Thus, for the recovery of G2U CSI, the proposed scheme shows a smaller NMSE than those of "Ref [8]" and "Ref [9]", reflecting its effectiveness in improving the NMSE of G2U CSI. Besides, compared with the "Proposed (without LoS-Sen & LoS-Aid)", the LoS-SenNet and LoS-AidNet demonstrate their effectiveness in reducing the NMSE of G2U CSI.
#### 5.2.2 BER performance analysis
Fig. 3 illustrates the effectiveness of the proposed scheme in terms of the BER of U2G data. As shown in Fig. 3, the BERs of "Ref [8]" and "Ref [9]" are considerably higher than that of "Proposed" in the given SNR region. For example, when \(\text{SNR}=10\)dB, the BERs of "Ref [8]" and "Ref [9]" reach about \(3.1\times 10^{-2}\) and \(1.0\times 10^{-2}\), respectively, while the BER of the "Proposed" is less than \(3.6\times 10^{-3}\). The "Proposed" improves the BER performance compared with "Ref [8]" and "Ref [9]". One reason is that the G2U CSI is compressed at the UAV transmitter, achieving a larger spread spectrum gain than those of "Ref [8]" and "Ref [9]". In particular, at the gBS receiver, the "Proposed" optimizes the recovery of the compressed G2U CSI by using LoS-SenNet and LoS-AidNet to respectively sense the LoS scenario and exploit the LoS features, which eases superimposed interference cancellation and thus improves the detection correctness. Furthermore, "Proposed (without LoS-Sen & LoS-Aid)" can be regarded as a scheme that only considers the NLoS transmission scenario; its BER curve for each given SNR is lower than that of "Ref [8]" and equivalent to that of "Ref [9]", which benefits from the spread spectrum gain at the UAV transmitter. In addition, the U2G data's BER of the "Proposed" is smaller than that of the "Proposed (without LoS-Sen & LoS-Aid)", reflecting that the LoS-SenNet and LoS-AidNet are effective in exploiting the LoS features to improve the recovery of the compressed G2U CSI. On the whole, compared with "Ref [8]" and "Ref [9]", the proposed scheme effectively reduces the BER of the U2G data. Relative to "Proposed (without LoS-Sen & LoS-Aid)", the effectiveness of the LoS-AidNet is also validated.
### Robustness analysis
In this subsection, the robustness of the "Proposed" scheme is analyzed against the impacts of varying parameters, i.e., the power proportional coefficient \(\rho\) and the ratio \(\beta\) of the number of G2U CSI with LoS to the total G2U CSI set. For ease of analysis, only the analyzed parameter (i.e., \(\rho\) or \(\beta\)) is varied, while the other fundamental parameters remain the same as those given in Section 5.1.
#### 5.3.1 Robustness against \(\rho\)
To evaluate the NMSE performance against the impact of \(\rho\), the NMSE curves for varying \(\rho\) (i.e., \(\rho=0.10\), \(\rho=0.15\), and \(\rho=0.20\)) are presented in Fig. 4. For each given \(\rho\), the NMSE of the "Proposed" is smaller than those of "Ref [8]" and "Ref [9]" in the given SNR region. As \(\rho\) increases, the NMSEs of "Ref [9]", "Proposed (without LoS-Sen & LoS-Aid)", and "Proposed" decrease, and vice versa. The reason is that the G2U CSI obtains more transmission power with a larger value of \(\rho\). The NMSE of "Ref [8]" first decreases and then increases with the increase of \(\rho\). The reason is that, at relatively low SNR, e.g., SNR\(\leq 10\)dB, the NMSE performance is mainly affected by the parameter \(\rho\). The NMSE of the "Proposed" is smaller than those of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" in the whole SNR region. For example, in the case of \(\rho=0.20\) and SNR \(=14\)dB, the NMSE of "Proposed" is less than \(1.5\times 10^{-2}\), while the NMSEs of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" reach about \(4.6\times 10^{-1}\), \(1.4\times 10^{-1}\), and \(2.0\times 10^{-2}\), respectively. Thus, against the impact of \(\rho\), the proposed scheme reduces the NMSE of G2U CSI compared with "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)".
To verify the BER performance against the impact of \(\rho\), the BER curves with varying \(\rho\) are plotted in Fig. 5. From Fig. 5, the BERs of "Ref [8]", "Ref [9]", "Proposed (without LoS-Sen & LoS-Aid)", and "Proposed" increase as \(\rho\) increases, and vice versa. The reason is that the larger the power proportional coefficient \(\rho\) is, the more power is allocated to the G2U CSI and the less power is reserved for the U2G data, resulting in poorer U2G data detection performance. In addition, with the increase of SNR, the BERs of "Ref [8]", "Proposed (without LoS-Sen & LoS-Aid)", and "Proposed" encounter an error floor due to the superimposed interference. Even so, for each given \(\rho\) and SNR, the BER of the "Proposed" is smaller than those of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)". Therefore, the proposed scheme improves the BER performance against the impact of \(\rho\).
To sum up, against the impact of \(\rho\), the proposed scheme improves the NMSE and BER compared with "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)".
#### 5.3.2 Robustness against \(\beta\)
To provide more insights into the robustness of the NMSE performance against the impact of \(\beta\), the NMSE curves with different values of \(\beta\) (i.e., \(\beta=0.60\), \(\beta=0.70\), and \(\beta=0.80\)) are shown in Fig. 6. From Fig. 6, for each given \(\beta\), the NMSE of the "Proposed" is smaller than those of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" in the given SNR region. For example, in the case of \(\beta=0.80\) and \(\text{SNR}=16\)dB, the NMSE of "Proposed" is less than \(8.5\times 10^{-3}\), while the NMSEs of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" reach about \(3.5\times 10^{-1}\), \(1.2\times 10^{-1}\), and \(1.1\times 10^{-2}\), respectively. This confirms that the proposed scheme benefits from exploiting the inherent properties of UAV-assisted mmWave systems, namely sensing the LoS scenario with LoS-SenNet and exploiting the LoS features with LoS-AidNet to assist the recovery of the compressed G2U CSI. As \(\beta\) increases, the NMSEs of "Ref [8]" and "Ref [9]" remain approximately unchanged, because the prior information of LoS is not exploited. On the contrary, for the "Proposed" and "Proposed (without LoS-Sen & LoS-Aid)", the NMSEs decrease with the increase of \(\beta\). An increased \(\beta\) leads to a smaller sparsity due to the increased possibility of LoS, which is conducive to compressing and recovering the G2U CSI. In particular, at the gBS receiver, the LoS scenarios are sensed by LoS-SenNet and the LoS features are further exploited by LoS-AidNet. Thus, the "Proposed" obtains better recovery accuracy than "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)". On the whole, against the impact of \(\beta\), the NMSEs of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" are reduced by using the "Proposed".
To highlight the robustness of the BER performance against the impact of \(\beta\), Fig. 7 depicts the BER performance of the U2G data detection. It can be observed from Fig. 7 that as \(\beta\) increases from 0.6 to 0.8, the BERs of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" remain approximately equal. On the contrary, the BER of the "Proposed" decreases as \(\beta\) increases, which can be interpreted as the proposed scheme improving the BER performance by refining the recovery accuracy of the compressed G2U CSI. Namely, our proposal refines the recovery accuracy of the compressed G2U CSI by making full use of the prior information of LoS via LoS-SenNet and LoS-AidNet, and then utilizes the refined compressed G2U CSI for superimposed interference cancellation to recover the U2G data. Therefore, an increase in \(\beta\) has the same impact on the BER of the U2G data as it does on the NMSE of the G2U CSI. Moreover, for each given \(\beta\), the BER of the "Proposed" is smaller than those of "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)" in the given SNR region. In a word, the proposed scheme improves the BER performance against the impact of \(\beta\).
To sum up, despite the impact of \(\beta\), Fig. 6 and Fig. 7 illustrate that the "Proposed" could still improve the NMSE and BER compared with "Ref [8]", "Ref [9]", and "Proposed (without LoS-Sen & LoS-Aid)". Furthermore, according to the curves of "Proposed" and "Proposed (without LoS-Sen & LoS-Aid)" in Fig. 6 and Fig. 7, it is also validated that the proposed scheme applies not only to LoS transmissions but also to NLoS transmissions.
## 6 Conclusion

In this paper, a LoS sensing-based superimposed CSI feedback scheme has been proposed for UAV-assisted mmWave systems. Three lightweight networks are implemented to refine the recovery accuracy of the G2U CSI, thereby improving the detection accuracy of the U2G data. Inspired by ISAC, the first network, LoS-SenNet, senses whether the U2G channel contains the LoS path; then the superimposed U2G data and G2U CSI are recovered by the superimposed interference cancellation and two dedicated lightweight neural networks, i.e., LoS-AidNet and CSI-RecNet. Compared with other CSI feedback solutions, the proposed scheme reduces the computational complexity of the gBS receiver, decreases the energy consumption of the UAV transmitter, and prolongs the battery life. Simulation results validate the effectiveness of the proposed scheme in terms of lower NMSE and BER, as well as its robustness against the impact of varying parameters. In our future work, we will consider real data from practical channel scenarios to promote the application of LoS sensing-based superimposed CSI feedback in practical systems. Meanwhile, we will study online training to alleviate the impact of off-line training on real-time performance.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgements
The authors would like to acknowledge the support of the Sichuan Science and Technology Program (Grant No.2021JDRC0003, 23ZDYR0243, 2021YFG0064), the Demonstration Project of Chengdu Major Science and Technology Application (Grant No. 2020-YP09-00048-SN), the Special Funds of Industry Development of Sichuan Province (Grant No. zyf-2018-056), and the Industry-University Research Innovation Fund of China University (Grant No. 2021ITA10016/cxy0743).
|
2306.02796 | MCTS: A Multi-Reference Chinese Text Simplification Dataset | Text simplification aims to make the text easier to understand by applying
rewriting transformations. There has been very little research on Chinese text
simplification for a long time. The lack of generic evaluation data is an
essential reason for this phenomenon. In this paper, we introduce MCTS, a
multi-reference Chinese text simplification dataset. We describe the annotation
process of the dataset and provide a detailed analysis. Furthermore, we
evaluate the performance of several unsupervised methods and advanced large
language models. We additionally provide Chinese text simplification parallel
data that can be used for training, acquired by utilizing machine translation
and English text simplification. We hope to build a basic understanding of
Chinese text simplification through the foundational work and provide
references for future research. All of the code and data are released at
https://github.com/blcuicall/mcts/. | Ruining Chong, Luming Lu, Liner Yang, Jinran Nie, Zhenghao Liu, Shuo Wang, Shuhan Zhou, Yaoxin Li, Erhong Yang | 2023-06-05T11:46:36Z | http://arxiv.org/abs/2306.02796v3 | # MCTS: A Multi-Reference Chinese Text Simplification Dataset
###### Abstract
Text simplification aims to make the text easier to understand by applying rewriting transformations. There has been very little research on Chinese text simplification for a long time. The lack of generic evaluation data is an essential reason for this phenomenon. In this paper, we introduce MCTS, a multi-reference Chinese text simplification dataset. We describe the annotation process of the dataset and provide a detailed analysis of it. Furthermore, we evaluate the performance of some unsupervised methods and advanced large language models. We hope to build a basic understanding of Chinese text simplification through the foundational work and provide references for future research. We release our data at [https://github.com/blcuicall/mcts](https://github.com/blcuicall/mcts).
## 1 Introduction
The task of text simplification aims to make the text easier to understand by performing multiple rewriting transformations. It can provide reading assistance for children (Kajiwara et al., 2013), non-native speakers (Paetzold, 2016) and people with language disorders (Carroll et al., 1998; Paetzold, 2016; Evans et al., 2014). Moreover, text simplification can also be used as a method of data augmentation to benefit downstream natural language processing (NLP) tasks (Van et al., 2021).
For a long time, the research of text simplification systems has mainly depended on large-scale parallel corpora for training, such as WikiLarge (Zhang and Lapata, 2017) and Newsela (Xu et al., 2015). But due to the limitation of existing data in language and domain, recent work on text simplification systems has started to focus on unsupervised methods and achieves good results (Surya et al., 2019; Kumar et al., 2020; Martin et al., 2022), which makes it possible to build Chinese text simplification systems independent of large-scale parallel corpora. In this case, how to evaluate Chinese text simplification systems becomes a problem to be solved. On the other hand, large language models have the ability to solve various NLP tasks (Thoppilan et al., 2022; Chowdhery et al., 2022). Recently, a series of large language models represented by ChatGPT have performed well on many tasks (Qin et al., 2023; Jiao et al., 2023; Bang et al., 2023). In English text simplification, Feng et al. (2023) find that large language models outperform state-of-the-art methods and are judged to be on par with human annotators. Nevertheless, whether these models can achieve the same excellent results in Chinese text simplification remains unclear.
Footnote 1: [https://chat.openai.com/chat](https://chat.openai.com/chat)
To solve these problems, in this paper, we introduce MCTS, a multi-reference dataset for evaluating Chinese text simplification models. MCTS consists of 3,615 human simplifications associated with 723 original sentences selected from the Penn Chinese Treebank (Xue et al., 2005) (5 simplifications per original sentence). We hope to use this dataset to measure the development status of Chinese text simplification and provide references for future research.
We design several simple unsupervised Chinese text simplification methods and test them on our proposed dataset. These methods can serve as baselines for future studies. Furthermore, we evaluate the Chinese text simplification ability of the most advanced large language models, GPT-3.5 and ChatGPT. The results show that these large language models can outperform the unsupervised methods we set up. However, compared to human-written simplifications, there is still a certain gap. In summary, our contributions are listed below:
* We manually annotated a dataset that can be used for the evaluation of Chinese text simplification. It is a multi-reference dataset and contains multiple types of rewriting transformations.
* We provide several text features and conduct a detailed analysis of the dataset, which could help to understand the characteristics of human Chinese text simplification.
* On the proposed dataset, we evaluated the performance of some unsupervised methods and large language models, which could serve as the baselines for future research.
## 2 Related Work
### 2.1 Evaluation Data for English text simplification
Early evaluation data for English text simplification mainly consist of sentence pairs obtained from English Wikipedia and Simple English Wikipedia through automatic sentence alignment. However, the Simple English Wikipedia was found to contain a large proportion of inadequate or inaccurate simplifications (Yasseri et al., 2012; Xu et al., 2015). And it is problematic to evaluate simplification systems with only a single reference because there are several ways of simplifying a sentence.
For the above reasons, Xu et al. (2016) introduced TurkCorpus, a multi-reference dataset for the evaluation of English text simplification. They first collected 2,359 original sentences from English Wikipedia and then obtained 8 manual simplification references for every original sentence via crowdsourcing. The dataset can be used for evaluation metrics requiring multiple references, such as BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016). However, the rewriting transformations involved in TurkCorpus are very simple: annotators were asked to simplify a sentence mainly by lexical paraphrasing but without deleting content or splitting the sentences. Another multi-reference dataset for English text simplification, HSplit (Sulem et al., 2018), only contains the rewriting transformation of sentence splitting and uses the same original sentences as the test set of TurkCorpus.
In order to involve multiple transformations, Alva-Manchego et al. (2020) created the ASSET dataset. Using the same original sentences, they extended TurkCorpus through crowdsourcing. The dataset includes rewriting transformations of lexical paraphrasing (lexical simplification and reordering), sentence splitting, and compression (deleting unimportant information). ASSET has now been adopted as a standard dataset for evaluating English text simplification systems.
Similar to ASSET, MCTS is a dataset with multiple references and multiple rewriting transformations. To the best of our knowledge, it is the first multi-reference dataset used for Chinese text simplification evaluation.
### 2.2 Unsupervised Text Simplification
Unsupervised text simplification methods do not require aligned complex-simple sentence pairs. Surya et al. (2019) first attempted to realize an unsupervised neural text simplification system by importing adversarial and denoising auxiliary losses. They collected two separate sets of complex and simple sentences extracted from a parallel Wikipedia corpus and trained auto-encoders on them. Lu et al. (2021) found that neural machine translation tends to generate more high-frequency tokens. According to this finding, they built a pseudo text simplification corpus by pairing the source sentences of a translation corpus with the translations of their references through a bridge language, which could be used to train text simplification models in a Seq2Seq way. Martin et al. (2022) leveraged paraphrase data mined from Common Crawl and used ACCESS (Martin et al., 2020), a method to make any sequence-to-sequence model controllable, to generate simplifications rather than paraphrases at test time. Their method achieved good results and is considered the state-of-the-art unsupervised text simplification method.
### 2.3 Large Language Models
Compared to general pre-trained models, large language models are also typically based on the transformer architecture but are much larger in scale, such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022) and OPT (Zhang et al., 2022). They can handle various NLP tasks through the given instructions, which do not require any gradient updates (Brown et al., 2020).
ChatGPT is obtained by fine-tuning a GPT-3.5 model via reinforcement learning from human feedback (RLHF) (Christiano et al., 2017). As a large language model for intelligent human-computer dialogue, it can answer user input with high quality. ChatGPT has recently attracted significant attention from the NLP community, and there have been many studies on it (Qin et al., 2023; Guo et al., 2023; Yang et al., 2023). However, the exploration of these models in Chinese text simplification is still lacking.
## 3 Creating MCTS
In this section, we describe more details about MCTS. In Section 3.1, we introduce the preparation of the original sentences, and in Section 3.2, we describe the annotation process of MCTS.
### 3.1 Data Preparation
We use the Penn Chinese Treebank (CTB) as the source of the original sentences in the dataset. CTB is a phrase structure treebank built by the University of Pennsylvania. It includes Xinhua news agency reports, government documents, news magazines, broadcasts, interviews, online news, and logs. We first filtered out the simple sentences using a filter based on the average lexical difficulty level in HSK to ensure that the original sentences we chose were sufficiently complex. Then we manually selected sentences from the remaining ones. Finally, we obtained 723 news sentences as the original sentences.
### 3.2 Annotation Process
MCTS is an evaluation dataset that is completely manually annotated. The detailed annotation process is as follows.
**Annotator Recruitment.** All the annotators we recruited are native Chinese speakers and are undergraduate or graduate students. Most of them have a background in linguistics or computer science. All annotators needed to attend a training course and take the corresponding Qualification Test (see more details below) designed for our task. Only those who passed the Qualification Test could enter the Annotation Round.
**Simplification Instructions.** We provided the exact instructions for annotators for the Qualification Test and the Annotation Round. In the instructions, we defined three types of rewriting transformations.
* Paraphrasing: Replacing complex words or phrases with simple formulations.
* Compression: Deleting repetitive or unimportant information from the sentence.
* Structure Changing: Modifying complex sentence structures into simple forms.
Compared to the rewriting transformations involved in ASSET, we replaced sentence splitting with structure changing. The latter covers a broader range and is more consistent with the actual situation of simplifying Chinese sentences. Besides, the paraphrasing transformation in Chinese is much more flexible than in English: it includes not only the substitution of synonyms but also the interpretation of complex phrases or idioms. For every rewriting transformation, we provided several examples. Annotators could decide for themselves which types of rewriting to apply to any given original sentence.
**Qualification Test.** At this stage, we provided 20 sentences to be simplified. Annotators needed to simplify these sentences according to the instructions given. We checked all submissions to filter
out annotators who could not perform the task correctly. Of the 73 people who initially registered, only 35 passed the Qualification Test (48%) and worked on the task.
**Annotation Round.** Annotators who passed the Qualification Test had access to this round. To facilitate the annotation work, we provided a platform that can display the difficulty level of words in a text. We collected 5 simplifications for each of the 723 original sentences. Table 1 presents a few examples of simplifications in MCTS, together with their English translations.
## 4 Dataset Analysis
Following ASSET Alva-Manchego et al. (2020), we report a series of text features in MCTS and study the simplifications in the dataset through them.
### 4.1 Text Features
We calculated several low-level features for all simplification examples to measure the rewriting transformations included in MCTS. These features are listed below; a short sketch of computing two of them follows the list.
* Number of sentence splits: The difference between the number of sentences in the simplification and the number of sentences in the original sentence.
* Compression level: The number of characters in the simplification divided by the number of characters in the original sentence.
* Replace-only Levenshtein distance: The character-level Levenshtein distance Levenshtein et al. (1966) for replace operations only divided by the length of the shorter string in the original sentence and simplification. As described in ASSET, ignoring insertions and deletions can make this feature independent of compression level and serve as a proxy for measuring the lexical paraphrases of the simplification.
* Proportion of words deleted, added and reordered: The number of words deleted/reordered from the original sentence divided by the number of words in the original sentence; and the number of words that were added to the original sentence divided by the number of words in the simplification.
* Lexical complexity score ratio: We compute the score as the mean squared lexical difficulty level in HSK. The ratio is then the value of this score on the simplification divided by that of the original sentence, which can be considered as an indicator of lexical simplification.
* Dependency tree depth ratio: The ratio of the depth of the dependency parse tree of the simplification relative to that of the original sentence. Following ASSET (Alva-Manchego et al., 2020), we perform parsing using spaCy 2. Footnote 2: [https://github.com/explosion/spaCy](https://github.com/explosion/spaCy) This feature can reflect structural simplicity to a certain extent.
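As referenced above, two of these features can be computed as follows; the sketch assumes the python-Levenshtein package and treats a sentence as a plain character string.

```python
import Levenshtein  # pip install python-Levenshtein

def compression_level(orig: str, simp: str) -> float:
    """Characters in the simplification divided by characters in the original."""
    return len(simp) / len(orig)

def replace_only_levenshtein(orig: str, simp: str) -> float:
    """Character-level replace operations only, normalized by the shorter string."""
    n_replace = sum(op == "replace" for op, _, _ in Levenshtein.editops(orig, simp))
    return n_replace / min(len(orig), len(simp))
```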
### 4.2 Results and Analysis
The density of all these features is shown in Figure 1. We can see that the sentence splitting operation appears infrequently in MCTS. By observing the data, we believe that this is due to the characteristics of Chinese. Compound sentences, in which one sentence consists of two or more independent clauses, are commonly used in Chinese. During simplification, annotators tend to rewrite a complex sentence with nested clauses into compound sentences rather than multiple simple sentences. So this is not to say that Chinese text simplification rarely involves sentence structure change, but that the way of structural change is not limited to sentence splitting.
Although we introduced compression as a rewriting transformation in the simplification instructions, the compression level is not strongly concentrated below 1.0. The reason is that, on the one hand, the annotators tend to retain as much semantic information as possible, and on the other hand, more characters may be added when paraphrasing.
By analyzing the replace-only Levenshtein distance, we can see that the simplifications in MCTS paraphrase the input to a considerable degree, as simplifications are distributed at all levels. Regarding the distribution of deleted, added, and reordered words, we can find that the peaks all occur at positions greater than 0.0. This further reveals the plentiful rewriting operations contained in MCTS.
In terms of lexical complexity, we can clearly see the high density of ratios less than 1.0, indicating that the simplifications have significantly lower lexical complexity compared to the original sentences. Some instances have a lexical complexity score ratio greater than 1.0, which may be caused by deleting simple words in the process of sentence compression.
Finally, the dataset shows a high density of a 1.0 ratio in dependency tree depth. This may indicate that significant structural changes were not made.
## 5 Experiment
In order to measure the development status of Chinese text simplification and provide references for future research, we conducted a series of experiments on the proposed MCTS.
### 5.1 Methods
We attempt several unsupervised Chinese text simplification methods and large language models and provide their results on MCTS. The first three are unsupervised methods that utilize automatic machine translation technology. We use Google Translator 3 to translate. These unsupervised methods can serve as baselines for future work.
Footnote 3: [https://translate.google.com/](https://translate.google.com/)
**Direct Back Translation.** As high-frequency words tend to be generated in the process of neural machine translation (Lu et al., 2021), back translation is a potential unsupervised text simplification method. We translated the original Chinese sentences into English and then translated them back to obtain simplified results. We chose English as the bridge language because of the rich bilingual translation resources between Chinese and English.
**Translated Wiki-Large.** Translating existing text simplification data into Chinese is a simple way to construct pseudo data. We translated the English sentence pairs in Wiki-Large into Chinese sentence pairs and used them to train a BART-based (Lewis et al., 2020) model as one of our baselines.
**Cross-Lingual Pseudo Data.** In addition to the above two methods, we also designed a simple way to construct pseudo data for Chinese text simplification, which can leverage the knowledge of English text simplification models. As shown in Figure 2, we first collect a large amount of Chinese sentence data, for example, the People's Daily Corpus. Then, we translate these sentences into English and simplify them using existing English text simplification models. Finally, we translate the simplified English sentences back into Chinese and align them with the original Chinese sentences to obtain parallel data. To ensure data quality, we filter the obtained parallel data from three aspects: simplicity, fluency, and semantic retention. For simplicity, we calculate the average lexical difficulty level for both the original sentence and the simplified sentence; a parallel sentence pair is retained only when the difficulty level of the simplified sentence is significantly reduced compared to the original sentence. For fluency, we calculate the perplexity of the simplified sentences and filter out sentences above the preset threshold. For semantic retention, we use the sentence-transformers toolkit (Reimers and Gurevych, 2020) to calculate the semantic similarity between the original sentence and the simplified sentence, and filter out sentence pairs whose similarity falls below the preset threshold. Using the filtered data, we train a BART-base model.
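A sketch of the three-way filter is given below; `hsk_level` and `perplexity` are hypothetical stand-ins for the HSK lexicon lookup and a language-model scorer, and the thresholds are illustrative, not the values used in the paper.

```python
from sentence_transformers import SentenceTransformer, util

st_model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def hsk_level(word):          # placeholder: look the word up in the HSK level lists
    return min(len(word), 9)

def perplexity(sentence):     # placeholder for a language-model perplexity scorer
    return 100.0

def avg_difficulty(words):
    return sum(hsk_level(w) for w in words) / len(words)

def keep_pair(src, tgt, src_words, tgt_words,
              min_drop=0.5, max_ppl=200.0, min_sim=0.7):
    # simplicity: the simplified sentence must be clearly easier on average
    simpler = avg_difficulty(tgt_words) <= avg_difficulty(src_words) - min_drop
    # fluency: discard simplifications with too high perplexity
    fluent = perplexity(tgt) <= max_ppl
    # semantic retention: keep pairs whose meanings stay close (one reading of the filter)
    sim = util.cos_sim(st_model.encode(src), st_model.encode(tgt)).item()
    return simpler and fluent and sim >= min_sim
```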
Figure 1: Density of text features in simplifications from MCTS
**Large Language Models.** We chose two advanced large language models for our experiments: _gpt-3.5-turbo_ and _text-davinci-003_. Both are based on GPT-3.5: the former is the most capable GPT-3.5 model and is optimized for chat, while the latter is an earlier model that can execute language tasks according to instructions. We translated the simplification prompt used by Feng et al. (2023) as our prompt; more details about the prompt can be found in Table 2. The experiments were conducted under the zero-shot setting.
### Automatic Metrics
Following previous work, we choose three metrics for evaluation: SARI Xu et al. (2016), BLEU Papineni et al. (2002) and HSK Level Kong et al. (2022).
**SARI.** SARI Xu et al. (2016) is a commonly used evaluation metric for text simplification. Comparing system outputs to multiple simplification references and the original sentences, SARI calculates the mean of the n-gram F1 scores of _add_, _keep_, and _delete_. In our experiment, we tokenize sentences using Stanford CoreNLP4 and use the EASSE toolkit 5Alva-Manchego et al. (2019) to calculate SARI.
Footnote 4: [https://github.com/stanfordnlp/CoreNLP](https://github.com/stanfordnlp/CoreNLP)
Footnote 5: [https://github.com/feralvam/easse](https://github.com/feralvam/easse)
**BLEU.** BLEU (Bilingual Evaluation Understudy) Papineni et al. (2002) was initially used to evaluate the quality of machine translation. By counting matched n-grams, BLEU reflects the closeness between system outputs and references. As with SARI, we use the EASSE toolkit to calculate the BLEU score.
**HSK Level.** In order to measure the complexity of Chinese sentences, we adopt the HSK level. HSK is the Chinese proficiency test designed for non-native speakers 6. It provides a vocabulary of nine levels from easy to difficult. Following previous work Kong et al. (2022), we count the proportion of words at levels 1-3 and 7+ in system outputs: the higher the proportion of words at levels 1-3 (7+), the easier (harder) the outputs are to understand. Our implementation of this metric is the same as that of Kong et al. (2022).
Footnote 6: [https://www.chinesetest.cn](https://www.chinesetest.cn)
### Human Evaluation
In order to obtain more comprehensive evaluation results, we further conduct human evaluation. Following the previous work Dong et al. (2019); Kumar et al. (2020), we evaluate the Chinese text simplification systems on three dimensions:
* Fluency: Is the output grammatical?
* Adequacy: How much meaning from the original sentence is preserved?
* Simplicity: Is the output simpler than the original sentence?
We provide the simplifications generated by the different systems to the recruited volunteers and ask them to fill out a five-point Likert scale (1 is the worst, 5 is the best) for each dimension. Additionally, following Feng et al.'s work Feng et al. (2023), we ask the volunteers to rank the simplifications, capturing their subjective preferences in actual usage rather than judgments against fixed evaluation criteria.
Table 2: Our prompt for the large language models.
## 6 Results
We divide all the 723 sentences in MCTS into two subsets: 366 for validation and 357 for testing the Chinese text simplification models. In this section, we report the evaluation results on the test set of MCTS.
### Results of Automatic Evaluations
The results of the automatic evaluations are shown in Table 3. In addition to the model results, we also report the scores of the source and the gold reference. The source scores are calculated on the unedited original sentences, and the gold reference scores are obtained by evaluating each reference against all others in a leave-one-out scenario and then averaging the scores.
To our surprise, direct back translation gets the best SARI score among the unsupervised methods, but its performance on the HSK level is poor, even worse than the source. We find that many rewrite operations are generated during the back translation process, which correlates strongly with the SARI score. However, due to the lack of control over simplicity, direct back translation behaves more like sentence paraphrasing than text simplification, which may explain its poor performance on the HSK level.
The translated Wiki-Large method gets the best BLEU score but the lowest SARI score among all methods. In fact, its outputs are hardly changed from the original sentences. As the unedited source gets the highest BLEU score of 84.75, we believe that the BLEU value alone cannot serve as a good indicator of text simplification, because there is a significant overlap between the original sentences and the references. As for the poor performance of the translated Wiki-Large method, we attribute it to the large amount of noise contained in the translated training data.
The SARI score of the cross-lingual pseudo data method is 38.49, between those of the other two unsupervised methods, but it performs better on the HSK level than both. This may be because the model learned simplification knowledge from pseudo data transferred from the English text simplification model.
In terms of the large language models, gpt-3.5-turbo performs significantly better than text-davinci-003 and achieves the best scores on SARI and the HSK levels. However, compared to the gold reference, its performance is still insufficient.
### Results of Human Evaluations
We conducted human evaluations on three representative methods, namely direct back translation, cross-lingual pseudo data, and gpt-3.5-turbo. We recruited three volunteers to conduct the evaluation. All of them have a background in linguistics. We selected 30 sentences from the test set of MCTS for each volunteer and provided them with the original sentences and the outputs of these methods. For the convenience of comparison, a randomly selected reference for each sentence was additionally provided. Volunteers were asked to rate the simplification of these four groups. The results of the human evaluation are shown in Table 4.
We can see that the gold reference gets the best average score and rank, and is significantly superior to the outputs of the other simplification systems. In detail, it obtains the best simplicity score of 4.20 and the best fluency score of 4.68. Due to some degree of sentence compression, it does not achieve the best adequacy score, reaching only 4.31.
\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
Method & SARI \(\uparrow\) & BLEU \(\uparrow\) & L1-3 (\%) \(\uparrow\) & L7+ (\%) \(\downarrow\) \\
\hline
Source & 22.37 & 84.75 & 40.24 & 44.90 \\
Gold Reference & 48.11 & 61.62 & 46.25 & 39.50 \\
\hline
Direct Back Translation & 40.37 & 48.72 & 39.19 & 45.44 \\
Translated Wiki-Large & 28.30 & **82.20** & 40.32 & 44.92 \\
Cross-Lingual Pseudo Data & 38.49 & 63.06 & 41.57 & 44.24 \\
\hline
gpt-3.5-turbo & **42.39** & 49.22 & **43.68** & **41.29** \\
text-davinci-003 & 37.97 & 36.18 & 38.80 & 45.32 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The automatic evaluation results on the test set of MCTS. \(\uparrow\) The higher, the better. \(\downarrow\) The lower, the better. **Bold** means the best result, and underline means the second-best result.
As for the direct back translation method, despite its excellent performance in adequacy, it achieves the lowest simplicity score due to the lack of corresponding control measures. On the contrary, the cross-lingual pseudo data method performs well in terms of simplicity but not adequacy, because it tends to perform more sentence compression, which removes much of the semantic information. These two unsupervised methods obtain similar average and rank scores.
gpt-3.5-turbo gets the second-best results on all metrics. From the average score and the rank score, we find that it is significantly better than the two unsupervised simplification methods, but there is still a certain gap to the gold reference. Our experiment shows that, under the zero-shot setting, there is still room for improvement in the Chinese text simplification ability of large language models.
## 7 Conclusion
In this paper, we introduced MCTS, a human-annotated dataset for the validation and evaluation of Chinese text simplification systems. It is a multi-reference dataset that contains multiple rewriting transformations. By calculating low-level features of the simplifications, we have shown the rich rewriting transformations in MCTS, which may be of great significance for understanding the simplification and readability of Chinese text from a linguistic perspective. Furthermore, we tested the Chinese text simplification ability of several unsupervised methods and advanced large language models on the proposed dataset, and found that even advanced large language models are still inferior to human simplification under the zero-shot setting. Finally, we hope our work can motivate the development of Chinese text simplification systems and provide references for future research.
## Acknowledgments
This work was supported by the Fundamental Research Funds for the Central Universities, and the Research Funds of Beijing Language and Culture University (No. 23YCX131).
|
2304.05858 | Convergence properties of a Gauss-Newton data-assimilation method | Four-dimensional weak-constraint variational data assimilation estimates a
state given partial noisy observations and dynamical model by minimizing a cost
function that takes into account both discrepancy between the state and
observations and model error over time. It can be formulated as a Gauss-Newton
iteration of an associated least-squares problem. In this paper, we introduce a
parameter in front of the observation mismatch and show analytically that this
parameter is crucial either for convergence to the true solution when
observations are noise-free or for boundedness of the error when observations are
noisy with bounded observation noise. We also consider joint state-parameter
estimation. We illustrate the theoretical results with numerical experiments using
the Lorenz 63 and Lorenz 96 models. | Nazanin Abedini, Svetlana Dubinkina | 2023-04-12T13:42:50Z | http://arxiv.org/abs/2304.05858v1 | # Convergence properties of a Gauss-Newton data-assimilation method
###### Abstract
Four-dimensional weak-constraint variational data assimilation estimates a state given partial noisy observations and dynamical model by minimizing a cost function that takes into account both discrepancy between the state and observations and model error over time. It can be formulated as a Gauss-Newton iteration of an associated least-squares problem. In this paper, we introduce a parameter in front of the observation mismatch and show analytically that this parameter is crucial either for convergence to the true solution when observations are noise-free or for boundedness of the error when observations are noisy with bounded observation noise. We also consider joint state-parameter estimation. We illustrate the theoretical results with numerical experiments using the Lorenz 63 and Lorenz 96 models.
## 1 Introduction
Data assimilation (DA) estimates a state of a dynamical model given partial noisy measurements (also called observations), e.g. [1]. A so-called variational DA minimizes a cost function that is a difference between an estimation and the observations provided that either the estimate is a solution of the dynamical model (strong-constraint variational DA, see e.g. [2, 3, 4, 5]) or the dynamical model equations are satisfied under model error (weak-constraint variational DA, see e.g. [6, 7]).
Variational DA can be viewed as an approach of combining model and observations based on Tikhonov-Philips regularization, which is used to solve ill-posed inverse problems [8]. Inverse problems appear in diverse fields such as geosciences and image reconstruction [9, 10, 11, 12]. Inverse problems are concerned with seeking a (stationary) solution of a mathematical model given a set of noisy and incomplete observations. Due to sparsity of observations, the corresponding discrete inverse problem has a highly ill-conditioned coefficient matrix. In order to obtain a stable solution to an ill-posed inverse problem, regularization methods are required.
Motivated by the weak-constraint variational DA, we consider a DA method that minimizes a cost function under assumption of model error. We are seeking a solution over a time window at once akin to the four-dimensional variational DA (WC4DVar). The main difference to WC4DVar is that the cost function we consider has a parameter in front of the observation mismatch. As we will show, this parameter plays a significant role in the convergence of the DA method and has to be chosen carefully in order to achieve either convergence or boundedness of the error. As in [13] we use the dynamical model to regularize the least-squares minimization problem.
There are several research papers investigating error propagation of variational data assimilation. The error is defined as a norm of the difference between the DA estimate and the true trajectory, from which the observations are generated. In [14], the authors considered the Lorenz 63 model and the Navier-Stokes equations and showed that the error converges to zero asymptotically in time for noise-free observations. In [13; 15], it was shown that for a nonlinear dynamical model the error is bounded under the assumption of contractivity in the high modes and bounded observation noise. We do not assume contractivity but assume bounds on some operators. The other difference of our analysis compared to the existing analysis of variational DA is that we derive convergence and bounds on the error as the iteration goes to infinity and not as time goes to infinity. The DA method considered in this paper is based on a Gauss-Newton iteration of the least-squares minimization problem, e.g. [16; 17], which was also considered for incremental four-dimensional DA [18] in [19; 20].
The paper is organised as follows. In Section 2, we describe the DA minimization problem and introduce a Gauss-Newton DA iteration to solve it. In Section 3, we derive an error convergence result for noise-free observations and an error boundedness result for noisy observations with bounded noise. In Section 4, we extend the DA method to joint state-parameter estimation. In Section 5, we illustrate the theoretical results with numerical experiments using the Lorenz 63 and Lorenz 96 models. Finally, we present our conclusions in Section 6.
## 2 Gauss-Newton data-assimilation method
Let us consider the following nonlinear dynamical model
\[\frac{du^{\dagger}}{dt}=f(u^{\dagger}),\quad u^{\dagger}(t)\in \mathbb{R}^{n},\ t\in[0,T], \tag{1}\]
where \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\). Since in many applications the model is defined by the time-discretization, we consider data assimilation in the context of a discrete deterministic model. Let \(0=t_{0}<t_{1}<\cdots<t_{N-1}=T\) be an equidistant partition of \(I=[0,T]\) with \(t_{j}=j\Delta t\), then the time-discretization of Equation (1) is
\[u^{\dagger}_{j+1}=F_{j}(u^{\dagger}_{j}),\quad u^{\dagger}_{j} \in\mathbb{R}^{n},\quad j=0,\ldots,N-1, \tag{2}\]
where \(u^{\dagger}_{j+1}=u^{\dagger}(t_{j+1})\) and \(F_{j}\) is twice continuously differentiable \(\forall j=0,\cdots,N\). Let us denote by \(\mathbf{u}^{\dagger}=\{u^{\dagger}_{0},\ldots,u^{\dagger}_{N}\}\) the true solution of the model, which is assumed to be unknown. Suppose a sequence of noisy observations \(\mathbf{y}:=\{y_{k_{0}},\ldots,y_{k_{M}}\}\), where \(k_{0}\geq 0\) and \(k_{M}\leq N\), related to \(\mathbf{u}^{\dagger}\) is given as
\[y_{j}=\mathcal{H}_{j}u^{\dagger}_{j}+\eta_{j},\quad y_{j}\in \mathbb{R}^{b},\quad j=k_{0},\ldots,k_{M}, \tag{3}\]
where \(\mathcal{H}_{j}:\mathbb{R}^{n}\to\mathbb{R}^{b}\), \(b\leq n\), is the linear observation operator, and the observation noise \(\eta_{j}\), either deterministic or stochastic, is bounded. The goal of data assimilation is to find a state \(\mathbf{u}=\{u_{0},\cdots,u_{N}\}\) such that the distance between the state \(\mathbf{u}\) and the observation \(\mathbf{y}\) is minimized. The weak-constraint variational data assimilation (WC4DVar) [6; 3] minimizes the distance to the observations \(\mathbf{y}\) under the condition that the estimate is a solution of the dynamical model (Equation (2)) under model error. Namely, WC4DVar solves the following minimization problem
\[\min_{\mathbf{u}\in\mathbb{R}^{nN}}\frac{1}{2}\{\|G(\mathbf{u}) \|^{2}+\|\mathbf{y}-H\mathbf{u}\|^{2}\},\]
where
\[G(\mathbf{u})=(G_{0}(\mathbf{u})\ G_{1}(\mathbf{u})\ \cdots\ G_{N-1}(\mathbf{u}))^{T}\,, \quad G_{j}(\mathbf{u})=u_{j+1}-F_{j}(u_{j}),\quad j=0,\ldots,N-1, \tag{4}\]
and
\[H=\left(\mathcal{H}_{k_{0}}\ \cdots\ \mathcal{H}_{k_{M}}\right)^{T}.\]
In this paper, we abuse the notation of \(L^{2}\)-norm in \(\mathbb{R}^{s}\) for different values of dimension \(s\). In WC4DVar the norms are typically weighted by the error covariance matrices. We consider a similar minimization problem with the sole difference of a parameter \(\alpha\) in front of the observation mismatch, namely
\[\min_{\mathbf{u}\in\mathbb{R}^{nN}}\frac{1}{2}\{\|G(\mathbf{u})\|^{2}+\alpha \|\mathbf{y}-H\mathbf{u}\|^{2}\}. \tag{5}\]
As we already mentioned in introduction, the parameter \(\alpha\) is crucial for the convergence and bound result on error.
We minimize the cost function (Equation (5)) as follows. We start with an initial guess \(\mathbf{u}^{(0)}=H^{T}\mathbf{y}+(I-H^{T}H)\mathbf{u}^{b}\), where \(\mathbf{y}\) is the observation and \(\mathbf{u}^{b}\) is the background solution, which is usually a forecast state from the previous analysis cycle. Then the method proceeds by the iteration
\[\mathbf{u}^{(k+1)}=\mathbf{u}^{(k)}-\left(G^{\prime}(\mathbf{u}^{(k)})^{T}G^{ \prime}(\mathbf{u}^{(k)})+\alpha H^{T}H\right)^{-1}\left(G^{\prime}(\mathbf{u} ^{(k)})^{T}G(\mathbf{u}^{(k)})+\alpha H^{T}(H\mathbf{u}^{(k)}-\mathbf{y}) \right), \tag{6}\]
where \(k\) denotes the index of the Gauss-Newton's iteration, \(G(\mathbf{u}^{(k)})\) defined in Equation (4), and \(G^{\prime}(\mathbf{u}^{(k)})\) is Jacobian of \(G\) which has an \(n(N-1)\times nN\) block structure:
\[G^{\prime}(\mathbf{u})=\begin{bmatrix}-F_{0}^{{}^{\prime}}(u_{0})&I&&&&&\\ &-F_{1}^{{}^{\prime}}(u_{1})&I&&&&&\\ &&\ddots&\ddots&\\ &&&-F_{N-1}^{{}^{\prime}}(u_{N-1})&I\end{bmatrix}. \tag{7}\]
## 3 Error Analysis
We define the error between approximation \(\mathbf{u}\) of the Gauss-Newton DA method (Equation (6)) and the true solution \(\mathbf{u}^{\dagger}\) by
\[\mathbf{e}_{k}:=\mathbf{u}^{(k)}-\mathbf{u}^{\dagger},\quad\forall k=0,1,\ldots. \tag{8}\]
We show that the norm of the error (Equation (8)) either converges to zero or is bounded, depending on whether the observations are noise-free or noisy.
### **Noise-free observations**
We assume that the sequence of observations \(\mathbf{y}\) is noise-free, thus \(\eta_{j}=0,\ j=0,\cdots,N\) in Equation (3). We show that the Gauss-Newton DA method (Equation (6)) produces an accurate state estimation under the vanishing noise assumption. In order to do this, we need the following assumptions on \(G^{\prime}\).
**Assumption 3.1**.: _The Jacobian \(G^{\prime}(\mathbf{u})\) defined in Equation (7) is globally Lipschitz continuous with the Lipschitz constant denoted by \(L_{1}>0\), namely_
\[\|G^{\prime}(\mathbf{u})-G^{\prime}(\mathbf{v})\|\leq L_{1}\|\mathbf{u}- \mathbf{v}\|,\quad\forall\mathbf{u},\mathbf{v}\in\mathbb{R}^{nN}, \tag{9}\]
_and there exists \(\alpha>0\) such that_
\[\|(G^{\prime}(\mathbf{x})^{T}G^{\prime}(\mathbf{x})+\alpha H^{T}H)^{-1}G^{ \prime T}(\mathbf{x})\|\leq(L_{1}c)^{-1},\quad\forall\mathbf{x}\in\mathbb{R}^ {nN}, \tag{10}\]
_with \(L_{1}\) being the Lipschitz constant from Equation (9) and \(c>0\) being an upper bound on the norm of initial error \(\|\mathbf{e}_{0}\|\)._
We abuse the notation by denoting both vector and matrix norm by \(\|\cdot\|\).
**Theorem 1**.: _Let Assumption (3.1) hold and let the observations (Equation (3)) be noise-free. Then the norm of the error defined in Equation (8) converges to zero as the iteration goes to infinity._
Proof.: For the sake of simplicity, we use \(G_{k}\) and \(G^{\prime}_{k}\) instead of \(G(\mathbf{u}^{(k)})\) and \(G^{\prime}(\mathbf{u}^{(k)})\), respectively. Substituting Equation (6) into Equation (8) we have
\[\mathbf{e}_{k+1} = \mathbf{u}^{(k+1)}-\mathbf{u}^{\dagger}\] \[= \mathbf{u}^{(k)}-(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T} H)^{-1}\left(G^{{}^{\prime}T}_{k}G_{k}-\alpha H^{T}(\mathbf{y}-H\mathbf{u}^{(k)}) \right)-\mathbf{u}^{\dagger}.\]
By substituting Equation (3) with \(\eta_{j}=0,\ \forall j=0,\cdots,N\) into the expression above, we obtain the following
\[\mathbf{e}_{k+1} = \mathbf{e}_{k}-(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H) ^{-1}(G^{{}^{\prime}T}_{k}G_{k}+\alpha H^{T}H\mathbf{e}_{k}).\]
Opening the brackets and using the following property
\[I-(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}\alpha H^{T}H=(G^{{} ^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}G^{{}^{\prime}T}_{k}G^{\prime} _{k},\]
we get
\[\mathbf{e}_{k+1}=(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}\left( G^{{}^{\prime}T}_{k}(G^{\prime}_{k}\mathbf{e}_{k}-G_{k})\right).\]
Since \(G(\mathbf{u}^{\dagger})=0\), we add it to the right-hand side and use the mean value theorem to obtain the following:
\[\mathbf{e}_{k+1} =(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}(G^{{}^{ \prime}T}_{k}G^{\prime}_{k}\mathbf{e}_{k}-G^{{}^{\prime}T}_{k}(G(\mathbf{u}^{ (k)})-G(\mathbf{u}^{\dagger}))\] \[=(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}G^{{}^{ \prime}T}_{k}G^{\prime}_{k}\mathbf{e}_{k}\] \[-(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}G^{{}^{ \prime}T}_{k}\left(\int_{0}^{1}G^{\prime}(s\mathbf{u}^{(k)}+(1-s)\mathbf{u}^ {\dagger})ds\right)\mathbf{e}_{k}\] \[=(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}G^{{}^{ \prime}T}_{k}\left(G^{\prime}_{k}-\int_{0}^{1}G^{\prime}(s\mathbf{u}^{(k)}+(1- s)\mathbf{u}^{\dagger})ds\right)\mathbf{e}_{k}\] \[=(G^{{}^{\prime}T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}G^{{}^{ \prime}T}_{k}\left(\int_{0}^{1}\left(G^{\prime}(\mathbf{u}^{(k)})-G^{\prime} \left(s\mathbf{u}^{(k)}+(1-s)\mathbf{u}^{\dagger}\right)\right)\mathbf{e}_{k} ds\right).\]
Taking norm of both sides and using Lipschitz continuity (Equation (9)) on \(G^{\prime}\), we get
\[\|\mathbf{e}_{k+1}\|\leq\frac{L_{1}}{2}\big{\|}(G^{{}^{\prime}T}_{k}G^{\prime} _{k}+\alpha H^{T}H)^{-1}G^{{}^{\prime}T}_{k}\big{\|}\|\mathbf{e}_{k}\|^{2}.\]
Using Equation (10), we conclude that
\[\|\mathbf{e}_{k+1}\|\leq\frac{1}{2c}\|\mathbf{e}_{k}\|^{2}.\]
Therefore, for \(k=1,2,\dots,\) we have
\[\|\mathbf{e}_{k}\|\leq\left(\frac{1}{2c}\right)^{2^{k}-1}\|\mathbf{e}_{0}\|^{2^ {k}}.\]
Since \(\|\mathbf{e}_{0}\|\leq c\), we get
\[\|\mathbf{e}_{k}\| \leq \left(\frac{1}{2}\right)^{2^{k}-1}c,\]
which leads to \(\lim_{k\to\infty}\|\mathbf{e}_{k}\|=0\).
### **Noisy observations**
Next we consider noisy observations. We show that norm of the error defined in Equation (8) is bounded. In order to prove this result, we require local conditions on \(G^{\prime}\) and bounded observation noise.
**Assumption 3.2**.: \(G\) _is continuously differentiable in the open convex set \(D\subset\mathbb{R}^{nN}\). The Jacobian \(G^{\prime}(\mathbf{u})\) defined in Equation (7) is locally Lipschitz continuous in \(D\), with the Lipschitz constant denoted by \(L_{2}>0\), namely_
\[\|G^{\prime}(\mathbf{u})-G^{\prime}(\mathbf{v})\|\leq L_{2}\|\mathbf{u}- \mathbf{v}\|,\quad\forall\mathbf{u},\mathbf{v}\in D, \tag{11}\]
_and there exists \(0<\alpha<1\) such that_
\[\|(G^{\prime}(\mathbf{x})^{T}G^{\prime}(\mathbf{x})+\alpha H^{T} H)^{-1}G^{\prime T}(\mathbf{x})\|\leq(L_{2}c)^{-1},\quad\forall\mathbf{x}\in D, \tag{12}\] \[\|H^{T}\eta\|\|(G^{\prime}(\mathbf{x})^{T}G^{\prime}(\mathbf{x})+ \alpha H^{T}H)^{-1}\|\leq c/2,\quad\forall\mathbf{x}\in D, \tag{13}\]
_with \(L_{2}\) being the Lipschitz constant from Equation (11) and \(c>0\) being an upper bound on the norm of initial error \(\|\mathbf{e}_{0}\|\)._
**Lemma 3.3**.: _(Dennis and Robert [21]) Let \(G:\mathbb{R}^{l}\to\mathbb{R}^{m}\) be continuously differentiable in the open convex set \(D\subset\mathbb{R}^{l}\) and let the Jacobian of \(G\) be Lipschitz continuous at \(\mathbf{x}\in D\), using a vector norm and the induced matrix operator norm and the Lipschitz constant \(L\). Then, for any \(\mathbf{x}+\mathbf{p}\in D\),_
\[\|G(\mathbf{x}+\mathbf{p})-G(\mathbf{x})-G^{\prime}(\mathbf{x})\mathbf{p}\| \leq\frac{L}{2}\|\mathbf{p}\|^{2}.\]
**Theorem 2**.: _Let Assumption (3.2) hold and let the observations (Equation (3)) be noisy. Then_
\[\limsup_{k\to\infty}\|\mathbf{e}_{k}\|\leq\frac{\alpha c}{1-\alpha}, \tag{14}\]
_provided the iteration \(\mathbf{u}^{(k)}\) does not leave the convex set \(D\) for all \(k\)._
Proof.: For the sake of simplicity, we use \(G_{k}\) and \(G^{{}^{\prime}}_{k}\) instead of \(G(\mathbf{u}^{(k)})\) and \(G^{\prime}(\mathbf{u}^{(k)})\), respectively. Substituting Equation (6) into Equation (8) we have
\[e_{k+1} = \mathbf{u}^{(k+1)}-\mathbf{u}^{\dagger}\] \[= \mathbf{u}^{(k)}-\mathbf{u}^{\dagger}-(G^{\prime T}_{k}G^{\prime }_{k}+\alpha H^{T}H)^{-1}(G^{\prime T}_{k}G_{k}+\alpha H^{T}(H\mathbf{u}^{(k) }-\mathbf{y})),\]
Using Equation (3) in the above equation and adding and subtracting \(G^{\prime T}_{k}G^{\prime}_{k}\mathbf{e}_{k}\) leads to
\[\mathbf{e}_{k+1} = \mathbf{e}_{k}-(G^{\prime T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1 }(G^{\prime T}_{k}G_{k}+\alpha H^{T}H\mathbf{e}_{k}-\alpha H^{T}\eta))\] \[= \mathbf{e}_{k}-(G^{\prime T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{- 1}(G^{\prime T}_{k}G^{\prime}_{k}\mathbf{e}_{k}+\alpha H^{T}H\mathbf{e}_{k}+G ^{\prime T}_{k}(G_{k}-G^{\prime}_{k}\mathbf{e}_{k})-\alpha H^{T}\eta))\] \[= -(G^{\prime T}_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}(G^{\prime T}_ {k}(G_{k}-G(\mathbf{u}^{\dagger})-G^{\prime}_{k}\mathbf{e}_{k})-\alpha H^{T} \eta),\]
since \(G(\mathbf{u}^{\dagger})=0\). By taking norm of both sides of the above equation and using Lemma 3.3, we get
\[\|\mathbf{e}_{k+1}\|\leq\frac{L_{2}}{2}\|(G^{\prime T}_{k}G^{\prime}_{k}+ \alpha H^{T}H)^{-1}G^{\prime T}_{k}\|\|\mathbf{e}_{k}\|^{2}+\alpha\|(G^{\prime T }_{k}G^{\prime}_{k}+\alpha H^{T}H)^{-1}\|\|H^{T}\eta\|. \tag{15}\]
Using Equations (12) and (13) in the above expression leads to
\[\|\mathbf{e}_{k+1}\|\leq\frac{1}{2c}\|\mathbf{e}_{k}\|^{2}+\alpha\frac{c}{2},\]
Using \((a+b)^{2}\leq 2a^{2}+2b^{2}\) in the inequality above gives rise to
\[\|\mathbf{e}_{k}\|\leq 2^{-k}\left(\frac{1}{c}\right)^{2^{k}-1}\|\mathbf{e}_{0 }\|^{2^{k}}+c\sum_{i=0}^{k-1}\left(\frac{\alpha^{2^{i}}}{2^{i+1}}\right), \quad\text{for}\quad k=1,2,\dots.\]
Since \(\|\mathbf{e}_{0}\|\leq c\) and \(2^{-i}<1\) for \(\forall i=0,1,\dots\), we get
\[\|\mathbf{e}_{k}\|<2^{-k}c+c\sum_{i=0}^{k-1}\alpha^{2^{i}}.\]
Since \(0<\alpha<1\), we have \(\sum_{i=0}^{k-1}\alpha^{2^{i}}<\alpha\sum_{i=0}^{k}\alpha^{i}\) leading consequently to
\[\|\mathbf{e}_{k}\|<2^{-k}c+c\alpha\sum_{i=0}^{k}\alpha^{i},\]
and in the limit of \(k\) going to infinity
\[\limsup_{k\to\infty}\|\mathbf{e}_{k}\|<\frac{\alpha c}{1-\alpha}.\]
_Remark 3.4_.: Note that in the case of \(\eta_{j}=0,\forall j=1,\cdots,N\), we have \(\|H^{T}\eta\|=0\) and Equation (13) is trivially satisfied, which in turn allows the assumption \(\alpha>0\) instead of \(0<\alpha<1\). Furthermore, Equations (11) and (12) are the local counterparts of Equations (9) and (10), and thus Theorem (2) is a local counterpart of Theorem (1).
## 4 Joint State-Parameter Estimation
Data assimilation can also be used if the dynamical model depends on an uncertain parameter. We extend the Gauss-Newton DA method (Equation (6)) to joint state-parameter estimation. We consider
\[G(\mathbf{u};\boldsymbol{\theta})=u_{n+1}-F_{n}(u_{n};\theta),\]
where \(\boldsymbol{\theta}=(\theta_{1},\theta_{2},\ldots,\theta_{q})^{T}\) is an uncertain parameter. Then the minimization problem becomes
\[\min_{\{\mathbf{u}\in\mathbb{R}^{nN},\ \boldsymbol{\theta}\in\mathbb{R}^{q}\}} \{\|G(\mathbf{u};\boldsymbol{\theta})\|^{2}+\alpha\|H\mathbf{u}-\mathbf{y}\| ^{2}\}.\]
Starting with an initial guess for the state \(\mathbf{u}^{(0)}\) and parameters \(\boldsymbol{\theta}^{(0)}\), the iteration proceeds as follows:
\[\mathbf{u}^{(k+1)} = \mathbf{u}^{(k)}-\mathcal{L}(\mathbf{u}^{(k)};\boldsymbol{\theta} ^{(k)}), \tag{16}\] \[\boldsymbol{\theta}^{(k+1)} = \boldsymbol{\theta}^{(k)}-\mathcal{S}(\mathbf{u}^{(k+1)}; \boldsymbol{\theta}^{(k)}). \tag{17}\]
Here \(\mathcal{L}(\mathbf{u}^{(k)};\cdot)\) and \(\mathcal{S}(\cdot;\boldsymbol{\theta}^{(k)})\) are defined by
\[\mathcal{L}(\mathbf{u}^{(k)};\cdot) = \Big{(}G^{{}^{\prime}T}_{u}(\mathbf{u}^{(k)};\cdot)G^{{}^{\prime} }_{u}(\mathbf{u}^{(k)};\cdot)+\alpha H^{T}H\Big{)}^{-1}\left(G^{{}^{\prime}T} _{u}(\mathbf{u}^{(k)};\cdot)G(\mathbf{u}^{(k)};\cdot)+\alpha H^{T}(H\mathbf{ u}^{(k)}-\mathbf{y})\right)\!,\] \[\mathcal{S}(\cdot;\boldsymbol{\theta}^{(k)}) = \Big{(}G^{{}^{\prime}T}_{\theta}(\cdot;\boldsymbol{\theta}^{(k)} )G^{{}^{\prime}}_{\theta}(\cdot;\boldsymbol{\theta}^{(k)})\Big{)}^{-1}\,G^{{} ^{\prime}T}_{\theta}(\cdot;\boldsymbol{\theta}^{(k)})G(\cdot;\boldsymbol{ \theta}^{(k)}),\]
respectively, \(G^{\prime}_{u}\) and \(G^{\prime}_{\theta}\) are derivatives of \(G\) with respect to \(u\) and \(\theta\), respectively.
**Assumption 4.1**.: _We consider \(G(\mathbf{u};\boldsymbol{\theta})=\mathcal{G}(\mathbf{u})+A\boldsymbol{\theta}\). \(\mathcal{G}\) is continuously differentiable in the open convex set \(D\subset\mathbb{R}^{nN}\) and Lipshitz in \(D\), with the Lipschitz constant denoted by \(L_{0}>0\), namely_
\[\|\mathcal{G}(\mathbf{u})-\mathcal{G}(\mathbf{v})\|\leq L_{0}\| \mathbf{u}-\mathbf{v}\|,\quad\forall\mathbf{u},\mathbf{v}\in D. \tag{18}\]
_The Jacobian \(\mathcal{G}^{\prime}(\mathbf{u})\) is locally Lipschitz continuous in \(D\), with the Lipschitz constant denoted by \(L_{3}>0\), namely_
\[\|\mathcal{G}^{\prime}(\mathbf{u})-\mathcal{G}^{\prime}(\mathbf{v})\|\leq L_{ 3}\|\mathbf{u}-\mathbf{v}\|,\quad\forall\mathbf{u},\mathbf{v}\in D, \tag{19}\]
_and there exists \(\alpha>0\) such that_
\[\|(\mathcal{G}^{\prime}(\mathbf{x})^{T}\mathcal{G}^{\prime}(\mathbf{x})+ \alpha H^{T}H)^{-1}\mathcal{G}^{\prime T}(\mathbf{x})\|\leq(2L_{3}c)^{-1},\quad \forall\mathbf{x}\in D, \tag{20}\]
_with \(L_{3}\) being the Lipschitz constant from Equation (19), and \(c>0\) being an upper bound on the norm of initial error \(\|\mathbf{e}_{0}\|\). Furthermore, \(b/c<1\) where \(b=\|A(A^{T}A)^{-1}A^{T}\|L_{0}/L_{3}\)._
**Theorem 3**.: _We consider \(G(\mathbf{u};\boldsymbol{\theta})=\mathcal{G}(\mathbf{u})+A\boldsymbol{\theta}\) and noise-free observations of \(\mathbf{u}\). We assume that Assumption 4.1 holds for \(G(\mathbf{u};\boldsymbol{\theta})\). Then_
\[\limsup_{k\to\infty}\|\mathbf{e}_{k}\|<\frac{b/2}{1-b/c},\]
_and_
\[\limsup_{k\to\infty}\|\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{\dagger} \|<L_{0}\|(A^{T}A)^{-1}A^{T}\|\frac{b/2}{1-b/c}.\]
Theorem (3) is proven in Appendix B.
## 5 Numerical Experiments
In this section, we present numerical experiments to illustrate the theoretical results of Section 3 and Section 4. First, we consider noise-free observations and illustrate the convergence Theorem (1) under different sizes of observations. Second, we consider noisy observations and show the theoretical bound (Equation (14)), computed numerically. Third, we consider a dynamical model with an uncertain parameter and estimate it using the Gauss-Newton DA method (Equations (16) and (17)). For each set-up, we perform 100 numerical experiments with different realizations of the truth \(\mathbf{u}^{\dagger}\), the observations \(\mathbf{y}\), and the background solution \(\mathbf{u}^{b}\). For each solution \(\mathbf{u}\), we compute the cost function, the error with respect to the truth, the error with respect to the truth of the observed variables, and the error with respect to the truth of the non-observed variables:
\[C =\|G(\mathbf{u})\|+\alpha\|\mathbf{y}-H\mathbf{u}\|, \tag{21}\] \[\mathcal{E} =\|\mathbf{u}^{\dagger}-\mathbf{u}\|,\] (22) \[\mathcal{E}^{O} =\|H\mathbf{u}^{\dagger}-H\mathbf{u}\|,\] (23) \[\mathcal{E}^{N} =\|(I-H^{T}H)(\mathbf{u}^{\dagger}-\mathbf{u})\|, \tag{24}\]
respectively.
We compare the Gauss-Newton DA method to WC4DVar, which minimizes the following cost function:
\[J(u_{0};\{y_{n}\})=\frac{1}{2}\sum_{n=1}^{N}\left[(y_{n}-Hu_{n})^{T}R^{-1}(y_{n}-Hu_{n})+(u_{n}-F_{n}(u_{n-1}))^{T}Q^{-1}(u_{n}-F_{n}(u_{n-1}))\right], \tag{25}\]
where \(R\) is the covariance matrix of the observational error and \(Q\) is the covariance matrix of the model error (see, e.g., [2; 3; 4; 5; 18]). The main distinction of the Gauss-Newton DA method from WC4DVar is the tunable parameter \(\alpha\), which has a significant influence on error convergence or error boundedness. The minimization of the WC4DVar cost function is done by the Matlab built-in Levenberg-Marquardt algorithm.
We perform numerical experiments with the Lorenz 63 (L63) and Lorenz 96 (L96) models. L63 is a chaotic model which is widely used as a toy model in data-assimilation numerical experiments. It simulates atmospheric convection in a simple way [22]. The model is described by the following ODEs
\[\dot{x}_{1}=\sigma(x_{2}-x_{1}),\quad\dot{x}_{2}=x_{1}(\rho-x_{3})-x_{2},\quad \dot{x}_{3}=x_{1}x_{2}-bx_{3}. \tag{26}\]
We implement the L63 model with the standard parameters, \(\sigma=10\), \(\rho=28\), and \(b=8/3\). The differential equations are discretized with a forward Euler scheme with time step \(\Delta t=0.005\). The initial conditions are independently and identically distributed random numbers. We generate observations by computing a solution of L63 on \(t\in[0,100]\). The observations are drawn at every tenth time step, which corresponds to an assimilation time window of 6 hours. For the Euler-discretized L63 model, the Lipschitz condition is
\[\|G^{\prime}(X)-G^{\prime}(Y)\|\leq\sqrt{2}\Delta t\|X-Y\|.\]
(For the derivation of the Lipschitz constant, see Appendix A.1.)
The L96 model [23] is a one-dimensional atmosphere model which is described by the following ODEs
\[\dot{x}_{l}=-x_{l-2}x_{l-1}+x_{l-1}x_{l+1}-x_{l}+\mathcal{F},\quad l=1,\ldots,d, \tag{27}\]
where the dimension \(d\) and the forcing \(\mathcal{F}\) are parameters. Cyclic boundary conditions are imposed. We implement the L96 model with the standard parameter choices \(d=40\) and \(\mathcal{F}=8\). We discretize the differential equations with a forward Euler scheme with time step \(\Delta t=0.0025\). The initial conditions are independently and identically distributed random numbers. We generate observations by computing a solution of L96 on \(t\in[0,100]\). The observations are drawn at every tenth time step, which corresponds to an assimilation time window of 3 hours. The Lipschitz condition of the Euler-discretized L96 model is:
\[\|G^{\prime}(X)-G^{\prime}(Y)\|\leq\sqrt{6}\Delta t\|X-Y\|.\]
(For the derivation of the Lipschitz constant, see Appendix A.2.)
### State estimation given noise-free observations
We perform numerical experiments with noise-free observations, thus \(\eta_{j}=0,\ j=0,\cdots,N\) in Equation (3). In order to satisfy the conditions of Theorem (1), we first need to compute the Lipschitz constant of the Jacobian of the dynamical model. Next, we need to find a suitable \(\alpha\). We use Algorithm 1, where an upper bound on the initial error, \(c\), is chosen arbitrarily.
Given the initial guess of the Gauss-Newton DA method, \(\mathbf{u}^{(0)}\), an upper bound on the initial error, \(c\), and the Lipschitz constant \(L_{1}\) that satisfies Equation (9) in Assumption (3.1), we choose an arbitrary positive \(\alpha_{0}\), for example \(\alpha_{0}=0.001\).
```
1: while \(\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}G^{\prime T}(\mathbf{u}^{(0)})\|>(L_{1}c)^{-1}\) and not \(isnan(\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}G^{\prime T}(\mathbf{u}^{(0)})\|)\) do
2:     \(\alpha_{0}\leftarrow 2\alpha_{0}\)
3: end while
4: if \(isnan(\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}G^{\prime T}(\mathbf{u}^{(0)})\|)\) then
5:     Error: there is no \(\alpha\).
6: else
7:     \(\alpha\leftarrow\alpha_{0}\)
8: end if
```
**Algorithm 1** Finding parameter \(\alpha\) in case of noise-free observations
We use the same value of \(\alpha\) throughout the iteration and check whether Equation (10) is satisfied. If there exists an iteration \(k\) such that Equation (10) does not hold we terminate the iteration, otherwise the iteration proceeds until a tolerance value (\(10^{-14}\)) is reached for the change in the solution \(\|\mathbf{u}^{(k+1)}-\mathbf{u}^{(k)}\|\).
We consider the L63 model and assume only the first component is observed, i.e., \(H=[1,0,0]\). In all of the experiments Equation (10) is satisfied throughout the iteration. In Figure 1(a), we display the error with respect to the truth (Equation (22)) as a function of iteration. We plot the median and plus and minus one standard deviation over 100 simulations. We see that the error is a decreasing function, as we expect from Theorem (1). We also considered observation operators which depend only on the second or third variable and obtained equivalent results (not shown). Therefore, for the L63 model the convergence result of Theorem (1) does not depend on which variable is observed.
Furthermore, we perform numerical experiments with different values of \(\alpha\) that satisfy Equation (10). In Figure 1(b), we display the cost function (Equation (21)) as a function of iteration for different values of \(\alpha\). A faster convergence rate is achieved at \(\alpha=0.004\), as is to be expected, since \(\alpha=0\) corresponds to the second-order Newton method when observations are complete.
Next, we perform numerical experiments with the L96 model, where we assume every second variable is observed. In Figure 2(a), we display the error with respect to the truth as a function of iteration. We plot the median and plus and minus one standard deviation over 100 simulations. We see that the error is a decreasing function, as we expect from Theorem (1). Comparing to Figure 1(a), we observe that more iterations are needed to reach the desired tolerance in the L96 model due to the higher dimension. In Figure 2(b), we plot the cost function of the L96 model for different values of \(\alpha\) when observing every second variable. A faster convergence rate is achieved at \(\alpha=0.004\). Furthermore, we investigate the convergence property under different sizes of observations. In Figure 3, we plot the median value of the error (Equation (22)) over 20 simulations as a function of iteration on a semi-log scale. We see that when the size of observations decreases, more iterations are needed to reach the desired tolerance.
Furthermore, Equation (10) of Assumption (3.1) implicitly restricts the size of observations. In numerical experiments with the L96 model, we see that if the number of observations is less than eight, the matrix \((G^{\prime}(\mathbf{x})^{T}G^{\prime}(\mathbf{x})+\alpha H^{T}H)^{-1}\) is ill-conditioned and we are not able to find an \(\alpha\) that satisfies Equation (10). Therefore, the conditions of Theorem (1) do not hold and error convergence is not guaranteed.
### State estimation given noisy observations
We perform numerical experiments with noisy observations, where \(\eta\sim\mathcal{N}(0,\gamma^{2}I)\) in Equation (3). We assume that \(\|H^{T}\eta\|\) is bounded and known, which is a strong assumption. In order to satisfy the conditions of Theorem (2), we first need to compute the Lipschitz constant of the Jacobian of the dynamical model, and then to find a suitable \(\alpha\). We use Algorithm 2 to find \(\alpha\), where an upper bound on the initial error, \(c\), is chosen arbitrarily.
Figure 1: Application of the Gauss-Newton DA method to L63 with noise-free observations, where only the first variable is observed. On the left: Error (Equation (22)) as a function of iteration: median (dashed line), +/- one standard deviation (shadowed area) over 100 simulations. On the right: Cost function (Equation (21)) as a function of iteration for different values of \(\alpha\).
Figure 3: Application to L96 with noise-free observations. Error of the Gauss-Newton DA method as a function of iterations for different sizes of observations.
Figure 2: Application of the Gauss-Newton DA method to L96 with noise-free observations, where every second variable is observed. On the left: Error (Equation (22)) as a function of iteration: median (dashed line), +/- one standard deviation (shadowed area) over 100 simulations. On the right: Cost function (Equation (21)) as a function of iteration for different values of \(\alpha\).
Given the initial guess of the Gauss-Newton DA method, \(\mathbf{u}^{(0)}\), an upper bound on the initial error, \(c\), and the Lipschitz constant \(L_{2}\) that satisfies Equation (11) in Assumption (3.2), we choose an arbitrary positive \(\alpha_{0}\), for example \(\alpha_{0}=0.001\).
```
1: while \(\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}G^{\prime T}(\mathbf{u}^{(0)})\|>(L_{2}c)^{-1}\) and \(\|H^{T}\eta\|\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}\|>c/2\) and not \(isnan(\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}G^{\prime T}(\mathbf{u}^{(0)})\|)\) do
2:     \(\alpha_{0}\leftarrow 2\alpha_{0}\)
3: end while
4: if \(isnan(\|(G^{\prime T}(\mathbf{u}^{(0)})G^{\prime}(\mathbf{u}^{(0)})+\alpha_{0}H^{T}H)^{-1}G^{\prime T}(\mathbf{u}^{(0)})\|)\) or \(\alpha_{0}>1\) then
5:     Error: there is no \(\alpha\).
6: else
7:     \(\alpha\leftarrow\alpha_{0}\)
8: end if
```
**Algorithm 2** Finding parameter \(\alpha\) in case of noisy observations
We use the same value of \(\alpha\) throughout the iteration and check whether Equations (12) and (13) are satisfied. If there exists an iteration \(k\) such that either Equation (12) or Equation (13) does not hold, we terminate the iteration; otherwise the iteration proceeds until it reaches the maximum number of iterations, which is 20 for L63 and 70 for L96.
We consider the L63 model with noisy observations. We assume only the first component is observed, i.e., \(H=[1,0,0]\), and \(\gamma=0.01\) in Equation (3). In all of the experiments Equations (12) and (13) are satisfied throughout the iteration. In Figure 4, we display the error with respect to the truth (Equation (22)) as a function of iteration. We plot the median (dashed line) and plus and minus one standard deviation (shadowed area) over 100 simulations. In Figure 4, we also display the theoretical bound (Equation (14)): we plot its median (green line) over 100 simulations, where the bound depends on the value of \(\alpha\) in each simulation. We see that the error is below the theoretical bound, as we expect from Theorem (2). On the right of Figure 4, we display the error of the observed components (Equation (23)) as a function of iteration. We plot the median and plus and minus one standard deviation over 100 simulations. We see that the error of the observed components (dashed line) is less than the observation error \(\|\mathbf{y}-H\mathbf{u}^{\dagger}\|\) (red line). Furthermore, comparing the right panel to the left panel of Figure 4, we see that the error of the observed components is less than the total error.
Next, we perform numerical experiments with the L96 model with noisy observations. We assume every second variable is observed and \(\gamma=0.01\) in Equation (3). On the left of Figure 5, we display the error with respect to the truth as a function of iteration. We plot the median and plus and minus one standard deviation over 100 simulations. We see that the error is a decreasing function for the first five iterations and remains bounded afterwards. Comparing to Figure 4, we observe that due to the higher dimension, more iterations are needed to reach the desired tolerance in the L96 model. Moreover, as we expect from Theorem (2), the median of the error is below the median of the theoretical bound (green line) over 100 simulations. On the right of Figure 5, we display the error of the observed components (Equation (23)) as a function of iteration. We see that the error of the observed components is less than the total error (left panel of Figure 5), as expected. Furthermore, we see from both Figures 4 and 5 that the theoretical bound is not tight, as is to be expected.
Finally, we investigate the error (Equation (22)) as a function of the observation noise level \(\gamma\). From Theorem (2) it follows that the convergence result is recovered as \(\|H^{T}\eta\|\) goes to zero. In Figure 6, we display the error (Equation (22)) for different values of \(\gamma\), for the L63 model on the left and for the L96 model on the right. As we expect from Remark (3.4), we see that the error decreases as the observation
Figure 4: Application of the Gauss-Newton DA method to L63 with noisy observations, where only the first variable is observed and \(\gamma=0.01\) in Equation (3). On the left: Error (Equation (22)) as a function of iteration: median (dashed line), +/- one standard deviation (shadowed area) over 100 simulations and a theoretical bound (Equation (14)) in green. On the right: Error of observed components (Equation (23)) as a function of iteration and observation error in red.
Figure 5: Application of the Gauss-Newton DA method to L96 with noisy observations, where every second variable is observed and \(\gamma=0.01\) in Equation (3). On the left: Error (Equation (22)) as a function of iteration: median (dashed line), +/- one standard deviation (shadowed area) over 100 simulations and a theoretical bound Equation (14) in green. On the right: Error of observed components (Equation (23)) and observation error in red.
noise level decreases, and the convergence result is recovered as \(\gamma\) approaches zero.
### Comparison to weak-constraint 4DVar
The weak-constraint four-dimensional variational method is one of the well-known data-assimilation methods for estimating the initial condition in weather forecasting applications. It minimizes the cost function (Equation (25)) under the assumption of imperfect model dynamics, which is also the setting of the Gauss-Newton DA method. We compare the Gauss-Newton DA method to the WC4DVar method. We perform numerical experiments using the L63 and L96 models with the same parameters as in the previous section. In these experiments, we use identical data, models, and windows for both methods.
In Figures 7 and 8, we plot the errors for the L63 and L96 models, respectively. On the left of the figures, we plot the errors with respect to the truth of the observed variables (Equation (23)); on the right, the errors with respect to the truth of the non-observed variables (Equation (24)). We see that the error of the Gauss-Newton DA method is significantly less than the error of the WC4DVar method for both observed and non-observed variables, and that the error of both methods is below the observation error.
### Parameter estimation
As described in Section 4, the Gauss-Newton DA method can be applied to the problem of joint state-parameter estimation. In these experiments, we use the Gauss-Newton DA method (Equations (16) and (17)) to estimate the parameter \(\sigma\) of the L63 model, which we assume to be uncertain. Different values of the initial \(\sigma\) were chosen, 5, 15, and 20, with the true \(\sigma\) being 10. We alternate between Equations (16) and (17) until a termination condition is satisfied: either the number of iterations reaches the maximum (500) or the distance between two successive approximations of the uncertain parameter is less than the desired tolerance (\(10^{-3}\)). We consider both noisy and noise-free observations and assume the first and second components of the state are observed. For each numerical experiment and fixed initial
Figure 6: Error with respect to the truth for different observation noise levels. On the left: Application to L63 with noisy observations, where only the first variable is observed. On the right: Application to L96 with noisy observations, where every second variable is observed.
Figure 8: Application to L96. Error as a function of time: median (dashed line) +/- one standard deviation over 20 simulations with length of assimilation window, 1.25. On the left: error with respect to the truth of observed variables. On the right: error with respect to the truth of non-observed variables. The Gauss-Newton DA method is in grey, WC4DVar method is in blue, and the observational error is in red.
Figure 7: Application to L63. Error as a function of time: median (dashed line) +/- one standard deviation over 20 simulations with length of assimilation window, 2.5. On the left: error with respect to the truth of observed variables. On the right: error with respect to the truth of non-observed variables. The Gauss-Newton DA method is in grey, WC4DVar method is in blue, and the observational error is in red.
\(\sigma\), we perform 100 realizations. In Tables 1 and 2, we display the error of the state estimation, which is the median of Equation (22) over 100 simulations, and the estimated \(\sigma\) for different initial parameters. Table 1 is for noise-free observations and Table 2 for noisy observations with \(\gamma=\sqrt{3}\).
In Table 1, we consider noise-free observations, where the first and second components of the state are observed.
In Table 2, we consider noisy observations. The observations of the state are obtained from the true state by adding i.i.d. zero-mean Gaussian noise as in Equation (3) with covariance matrix \(\Gamma=\gamma^{2}I\), where \(\gamma^{2}\) is the variance of the noise process; in this experiment, \(\gamma=\sqrt{3}\) in Equation (3). It should be noted that similar results can be obtained for the estimation of \(\rho\) or \(\beta\) in the L63 model (not shown).
## 6 Conclusion
We considered a cost function of a data-assimilation problem consisting of the observation mismatch \(\|H\mathbf{u}-\mathbf{y}\|\), weighted by a parameter \(\alpha\), and the model mismatch \(\|G(\mathbf{u})\|\). By solving this minimization problem approximately with an iterative algorithm, the Gauss-Newton DA method, we obtained an estimate for the considered system. One advantage of this cost function is that it uses the information from the available observations in each iteration, which helps to find a more accurate approximate solution for a physical system. We then established error bounds with respect to the truth under some conditions in two cases: for noise-free observations we proved that the method is convergent, and for noisy observations we obtained an upper bound on the error. In numerical experiments, we applied the Gauss-Newton DA method to two models, Lorenz 63 and Lorenz 96. The numerical evidence confirmed the theoretical results, and we demonstrated that the results compare favorably to those of WC4DVar. Moreover, we showed how the size of the observations affects the convergence results. We observed
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Property & \multicolumn{3}{|c|}{Initial guess for \(\sigma\)} \\
\hline
Initial \(\sigma\) & 5 & 15 & 20 \\
\hline
Error between estimated and true solution & 0.4025 & 0.4256 & 0.4267 \\
Estimated \(\sigma\) & 9.7465 & 10.2746 & 10.2785 \\
\hline
\end{tabular}
\end{table}
Table 1: Estimation of \(\sigma\) by the Gauss-Newton DA method for noise-free observations; we observe the first and second components of the state.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Property & \multicolumn{3}{|c|}{Initial guess for \(\sigma\)} \\
\hline
Initial \(\sigma\) & 5 & 15 & 20 \\
\hline
Error between estimated and true solution & 2.7287 & 6.7806 & 3.8887 \\
Estimated \(\sigma\) & 9.7380 & 10.4505 & 10.8239 \\
\hline
\end{tabular}
\end{table}
Table 2: Estimation of \(\sigma\) by the Gauss-Newton DA method for noisy observations.
that if the number of observations is less than eight in the Lorenz 96 model, we do not obtain a convergence result. Therefore, future directions include finding a method for deriving an approximation based on a small number of observations.
## Appendix A Lipschitz constants for the Lorenz 63 and the Lorenz 96 models
Here we compute the Lipschitz constant for the Lorenz 63 and the Lorenz 96 models discretized with forward Euler.
### Lipschitz constant for the L63 model discretized with forward Euler
Consider the discretized form of Equation (26)
\[X_{1}(t_{k+1}) = X_{1}(t_{k})+\Delta t\sigma(X_{2}(t_{k})-X_{1}(t_{k})), \tag{10}\] \[X_{2}(t_{k+1}) = X_{2}(t_{k})+\Delta t\left(X_{1}(t_{k})(\rho-X_{3}(t_{k}))-X_{2}( t_{k})\right),\] \[X_{3}(t_{k+1}) = X_{3}(t_{k})+\Delta t\left(X_{1}(t_{k})X_{2}(t_{k})-bX_{3}(t_{k}) \right),\]
with the following matrix notation
\[\mathbf{X}_{k+1}=F(\mathbf{X}_{k}),\quad G(\mathbf{X}):=\mathbf{X}_{k+1}-F( \mathbf{X}_{k})\]
where \(\mathbf{X}_{k}=(X_{1}(t_{k}),X_{2}(t_{k}),X_{3}(t_{k}))^{T}\) and \(F(\mathbf{X}_{k})\) is the right-hand side of (10). From the definition of Jacobian of \(G\) we have
\[G^{\prime}(\mathbf{X})-G^{\prime}(\mathbf{Y})=\begin{bmatrix}-F^{\prime}( \mathbf{X}_{0})+F^{\prime}(\mathbf{Y}_{0})\\ &-F^{\prime}(\mathbf{X}_{1})+F^{\prime}(\mathbf{Y}_{1})\\ &&\ddots\\ &&&-F^{\prime}(\mathbf{X}_{N-1})+F^{\prime}(\mathbf{Y}_{N-1})\end{bmatrix}.\]
with
\[F^{\prime}(\mathbf{X}_{k})=\begin{pmatrix}1-\sigma\Delta t&\sigma\Delta t&0\\ \Delta t(\rho-X_{3}(t_{k}))&1-\Delta t&-\Delta tX_{1}(t_{k})\\ \Delta tX_{2}(t_{k})&\Delta tX_{1}(t_{k})&1-b\Delta t\end{pmatrix}. \tag{11}\]
From (11) we get
\[\|F^{\prime}(\mathbf{X}_{k})-F^{\prime}(\mathbf{Y}_{k})\|_{F}^{2 }=\sum_{j=1}^{n}\sum_{i=1}^{m}|F_{ij}|^{2} =\Delta t^{2}\bigg{(}|Y_{3}(t_{k})-X_{3}(t_{k})|^{2}+|Y_{1}(t_{k}) -X_{1}(t_{k})|^{2} \tag{12}\] \[+|Y_{2}(t_{k})-X_{2}(t_{k})|^{2}+|Y_{1}(t_{k})-X_{1}(t_{k})|^{2} \bigg{)}\] \[\leq 2\Delta t^{2}\|\mathbf{X}_{k}-\mathbf{Y}_{k}\|_{L^{2}}^{2}.\]
Therefore, we have
\[\|G^{\prime}(\mathbf{X})-G^{\prime}(\mathbf{Y})\|_{F}\leq\sqrt{2}\Delta t\| \mathbf{X}-\mathbf{Y}\|_{L^{2}}.\]
Since \(\|A\|_{L^{2}}\leq\|A\|_{F}\), by substituting it into the expression above we obtain the following
\[\|G^{\prime}(\mathbf{X})-G^{\prime}(\mathbf{Y})\|_{L^{2}}\leq\sqrt{2}\Delta t \|\mathbf{X}-\mathbf{Y}\|_{L^{2}}.\]
### Lipschitz constant for the L96 model discretized with forward Euler
Consider the discretized form of Equation (27)
\[X_{l}(t_{k+1})=X_{l}(t_{k})+(-X_{l-2}(t_{k})X_{l-1}(t_{k})+X_{l-1}(t_{k})X_{l+1}( t_{k})-X_{l}(t_{k})+\mathcal{F})\,\Delta t,\,l=1,\ldots,40. \tag{10}\]
with the following matrix notation
\[\mathbf{X}_{k+1}=F(\mathbf{X}_{k}),\qquad G(\mathbf{X}):=\mathbf{X}_{k+1}-F( \mathbf{X}_{k})\]
where \(\mathbf{X}_{k}=\left(X_{1}(t_{k}),\ldots,X_{40}(t_{k})\right)^{T}\) and \(F(\mathbf{X}_{k})\) is the right-hand side of (10). From the definition of Jacobian of \(G\) we have
\[G^{\prime}(\mathbf{X})-G^{\prime}(\mathbf{Y})=\begin{bmatrix}-F^{\prime}( \mathbf{X}_{0})+F^{\prime}(\mathbf{Y}_{0})&&&&\\ &-F^{\prime}(\mathbf{X}_{1})+F^{\prime}(\mathbf{Y}_{1})&&\\ &&\ddots&\\ &&&&-F^{\prime}(\mathbf{X}_{N-1})+F^{\prime}(\mathbf{Y}_{N-1})\end{bmatrix}.\]
with
\[F^{\prime}(\mathbf{X}_{k})=\begin{pmatrix}1-\Delta t&\Delta tX_{40}&0&\ldots &0&-\Delta tX_{40}&\Delta t(X_{2}-X_{39})\\ \Delta t(X_{3}-X_{40})&1-\Delta t&\Delta tX_{1}&0&\ldots&0&-\Delta tX_{1}\\ \vdots&\ldots&&\ldots&&\vdots\\ 0&\ldots&0&-\Delta tX_{38}&\Delta t(X_{40}-X_{37})&1-\Delta t&\Delta tX_{38}\\ \Delta tX_{39}&0&\ldots&0&-\Delta tX_{39}&\Delta t(X_{1}-X_{38})&1-\Delta t \end{pmatrix}.\]
From the expression above we get
\[\|F^{\prime}(\mathbf{X}_{k})-F^{\prime}(\mathbf{Y}_{k})\|_{F}^{2} =\Delta t^{2}|X_{40}-Y_{40}|^{2}+\Delta t^{2}|X_{40}-Y_{40}|^{2}\] \[+\Delta t^{2}|(X_{2}-Y_{2})-(X_{39}-Y_{39})|^{2}\] \[+\Delta t^{2}|(X_{3}-Y_{3})-(X_{40}-Y_{40})|^{2}\] \[+\Delta t^{2}|X_{1}-Y_{1}|^{2}+\Delta t^{2}|X_{1}-Y_{1}|^{2}\] \[+...\] \[+\Delta t^{2}|X_{38}-Y_{38}|^{2}+\Delta t^{2}|(X_{40}-Y_{40})-(X _{37}-Y_{37})|^{2}\] \[+\Delta t^{2}|X_{38}-Y_{38}|^{2}+\Delta t^{2}|X_{39}-Y_{39}|^{2}\] \[+\Delta t^{2}|X_{39}-Y_{39}|^{2}+\Delta t^{2}|(X_{1}-Y_{1})-(X_{ 38}-Y_{38})|^{2}\] \[\leq 2\Delta t^{2}\|\mathbf{X}_{k}-\mathbf{Y}_{k}\|_{L^{2}}^{2}\] \[+\{2\Delta t^{2}|X_{2}-Y_{2}|^{2}+2\Delta t^{2}|X_{39}-Y_{39}|^{2 }+2\Delta t^{2}|X_{3}-Y_{3}|^{2}\] \[+2\Delta t^{2}|X_{40}-Y_{40}|^{2}+...+2\Delta t^{2}|X_{40}-Y_{40 }|^{2}\] \[+2\Delta t^{2}|X_{37}-Y_{37}|^{2}+2\Delta t^{2}|X_{1}-Y_{1}|^{2}+ 2\Delta t^{2}|X_{38}-Y_{38}|^{2}\}\] \[\leq 2\Delta t^{2}\|\mathbf{X}_{k}-\mathbf{Y}_{k}\|_{L^{2}}^{2}+2 \Delta t^{2}(2\|\mathbf{X}_{k}-\mathbf{Y}_{k}\|_{L^{2}}^{2})\] \[=6\Delta t^{2}\|\mathbf{X}_{k}-\mathbf{Y}_{k}\|_{L^{2}}^{2}.\]
On the other hand,
\[\|G^{\prime}(\mathbf{X}_{k})-G^{\prime}(\mathbf{Y}_{k})\|_{L^{2}}^{2}\leq\|G^{ \prime}(\mathbf{X}_{k})-G^{\prime}(\mathbf{Y}_{k})\|_{F}^{2}=\sum_{j=1}^{n}\sum _{i=1}^{m}|F_{ij}|^{2}.\]
Combining the two above expressions leads to
\[\|G^{\prime}(\mathbf{X})-G^{\prime}(\mathbf{Y})\|_{L^{2}}\leq\sqrt{6}\Delta t\| \mathbf{X}-\mathbf{Y}\|_{L^{2}}.\]
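The same check can be run for the L96 bound; the sketch below (again only an illustration, with an arbitrary \(\Delta t\) and indices shifted to \(0,\ldots,39\)) assembles the cyclic Jacobian row by row and tests the \(\sqrt{6}\Delta t\) Lipschitz constant.

```python
import numpy as np

def jac_l96(x, dt):
    """Jacobian F'(X_k) of the forward-Euler L96 step, with cyclic indices."""
    n = len(x)
    J = np.zeros((n, n))
    for l in range(n):
        J[l, l] = 1.0 - dt
        J[l, (l + 1) % n] = dt * x[(l - 1) % n]
        J[l, (l - 2) % n] = -dt * x[(l - 1) % n]
        J[l, (l - 1) % n] = dt * (x[(l + 1) % n] - x[(l - 2) % n])
    return J

rng = np.random.default_rng(1)
dt = 0.005
for _ in range(500):
    x, y = rng.normal(size=40), rng.normal(size=40)
    lhs = np.linalg.norm(jac_l96(x, dt) - jac_l96(y, dt), ord="fro")
    rhs = np.sqrt(6.0) * dt * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12  # Frobenius bound derived above
```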
## Appendix B Bound on the parameter estimation error
Here we prove Theorem 3.
Proof.: By setting \(G(\mathbf{u}^{(k)};\boldsymbol{\theta}^{(k-1)})=\mathcal{G}(\mathbf{u}^{(k)})+A \boldsymbol{\theta}^{(k-1)}\) in Equation (17), we get
\[\boldsymbol{\theta}^{(k)}=\boldsymbol{\theta}^{(k-1)}-(A^{T}A)^{-1}A^{T} \left(\mathcal{G}(\mathbf{u}^{(k)})+A\boldsymbol{\theta}^{(k-1)}\right).\]
Let us define \(\mathbf{E}_{k}=\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{\dagger}\), then we have
\[\mathbf{E}_{k} =\mathbf{E}_{k-1}-(A^{T}A)^{-1}A^{T}\left(\mathcal{G}(\mathbf{u}^ {(k)})+A\boldsymbol{\theta}^{(k-1)}\right)\] \[=\mathbf{E}_{k-1}-(A^{T}A)^{-1}A^{T}\left(\mathcal{G}(\mathbf{u}^ {(k)})+A\boldsymbol{\theta}^{(k-1)}-\mathcal{G}(\mathbf{u}^{\dagger})-A \boldsymbol{\theta}^{\dagger}\right),\]
where we used \(G(\mathbf{u}^{\dagger};\boldsymbol{\theta}^{\dagger})=\mathcal{G}(\mathbf{u}^ {\dagger})+A\boldsymbol{\theta}^{\dagger}=0\). Therefore, we conclude that
\[\mathbf{E}_{k}=-(A^{T}A)^{-1}A^{T}\left(\mathcal{G}(\mathbf{u}^{(k)})- \mathcal{G}(\mathbf{u}^{\dagger})\right).\] (B.1)
On the other hand, using Equation (16) in the definition of the state error \(\mathbf{e}_{k}=\mathbf{u}^{k}-\mathbf{u}^{\dagger}\) leads to
\[\mathbf{e}_{k+1}=\mathbf{e}_{k}-\left(\mathcal{G}^{\prime T}\mathcal{G}^{ \prime}+\alpha H^{T}H\right)^{-1}\left(\mathcal{G}^{\prime T}\left(\mathcal{G} (\mathbf{u}^{(k)})+A\boldsymbol{\theta}^{(k)}\right)+\alpha H^{T}(H\mathbf{u} ^{(k)}-\mathbf{y})\right),\]
where \(\mathcal{G}^{\prime}\) and \(\mathcal{G}^{\prime T}\) are computed at \(\mathbf{u}^{(k)}\). By substituting Equation (3) with \(\eta_{j}=0,\ \forall j=0,\cdots,N\) into the expression above, we obtain the following
\[\mathbf{e}_{k+1} = \mathbf{e}_{k}-\left(\mathcal{G}^{\prime T}\mathcal{G}^{\prime}+ \alpha H^{T}H\right)^{-1}\left(\mathcal{G}^{\prime T}\left(\mathcal{G}( \mathbf{u}^{(k)})+A\boldsymbol{\theta}^{(k)}\right)+\alpha H^{T}H\mathbf{e}_{ k}\right)\] \[= \mathbf{e}_{k}-\left(\mathcal{G}^{\prime T}\mathcal{G}^{\prime}+ \alpha H^{T}H\right)^{-1}\left(\mathcal{G}^{\prime T}\left(\mathcal{G}( \mathbf{u}^{(k)})+A\boldsymbol{\theta}^{(k)}-\mathcal{G}(\mathbf{u}^{\dagger} )-A\boldsymbol{\theta}^{\dagger}\right)+\alpha H^{T}H\mathbf{e}_{k}\right),\]
where we used \(\mathcal{G}(\mathbf{u}^{\dagger})+A\boldsymbol{\theta}^{\dagger}=0\). Opening the brackets and using the following property
\[I-(\mathcal{G}^{{}^{\prime}T}\mathcal{G}^{\prime}+\alpha H^{T}H)^{-1}\alpha H ^{T}H=(\mathcal{G}^{{}^{\prime}T}\mathcal{G}^{\prime}+\alpha H^{T}H)^{-1} \mathcal{G}^{{}^{\prime}T}\mathcal{G}^{\prime},\]
we get
\[\mathbf{e}_{k+1}=\left(\mathcal{G}^{\prime T}\mathcal{G}^{\prime}+\alpha H^{T} H\right)^{-1}\mathcal{G}^{\prime T}\left(\mathcal{G}^{\prime}\mathbf{e}_{k}- \mathcal{G}(\mathbf{u}^{(k)})+\mathcal{G}(\mathbf{u}^{\dagger})-A\mathbf{E}_{k}\right).\]
Substituting Equation (B.1) into the expression above leads to
\[\mathbf{e}_{k+1}=(\mathcal{G}^{{}^{\prime}T}\mathcal{G}^{\prime}+\alpha H^{T}H )^{-1}\mathcal{G}^{{}^{\prime}T}\left[\mathcal{G}^{\prime}\mathbf{e}_{k}- \mathcal{G}(\mathbf{u}^{(k)})+\mathcal{G}(\mathbf{u}^{\dagger})+A(A^{T}A)^{-1 }A^{T}\left(\mathcal{G}(\mathbf{u}^{(k)})-\mathcal{G}(\mathbf{u}^{\dagger}) \right)\right].\]
Taking norm of both sides, using assumptions (18)-(20) and Lemma 3.3, we get
\[\|\mathbf{e}_{k+1}\|\leq\frac{1}{2L_{3}c}\left(\frac{L_{3}}{2}\|\mathbf{e}_{k} \|^{2}+L_{0}\|A(A^{T}A)^{-1}A^{T}\|\|\mathbf{e}_{k}\|\right),\]
which we rewrite into the following expression
\[\|\mathbf{e}_{k+1}\|\leq\frac{1}{4c}\left(\|\mathbf{e}_{k}\|+b\right)^{2}\leq \frac{1}{2c}\|\mathbf{e}_{k}\|^{2}+\frac{1}{2c}b^{2},\]
where \(b=\|A(A^{T}A)^{-1}A^{T}\|L_{0}/L_{3}\). Next, we recursively obtain the following inequality
\[\|\mathbf{e}_{k}\|\leq 2^{-k}\left(\frac{1}{c}\right)^{2^{k}-1}\|\mathbf{e}_{0}\|^ {2^{k}}+\frac{c}{2}\sum_{i=1}^{k}\left(\frac{b}{c}\right)^{2^{i}},\quad\text{ for}\quad k=1,2,\ldots\]
Since by assumption \(b/c<1\) and \(\|\mathbf{e}_{0}\|<c\), we have that
\[\|\mathbf{e}_{k}\|<2^{-k}c+\frac{b}{2}\sum_{i=0}^{k}\left(\frac{b}{c}\right)^ {i},\]
and in the limit of \(k\) goes to infinity
\[\limsup_{k\to\infty}\|\mathbf{e}_{k}\|<\frac{b/2}{1-b/c}.\]
Furthermore, taking norm of Equation (B.1) leads to
\[\|\mathbf{E}_{k}\|\leq L_{0}\|(A^{T}A)^{-1}A^{T}\|\|\mathbf{e}_{k}\|.\]
By taking limit of both sides of the above inequality, as \(k\to\infty\), we conclude that
\[\limsup_{k\to\infty}\|\mathbf{E}_{k}\|<L_{0}\|(A^{T}A)^{-1}A^{T}\|\frac{b/2}{1 -b/c}.\]
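The quadratic error recursion used above is easy to probe numerically; the sketch below iterates \(\|\mathbf{e}_{k+1}\|\leq\frac{1}{2c}\|\mathbf{e}_{k}\|^{2}+\frac{b^{2}}{2c}\) with arbitrary values of \(b\), \(c\) and \(\|\mathbf{e}_{0}\|\) satisfying the assumptions \(b/c<1\) and \(\|\mathbf{e}_{0}\|<c\), and compares the fixed point against the limsup bound.

```python
# Purely illustrative: iterate the scalar recursion from the proof of Theorem 3.
c, b, e = 1.0, 0.4, 0.9          # arbitrary values with b/c < 1 and e_0 < c
for k in range(60):
    e = e * e / (2 * c) + b * b / (2 * c)
limsup_bound = (b / 2) / (1 - b / c)
print(e, limsup_bound)           # the iterate settles well below the bound
```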
|
2310.09125 | Training and Predicting Visual Error for Real-Time Applications | Visual error metrics play a fundamental role in the quantification of
perceived image similarity. Most recently, use cases for them in real-time
applications have emerged, such as content-adaptive shading and shading reuse
to increase performance and improve efficiency. A wide range of different
metrics has been established, with the most sophisticated being capable of
capturing the perceptual characteristics of the human visual system. However,
their complexity, computational expense, and reliance on reference images to
compare against prevent their generalized use in real-time, restricting such
applications to using only the simplest available metrics. In this work, we
explore the abilities of convolutional neural networks to predict a variety of
visual metrics without requiring either reference or rendered images.
Specifically, we train and deploy a neural network to estimate the visual error
resulting from reusing shading or using reduced shading rates. The resulting
models account for 70%-90% of the variance while achieving up to an order of
magnitude faster computation times. Our solution combines image-space
information that is readily available in most state-of-the-art deferred shading
pipelines with reprojection from previous frames to enable an adequate estimate
of visual errors, even in previously unseen regions. We describe a suitable
convolutional network architecture and considerations for data preparation for
training. We demonstrate the capability of our network to predict complex error
metrics at interactive rates in a real-time application that implements
content-adaptive shading in a deferred pipeline. Depending on the portion of
unseen image regions, our approach can achieve up to $2\times$ performance
compared to state-of-the-art methods. | João Libório Cardoso, Bernhard Kerbl, Lei Yang, Yury Uralsky, Michael Wimmer | 2023-10-13T14:14:00Z | http://arxiv.org/abs/2310.09125v1 | # Training and Predicting Visual Error for Real-Time Applications
###### Abstract.
Visual error metrics play a fundamental role in the quantification of perceived image similarity. Most recently, use cases for them in real-time applications have emerged, such as content-adaptive shading and shading reuse to increase performance and improve efficiency. A wide range of different metrics has been established, with the most sophisticated being capable of capturing the perceptual characteristics of the human visual system. However, their complexity, computational expense, and reliance on reference images to compare against prevent their generalized use in real-time, restricting such applications to using only the simplest available metrics. In this work, we explore the abilities of convolutional neural networks to predict a variety of visual metrics without requiring either reference or rendered images. Specifically, we train and deploy a neural network to estimate the visual error resulting from reusing shading or using reduced shading rates. The resulting models account for 70%-90% of the variance while achieving up to an order of magnitude faster computation times. Our solution combines image-space information that is readily available in most state-of-the-art deferred shading pipelines with reprojection from previous frames to enable an adequate estimate of visual errors, even in previously unseen regions. We describe a suitable convolutional network architecture and considerations for data preparation for training. We demonstrate the capability of our network to predict complex error metrics at interactive rates in a real-time application that implements content-adaptive shading in a deferred pipeline. Depending on the portion of unseen image regions, our approach can achieve up to 2\(\times\) performance compared to state-of-the-art methods.
This rules out most image metrics, with simple estimates becoming the norm among perceptual problems.
2. Estimation is only possible for previously seen content. The higher the amount of motion in the scene (and thus the frequency of disocclusion of previously unseen regions), the smaller the impact of these methods becomes.
In this work, our goal is to enable the use of arbitrary metrics in real-time applications and their efficient prediction, even for previously unseen regions of the scene resulting from, e.g., fast camera movement. We present a convolutional neural network (CNN) that takes as input both reprojected renders (similarly to previous work) and current-frame screen-space information that is often readily available in G-buffers before final shading, such as material properties or light visibility buffers. We demonstrate our approach by applying our network to solve the broad problem of adaptive rendering mode selection: given a viewport that is divided into equally-sized tiles, select the suitable fidelity mode for each one. Possible examples in current hardware include variable-rate shading (VRS), software multi-sampling, temporal shading reuse, and hybrid rendering. By enabling consistent prediction of arbitrary metrics on the entire screen regardless of scene motion, we also open the door for new methods, use cases, and perceptual metrics to appear in a real-time context.
Metric prediction for seen and unseen regions as a learning effort confronts us with novel challenges: balanced selection of training samples becomes non-trivial since conventional data preparation methods cannot be applied. Furthermore, for many practical use cases (including VRS), perceptually correct threshold values may be required, which cannot be measured for unseen regions. In this paper, we present solutions to these challenges. As a proof of concept, we use our approach to implement content-adaptive VRS. Our main contributions are as follows:
1. A compact CNN for learning and predicting error metrics in real-time applications for seen and unseen regions.
2. Two metric transforms to produce a more balanced training loss that easily generalizes for new metrics and scenes.
3. Applying a correction to metrics that removes the need for explicitly measuring perceptual thresholds, embedding them into the trained models' predictions.
4. Analysis and discussion of which current-frame screen-space data is most valuable for predicting error metrics.
5. An evaluation of achievable quality, performance, and ability to generalize our learning-based approach for VRS with the current state of available hardware support.
In the following, Section 2 lists previous work and necessary context to frame our contributions. Section 3 describes our network and how to train it to consistently achieve high-accuracy image-error estimation in real-time. Section 4 describes how to use the network in the context of adaptive rendering mode selection, including a concrete example for application to VRS (see Figure 1). Finally, Section 5 considers the performance and quality aspects of our approach and provides an analysis of the obtained results.
## 2. Related Work
Methods for reducing the amount of final shading computation required per display pixel are not a new concept. Mixed-resolution shading (Sandhi et al., 2017; Wang et al., 2018) renders expensive and low-frequency shading components at low resolution and bilaterally upsamples the results to full resolution. Decoupled shading (Sandhi et al., 2017; Wang et al., 2018; Wang et al., 2018) separates the shading rate from the visibility sampling rate by establishing a mapping between the visibility samples and shading samples and sharing or reusing shading samples where possible. Texture-space shading (Sandhi et al., 2017; Wang et al., 2018; Wang et al., 2018) computes shading in texture or patch space in an appropriate density controlled by the mip level. These software-based techniques are available for use on a wider variety of hardware but require more complicated implementation and maintenance due to their significant deviation from the hardware rasterization pipeline.
Variable-rate shading (VRS) does not suffer from these issues. VRS can be seen as a generalization of multi-sample anti-aliasing, by which a single shading operation can be used to color not only multiple samples within a single pixel but multiple pixels. Software-based VRS implementations commonly divide the screen into \(n\times n\) pixel tiles (where \(n\) is an integer number) and assign shading rates--the ratio of actual pixels to the number of shading operations--independently to each tile. Current hardware implementations are even more specific and operate on \(16\times 16\) tiles, with a fixed set of possible shading rates (Sandhi et al., 2017; Wang et al., 2018). Some use cases for VRS have been targets of growing interest, such as foveated rendering (Sandhi et al., 2017; Wang et al., 2018; Wang et al., 2018), a technique which uses eye-tracking hardware to direct rendering resources to the region the user focuses on (Wang et al., 2018), or lens-optimized shading (Sandhi et al., 2017; Wang et al., 2018), which aims at warping screen space to more closely match the final lens-corrected image (Sandhi et al., 2017). However, these techniques are only usable with specific peripherals, such as a VR display with eye-tracking capabilities, and do not take advantage of scene-dependent information.
Content-adaptive shading, first proposed by Yang et al. (Yang et al., 2018), provides a more general solution that is usable in the rendering of any 3D scene. It does so by dynamically varying the shading rate across the screen according to the perceivable level of detail of the content being rendered: the rendering result of the previous frame and the previous shading rate choices are reprojected into the current screen space and used as cues to choose the required shading rate. Drobot (Sandhi et al., 2017) developed a variant of this concept, designed with software-based VRS in mind. Mueller et al. (Mueller et al., 2018) showed that shading information from previous frames can be reused for quite some time if properly sampled. Jindal et al. (Jindal et al., 2018) proposed a more elaborate VRS specific metric that adapts to known texture IDs. However, these techniques share several common limitations: First, they rely solely on analyzing the content from previous frames. Thus they are unable to make predictions where reprojection data isn't available. Further, they are unable to make any predictions regarding how a surface's light response or texture aliasing might change over time, which can be especially problematic with visual edges, shiny and animated materials. Finally, due to the constraints of real-time rendering, image quality needs to be measured using a computationally efficient estimator, and some form of Just-Noticeable-Difference (JND) (Sandhi et al., 2017) threshold. Thus, these methods have to rely on multiple approximations, leading to imprecise shading-rate decisions, which, in theory, could accumulate error over time. In practice, adaptive shading is only used after significant engine- and scene-specific tuning, such as ensuring it is only enabled in highly diffuse materials.
There has been a large amount of work in developing image metrics capable of replicating human perception, most of which remain inaccessible in real-time environments. Andersson et al. (Andersson et al., 2017) presented the FLIP estimator, inspired by models of the human visual system and designed with a particular focus on the differences between rendered images and corresponding ground truths. Zhang et al. (Zhang et al., 2019) discovered that, during image classification, the intermediate image representation produced by the network could be used for comparison with other images. Wolski et al. (Wolski et al., 2019) created a data set of image pairs with user markings of where they perceive distortions and a convolutional network trained on it capable of predicting markings in new images. There has also been a surge in the development of deep learning approaches for the post-processing of real-time renderings, such as super-resolution and temporal anti-aliasing of rasterized surfaces (Wang et al., 2019; Wang et al., 2019), or denoising of ray-traced ones (Wang et al., 2019; Wang et al., 2020).
## 3. Metric Prediction
Conventionally, a reference image metric \(f(I,J)\) computes the perceptual difference between a reference image \(I\) and a candidate image \(J\). No-reference methods \(f(J)\) guess perceptual issues given the expected properties and common distortions in natural images. Values may be computed for the entire image domain or regions thereof. In this work, we aim instead to estimate \(f(I,I^{\prime})\), where \(I^{\prime}\) represents an informed approximation of the reference \(I\), such as a lower-resolution rendering of \(I\). Our goal is to predict \(f(I,I^{\prime})\) directly, without explicitly computing either \(I\) or \(I^{\prime}\), by exploiting other, more easily available screen-space scene information instead.
Our deep learning-based approach enables fast prediction of complex metrics that would otherwise incur significant computational overhead. However, one challenge to overcome is the sensitivity of machine learning to unbalanced training data sets; another is that the practical applications of \(f(I,I^{\prime})\) often involve spatially varying parameters, e.g., the local just noticeable difference (JND) at each point in \(I\) (Wang et al., 2019). In this section, we introduce our network architecture, discuss which input data should be used to predict metrics, and present our solution to the output imbalance problem. Furthermore, we show how the spatially varying JND threshold can be integrated directly into the trained model. For the sake of brevity, the visual illustrations of our approach will focus exclusively on the example of predicting the error when \(I\) and \(I^{\prime}\) vary in shading rate. In the figures displayed in this article, plotted or color-coded values of \(f(I,I^{\prime})\) show the difference between reference \(I\) and corresponding \(I^{\prime}\) obtained with coarser \(2\times 2\) shading rate for a given metric \(f\).
### Convolutional Network Architecture
Figure 2 shows the schematic of our convolutional network architecture. It consists of \(3\!\times\!3\) convolutions, interlaced with rectified linear units (ReLU) and batch normalization. We optimize for prediction performance by pooling as early as possible in the network and maintain a consistent amount of parallelism by dividing hidden channels into independent groups at the same rate at which down-pooling is performed--that is, we keep the product of the number of independent groups and the number of pixels approximately constant. A single final sigmoid layer is used to constrain the output to the range \([0,1]\). To support optimized generation of (conservative) predictions for arbitrarily-sized image regions (e.g., for application to hardware VRS), maximum pooling is done depending on an intended region size \(w\!\times\!w\) (for per-pixel predictions, \(w=1\)). The pooling size \(\Lambda(i)\) in down-pooling layer \(i\) is:
\[\Lambda(i)=\begin{cases}2&\text{if }\frac{w}{2^{i}}>2\;\;\text{and }i<5\\ \lceil\frac{w}{2^{i}}\rceil&\text{otherwise}\end{cases} \tag{1}\]
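A minimal PyTorch sketch of this block structure is shown below. It is only meant to make the construction concrete: the channel counts, group counts, number of blocks, and the choice \(w=16\) are illustrative assumptions, not the exact configuration used in our experiments.

```python
import math
import torch.nn as nn

def pool_size(i, w):
    """Down-pooling size Lambda(i) for block i (Eq. 1), region size w."""
    if w / 2**i > 2 and i < 5:
        return 2
    return math.ceil(w / 2**i)

def block(in_ch, out_ch, i, w, groups):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(pool_size(i, w)),
    )

w = 16  # one prediction per 16x16 tile, as used by hardware VRS
net = nn.Sequential(
    block(4, 32, 0, w, groups=1),   # compact 4-channel input set (Sec. 3.2)
    block(32, 64, 1, w, groups=2),  # groups doubled with each down-pooling
    block(64, 64, 2, w, groups=4),
    block(64, 4, 3, w, groups=4),   # one output channel per rendering mode
    nn.Sigmoid(),                   # constrain predictions to [0, 1]
)
```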
The design of our network is governed by its intended use in real-time applications: given sufficient training time, the network is capable of learning sophisticated features while prediction remains fast. Its layout makes it compatible with optimized, massively parallel inferencing solutions, such as TensorRT. Furthermore, the per-region predictions for \(w>1\) can be passed on directly to tile-based procedures.
We converged on our eventual design after comparing more complex alternatives, which all underperformed or provided no visible benefit over the simpler solution. These alternatives included using partial convolutions--with and without data masking--and rendering-aware denormalization. We also decided on maximum pooling as it provided higher accuracy than downscaling purely through convolution.
Figure 2. Proposed network architecture for metric prediction. Each block \(i\) performs down-pooling at size \(\Lambda(i)\).
### Input Data
Our solution aims to leverage as input any screen-space information that becomes available in real-time rendering pipelines prior to expensive stages that can benefit from accurate metric predictions. Hence, it presents an ideal fit for ubiquitous deferred shading pipelines, which provide a range of screen-space information via the G-buffer. Outputs of previous frames are also commonly obtained as a byproduct of rendering or at little additional cost through temporal reprojection. The question then becomes which of these resources to choose as inputs for the network to yield high accuracy while keeping the input set compact. We assessed commonly available G-buffer contents and statistically analyzed how influential each is on the prediction of perceptual error metrics. Our reference rendering pipeline uses deferred shading, with cascading shadow maps, screen-space ambient occlusion, fast approximate anti-aliasing, and tone mapping with automatic exposure selection. The pipeline was implemented on top of Falcor (Falcor, 2017) and the network trained on established ORCA scene assets (Amazon Bistro (2017) and Unreal Engine 4's Suntemple (2017)).
We found that directly available information in the G-buffer-such as view-space normals, diffuse color, or roughness--enables reasonable predictions across the entire screen. However, it lacks a myriad of information that otherwise would have to be explicitly encoded, such as lighting, tone mapping, or other effects. As shown in Figure 3, we found the temporal reprojection of final color from previous frames to be a valuable asset (similar to (Sandhi et al., 2018) and (Kang et al., 2018)), as it contains most of this missing information. However, color reprojection is spatially limited to previously seen regions only and thus presents decreasing benefits in use cases with more obstructions, animated scenes or fast-paced camera movement. Figure 3 proves that using temporal reprojection with a quickly changing view or scene does not suffice to produce adequate predictions for the current frame. Hence, a good prediction solution should weight available inputs differently, depending on whether it is predicting for recently seen or newly disoccluded, unseen regions. We assumed (and experimentally confirmed) that the network's prediction quality is highest if reprojected color is paired with a binary mask (seen = 1, unseen = 0).
To quantify the contribution of each input candidate, we used DeepLIFT (Amazon Bistro, 2017; Li et al., 2018) on a model trained on all pre-selected candidate inputs and computed attribution scores on a large validation set from our test suite. Table 1 lists the mean absolute attribution score of each candidate input, as identified on the FLIP metric. As expected, reprojected color contributes the most, but even more so if masked (accepted if previously seen, zero otherwise). The contribution of diffuse material colors is highest for unseen regions. Other inputs are less important, such as emissive material color, for which we found no anecdotal or statistical benefit, or the dot product between the surface normal and the reflection vector, which is redundant if view-space normals are provided directly. Most RGB channels are relatively redundant, with whichever channel being first in the input order becoming the dominant one and representing the majority of the accuracy of the whole group. The only exception was normals, where the Z-axis is always the dominant one. We also found no advantages of training with HDR for RGB input data instead of 8-bit color channels. Using this knowledge, we can derive effective yet compact input data sets. For real-time applications, we propose to use a single 4-channel texture containing the reprojection mask, one RGB channel (any) of the reprojected color, one RGB channel of diffuse material color, and the Z-axis of the view-space normals. This provides a good tradeoff between desired low inference time and prediction quality since these four account for 52.08% of the network's prediction capability, according to DeepLIFT.
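For illustration, assembling this compact input set could look as follows (a minimal numpy sketch; the array names and the choice of the red channel are assumptions, not a fixed convention of our pipeline):

```python
import numpy as np

def pack_inputs(reproj_rgb, seen_mask, diffuse_rgb, view_normals):
    """Stack the recommended 4-channel input (HxW arrays, float32 output)."""
    return np.stack([
        seen_mask.astype(np.float32),    # reprojection mask (seen = 1)
        reproj_rgb[..., 0] * seen_mask,  # one reprojected color channel, masked
        diffuse_rgb[..., 0],             # one diffuse material color channel
        view_normals[..., 2],            # Z-axis of the view-space normals
    ], axis=-1)
```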
\begin{table}
\begin{tabular}{l l c c} \hline \hline Channels & Format & Seen Regions & Unseen \\ \hline Reprojected Color & RGB & 31.37\% & — \\ Reprojection Mask & Bool & 17.26\% & 8.67\% \\ View Normals & RGB & 16.89\% & 33.63\% \\ Diffuse Color & RGB & 14.8\% & 42.59\% \\ View Normal Z & Float & 10.12\% & 20.15\% \\ Shadowing & Float & 7.48\% & 9.36\% \\ Roughness & Float & 5.41\% & 6.77\% \\ Specular Color & RGB & 5.73\% & 10.61\% \\ Reflect Product & Float & 1.06\% & 1.33\% \\ Emissive Color & RGB & 0.01\% & 0.03\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. DeepLIFT contribution of network inputs (FLIP)
Figure 3. Network predictions for FLIP error between the full-resolution reference image and coarser, \(2\times 2\)-shaded versions. Results were obtained from networks trained with different screen-space input sets. The numbers of input channels used are 3, 16, 19 and 4 in (b), (c), (d) and (e), respectively.
### Reparameterization
For a given perceptual metric, its output value distribution can change drastically with different environments and rendering settings. We noticed in our experiments that, for most metrics, the tested scenes produced mostly low output values and only a few very high outliers. Such an unbalanced target distribution might prevent the network from converging to a reasonable solution altogether when trained on arbitrary scenes. In theory, this problem becomes less noticeable the more data and a greater variety of scenes are provided. However, our goal is to provide a solution that can be efficiently trained with a limited training set, as well as arbitrary metrics, scenes, and rendering settings, yet still generalize across them well.
We note that for many real-time applications, high metric prediction accuracy is most relevant within a limited range of values that drive performance-related optimizations, such as render mode selection. Thus, we choose to tackle the data imbalance issue by using a modified parameter space that balances the data distribution while preserving the relevant information in it.
Let \(\mathcal{L}(Y,\hat{Y})\) be a given loss function, where \(\hat{Y}\in[0,1]\) is a set of predicted values in transformed space, and \(Y=f(I,I^{\prime})\in[0,1]\) the corresponding target values. We define a new loss function that measures the difference between predictions \(\hat{Y}\) and targets \(Y\) after transforming them to a new parameter space according to a function \(\mathcal{T}\):
\[\mathcal{L}_{adaptive}(Y,\hat{Y})=\mathcal{L}(\mathcal{T}(Y),\hat{Y}) \tag{2}\]
We then use mean absolute error (MAE) as our \(\mathcal{L}\) loss function:
\[\mathcal{L}_{MAE\ adaptive}(Y,\hat{Y})=\frac{1}{N}\sum_{i=1}^{N}|\mathcal{T}(Y_ {i})-\hat{Y}_{i}| \tag{3}\]
If predictions in non-transformed space are required (e.g., for comparison with perceptual thresholds), they can be obtained as \(\mathcal{T}^{-1}(\hat{Y})\). In the following, we describe our two proposed different reparameterization transforms.
#### 3.3.1. Clamped Transform
A computationally efficient but lossy reparameterization solution is to re-scale the metric so that its output distribution is centered at 0.5, and to clamp outlier values to \([0,1]\)--existing work on HDR imagery shows us this is not unreasonable.
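A sketch of the resulting training loss, assuming a clamped transform of the form \(\mathcal{T}(y)=\min(\max(s\,y,0),1)\), is given below; the scale \(s\) is a dataset-dependent choice, and the default used here (mapping a typical raw metric value of 0.08 to 0.5) is purely illustrative:

```python
import torch

def clamped_transform(y, scale):
    """Clamped reparameterization: re-scale so typical values sit near 0.5,
    then clamp outliers to [0, 1]."""
    return torch.clamp(y * scale, 0.0, 1.0)

def mae_adaptive(y_true, y_pred, scale=0.5 / 0.08):
    """Eq. (3): MAE between network predictions and transformed targets."""
    return torch.mean(torch.abs(clamped_transform(y_true, scale) - y_pred))
```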
post-processing and effects have been applied since the computed errors should capture the perceived visual difference.
For data collection and training, we start by capturing the environments at representative viewpoints at each possible render mode. This is necessary for generating the training and validation targets of any metric that relies on \(I,I^{\prime}\) image pairs. We then compute the metric between each render mode and the reference image obtained without any optimizations active. We also capture the corresponding network input data for each rendered frame, both temporarily reprojected and from the current frame.
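Schematically, the capture loop can be expressed as below; render, capture_inputs and metric stand for the engine-specific parts and are passed in as callables (their names are placeholders, not Falcor API):

```python
def build_dataset(viewpoints, modes, render, capture_inputs, metric):
    """Collect one (inputs, targets) training pair per captured viewpoint."""
    samples = []
    for vp in viewpoints:
        reference = render(vp, "reference")  # no optimizations active
        targets = {m: metric(reference, render(vp, m)) for m in modes}
        samples.append((capture_inputs(vp), targets))
    return samples
```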
### Mode Inference
Rather than predicting render modes directly, we suggest producing a continuous error prediction and perform mode selection based on user-defined thresholds, e.g., the JND threshold, as this allows for greater control by artists and application users alike. Consequently, we can exploit our metric prediction network for this task. We set the layout for our network such that the predicted metric between a render mode and the reference image is computed in a separate output channel for each available mode. We can therefore iterate these channels in order of increasing computational cost and check if any presents a perceptual loss lower than the defined threshold. If no available channel presents an acceptable value for a given tile, we apply the highest quality mode instead:
```
chooseMode(metric, tile)
    foreach mode in increasing cost
        if metric[mode, tile] < threshold
            return mode
    return reference mode
```
### Rate Extrapolation for VRS
Many modern real-time graphics solutions offer support for VRS, which allows selecting different shading rates for individual objects or image regions to economize on expensive fragment shader invocations. Commonly supported shading rates include fundamental squares (\(1\times 1\), \(2\times 2\) and \(4\times 4\)) and rectangles with conforming side lengths. For this particular use case, the metric values for similar shading rates are strongly correlated: similar to Yang et al. (Yang et al., 2017), we can reduce the number of output channels by extrapolating the outputs of multiple channels from just a few. Let \(\hat{Y}_{u\times v}\) be an output channel of the network, where \(u\) and \(v\) are its corresponding horizontal and vertical shading strides, respectively. Let \(k=2.13\) capture the constant relative change in error when switching from a shading rate to its half (e.g., \(2\times 2\to 4\times 4\)), as derived by Yang et al. (Yang et al., 2017). We can approximate lower shading rates from higher ones, allowing for using only two output channels--the network predictions for \(1\times 2\) and \(2\times 1\) shading rates:
\[\hat{Y}_{u\times v}\approx\begin{cases}\max(\hat{Y}_{\frac{u}{2}\times v},\hat{Y}_{u\times\frac{v}{2}})&\text{if }u=v\\ \max(\hat{Y}_{\frac{u}{2}\times\frac{v}{2}}\cdot k,\hat{Y}_{\frac{u}{2}\times v})&\text{if }u>v\\ \max(\hat{Y}_{\frac{u}{2}\times\frac{v}{2}}\cdot k,\hat{Y}_{u\times\frac{v}{2}})&\text{if }u<v\end{cases} \tag{10}\]
The values for shading rate \(2\times 2\) can be extrapolated from \(1\times 2\) and \(2\times 1\). Following Equation 10, \(2\times 4\) can further be obtained from \(1\times 2\) and \(2\times 2\), \(4\times 2\) from \(2\times 1\) and \(2\times 2\), \(4\times 4\) from \(2\times 4\) and \(4\times 2\), and so on. We found that square rates are approximated with higher precision than non-square rates. Thus, in practice, we recommend using 4 output channels (\(1\times 2\), \(2\times 1\), \(2\times 4\), \(4\times 2\)) and extrapolating the others for good quality/performance trade-off.
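In code, the recommended setup then keeps four predicted channels and derives the square rates from them, following Equation (10); the sketch below uses plain Python scalars per tile for clarity:

```python
K = 2.13  # relative error change when halving a shading rate (Yang et al.)

def extrapolate_rates(pred):
    """Extend predictions for (1,2), (2,1), (2,4), (4,2) to the square rates."""
    y = dict(pred)  # maps (u, v) stride tuples to per-tile error predictions
    y[(2, 2)] = max(y[(1, 2)], y[(2, 1)])  # u == v case of Eq. (10)
    y[(4, 4)] = max(y[(2, 4)], y[(4, 2)])
    return y

rates = extrapolate_rates({(1, 2): 0.02, (2, 1): 0.03, (2, 4): 0.05, (4, 2): 0.06})
```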
## 5. Evaluation
We evaluate our approach regarding prediction quality, performance, and robustness. We use the 4-channel input set we recommended in Section 3.2, and randomly captured 12820 viewpoint pairs to simulate camera movement in a total of 8 different scenes. Results were generated on a Windows 10 PC with an i7 CPU @ 3.40GHz, 16GB RAM, and an NVIDIA RTX 2080TI GPU.
### Metric Prediction
To evaluate the network's prediction capability, we trained and tested it with three established error metrics (PSNR, FLIP, and LPIPS), as well as the Weber-corrected variants (JNFLIP and JNYang). Validation was performed for each scene from Section 3.2, using 64 random viewpoint pairs that were withheld during training, as well as on three scenes the network was never trained on: Emerald Square (day/dusk) (Shenik Cathedral, 2017) and Sibenik Cathedral (Shenik, 2017). For the approximation \(I^{\prime}\) of \(I\) that the network should learn, we chose images rendered for the same frames at full resolution (\(I\)) and at four different reduced shading rates (\(I^{\prime}\)).
Table 2 shows the measured statistics per scene for predicting each metric between reference images and their reduced versions on each scene's test set. Its consistently high accuracy, high coefficient of determination, and low variance in each scene's test set indicate that the network generalizes rather well: the model is capable of explaining most of the variance in each metric (high R\({}^{2}\)) without over-fitting to specific scenes or states (visual examples of predictions are provided in Figure 6). We did not find a correlation between triangle/material count and the network's ability to predict perceptual metrics. In fact, the highest prediction accuracy was achieved on the most demanding scene in terms of geometry and the number of unique materials, Emerald Square at dusk, despite the network having only been trained on daylight scenes. The lowest scores were obtained in Bistro (Interior), which can be explained by the large number of specular objects it contains: since light sources are not explicitly encoded in the input, the network struggles to produce accurate predictions in previously unseen regions with specular materials. To test this theory, we created two variations of this scene: one with highly specular chrome materials and one with flat checkerboard textures applied everywhere (see supplemental material). As expected, the prediction quality, as indicated by \(R^{2}\), is lower for the completely specular scene (FLIP: 71%, PSNR: 76%, LPIPS: 64%, JNYang: 70%, JNFLIP: 70%). However, for the same scene with only flat, checkered textures, the opposite is true: prediction quality rises, bringing it closer to the other scenes. The network does not require a large number of training samples to achieve generalization: in our experiments, we found a negligible decrease in test accuracy--0.04%--when an environment is not included as part of the training and found no benefit in using more than \(500-2000\) captured frames on any environment (the exact number depends on the scene size).
Figure 4. Example of the network predicting FLIP on an extremely unbalanced scenario (Beng et al., 2017), containing mostly background and highly reflective surfaces. Training this network to predict this error (b) with traditional losses causes it to underestimate the metric (c,d). Applying our transform on the parameter space remedies the issue (e).
Finally, we recorded the run time for prediction and compared it against reference implementations of the corresponding metrics on the CPU and on the GPU (Python+PyTorch). For our neural network, timings are independent of the metric it was trained on since it does not influence its architecture. Inference with our network took \(0.58\) s / \(2\) ms on CPU/GPU, respectively. It is thus significantly faster than explicitly computing FLIP (\(2.46\) s / \(190\) ms \(\rightarrow\) \(4.24\times\)/\(95.9\times\)) and LPIPS (\(13.6\) s / \(16.4\) ms \(\rightarrow\) \(23.5\times\)/\(8.2\times\)). For the much simpler PSNR, our approach is between \(2\times\) and \(10\times\) slower.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} & \multicolumn{4}{c|}{Suntemple, 606k \(\Delta\), 48 \(\otimes\)} & \multicolumn{4}{c|}{Amazon Bistro (Exterior), 2.8M \(\Delta\), 132 \(\otimes\)} & \multicolumn{4}{c}{Amazon Bistro (Interior), 1M \(\Delta\), 71 \(\otimes\)} \\ \cline{2-13} & \(R^{2}\) & MAE\({}_{total}\) & MAE\({}_{under.}\) & \(\sigma_{MAE}\) & \(R^{2}\) & MAE\({}_{total}\) & MAE\({}_{under.}\) & \(\sigma_{MAE}\) & \(R^{2}\) & MAE\({}_{total}\) & MAE\({}_{under.}\) & \(\sigma_{MAE}\) \\ \hline FLIP & **90\%** & 5.99e-2 & 4.40e-2 & 7.30e-2 & **81\%** & 8.55e-2 & 4.40e-2 & 1.08e-1 & **78\%** & 7.88e-2 & 4.46e-2 & 9.87e-2 \\ PSNR & **92\%** & 3.15e-2 & 1.42e-2 & 3.21e-2 & **82\%** & 4.06e-2 & 1.98e-2 & 4.55e-2 & **80\%** & 3.85e-2 & 1.25e-2 & 4.71e-2 \\ LPIPS & **79\%** & 4.32e-2 & 2.46e-2 & 4.54e-2 & **77\%** & 4.15e-2 & 2.23e-2 & 4.51e-2 & **72\%** & 3.39e-2 & 1.52e-2 & 3.83e-2 \\ JNYang & **87\%** & 7.75e-2 & 5.02e-2 & 1.11e-1 & **84\%** & 7.93e-2 & 4.15e-2 & 1.29e-1 & **78\%** & 8.59e-2 & 4.72e-2 & 1.32e-1 \\ JNFLIP & **88\%** & 7.37e-2 & 4.78e-2 & 9.56e-2 & **82\%** & 9.52e-2 & 4.10e-2 & 8.21e-2 & **79\%** & 9.51e-2 & 4.12e-2 & 8.21e-2 \\ \hline \end{tabular}
\begin{tabular}{c|c c c c|c c c c|c c c c} & \multicolumn{4}{c|}{Emerald Square (Day), 10M \(\Delta\), 220 \(\otimes\)} & \multicolumn{4}{c|}{Emerald Square (Dusk), 10M \(\Delta\), 222 \(\otimes\)} & \multicolumn{4}{c}{Sibenik Cathedral, 75k \(\Delta\), 15 \(\otimes\)} \\ \cline{2-13} & \(R^{2}\) & MAE\({}_{total}\) & MAE\({}_{under.}\) & \(\sigma_{MAE}\) & \(R^{2}\) & MAE\({}_{total}\) & MAE\({}_{under.}\) & \(\sigma_{MAE}\) & \(R^{2}\) & MAE\({}_{total}\) & MAE\({}_{under.}\) & \(\sigma_{MAE}\) \\ \hline FLIP & **94\%** & 4.98e-2 & 2.51e-2 & 7.56e-2 & **94\%** & 6.18e-2 & 1.24e-2 & 6.99e-2 & **91\%** & 4.07e-2 & 2.50e-2 & 6.18e-2 \\ PSNR & **83\%** & 5.45e-2 & 3.22e-2 & 5.72e-2 & **92\%** & 4.95e-2 & 6.4e-3 & 4.30e-2 & **88\%** & 2.95e-2 & 1.15e-2 & 4.05e-2 \\ LPIPS & **80\%** & 4.15e-2 & 2.29e-2 & 4.55e-2 & **81\%** & 3.56e-2 & 1.38e-2 & 3.85e-2 & **70\%** & 3.28e-2 & 2.09e-2 & 3.68e-2 \\ JNYang & **94\%** & 5.03e-2 & 1.83e-2 & 9.50e-2 & **92\%** & 5.15e-2 & 1.55e-2 & 1.04e-1 & **86\%** & 6.61e-2 & 2.34e-2 & 9.61e-2 \\ JNFLIP & **90\%** & 7.30e-2 & 2.37e-2 & 8.15e-2 & **82\%** & 9.62e-2 & 0.61e-2 & 1.53e-1 & **81\%** & 9.18e-2 & 1.06e-2 & 9.17e-2 \\ \hline \end{tabular}
\end{table}
Table 2. Prediction quality on test sets across six different scenes. The network has only been trained on the first three scenes (Suntemple and the two Amazon Bistro scenes). For each scene, we give the number of triangles (\(\Delta\)), unique materials (\(\otimes\)), and the achieved \(R^{2}\) score (coefficient of determination), mean average error (MAE, i.e., the discrepancy between measured and predicted perceptual metric, both total and underestimation only) and variance (\(\sigma_{MAE}\)) of the total MAE.
Figure 5. Proposed rendering mode selection pipeline for content-adaptive shading with VRS. Inputs are provided to a network that has been trained to predict a perceptual metric, with Weber correction and reparameterization applied. At run-time, the network predicts the implied image error for selecting different shading rates. Based on these predictions and a user-defined threshold, the final shading rate (\(\blacksquare\) full, \(\blacksquare\) fine, \(\blacksquare\) medium, \(\blacksquare\) coarse) is selected for each image region.
### Content-Adaptive Shading Application
To assess a real-time use case, we implemented content-adaptive deferred shading in Falcor (Falcor, 2017) using our network, trained on JNYang and running on \(16\times 16\) tiles at \(1080p\) resolution. We load the network into TensorRT and provide it with G-buffer texture inputs in Falcor directly. For comprehensive results, we ran the performance evaluation on five scenes that exhibit varying complexity in terms of geometry and materials: Suntemple, Bistro (Exterior), and the regular/specular/checkered Bistro (Interior). Frame times of our approach were compared with full-rate shading and a state-of-the-art VRS method (Sandes, 2018). We considered two types of camera motion between frames: slow (resulting in 14% previously unseen pixels per frame on average) and fast (31% unseen on average), and evaluated 15 corresponding viewpoint pairs per scene and speed.
Inference with TensorRT requires a constant \(\approx 2.3\) ms per frame. For our approach to provide a benefit, it must amortize this overhead, which can only occur under appreciable fragment shader load. To simulate a pipeline comparable to interactive graphics applications (e.g., AAA video game titles), we created a synthetic load (50:1 arithmetic to memory) in the deferred fragment shader to bound full-rate shading performance to 60 FPS. In combination with our network's prediction, GPU hardware support for VRS yields a considerable performance gain across the board. For a slow-/fast-moving camera between frames, we achieved a 1.12/1.14\(\times\) speedup for Suntemple, 1.17/1.18\(\times\) for Bistro (Exterior), and 1.42/1.41\(\times\) for the regular Bistro (Interior). The purely specular and checkered versions of the latter performed slightly better (1.5/1.54\(\times\) and 1.48/1.52\(\times\), respectively): in both cases, this can be explained by the reduction of sharp features and high-frequency visual details in the scene, which enables the network to choose lower shading rates. In summary, VRS using our network reduced average frame times by at least 10% compared to full-rate shading in all examined scenarios. The relative performance gain is boosted by the reduction of high-frequency features, permitting the use of lower shading rates.
Figure 6. Examples of our network predicting metrics in tested scenes. Black in the center column indicates unseen regions in the current frame. All metrics performed similarly across tested scenes, with no obvious outliers or catastrophic failures.
For comparison with Yang et al. (Yang et al., 2017), we used the same setup and configured the synthetic load so to have their approach match the target frame rate. Since their base overhead is significantly lower than our network's inference time, our method trails behind Yang et al.'s at 60 FPS with slow camera motion on static scenes (52.1 FPS on average across all scenes \(\rightarrow\) 0.87\(\times\)). For fast camera motion, however, our method performs better (1.03\(\times\)) due to its ability to predict and use lower shading rates in unseen regions, rather than defaulting to full resolution. Using an even heavier load (30 FPS target), our method prevails as soon as camera motion occurs (1.11\(\times\) at slow, and 2.16\(\times\) at fast motion). Hence, even given the early state of dedicated GPU inferencing hardware, our learning-based approach can provide clear benefits in such demanding scenarios.
### Limitations
The key purpose of our approach is to enable optimizations in real-time applications by predicting the--otherwise expensive--pixel shading result. This naturally impedes its ability to account for factors that are unknown prior to pixel shading. We circumvent this issue by relying on reprojection and G-buffer data, the latter of which may not contain all information affecting the final color generation (e.g., light source position, cf. Figure 3). Hence, similar to other state-of-the-art methods (Yang et al., 2017), the network is bound to make assumptions about such effects based on previously seen regions. If an effect cannot be predicted from G-buffer data alone, it may only react to it in the next frame, when its reprojection becomes available. This includes temporal inconsistencies in the scene (e.g., sudden disocclusion of a strong light source), reflections, and modification of rendering settings or post-processing effects (Figure 7). However, in this paper, we have shown that our approach can be trained to discard reprojected color and substitute information derived from G-buffer data instead. Hence, it may be trained to adapt to sudden changes immediately. For instance, in the case of a disoccluded light source, this could be achieved by providing additional input tracking changes in the binary screen-space shadowing information between frames. For more complex effects, like reflections or fog, more sophisticated solutions may be needed to provide suitable, inexpensive approximations of the required information to the network. The decision of trading a single-frame delay of predicted effects for larger input sets should then depend on the user's expected attention to them. Future work may explore under which circumstances reprojection may be omitted and instead replaced by additional, equally expressive encodings or estimates of important scene features, such as light sources and reflections. Tackling this challenge would come with the advantage of providing a unified solution for both seen and unseen regions.
Although the achieved performance in real-time applications is acceptable with our approach, it incurs an overhead that limits its applicability. For slow-moving changes, selective reuse of predictions could significantly alleviate this issue, which we aim to pursue in future work. In our proof-of-concept, the naive screen-space reprojection used is not precise, which can sometimes cause our network to hallucinate duplicates of thin objects due to inconsistencies between material and reprojection data (Figure 7). This could be improved upon by using state-of-the-art, non-screen-space reprojection.
## 6. Conclusion
In this paper, we have presented a learning-based method for training networks that predict perceptual metrics. The proposed network architecture is compact enough to make predictions with high accuracy in real time, without relying on a reference or rendered image. We have shown how to tackle common machine learning problems, such as unbalanced training data, with specialized solutions for our task that anticipate the eventual real-time applications. Furthermore, we have shown how the concept
Figure 7. (a,b) Reliance on reprojection can cause the network to react to sudden lighting changes in previously seen regions with a delay of one frame. (c,d) Changing tone-mapping method also does not result in immediate different predictions. (e,f) Incorrect previous frame reprojections can cause our network to hallucinate duplicated objects due to surface information mismatch.
of visually-based decision-making with just-noticeable differences can be directly integrated into the learning process.
Our solution can be used to predict various metrics and generalizes well to new scenes and applications. By exploiting recent advances in GPU hardware, inference can be performed in real-time, thus opening the door for new uses of visual error metrics in real-time rendering applications. Our exemplary content-adaptive shading setup shows that, while direct execution of our network per-frame may not always be expedient, visually-based decision-making can already be performed at highly interactive frame rates. Hence, applications with very demanding shading or only occasional prediction that is amortized over time are likely to benefit from our solution. Furthermore, it is safe to assume that future hardware generations will significantly improve upon neural network inference speed. To enable experimentation and research of such applications, we have published our full codebase for capturing, learning, and applying relevant metrics to 3D scenes and used datasets at jailiborc.github.io/rt-percept.
|
2302.06007 | Maximum mass and stability of differentially rotating neutron stars | We present our study of stability of differentially rotating, axisymmetric
neutron stars described by a polytropic equation of state with $\Gamma = 2$. We
focus on quasi-toroidal solutions with a degree of differential rotation
$\widetilde A=1$. Our results show that for a wide range of parameters
hypermassive, quasi-toroidal neutron stars are dynamically stable against
quasi-radial perturbations, which may have implications for newly born neutron
stars and binary neutron star mergers. | Paweł Szewczyk, Dorota Gondek-Rosińska, Pablo Cerdá-Durán | 2023-02-12T21:30:13Z | http://arxiv.org/abs/2302.06007v1 | # Maximum mass and stability of differentially rotating neutron stars
###### Abstract
We present our study of stability of differentially rotating, axisymmetric neutron stars described by a polytropic equation of state with \(\Gamma=2\). We focus on quasi-toroidal solutions with a degree of differential rotation \(\widetilde{A}=1\). Our results show that for a wide range of parameters hypermassive, quasi-toroidal neutron stars are dynamically stable against quasi-radial perturbations, which may have implications for newly born neutron stars and binary neutron star mergers.
## 1 Introduction
Differential rotation seems to appear naturally in many dynamical scenarios involving neutron stars (NS), including the collapse of stellar cores (see e.g. [23]) and binary neutron star (BNS) mergers (see e.g. [13]). Its stabilizing effect may allow for configurations with masses significantly higher than the mass limit for rigidly rotating neutron stars. Its study is relevant for the understanding of black hole formation in those astrophysical scenarios with consequences in observations of core-collapse supernovae (CCSN) and BNS mergers, especially with current gravitational wave ground-based observatories (LIGO, Virgo and Kagra [15, 1, 12]) and future ones (the Einstein Telescope and Cosmic Explorer).
### Equilibrium models of differentially rotating NS
The solution space of differentially rotating neutron stars in equilibrium was already extensively studied by different authors. It was shown by [3]
that differentially rotating NS with masses significantly larger than those of non-rotating or rigidly rotating NS can exist and be stable against radial collapse and bar formation. Those with masses larger than the limit for rigidly rotating objects are called hypermassive NS.
However, studying the whole solution space has proven to be numerically challenging. The existence of different types of solutions of differentially rotating neutron stars was first found by [2] using the highly accurate and stable relativistic multi-domain spectral code FlatStar. Most importantly, for a given degree of differential rotation, the solution is not uniquely determined by the maximal density and angular momentum of the NS (or any other suitable pair of parameters), as is the case for rigid rotation. Instead, different types of solutions may coexist for the same parameters. The maximum mass for different degrees of differential rotation and different solution types was presented by [10] and [19] for polytropes, showing that the most massive configurations are obtained for a modest degree of differential rotation. Similar results were obtained for strange quark stars [20] and NS with several realistic equations of state [7].
While many past studies used the rotation law of Komatsu, Eriguchi, and Hachisu [14], which is mainly consistent with CCSN remnants [23], the rotation law observed in simulations of BNS merger remnants departs significantly from it. Rotation laws better suited for BNS mergers have been proposed by [22], and their impact on the solution space of equilibrium models was studied by [11].
### Stability properties of hypermassive NS
For non-rotating NS the limit for both secular and dynamical stability occurs at the point of maximal mass (\(M_{TOV}\)). This criterion can be, to some extent, extended to rigidly rotating NS. The so-called turning-point criterion was presented by Friedman, Ipser, and Sorkin [9] and proven to be a sufficient criterion for instability. It states that the point of maximal gravitational mass \(M\) on a sequence of configurations of fixed angular momentum \(J\) (\(J\)-constant turning points), or, alternatively, the point of minimal gravitational mass on a sequence of fixed rest mass \(M_{0}\) (\(M_{0}\)-constant turning points), marks the onset of instability. This criterion, however, does not give the exact threshold to collapse. The neutral-stability point, where the F-mode frequency vanishes, differs from the turning-point line [21]. Numerical simulations confirm that the neutral-stability line marks the threshold to prompt collapse.
For rigidly rotating NS, the \(J\)-constant turning points coincide with the \(M_{0}\)-constant turning points, but this is no longer the case for differential rotation of a given degree. While other authors usually refer to the former,
in this paper we use the latter, as we find it to be a closer estimate of the stability threshold.
On secular timescales, differential rotation transforms into rigid rotation due to effects of viscosity and magnetic braking [16, 6, 17]. By definition, hypermassive NS have masses that cannot be supported by rigid rotation alone. This may eventually lead to a delayed collapse and delayed emission of gravitational waves. There is no clear criterion of dynamical stability for hypermassive NS to tell if the collapse will be prompt or delayed.
Various authors have studied the stability properties of differentially rotating NS by means of numerical simulations. An example of hypermassive NS dynamically stable to both radial instabilities and bar formation was presented by [3]. In [24] the authors explore the limit of stability to quasi-radial oscillations for differentially rotating NS, excluding quasi-toroidal configurations. The threshold to collapse proves to be close to the (\(J\)-constant) turning-point line, which is still a valid sufficient criterion of dynamical instability. A caveat for the large masses supported in many of these works is that they may be subject to non-axisymmetric corotational instabilities (usually known as low-\(T/|W|\) instabilities, see e.g. [18]) that are able to efficiently transport angular momentum and erase differential rotation. Although there is no clear criterion for the onset of this instability, all cases of NS with quasi-toroidal shape studied in the literature (e.g. [8]) have shown the dynamical growth of these instabilities.
## 2 Equilibrium models
We consider axisymmetric, stationary configurations of rotating fluid in cylindrical coordinates \((t,\rho,z,\phi)\). The configurations we study are highly flattened, so cylindrical coordinates are more practical than spherical ones. The line element associated with such a configuration may be written in the form:
\[ds^{2}=-e^{2\nu}dt^{2}+e^{2\mu}(d\rho^{2}+dz^{2})+W^{2}e^{-2\nu}(d\phi-\omega dt )^{2}, \tag{1}\]
with four metric potentials \(\mu,W,\nu,\omega\), which are functions of \(\rho\) and \(z\) only, due to symmetry.
In general, the properties of matter are defined by the equation of state (EOS). Here we use a polytropic EOS with \(\Gamma=2\) (or \(N=1\) in alternate notation) which yields a relation between total mass-energy density \(\epsilon\) and pressure \(p\):
\[\epsilon(p)=p+\sqrt{\frac{p}{K}}, \tag{2}\]
where \(K\) is the polytropic constant.
We often use a dimensionless value of the relativistic enthalpy as the main thermodynamical parameter, which in the case of a polytrope with \(\Gamma=2\) can be expressed as:
\[H=\log(1+2K\epsilon_{B}), \tag{3}\]
where \(\epsilon_{B}\) is the rest-mass density.
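For concreteness, the following sketch (ours, not from the paper) evaluates the \(\Gamma=2\) polytropic relations above; the value of \(K\) is an arbitrary placeholder. It also cross-checks equation (2) against the polytropic pressure relation \(p=K\epsilon_{B}^{2}\), under which \(\epsilon=p+\epsilon_{B}\).

```python
import math

K = 100.0  # polytropic constant (placeholder value, not from the paper)

def total_energy_density(p):
    """Equation (2): total mass-energy density for the Gamma = 2 polytrope."""
    return p + math.sqrt(p / K)

def enthalpy(eps_B):
    """Equation (3): dimensionless relativistic enthalpy from rest-mass density."""
    return math.log(1.0 + 2.0 * K * eps_B)

# For a Gamma = 2 polytrope, p = K * eps_B**2, so eps = p + eps_B.
eps_B = 1.0e-3
p = K * eps_B**2
print(total_energy_density(p), p + eps_B)  # the two values agree
print("H =", enthalpy(eps_B))
```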
To describe the differential rotation profile, one needs to specify the rotation law. For the fluid four-velocity \(u^{\alpha}\) and angular velocity \(\Omega\), it can be defined as a function \(u^{t}u_{\phi}=F(\Omega)\). Here we use the J-constant rotation law [9]:
\[F(\Omega)=A^{2}(\Omega_{c}-\Omega), \tag{4}\]
with \(\Omega_{c}\) being the angular velocity on the rotation axis and \(A\) a constant describing the steepness of the profile. This produces a rotation profile consistent with remnants of CCSN. As a dimensionless measure of the degree of differential rotation, we use the value of \(\widetilde{A}=\frac{r_{e}}{A}\), with \(r_{e}\) being the stellar radius on the equatorial plane.
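To build some intuition for this rotation law, note that in the Newtonian limit \(u^{t}u_{\phi}\to\Omega\varpi^{2}\) (with \(\varpi\) the cylindrical radius), so equation (4) gives the explicit profile \(\Omega(\varpi)=A^{2}\Omega_{c}/(\varpi^{2}+A^{2})\). The sketch below is an illustrative simplification on our part (the paper's computations are fully relativistic), evaluated for \(\widetilde{A}=1\).

```python
def omega_newtonian(varpi, omega_c, A):
    """J-constant rotation profile in the Newtonian limit:
    Omega * varpi**2 = A**2 * (Omega_c - Omega)  implies
    Omega(varpi) = A**2 * Omega_c / (varpi**2 + A**2)."""
    return A**2 * omega_c / (varpi**2 + A**2)

# Degree of differential rotation A~ = r_e / A; for A~ = 1 (this paper),
# A equals the equatorial radius r_e (units are arbitrary here).
r_e, A_tilde = 1.0, 1.0
A = r_e / A_tilde
for varpi in (0.0, 0.5, 1.0):
    print(varpi, omega_newtonian(varpi, omega_c=1.0, A=A))
# At the equator the angular velocity has dropped to half its axis value.
```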
To construct the initial data we solve the relativistic field equations for the four metric potentials on the \((\rho,z)\) plane using the highly accurate code FlatStar. It uses an efficient multi-domain spectral method to construct equilibrium models of rotating compact objects. For more technical details see [2] and the appendix of [10].
In this paper, we focus on the degree of differential rotation \(\widetilde{A}=1\). According to the classification of [2], all configurations studied here are of type C. Our main interest lies in quasi-toroidal configurations, which produce the largest masses and have not been extensively studied by other authors. Figure 1 shows the relativistic enthalpy profile of one such configuration. The maximal value, \(H_{\max}\), is not at the center of the star, hence the classification as quasi-toroidal. We select 5 constant-rest-mass sequences of configurations close to the stability limit estimated by the turning-point criterion.
## 3 Stability against quasi-radial instabilities
To test the stability of the selected configurations, we perform relativistic axisymmetric hydrodynamical simulations using the CoCoNuT code [4], which uses the conformally flat approximation (CFC). In the 3+1 split, the line element reads:
\[ds^{2}=-\alpha^{2}dt^{2}+\gamma_{ij}(dx^{i}+\beta^{i}dt)(dx^{j}+\beta^{j}dt), \tag{5}\]
where \(\gamma_{ij}=\Phi^{4}\delta_{ij}\) in the CFC approximation, with the conformal factor \(\Phi^{6}=e^{2\mu-\nu}\) (using variables from equation 1). The accuracy of CFC was
tested, for example, by [5], showing that the astrophysical properties (such as mass) are reconstructed with only a small discrepancy of less than 5%.
To induce the collapse in unstable configurations, we introduce a radial perturbation in the form of an additional velocity component. The amplitude of this perturbation is carefully chosen to make the mean value of the maximal density match the initial value in stable solutions (for unstable solutions we extrapolate the amplitude from the stable region). We test our results by comparing them with the results of [24] and [21], finding them to be in agreement.
For all our models we inspect the evolution of the maximal density in time. For stable configurations we see the density oscillating around the initial value. Unstable configurations show an exponential growth of the maximal density in the first few ms, marking a prompt collapse to a BH. Figure 2 shows stable and unstable configurations on the \(H_{\rm max}\)-\(M\) plane.
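A toy version of this diagnostic (our illustration, not the actual analysis pipeline; the threshold value is a placeholder) is to fit the logarithmic slope of the maximal density and flag exponential growth:

```python
import numpy as np

def classify_stability(t, rho_max, growth_threshold=1.0):
    """Toy diagnostic: fit a line to log(rho_max) over time (t in ms).
    A large positive slope signals exponential density growth, i.e.
    prompt collapse; a small slope means the model just oscillates."""
    slope, _ = np.polyfit(t, np.log(rho_max), 1)
    return ("unstable" if slope > growth_threshold else "stable"), slope

t = np.linspace(0.0, 5.0, 200)                    # time in ms
stable_run = 1.0 + 0.02 * np.sin(2 * np.pi * t)   # oscillation around initial value
unstable_run = np.exp(2.0 * t)                    # exponential growth -> collapse
print(classify_stability(t, stable_run))
print(classify_stability(t, unstable_run))
```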
## 4 Summary
We have selected a sample of quasi-toroidal configurations of neutron stars with a polytropic equation of state. We used a polytrope with \(\Gamma=2\) and the J-constant rotation law. By performing numerical relativistic hydrodynamical evolutions, we tested the stability of the selected equilibria against axisymmetric (quasi-radial) perturbations. We show that differential rotation allows the existence of dynamically stable models with masses almost
Figure 1: Example of a quasi-toroidal initial configuration (\(H_{\rm max}=0.26\), \(M_{0}=0.33\)). Color coded, the relativistic enthalpy \(H\) in a meridional cross section. The maximal value \(H_{\rm max}\) is found far off the rotation axis (\(y=0\)).
twice as massive as \(M_{\rm TOV}\). These stable configurations, if formed during CCSN or BNS mergers, may undergo a significantly delayed collapse to a black hole. Further study is needed to inspect the stability properties against non-axisymmetric perturbations on dynamical timescales.
## 5 Acknowledgments
This work was partially supported by the Polish National Science Centre grants No. 2017/26/M/ST9/00978 and 2022/45/N/ST9/04115, by POMOST/2012-6/11 Program of Foundation for Polish Science co-financed by the European Union within the European Regional Development Fund, by the Spanish Agencia Estatal de Investigacion (Grants No. PGC2018-095984-B-I00 and
Figure 2: Simulated configurations, divided into stable (green marks) and unstable (red marks) against quasi-radial perturbations. The blue dashed line shows the line of (\(M_{0}\)-constant) turning points, a first estimate of the stability limit. The orange dashed line marks the boundary between spheroidal and quasi-toroidal configurations. The mass limit for this degree of differential rotation, the limit for rigid rotation and the sequence of non-rotating NS are shown for reference.
PID2021-125485NB-C21) funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe, by the Generalitat Valenciana (PROMETEO/2019/071), and by COST Actions CA16104 and CA16214.
|
2308.13337 | Evidence of the Coulomb gap in the density of states of MoS$_2$ | $\mathrm{MoS_2}$ is an emergent van der Waals material that shows promising
prospects in semiconductor industry and optoelectronic applications. However,
its electronic properties are not yet fully understood. In particular, the
nature of the insulating state at low carrier density deserves further
investigation, as it is important for fundamental research and applications. In
this study, we investigate the insulating state of a dual-gated exfoliated
bilayer $\mathrm{MoS_2}$ field-effect transistor by performing magnetotransport
experiments. We observe positive and non-saturating magnetoresistance, in a
regime where only one band contributes to electron transport. At low electron
density ($\sim 1.4\times 10^{12}~\mathrm{cm^{-2}}$) and a perpendicular
magnetic field of 7 Tesla, the resistance exceeds by more than one order of
magnitude the zero field resistance and exponentially drops with increasing
temperature. We attribute this observation to strong electron localization.
Both temperature and magnetic field dependence can, at least qualitatively, be
described by the Efros-Shklovskii law, predicting the formation of a Coulomb
gap in the density of states due to Coulomb interactions. However, the
localization length obtained from fitting the temperature dependence exceeds by
more than one order of magnitude the one obtained from the magnetic field
dependence. We attribute this discrepancy to the presence of a nearby metallic
gate, which provides electrostatic screening and thus reduces long-range
Coulomb interactions. The result of our study suggests that the insulating
state of $\mathrm{MoS_2}$ originates from a combination of disorder-driven
electron localization and Coulomb interactions. | Michele Masseroni, Tingyu Qu, Takashi Taniguchi, Kenji Watanabe, Thomas Ihn, Klaus Ensslin | 2023-08-25T12:14:15Z | http://arxiv.org/abs/2308.13337v2 | # Evidence of the Coulomb gap in the density of states of MoS\({}_{2}\)
###### Abstract
MoS\({}_{2}\) is an emergent van der Waals material that shows promising prospects in semiconductor industry and optoelectronic applications. However, its electronic properties are not yet fully understood. In particular, the nature of the insulating state at low carrier density deserves further investigation, as it is important for fundamental research and applications. In this study, we investigate the insulating state of a dual-gated exfoliated bilayer MoS\({}_{2}\) field-effect transistor by performing magnetotransport experiments. We observe positive and non-saturating magnetoresistance, in a regime where only one band contributes to electron transport. At low electron density (\(\sim 1.4\times 10^{12}\,\mathrm{cm}^{-2}\)) and a perpendicular magnetic field of 7 Tesla, the resistance exceeds by more than one order of magnitude the zero field resistance and exponentially drops with increasing temperature. We attribute this observation to strong electron localization. Both temperature and magnetic field dependence can, at least qualitatively, be described by the Efros-Shklovskii law, predicting the formation of a Coulomb gap in the density of states due to Coulomb interactions. However, the localization length obtained from fitting the temperature dependence exceeds by more than one order of magnitude the one obtained from the magnetic field dependence. We attribute this discrepancy to the presence of a nearby metallic gate, which provides electrostatic screening and thus reduces long-range Coulomb interactions. The result of our study suggests that the insulating state of MoS\({}_{2}\) originates from a combination of disorder-driven electron localization and Coulomb interactions.
## I Introduction
The resistivity \(\rho\) of some semiconductors shows a metal-insulator transition as a function of the electron density \(n\) [1]. For densities larger than a critical value \(n_{\mathrm{c}}\) the resistivity shows a metallic temperature dependence (\(\mathrm{d}\rho/\mathrm{d}T>0\)), while below \(n_{\mathrm{c}}\) it shows an insulating temperature dependence (\(\mathrm{d}\rho/\mathrm{d}T<0\)). This metal-insulator transition attracted great interest in the late 1990s [2; 3; 4; 5]. In two-dimensional (2D) semiconductors the origin of the metallic phase is controversial [6; 7; 8], as it was predicted that any amount of defects would inexorably lead to electron localization at zero temperature in a 2D system [9; 10]. The insulating phase at low densities can be due either to intriguing correlated states, like Wigner crystals [11], or to disorder-induced electron localization [5], or to a combination of the two effects.
In highly disordered systems, charge transport at low temperatures occurs via electron hopping between localized states [12], known as variable-range hopping (VRH). The conductivity in hopping transport at zero magnetic field is usually described by an exponential dependence on the temperature of the form
\[\sigma(T)\propto\exp\left[-\left(\frac{T_{0}}{T}\right)^{p}\right],\]
where \(T_{0}\) and \(p\leq 1\) are constants that depend on the hopping mechanism. In a non-interacting system, the density of states close to the Fermi energy is constant (but finite) and the conductivity is described by Mott's law [13], for which \(p=1/3\) (for two-dimensional systems). When electrons are strongly localized, the long-range Coulomb potential is not efficiently screened. Electron correlations result in a Coulomb gap in the density of states close to the Fermi energy [14; 15]. The modified density of states changes the temperature dependence of the hopping conductivity, which is now characterized by the parameter \(p=1/2\), as described by the Efros-Shklovskii (ES) theory [16].
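A standard way to extract the exponent \(p\) from data, and hence to distinguish Mott from ES hopping, is the reduced-activation-energy analysis: for the law above, \(W(T)=\mathrm{d}\ln\sigma/\mathrm{d}\ln T=p\,(T_{0}/T)^{p}\), so \(\ln W\) versus \(\ln T\) is a straight line of slope \(-p\). The following sketch uses synthetic data with placeholder parameters (not from any experiment discussed here) to illustrate the procedure.

```python
import numpy as np

def hopping_sigma(T, T0, p, sigma0=1.0):
    """VRH conductivity sigma = sigma0 * exp(-(T0/T)**p):
    p = 1/3 is Mott's law in 2D, p = 1/2 is the Efros-Shklovskii law."""
    return sigma0 * np.exp(-(T0 / T) ** p)

# For VRH, W(T) = d ln(sigma) / d ln(T) = p * (T0/T)**p,
# so ln W vs ln T is a straight line with slope -p.
T = np.logspace(-0.5, 1.5, 400)            # synthetic temperature grid (K)
sigma = hopping_sigma(T, T0=100.0, p=0.5)  # ES law with a placeholder T0

lnT, lnsig = np.log(T), np.log(sigma)
W = np.gradient(lnsig, lnT)
slope, _ = np.polyfit(lnT, np.log(W), 1)
print("recovered exponent p =", -slope)    # ~0.5 -> Coulomb gap (ES)
```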
The insulating phase of MoS\({}_{2}\) has been experimentally studied in monolayers [17] and multilayers [18; 19], where both thermally activated transport at intermediate temperatures (\(T\sim 50\,\mathrm{K}\) to \(100\,\mathrm{K}\)) and Mott VRH transport at lower temperatures have been observed. In addition, it is expected that electron-electron interactions play a major role in determining the electronic properties due to the large electron effective mass [\(m^{*}\approx(0.4-0.6)m_{0}\)] of MoS\({}_{2}\), especially at low densities. Indeed, signatures of interaction effects have already been reported in the literature [20; 21], among which there was also the observation of a Wigner crystal in MoSe\({}_{2}\)[22]. Therefore, MoS\({}_{2}\), and in general, semiconducting transition metal dichalcogenides (TMDs), are good candidates for the observation of the Coulomb gap in the density of states. However, the observation of interaction effects is restricted to low densities, where the Coulomb energy dominates over the kinetic energy of electrons. Transport experiments in this density range are challenging in most materials and require low defect densities [23]. The observation of the Coulomb gap in MoS\({}_{2}\) remains to date elusive [19] due to the large density of intrinsic defects.
Here, we investigate magnetotransport in bilayer MoS\({}_{2}\) encapsulated in hexagonal boron nitride (hBN). We first |
2303.11652 | Boundedness of $p$-adic Hardy-Hilbert type integral operator on Block
spaces | In this paper, we estimate an operator norm of dilation operators on block
spaces ($\mathfrak{B}_{r,\alpha}(\mathbb{Q}_p)$) over $p$-adic field. With this
estimate, we establish the boundedness of $p$-adic Hardy-Hilbert type integral
operator on $\mathfrak{B}_{r,\alpha}(\mathbb{Q}_p)$. Moreover as application to
our result, we obtain the $p$-adic Hilbert inequality, $p$-adic Hardy
inequality and $p$-adic Hardy-Littlewood-P\'olya inequality on
$\mathfrak{B}_{r,\alpha}(\mathbb{Q}_p)$. | Salman Ashraf | 2023-03-21T07:53:10Z | http://arxiv.org/abs/2303.11652v1 | # Boundedness of \(p\)-adic Hardy-Hilbert type integral operator on block spaces
###### Abstract.
In this paper, we estimate the operator norm of dilation operators on block spaces \((\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}))\) over the \(p\)-adic field. With this estimate, we establish the boundedness of the \(p\)-adic Hardy-Hilbert type integral operator on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\). Moreover, as an application of our result, we obtain the \(p\)-adic Hilbert inequality, the \(p\)-adic Hardy inequality and the \(p\)-adic Hardy-Littlewood-Pólya inequality on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\).
Key words and phrases:\(p\)-adic field, \(p\)-adic Hardy-Hilbert type integral operator, Block space, Morrey space
## 1. Introduction
The classical block space is a generalization of the Lebesgue space. Zorko [44] introduced block spaces and proved that the block space is the predual of the classical Morrey space. The blocks considered by Zorko [44] have mean zero; later, in [9], the authors proved that the mean-zero condition on blocks can be omitted while still obtaining a description of the block spaces as a predual of Morrey spaces. The reader is referred to [34, 43] for some recent developments of Morrey spaces and their related function spaces on \(\mathbb{R}^{n}\).
The Hardy-Littlewood-Pólya inequality for the Lebesgue space \(L^{q}(\mathbb{R}_{+})\) was established in [22]; it unifies several important results in analysis, such as the Hardy inequality and the Hilbert inequality. Also, the Hardy-Littlewood-Pólya inequality for block spaces on \(\mathbb{R}_{+}\) was established in [24]. One can refer to [19, 26, 41, 42] for a detailed study of the Hardy inequality, the Hilbert inequality and related topics.
The main aim of this paper is to establish the boundedness of the \(p\)-adic Hardy-Hilbert type integral operator on block spaces over the \(p\)-adic field. As consequences, we establish the \(p\)-adic Hardy inequality, the \(p\)-adic Hilbert inequality and the \(p\)-adic Hardy-Littlewood-Pólya inequality for block spaces over the \(p\)-adic field. In [23], K. P. Ho introduced and discussed some fundamental properties of block spaces over locally compact Vilenkin groups. He also obtained the boundedness of the Hardy-Littlewood maximal function on block spaces over locally compact Vilenkin groups. Since the additive group of the \(p\)-adic field is an example of a locally compact Vilenkin group, in Section 2 we define block spaces over the \(p\)-adic field following the ideas in [23, 27].
In 2020, Huabing Li and Jianjun Jin [28] introduced and studied the \(p\)-adic Hardy-Hilbert type integral operator. It should be pointed out that the \(p\)-adic Hardy-Hilbert type integral operator includes many classical operators in \(p\)-adic harmonic analysis, such as the \(p\)-adic Hardy operator, the \(p\)-adic Hilbert operator and the \(p\)-adic Hardy-Littlewood-Pólya operator. Let us now recall the definition of the \(p\)-adic Hardy-Hilbert type integral operator.
Let \(f\) be a nonnegative integrable function on \(\mathbb{Q}_{p}\), and let \(\mathcal{K}:\mathbb{R}_{+}\times\mathbb{R}_{+}\to[0,\infty)\) be a homogeneous function of degree \(-1\), that is, \(\mathcal{K}(\xi x,\xi y)=\xi^{-1}\mathcal{K}(x,y)\) for any \(\xi>0.\) Then the \(p\)-adic Hardy-Hilbert type integral operator with kernel \(\mathcal{K}\) is defined by
\[\mathscr{T}^{p}f(x)=\int\limits_{\mathbb{Q}_{p}^{*}}\mathcal{K}(|x|_{p},|y|_{p} )f(y)dy,\ x\in\mathbb{Q}_{p}^{*}. \tag{1.1}\]
Let us now see that for some special cases of the kernel \(\mathcal{K}\), \(\mathscr{T}^{p}\) reduces to some important operators in \(p\)-adic analysis; a small numerical sketch of these reductions follows the list below.
1. Let us choose \[\mathcal{K}(|x|_{p},|y|_{p})=\frac{1}{|x|_{p}\,+\,|y|_{p}},\] then we have \(p\)-adic Hilbert operator defined by (1.2) \[H^{p}f(x)=\int\limits_{\mathbb{Q}_{p}^{*}}\frac{f(y)}{|x|_{p}\,+\,|y|_{p}}dy,\ x\in \mathbb{Q}_{p}^{*}.\]
2. Let us choose \[\mathcal{K}(|x|_{p},|y|_{p})=|x|_{p}^{-1}\Phi_{E}(|y|_{p}),\] where \(\Phi_{E}\) is the characteristic function of \(E=\{y\in\mathbb{Q}_{p}:|y|_{p}\leq|x|_{p}\}\), then we have \(p\)-adic Hardy operator (1.3) \[\mathcal{H}^{p}f(x)=\frac{1}{|x|_{p}}\,\int\limits_{|y|_{p}\leq|x|_{p}}f(y)dy, \ x\in\mathbb{Q}_{p}^{*}.\]
3. By choosing \[\mathcal{K}(|x|_{p},|y|_{p})=\frac{(|x|_{p}|y|_{p})^{\frac{\lambda}{2}}}{ \max\{|x|_{p},|y|_{p}\}^{\lambda+1}},\ \lambda\geq 0,\] we obtain the operator (1.4) \[\mathscr{D}^{p}f(x)=\int\limits_{\mathbb{Q}_{p}^{*}}\frac{(|x|_{p}|y|_{p})^{ \frac{\lambda}{2}}}{\max\{|x|_{p},|y|_{p}\}^{\lambda+1}}f(y)dy,\ x\in\mathbb{Q }_{p}^{*}.\] Observe that for \(\lambda=0,\ \mathscr{D}^{p}\) reduces to \(p\)-adic Hardy-Littlewood-Polya operator defined by \[\mathscr{P}^{p}f(x)=\int\limits_{\mathbb{Q}_{p}^{*}}\frac{f(y)}{\max\{|x|_{p},|y|_{p}\}}dy,\ x\in\mathbb{Q}_{p}^{*}.\]
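To make these reductions concrete, note that for a radial function \(f(y)=\varphi(|y|_{p})\) the integral over \(\mathbb{Q}_{p}^{*}\) decomposes over spheres, \(\int_{\mathbb{Q}_{p}^{*}}g(|y|_{p})dy=(1-p^{-1})\sum_{k\in\mathbb{Z}}p^{k}g(p^{k})\), so \(\mathscr{T}^{p}\) becomes a sum over \(k\). The sketch below (ours; the truncation bounds are arbitrary) evaluates this sum for the Hardy kernel and matches the closed form \(\mathcal{H}^{p}f(x)=\min(1,1/|x|_{p})\) for \(f\) the indicator of \(B^{0}\).

```python
# Truncated numerical evaluation of (1.1) for radial f(y) = phi(|y|_p),
# using the sphere decomposition |S^k| = p^k * (1 - 1/p).
p = 3
KMIN, KMAX = -40, 40  # truncation of the sphere decomposition (arbitrary)

def T_radial(kernel, phi, x_norm):
    """Evaluate (T^p f)(x) for radial f, given x_norm = |x|_p."""
    total = 0.0
    for k in range(KMIN, KMAX + 1):
        y_norm = float(p) ** k
        total += kernel(x_norm, y_norm) * phi(y_norm) * y_norm
    return (1.0 - 1.0 / p) * total

hardy = lambda xn, yn: (1.0 / xn) if yn <= xn else 0.0  # Hardy kernel, cf. (1.3)
phi = lambda yn: 1.0 if yn <= 1.0 else 0.0              # f = indicator of B^0

# Analytically, H^p f(x) = 1 for |x|_p <= 1 and 1/|x|_p otherwise.
for m in (-2, 0, 3):
    print(p**m, T_radial(hardy, phi, float(p) ** m))
```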
In [17], K. H. Dung and D. V. Duong gave necessary and sufficient conditions for the boundedness of the \(p\)-adic Hardy-Hilbert type integral operator on two-weighted Morrey spaces and Morrey-Herz spaces. Also, in [18], they established the boundedness of \(\mathscr{T}^{p}\) on the weighted Triebel-Lizorkin space. Recently, Batbold, Sawano and Tumendemberel [5] introduced an \(m\)-linear \(p\)-adic integral operator which is similar to certain multilinear integral operators on Euclidean spaces (see [8]). More precisely, let \(K:\mathbb{R}_{+}^{m+1}\to[0,\infty)\) be a homogeneous function of degree \(-m\), that is, \(K(\xi x_{1},\xi x_{2},\cdots,\xi x_{m+1})=\xi^{-m}K(x_{1},x_{2},\cdots,x_{m+1})\) for any \(\xi>0.\) Then the \(m\)-linear \(p\)-adic integral operator with kernel \(K\) is defined by
\[T_{m}^{p}(f_{1},f_{2},\cdots,f_{m})(x)=\int\limits_{(\mathbb{Q}_{p}^{*})^{m}}K (|x|_{p},|y_{1}|_{p},\cdots,|y_{m}|_{p})\prod\limits_{i=1}^{m}f_{i}(y_{i})dy_ {1}dy_{2}\cdots dy_{m},\ x\in\mathbb{Q}_{p}^{*}.\]
Batbold, Sawano and Tumendemberel [5] gave necessary and sufficient conditions for the boundedness of the \(m\)-linear \(p\)-adic integral operator on \(p\)-adic Lebesgue spaces and Morrey spaces with power weights. In [36], Duong and Hong obtained the boundedness of the \(m\)-linear \(p\)-adic integral operator on two-weighted Herz spaces. As an application of their result, they obtained the boundedness of the \(p\)-adic multilinear Hilbert operator, the \(p\)-adic multilinear Hardy operator and the \(p\)-adic multilinear Hardy-Littlewood-Pólya operator on two-weighted Herz spaces.
A local field is a locally compact, totally disconnected, non-Archimedean norm valued and non-discrete topological field; see [35] for basic Fourier analysis on local fields. The basic archetypes of local fields are the \(p\)-adic field \(\mathbb{Q}_{p}\) and the field of formal Laurent series \(\mathbb{F}_{q}((t))\) over the finite
field with \(q\) elements. In recent years, the study of harmonic and wavelet analysis on local fields has received a lot of attention (see [2, 6, 7, 10, 11, 33] and references therein).
The study of operators on local fields is quite new, and many topics remain to be explored. For the boundedness of some fundamental operators in harmonic analysis, such as the maximal operator, singular integral operators, dilation operators, the Hardy operator, the Hardy-Cesàro operator and the Hausdorff operator, on function spaces over local fields, see [3, 4, 12, 13, 14, 17, 18, 20, 21, 29, 31, 37, 39].
This paper is organized as follows. In Section 2, we provide a brief introduction to \(p\)-adic analysis as well as the definition of block spaces over the \(p\)-adic field. In Section 3, we estimate the operator norm of the dilation operator on block spaces over the \(p\)-adic field. With this estimate, we establish the boundedness of the \(p\)-adic Hardy-Hilbert type integral operator on block spaces over the \(p\)-adic field. Finally, as an application of the boundedness of the \(p\)-adic Hardy-Hilbert type integral operator, we obtain the \(p\)-adic Hilbert inequality, the \(p\)-adic Hardy inequality and the \(p\)-adic Hardy-Littlewood-Pólya inequality for block spaces over the \(p\)-adic field.
## 2. Preliminaries
### The field of \(p\)-adic numbers (\(\mathbb{Q}_{p}\))
Let \(p\) be any fixed prime in \(\mathbb{Z}.\) Define the \(p\)-adic absolute value (or \(p\)-adic norm) \(|\cdot|_{p}\) on \(\mathbb{Q}\) by
\[|x|_{p}=\begin{cases}p^{-\gamma}&\text{if }x=p^{\gamma}\frac{m}{n}\\ 0&\text{if }x=0,\end{cases}\]
where \(\gamma,m,n\in\mathbb{Z}\) and \(m,n\) are not divisible by \(p.\) The field of \(p\)-adic numbers, denote by \(\mathbb{Q}_{p},\) is the completion of the field of rational numbers \(\mathbb{Q}\) with respect to the metric \(d_{p}(x,y)=|x-y|_{p}.\) It is easy to see that \(p\)-adic absolute value satisfy the following properties:
1. \(|xy|_{p}=|x|_{p}|y|_{p}\) for all \(x,y\in\mathbb{Q}_{p};\)
2. \(|x+y|_{p}\leq\max\{|x|_{p},|y|_{p}\}\) for all \(x,y\in\mathbb{Q}_{p}.\)
The property (b) is called the _ultrametric inequality(or the non-Archimedean property)_. It follows that
\[|x+y|_{p}=\max\{|x|_{p},|y|_{p}\}\text{ if }|x|_{p}\neq|y|_{p}.\]
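A small computational sketch (ours) of these definitions, using Python's exact rationals: it computes \(|x|_{p}\) and checks the multiplicativity (a) and the ultrametric inequality (b) on a pair of examples.

```python
from fractions import Fraction

def p_adic_norm(x, p):
    """|x|_p = p**(-gamma), where gamma is the exponent of p in x; |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    gamma, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:
        num //= p
        gamma += 1
    while den % p == 0:
        den //= p
        gamma -= 1
    return Fraction(1, p) ** gamma

p = 5
a, b = Fraction(75), Fraction(2, 5)
print(p_adic_norm(a, p), p_adic_norm(b, p))  # 1/25 and 5
print(p_adic_norm(a * b, p) == p_adic_norm(a, p) * p_adic_norm(b, p))      # (a)
print(p_adic_norm(a + b, p) <= max(p_adic_norm(a, p), p_adic_norm(b, p)))  # (b)
```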
\(\mathbb{Q}_{p}\) with the natural operations and the topology induced by the metric \(d_{p}\) is a locally compact, non-discrete, complete and totally disconnected field. It is also well known that any non-zero \(p\)-adic number \(x\in\mathbb{Q}_{p}\) can be uniquely represented by the canonical series
\[x=p^{\gamma}\sum_{l=0}^{\infty}c_{l}p^{l}, \tag{2.1}\]
where \(c_{l}\in\mathbb{Z}/p\mathbb{Z}\) and \(c_{0}\neq 0.\) The series (2.1) converges in the \(p\)-adic norm since \(|c_{l}p^{l}|_{p}\leq p^{-l}.\) For \(a\in\mathbb{Q}_{p}\) and \(k\in\mathbb{Z},\) we denote by
\[B^{k}(a) =\{x\in\mathbb{Q}_{p}:|x-a|_{p}\leq p^{k}\},\] \[S^{k}(a) =\{x\in\mathbb{Q}_{p}:|x-a|_{p}=p^{k}\},\]
respectively, a ball and a sphere of radius \(p^{k}\) and center at \(a.\) We use the notations \(B^{k}=B^{k}(0)\) and \(S^{k}=S^{k}(0).\) The set \(\{B^{k}\subset\mathbb{Q}_{p}:k\in\mathbb{Z}\}\) satisfies the following:
1. \(\{B^{k}\subset\mathbb{Q}_{p}:k\in\mathbb{Z}\}\) is a base for neighborhood system of identity in \(\mathbb{Q}_{p},\) and \(B^{k}\subset B^{k+1},\ k\in\mathbb{Z};\)
2. \(B^{k},\ k\in\mathbb{Z},\) is open, closed and compact in \(\mathbb{Q}_{p};\)
3. \(\mathbb{Q}_{p}=\bigcup\limits_{k=-\infty}^{+\infty}B^{k}\) and \(\{0\}=\bigcap\limits_{k=-\infty}^{+\infty}B^{k}.\)
We also have,
\[\mathbb{Q}_{p}^{*}=\mathbb{Q}_{p}\setminus\{0\}=\bigcup_{k=-\infty}^{+\infty}S^{k}.\]
Since the additive group of \(\mathbb{Q}_{p}\) is a locally compact abelian group, we choose a Haar measure \(dx\) on it, normalized so that
\[|B^{0}|=\int_{B^{0}}dx=1,\]
where \(|E|\) denotes the Haar measure of a measurable set \(E\subset\mathbb{Q}_{p}.\) Then, by a simple calculation, the Haar measure of any ball or sphere can be obtained. In particular, we frequently use
\[|B^{k}|=p^{k}\text{ and }|S^{k}|=p^{k}(1-p^{-1}).\]
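As a consistency check (not spelled out in the original): since \(B^{k}\setminus\{0\}\) is the disjoint union of the spheres \(S^{j}\) with \(j\leq k\), summing the geometric series of sphere measures recovers the ball measure,

\[\sum_{j=-\infty}^{k}|S^{j}|=(1-p^{-1})\sum_{j=-\infty}^{k}p^{j}=(1-p^{-1})\cdot\frac{p^{k}}{1-p^{-1}}=p^{k}=|B^{k}|.\]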
**Definition 2.1**.: _Let \(S(\mathbb{Q}_{p})\) be the space of complex-valued, locally constant and compactly supported functions on \(\mathbb{Q}_{p}.\) The space \(S(\mathbb{Q}_{p})\), called the Schwartz space over \(\mathbb{Q}_{p}\), has the following properties:_
1. _If_ \(\phi\in S(\mathbb{Q}_{p}),\) _then_ \(\phi\) _is continuous._
2. _If_ \(\phi\in S(\mathbb{Q}_{p}),\) _then there exists_ \(k\in\mathbb{Z}\) _such that supp_ \(\phi\subset B^{k}.\)__
3. _If_ \(\phi\in S(\mathbb{Q}_{p}),\) _then there exists_ \(l\in\mathbb{Z}\) _such that_ \(\phi\) _is constant on the cosets of_ \(B^{l}.\)__
4. \(S(\mathbb{Q}_{p})\) _is dense in_ \(L^{r}(\mathbb{Q}_{p}),\ 1\leq r<\infty.\)__
We refer to [35, 40] for details on the \(p\)-adic field and proofs of the statements discussed in this subsection.
### Block spaces
We begin with the definition of \(p\)-adic central Morrey spaces (see [15, 16]).
**Definition 2.2**.: _Let \(\alpha\) be a non-negative real number and \(1\leq r<\infty.\) Then the \(p\)-adic central Morrey space is defined by_
\[M_{r,\alpha}(\mathbb{Q}_{p})=\{f\in L^{r}_{\text{loc}}(\mathbb{Q}_{p})\ :\ \|f\|_{M_{r,\alpha}(\mathbb{Q}_{p})}<\infty\},\]
_where_
\[\|f\|_{M_{r,\alpha}(\mathbb{Q}_{p})}=\Bigg{(}\sup_{k\in\mathbb{Z}}\ \frac{1}{|B^{k}|^{\alpha r}}\int_{B^{k}}|f(x)|^{r}dx\Bigg{)}^{1/r}.\]
It is clear that \(M_{r,0}(\mathbb{Q}_{p})=L^{r}(\mathbb{Q}_{p}).\)
**Definition 2.3**.: _Let \(\alpha\in\mathbb{R}\) and \(0<r<\infty.\) A function \(a:\mathbb{Q}_{p}\rightarrow\mathbb{C}\) is said to be a central \((r,\alpha)\)-block if there exists \(n\in\mathbb{Z}\) such that supp \(a\subset B^{n}\) and_
\[\|a\|_{L^{r}(\mathbb{Q}_{p})}\leq\ |B^{n}|^{-\alpha}.\]
_The block space \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\) generated by central \((r,\alpha)\)-blocks is defined by_
\[\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})=\{f\in L^{r}_{\text{loc}}(\mathbb{Q}_ {p}):f=\sum_{k=1}^{\infty}\lambda_{k}a_{k},\ \sum_{k=1}^{\infty}|\lambda_{k}|<\infty\text{ and each }a_{k}\text{ is a central }(r,\alpha)\text{-block}\}.\]
_The block space \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\) is endowed with the norm_
\[\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}{=\inf\bigg{\{}\sum_{k=1}^{\infty} \lvert\lambda_{k}\rvert:\ f=\sum_{k=1}^{\infty}\lambda_{k}a_{k}\bigg{\}}},\]
_where the infimum is taken over all such decompositions of \(f.\)_
According to the definition of \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}),\) for any central \((r,\alpha)\)-block \(b,\) we have
\[\|b\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}{\leq 1}.\]
For \(r\in(1,\infty),\) let \(r^{\prime}\) be the conjugate exponent of \(r.\) We have Hölder's inequality for \(M_{r,\alpha}(\mathbb{Q}_{p})\) and \(\mathfrak{B}_{r^{\prime},\alpha}(\mathbb{Q}_{p})\) [28, Lemma 3.2].
**Lemma 2.1**.: _Let \(1<r<\infty\) and \(\alpha>0.\) If \(f\in M_{r,\alpha}(\mathbb{Q}_{p})\) and \(g\in\mathfrak{B}_{r^{\prime},\alpha}(\mathbb{Q}_{p})\) then_
\[\int_{\mathbb{Q}_{p}}\lvert f(x)g(x)\rvert dx\leq\|f\|_{M_{r,\alpha}(\mathbb{ Q}_{p})}\|g\|_{\mathfrak{B}_{r^{\prime},\alpha}(\mathbb{Q}_{p})}.\]
### Subspace of Morrey spaces
Zorko [44] proved that the set of continuous functions with compact support \((C_{0}(\mathbb{R}^{n}))\) is not dense in the Morrey space \((L^{p,\lambda}(\mathbb{R}^{n}))\), and she introduced an important subspace of \(L^{p,\lambda}(\mathbb{R}^{n})\), the so-called _Zorko subspace_ \(L^{p,\lambda}_{0}(\mathbb{R}^{n}),\) defined as the closure of \(C_{0}(\mathbb{R}^{n})\) in the \(L^{p,\lambda}(\mathbb{R}^{n})\) norm. Adams and Xiao [1] stated that \(L^{p,\lambda}_{0}(\mathbb{R}^{n})\) is the predual of the block space \((H^{p^{\prime},\lambda}(\mathbb{R}^{n}))\) and that the three spaces \(L^{p,\lambda}_{0}(\mathbb{R}^{n})-H^{p^{\prime},\lambda}(\mathbb{R}^{n})-L^{p,\lambda}(\mathbb{R}^{n})\) are analogous to \(VMO(\mathbb{R}^{n})-H^{1}(\mathbb{R}^{n})-BMO(\mathbb{R}^{n})\) (see [34]). In [25], Izumi, Sato and Yabuta considered Morrey spaces on the unit circle \(\mathbf{T}\) and proved in detail that \(L^{p,\lambda}_{0}(\mathbf{T})\) is the predual of the block space.
Motivated by the above work, for \(1<r<\infty\) and \(\alpha>0,\) we consider the function
\[f(x):=\begin{cases}\lvert x\rvert_{p}^{\frac{\alpha-1}{r}},&\lvert x\rvert_{ p}\leq 1,\\ 0,&\lvert x\rvert_{p}>1.\end{cases}\]
Then \(f\in M_{r,\alpha}(\mathbb{Q}_{p})\), and for any \(g\in S(\mathbb{Q}_{p})\) there exists \(c>0\) such that
\[\left(\sup_{k\in\mathbb{Z}}\ \frac{1}{\lvert B^{k}\rvert^{\alpha r}}\int_{B^{k} }\lvert f(x)-g(x)\rvert^{r}dx\right)^{1/r}\geq c>0.\]
Hence, not every function in \(M_{r,\alpha}(\mathbb{Q}_{p})\) can be approximated by functions in \(S(\mathbb{Q}_{p}).\) In particular, \(S(\mathbb{Q}_{p})\) is not dense in \(M_{r,\alpha}(\mathbb{Q}_{p}),\) and therefore we define \(\widetilde{M}_{r,\alpha}(\mathbb{Q}_{p})\) as the closure of \(S(\mathbb{Q}_{p})\) in \(M_{r,\alpha}(\mathbb{Q}_{p}).\)
Analogous to the classical case, one could expect the following:
\[\widetilde{M}_{r,\alpha}(\mathbb{Q}_{p})\overset{*}{\longrightarrow}\mathfrak{ B}_{r^{\prime},\alpha}(\mathbb{Q}_{p})\overset{*}{\longrightarrow}M_{r,\alpha}( \mathbb{Q}_{p}), \tag{2.2}\]
and the spaces \(\widetilde{M}_{r,\alpha}(\mathbb{Q}_{p})-\mathfrak{B}_{r^{\prime},\alpha}(\mathbb{Q}_{p})-M_{r,\alpha}(\mathbb{Q}_{p})\) have a relationship akin to \(VMO(\mathbb{Q}_{p})-H^{1}(\mathbb{Q}_{p})-BMO(\mathbb{Q}_{p}).\) The reader may consult the papers [30, 32, 38], where the spaces \(H^{1}(\mathbb{Q}_{p}),\ BMO(\mathbb{Q}_{p})\) and \(VMO(\mathbb{Q}_{p})\) are studied; it is also pointed out there that \(BMO(\mathbb{Q}_{p})\) can be characterized as the dual of the space \(H^{1}(\mathbb{Q}_{p}).\)
The following theorem establishes Minkowski's integral inequality for the block spaces \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}).\)
**Theorem 2.4**.: _Let \(1<r<\infty\) and \(\alpha>0.\) Let \(F\) be a function on the product space \(\mathbb{Q}_{p}\times\mathbb{Q}_{p}\) such that \(\|F(\cdot,y)\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}\in L^{1}(\mathbb{Q}_{p}).\) Then we have_
\[\bigg{\|}\int\limits_{\mathbb{Q}_{p}}F(\cdot,y)dy\bigg{\|}_{\mathfrak{B}_{r, \alpha}(\mathbb{Q}_{p})}\leq\int\limits_{\mathbb{Q}_{p}}\|F(\cdot,y)\|_{ \mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}dy. \tag{2.3}\]
Proof.: Let \(r^{\prime}\) be the conjugate of \(r\) and suppose \(g\in\widetilde{M}_{r^{\prime},\alpha}(\mathbb{Q}_{p})\) with \(\|g\|_{\widetilde{M}_{r^{\prime},\alpha}(\mathbb{Q}_{p})}\leq 1\). Write
\[G(x)=\int\limits_{\mathbb{Q}_{p}}F(x,y)dy.\]
By using Hölder's inequality for \(M_{r^{\prime},\alpha}(\mathbb{Q}_{p})\) and \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\), we have that
\[\bigg{|}\int_{\mathbb{Q}_{p}}G(x)g(x)dx\bigg{|} \leq\int\limits_{\mathbb{Q}_{p}}\int\limits_{\mathbb{Q}_{p}}|F(x,y)||g(x)|dxdy\leq\int\limits_{\mathbb{Q}_{p}}\|F(\cdot,y)\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}\|g\|_{M_{r^{\prime},\alpha}(\mathbb{Q}_{p})}dy=\int\limits_{\mathbb{Q}_{p}}\|F(\cdot,y)\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}\|g\|_{\widetilde{M}_{r^{\prime},\alpha}(\mathbb{Q}_{p})}dy\leq\int\limits_{\mathbb{Q}_{p}}\|F(\cdot,y)\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}dy.\]
Taking now on the left the supremum over \(g\in\widetilde{M}_{r^{\prime},\alpha}(\mathbb{Q}_{p})\) with \(\|g\|_{\widetilde{M}_{r^{\prime},\alpha}(\mathbb{Q}_{p})}\leq 1\), we obtain that \(G\in(\widetilde{M}_{r^{\prime},\alpha}(\mathbb{Q}_{p}))^{*}.\) Now, by (2.2), we have \(G\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\) and (2.3) is valid.
## 3. Main Results
First, we study the dilation operators on block spaces. Let \(\tau(\neq 0)\in\mathbb{Q}_{p}\) and for any function \(f\) on \(\mathbb{Q}_{p}\), consider the dilation operator of the form
\[(\mathcal{D}_{\tau}f)(x)=f(\tau x),\qquad x\in\mathbb{Q}_{p}.\]
The following theorem, which establishes the boundedness of the dilation operator on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\), is needed in order to prove the main result of this paper.
**Theorem 3.1**.: _Let \(1<r<\infty\) and \(\alpha>0.\) Then, for all \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}),\) we have_
\[\|\mathcal{D}_{\tau}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}{\leq 2} |\tau|_{p}^{-(1/r+\alpha)}\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}.\]
Proof.: Let \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}).\) Then, by the definition of \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\), there exist a sequence of scalars \(\{\lambda_{k}\}_{k\in\mathbb{N}}\) and a family of central \((r,\alpha)\)-blocks \(\{a_{k}\}_{k\in\mathbb{N}}\), each supported in some ball \(B^{n}\), \(n\in\mathbb{Z}\), such that
\[f=\sum_{k=1}^{\infty}\lambda_{k}a_{k}, \tag{3.1}\]
and for any \(\epsilon>0,\)
\[\sum_{k=1}^{\infty}|\lambda_{k}|{<(1+\epsilon)}\|f\|_{\mathfrak{B}_{r,\alpha} (\mathbb{Q}_{p})}.\]
Since \(a_{k}\) is a central \((r,\alpha)\)-block supported in \(B^{n}\), we see that \(\mathcal{D}_{\tau}a_{k}\) is supported in \(\tau^{-1}B^{n}\) and
\[\|\mathcal{D}_{\tau}a_{k}\|_{L^{r}(\mathbb{Q}_{p})} =|\tau|_{p}^{-1/r}\|a_{k}\|_{L^{r}(\mathbb{Q}_{p})}\] \[\leq|\tau|_{p}^{-1/r}|B^{n}|^{-\alpha}\] \[=|\tau|_{p}^{-1/r}|\tau|_{p}^{-\alpha}|\tau^{-1}B^{n}|^{-\alpha}\] \[=|\tau|_{p}^{-(1/r+\alpha)}|\tau^{-1}B^{n}|^{-\alpha}.\]
From (3.1),
\[\mathcal{D}_{\tau}f =\sum_{k=1}^{\infty}\lambda_{k}\mathcal{D}_{\tau}a_{k}\] \[=\sum_{k=1}^{\infty}|\tau|_{p}^{-(1/r+\alpha)}\lambda_{k}|\tau|_{p }^{(1/r+\alpha)}\mathcal{D}_{\tau}a_{k}\] \[=\sum_{k=1}^{\infty}\gamma_{k}b_{k},\]
where \(\gamma_{k}=|\tau|_{p}^{-(1/r+\alpha)}\lambda_{k}\) and \(b_{k}=|\tau|_{p}^{(1/r+\alpha)}\mathcal{D}_{\tau}a_{k}\), so that each \(b_{k}\) is a central \((r,\alpha)\)-block. By the definition of block spaces, we have \(\mathcal{D}_{\tau}f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})\) and
\[\|\mathcal{D}_{\tau}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})} \leq\sum_{k=1}^{\infty}|\gamma_{k}|\] \[=\sum_{k=1}^{\infty}|\tau|_{p}^{-(1/r+\alpha)}|\lambda_{k}|\] \[\leq|\tau|_{p}^{-(1/r+\alpha)}(1+\epsilon)\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}.\]
As \(\epsilon>0\) was arbitrary, we obtain that
\[\|\mathcal{D}_{\tau}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}\leq 2|\tau|_{p }^{-(1/r+\alpha)}\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p})}.\]
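The key scaling identity used in this proof, \(\|\mathcal{D}_{\tau}a\|_{L^{r}(\mathbb{Q}_{p})}=|\tau|_{p}^{-1/r}\|a\|_{L^{r}(\mathbb{Q}_{p})}\), can also be verified numerically for radial functions via the sphere decomposition; the sketch below (ours, with arbitrary truncation) does so for an indicator block.

```python
# Numerical check (ours) of ||D_tau f||_{L^r} = |tau|_p^{-1/r} * ||f||_{L^r}
# for radial f(x) = phi(|x|_p), using |S^k| = p^k * (1 - 1/p).
p = 3
ks = range(-30, 31)  # truncation of the sphere decomposition (arbitrary)

def lr_norm_radial(phi, r):
    """||f||_{L^r} for radial f, computed as a sum over spheres."""
    total = sum((float(p) ** k) * abs(phi(float(p) ** k)) ** r for k in ks)
    return ((1.0 - 1.0 / p) * total) ** (1.0 / r)

phi = lambda yn: 1.0 if yn <= 1.0 else 0.0   # f = indicator of B^0
r, tau_norm = 2.0, float(p) ** 2             # |tau|_p = p^2

# D_tau f is radial as well, with profile phi(|tau|_p * |x|_p).
lhs = lr_norm_radial(lambda yn: phi(tau_norm * yn), r)
rhs = tau_norm ** (-1.0 / r) * lr_norm_radial(phi, r)
print(lhs, rhs)  # the two values agree
```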
The following \(p\)-adic Hardy-Hilbert type integral inequality is the main result of this paper.
**Theorem 3.2**.: _Let \(1<r<\infty,\ \alpha>0\) and let the \(p\)-adic Hardy-Hilbert type integral operator \(\mathscr{T}^{p}\) be defined by (1.1). If \(\mathcal{K}\) satisfies_
\[C_{r,\alpha}=2(1-p^{-1})\sum_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r +\alpha-1)}<\infty. \tag{3.2}\]
_Then_
\[\|\mathscr{T}^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\leq C_{r, \alpha}\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})},\]
_for all \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*}).\)_
Proof.: Let \(y=x\tau\) in (1.1); then, since \(dy=|x|_{p}d\tau\), we have
\[\mathscr{T}^{p}f(x) =\int\limits_{\mathbb{Q}_{p}^{*}}\mathcal{K}(|x|_{p},|x\tau|_{p})f (\tau x)|x|_{p}d\tau\] \[=\int\limits_{\mathbb{Q}_{p}^{*}}|x|_{p}^{-1}\mathcal{K}(1,|\tau| _{p})f(\tau x)|x|_{p}d\tau\] \[=\int\limits_{\mathbb{Q}_{p}^{*}}\mathcal{K}(1,|\tau|_{p})f(\tau x )d\tau\] \[=\sum\limits_{k=-\infty}^{\infty}\;\int\limits_{S^{k}}\mathcal{K} (1,|\tau|_{p})\mathcal{D}_{\tau}f(x)d\tau \tag{3.3}\] \[=\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})\;\int \limits_{S^{k}}\mathcal{D}_{\tau}f(x)d\tau.\]
Let us first apply the norm \(\|\cdot\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\) to both sides of (3.3); then, using Theorem 2.4 and Theorem 3.1, we get
\[\|\mathscr{T}^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})} \leq\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})\;\left\| \int\limits_{S^{k}}\mathcal{D}_{\tau}f(x)d\tau\right\|_{\mathfrak{B}_{r, \alpha}(\mathbb{Q}_{p}^{*})}\] \[\leq\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})\;\int \limits_{S^{k}}\|\mathcal{D}_{\tau}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p }^{*})}d\tau\] \[\leq\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})\;\int \limits_{S^{k}}2|\tau|_{p}^{-(1/r+\alpha)}\|f\|_{\mathfrak{B}_{r,\alpha}( \mathbb{Q}_{p}^{*})}d\tau\] \[=\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})2p^{-k(1/r+ \alpha)}|S^{k}|\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\] \[=\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})2p^{-k(1/r+ \alpha)}p^{k}(1-p^{-1})\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\] \[=2(1-p^{-1})\sum\limits_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})p ^{-k(1/r+\alpha-1)}\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\] \[=C_{r,\alpha}\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}.\]
Therefore, for all \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\), we have
\[\|\mathscr{T}^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\leq C_{r, \alpha}\|f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}.\]
As a consequence of Theorem 3.2, we obtain the \(p\)-adic Hilbert type inequality on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\).
**Theorem 3.3**.: _Let \(1<r<\infty,\;\alpha>0\) and let the \(p\)-adic Hilbert operator be defined by (1.2). If \(0<1/r+\alpha<1\), then there is a constant \(C>0\) such that for any \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\)_
\[\|H^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\leq C\|f\|_{\mathfrak{ B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}.\]
Proof.: Let \(\mathcal{K}(|x|_{p},|y|_{p})=\dfrac{1}{|x|_{p}+|y|_{p}}.\) Notice that \(\mathcal{K}\) is a nonnegative homogeneous function of degree \(-1,\) and
\[C_{r,\alpha} =2(1-p^{-1})\sum_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r+ \alpha-1)}\] \[=2(1-p^{-1})\bigg{(}\sum_{k=-\infty}^{0}\mathcal{K}(1,p^{k})p^{-k (1/r+\alpha-1)}\ +\ \sum_{k=1}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r+\alpha-1)}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}\mathcal{K}(1,1)\ +\ \sum_{k=-\infty}^{-1} \mathcal{K}(1,p^{k})p^{-k(1/r+\alpha-1)}\ +\ \sum_{k=1}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r+\alpha-1)}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}\frac{1}{2}\ +\ \sum_{k=1}^{\infty}\mathcal{K}(1,p^{-k})p^{k (1/r+\alpha-1)}\ +\ \sum_{k=1}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r+\alpha-1)}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}\frac{1}{2}\ +\ \sum_{k=1}^{\infty}\bigg{[}\frac{p^{k }p^{k(1/r+\alpha-1)}}{p^{k}+1}\ +\ \frac{p^{-k(1/r+\alpha-1)}}{1+p^{k}}\bigg{]}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}\frac{1}{2}\ +\ \sum_{k=1}^{\infty}\frac{1}{1+p^{k}}(p^{k (1/r+\alpha-1+1)}\ +\ p^{-k(1/r+\alpha-1)})\bigg{)},\]
Since \(0<1/r+\alpha<1,\) both series converge, so \(C_{r,\alpha}<\infty,\) and therefore Theorem 3.2 gives the \(p\)-adic Hilbert type inequality on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*}).\)
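As a quick numerical illustration (ours, with placeholder parameters), the constant \(C_{r,\alpha}\) from (3.2) can be evaluated directly for the Hilbert kernel; both tails of the series decay geometrically precisely when \(0<1/r+\alpha<1\).

```python
def hilbert_constant(p, r, alpha, kmax=200):
    """Truncated evaluation of C_{r,alpha} in (3.2) for the Hilbert kernel
    K(x, y) = 1 / (x + y); finite iff 0 < 1/r + alpha < 1."""
    s = 1.0 / r + alpha
    total = 0.0
    for k in range(-kmax, kmax + 1):
        K = 1.0 / (1.0 + float(p) ** k)        # K(1, p^k)
        total += K * float(p) ** (-k * (s - 1.0))
    return 2.0 * (1.0 - 1.0 / p) * total

print(hilbert_constant(p=5, r=2, alpha=0.25))  # finite: 1/r + alpha = 0.75
```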
We also obtain the \(p\)-adic Hardy type inequality on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*}).\)
**Theorem 3.4**.: _Let \(1<r<\infty,\ \alpha>0\) and let the \(p\)-adic Hardy operator be defined by (1.3). If \(0<1/r+\alpha<1\), then there is a constant \(C>0\) such that for any \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\)_
\[\|\mathcal{H}^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\leq C\|f\|_ {\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}.\]
Proof.: Let \(\mathcal{K}(|x|_{p},|y|_{p})=|x|_{p}^{-1}\Phi_{E}(|y|_{p}),\) where \(\Phi_{E}\) is the characteristic function of \(E=\{y\in\mathbb{Q}_{p}:|y|_{p}\leq|x|_{p}\},\) and it is obviously a nonnegative homogeneous function of degree \(-1.\) Moreover, we have
\[C_{r,\alpha} =2(1-p^{-1})\sum_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/ r+\alpha-1)}\] \[=2(1-p^{-1})\sum_{k=-\infty}^{0}p^{-k(1/r+\alpha-1)}\] \[=2(1-p^{-1})\bigg{(}1+\sum_{k=1}^{\infty}p^{k(1/r+\alpha-1)} \bigg{)}<\infty,\ (\because 1/r+\alpha<1).\]
Hence, according to Theorem 3.2, we have the \(p\)-adic Hardy type inequality on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*}).\)
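As an aside (a closed form not stated in the original), the geometric series in the proof above can be summed explicitly: since \(p^{1/r+\alpha-1}<1\),

\[C_{r,\alpha}=2(1-p^{-1})\sum_{k=0}^{\infty}p^{k(1/r+\alpha-1)}=\frac{2(1-p^{-1})}{1-p^{1/r+\alpha-1}}.\]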
**Theorem 3.5**.: _Let \(1<r<\infty,\ \alpha>0\) and let the operator \(\mathscr{D}^{p}\) be defined by (1.4). If \(-\frac{\lambda}{2}<1/r+\alpha<\frac{\lambda}{2}+1\), then there is a constant \(C>0\) such that for any \(f\in\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\)_
\[\|\mathscr{D}^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\leq C\|f\|_ {\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}.\]
Proof.: Let
\[\mathcal{K}(|x|_{p},|y|_{p})=\frac{(|x|_{p}|y|_{p})^{\frac{\lambda}{2}}}{\max\{|x |_{p},|y|_{p}\}^{\lambda+1}},\ \lambda\geq 0. \tag{3.4}\]
We find that \(\mathcal{K}\) is a nonnegative homogeneous function of degree \(-1\), and
\[C_{r,\alpha} =2(1-p^{-1})\sum_{k=-\infty}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/ r+\alpha-1)}\] \[=2(1-p^{-1})\bigg{(}\sum_{k=-\infty}^{0}\mathcal{K}(1,p^{k})p^{-k (1/r+\alpha-1)}\ +\ \sum_{k=1}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r+\alpha-1)}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}\mathcal{K}(1,1)\ +\ \sum_{k=1}^{\infty} \mathcal{K}(1,p^{-k})p^{k(1/r+\alpha-1)}\ +\ \sum_{k=1}^{\infty}\mathcal{K}(1,p^{k})p^{-k(1/r+\alpha-1)}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}1\ +\ \sum_{k=1}^{\infty}p^{-\frac{k\lambda}{2}}p^{k(1/ r+\alpha-1)}\ +\ \sum_{k=1}^{\infty}p^{-k(\frac{\lambda}{2}+1)}p^{-k(1/r+\alpha-1)}\bigg{)}\] \[=2(1-p^{-1})\bigg{(}1\ +\ \sum_{k=1}^{\infty}\bigg{[}p^{k(1/r+ \alpha-1-\frac{\lambda}{2})}\ +\ p^{-k(1/r+\alpha-1+\frac{\lambda}{2}+1)}\bigg{]}\bigg{)}<\infty,\ (\because-\frac{ \lambda}{2}<1/r+\alpha<\frac{\lambda}{2}+1).\]
Consequently, the boundedness of \(\mathscr{D}^{p}\) on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\) is assured by Theorem 3.2.
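For completeness, the degree \(-1\) homogeneity used in the proof can be checked directly from (3.4): for any \(t>0\),
\[\mathcal{K}(t|x|_{p},t|y|_{p})=\frac{(t|x|_{p}\cdot t|y|_{p})^{\frac{\lambda}{2}}}{\max\{t|x|_{p},t|y|_{p}\}^{\lambda+1}}=\frac{t^{\lambda}}{t^{\lambda+1}}\cdot\frac{(|x|_{p}|y|_{p})^{\frac{\lambda}{2}}}{\max\{|x|_{p},|y|_{p}\}^{\lambda+1}}=t^{-1}\,\mathcal{K}(|x|_{p},|y|_{p}).\]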
**Remark 3.1**.: _Taking \(\lambda=0\) in kernel (3.4), i.e. \(\mathcal{K}(|x|_{p},|y|_{p})=1/\max\{|x|_{p},|y|_{p}\}\), we get the \(p\)-adic Hardy-Littlewood-Pólya inequality on \(\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})\) as follows:_
\[\|\mathscr{D}^{p}f\|_{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}\leq C\|f\| _{\mathfrak{B}_{r,\alpha}(\mathbb{Q}_{p}^{*})}.\]
|
2306.03488 | Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding | Secure computation often benefits from the use of correlated randomness to
achieve fast, non-cryptographic online protocols. A recent paradigm put forth
by Boyle $\textit{et al.}$ (CCS 2018, Crypto 2019) showed how pseudorandom
correlation generators (PCG) can be used to generate large amounts of useful
forms of correlated (pseudo)randomness, using minimal interactions followed
solely by local computations, yielding silent secure two-party computation
protocols (protocols where the preprocessing phase requires almost no
communication). An additional property called programmability allows extending
this to build N-party protocols. However, known constructions for programmable
PCG's can only produce OLE's over large fields, and use a rather new splittable
Ring-LPN assumption.
In this work, we overcome both limitations. To this end, we introduce the
quasi-abelian syndrome decoding problem (QA-SD), a family of assumptions which
generalises the well-established quasi-cyclic syndrome decoding assumption.
Building upon QA-SD, we construct new programmable PCG's for OLE's over any
field $\mathbb{F}_q$ with $q>2$. Our analysis also sheds light on the security
of the ring-LPN assumption used in Boyle $\textit{et al.}$ (Crypto 2020). Using
our new PCG's, we obtain the first efficient N-party silent secure computation
protocols for computing general arithmetic circuits over $\mathbb{F}_q$ for any
$q>2$. | Maxime Bombar, Geoffroy Couteau, Alain Couvreur, Clément Ducros | 2023-06-06T08:13:12Z | http://arxiv.org/abs/2306.03488v1 | # Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding
###### Abstract
Secure computation often benefits from the use of correlated randomness to achieve fast, non-cryptographic online protocols. A recent paradigm put forth by Boyle _et al._ (CCS 2018, Crypto 2019) showed how _pseudorandom correlation generators_ (PCG) can be used to generate large amounts of useful forms of correlated (pseudo)randomness, using minimal interactions followed solely by local computations, yielding _silent_ secure two-party computation protocols (protocols where the preprocessing phase requires almost no communication). Furthermore, _programmable_ PCG's can be used similarly to generate multiparty correlated randomness to be used in silent secure N-party protocols. Previous works constructed very efficient (non-programmable) PCG's for correlations such as random oblivious transfers. However, the situation is less satisfying for the case of _random oblivious linear evaluation_ (OLE), which generalises oblivious transfer over large fields and is a core resource for secure computation of arithmetic circuits. The state-of-the-art work of Boyle _et al._ (Crypto 2020) constructed programmable PCG's for OLE, but their work suffers from two important downsides: (1) it only generates OLE's over _large fields_, and (2) it relies on a relatively new "splittable" ring-LPN assumption, which lacks strong security foundations.
In this work, we construct new programmable PCG's for the OLE correlation, that overcome both limitations. To this end, we introduce the _quasi-abelian syndrome decoding problem_ (QA-SD), a family of assumptions which generalises the well-established quasi-cyclic syndrome decoding assumption. Building upon QA-SD, we construct new programmable PCG's for OLE's over any field \(\mathbb{F}_{q}\) with \(q>2\). Our analysis also sheds light on the security of the ring-LPN assumption used in Boyle _et al._ (Crypto 2020). Using our new PCG's, we obtain the first efficient N-party silent secure computation protocols for computing general arithmetic circuit over \(\mathbb{F}_{q}\) for any \(q>2\).
Keywords:Pseudorandom correlation generators, oblivious linear evaluation, quasi-abelian codes, silent secure computation
**Table of Contents**
* 1 Introduction
* 1.1 PCG's: State of the Art and Challenges
* 1.2 Our Contributions
* 1.3 Related Works
* 1.4 Organization
* 2 Technical Overview
* 2.1 Generating Pseudorandom Correlations: a Template
* 2.2 Quasi-Abelian Codes to the Rescue
* 3 Preliminaries
* 3.1 Syndrome Decoding Assumptions
* 3.2 The Linear Test Framework
* 4 Group Algebras and Quasi-Abelian Codes
* 4.1 Quasi-Abelian Codes
* 4.2 Duality for Quasi-Abelian Codes
* 4.3 Fast-Fourier Transform and Encoding
* 4.4 The Quasi-Abelian Decoding Problem
* 4.5 Security Analysis
* 5 Pseudorandom Correlation Generators from QA-SD
* 5.1 A Template for Programmable PCG for OLE from QA-SD
* 5.2 Instantiating the Group Algebra
* 6 Concrete Cryptanalysis
* 6.1 Instance Projection via Quotient
* 6.2 Information Set Decoding
* 6.3 Prange and statistical decoding (Low-Weight Parity-Check)
* 6.4 Algebraic Decoding Attacks
* 6.5 Attacks on Multivariate LWE
* 6.6 Decoding One-Out-Of Many
* 7 Applications to Secure Computation
* 7.1 Application : (N-party) multiplication triples generation for arithmetic circuit
* 7.2 Secure Computation with Circuit-Dependent Preprocessing
* A Additional Preliminaries
* A.1 Function Secret Sharing
* A.2 Pseudorandom Correlation Generators
* B From Decision-QA-SD to Search-QA-SD
* C Algebraic number theory in function fields
* C.1 Algebraic function fields.
* C.2 Galois extensions.
* C.3 The Carlitz module
* D The Curious Case of \(\mathbb{F}_{2}\)
* D.1 An attempt based on the Carlitz module
* D.2 Building OLE's.
* D.3 QA-SD to the rescue.
* D.4 A note on efficiency.
## 1 Introduction
Correlated randomness is a powerful resource in secure computation. Following the seminal work of Beaver [1], many lightweight, concretely efficient secure computation protocols have been designed in a model where the parties have access to long trusted correlated random strings: \(\Omega(n)\)-length instances of a simple correlation enable securely computing circuits with \(n\) gates. Depending on the setting, various correlations are used: for example, oblivious transfer (OT) correlations are used for two-party (semi-honest) secure computation of Boolean circuits, and oblivious
linear evaluation (\(\mathsf{OLE}\)) correlations, which generalize \(\mathsf{OT}\) over arbitrary fields, enable 2-party semi-honest secure computation of arithmetic circuits. Finally, \(n\)-party Beaver triples enable \(n\)-party semi-honest secure computation of arithmetic circuits, and authenticated Beaver triples enable maliciously secure computation of arithmetic circuits.
Since protocols in the correlated randomness paradigm are lightweight and very efficient, they gave rise to a popular, two-stage approach: first, the parties run an input-independent _preprocessing phase_, which securely generates and distributes the correlated strings, and second, these strings are consumed by an _online_ protocol. Traditional approaches for implementing the preprocessing phase had \(\Omega(n)\) communication [12, 13, 14] and formed the efficiency bottleneck of the overall protocol. The situation changed recently with a new approach, introduced in [1, 15, 16] and further refined in many subsequent works [1, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26], with appealing efficiency features such as a one-time, \(o(n)\)-communication phase followed solely by local computation. At the heart of this approach is the notion of _pseudorandom correlation generators_ (\(\mathsf{PCG}\)'s). Informally, a \(\mathsf{PCG}\) has two algorithms: \(\mathsf{Gen}(1^{\lambda})\) outputs two _short correlated keys_\((\mathsf{k}_{0},\mathsf{k}_{1})\), and \(R_{\sigma}\leftarrow\mathsf{Expand}(\mathsf{k}_{\sigma})\) stretches \(\mathsf{k}_{\sigma}\) into a long string \(R_{\sigma}\), such that \((R_{0},R_{1})\) is a pseudorandom instance of a target correlation. \(\mathsf{PCG}\)'s enable an efficient, two-stage _silent_ preprocessing phase:
1. First, the parties securely distribute the short \(\mathsf{PCG}\) seeds, using a small amount of work and communication (often independent of the circuit size).
2. Second, the parties locally stretch the \(\mathsf{PCG}\)'s into long correlated pseudorandom strings: this part is the bulk of the computation, but does not require any further communication among the parties.
This is the model of secure computation with _silent preprocessing_ (or _silent secure computation_ in short), where most of the preprocessing phase is pushed offline. Previous works gave efficient constructions of \(\mathsf{PCG}\)'s for various correlations such as \(\mathsf{OT}\)'s [1, 14, 2, BCG\({}^{+}\)22], vector-\(\mathsf{OLE}\) [18], \(\mathsf{OLE}\)'s over large fields [18], authenticated Beaver triples [18], and many more. These \(\mathsf{PCG}\)'s all build upon a common template, which combines function secret sharing (\(\mathsf{FSS}\)) for simple function classes with suitable variants of the syndrome decoding assumption.
### \(\mathsf{PCG}\)'s: State of the Art and Challenges
Very efficient constructions of \(\mathsf{PCG}\)'s for the \(\mathsf{OT}\) correlations have been proposed [1, 19, 20, 21]. The most recent constructions (see [20, 18]) can generate millions of random \(\mathsf{OT}\)'s per second on one core of a standard laptop. Combined with the GMW protocol, they effectively enable extremely efficient two-party secure computation of Boolean circuits in the semi-honest model, with minimal communication in the preprocessing phase (a few dozen kilobytes, independent of the circuit size), followed by cheap local computation, and a fast online phase (exchanging four bits per AND gate).
The situation, however, is much less satisfactory in essentially all other standard settings of secure computation, where the \(\mathsf{OT}\) correlation is not the best choice of correlation5, and one of the major open problems in this line of work is to improve this state of affairs. Concretely, when targeting any one of _multiparty_ computation (with \(N>2\) parties), _arithmetic_ computation (for arithmetic circuits over a field \(\mathbb{F}\) of size \(|\mathbb{F}|>2\)), or _malicious_ security, the best-known \(\mathsf{PCG}\)-based solutions lag far behind the state of the art for 2-party, semi-honest secure computation of Boolean circuits. At a high level, the problem is twofold:
Footnote 5: While the \(\mathsf{OT}\) correlation is complete even for \(N\)-party malicious secure computation of arithmetic circuits, its use induces large overheads in the online phase: an \(\Omega(N^{2})\) communication overhead for handling \(N\) parties, an \(\Omega(\log^{2}|\mathbb{F}|)\) overhead for handling larger fields \(\mathbb{F}\), and an \(\Omega(\lambda)\) overhead for handling malicious parties. In contrast, other choices of correlated randomness can avoid each of these overheads.
* Secure computation of arithmetic circuits requires the \(\mathsf{OLE}\) correlation rather than the \(\mathsf{OT}\) correlation, and the constructions of [1, 18, 19, 20] are inherently limited to the \(\mathsf{OT}\) correlation. To handle \(\mathsf{OLE}\), a fundamentally different approach is required.
Additionally, handling \(N>2\) parties or achieving malicious security both require the underlying \(\mathsf{PCG}\) for \(\mathsf{OLE}\) (or \(\mathsf{OT}\)) to satisfy a property known as _programmability_ (at a high level, programmability allows both to generate \(N\)-party correlations from \(O(N^{2})\) 2-party correlations, which is required because all known \(\mathsf{PCG}\)'s are inherently restricted to the 2-party setting, and to _authenticate_ 2-party correlations with a MAC, which is needed for malicious security). Unfortunately, the constructions of [1, 2, 3] cannot (by design) achieve programmability.
These two limitations were addressed in the recent work of [1], which introduced the first (reasonably efficient) construction of programmable \(\mathsf{PCG}\) for the \(\mathsf{OLE}\) correlation. While not as efficient as the best known \(\mathsf{PCG}\)'s for \(\mathsf{OT}\), it can produce around \(10^{5}\)\(\mathsf{OLE}\)'s per second on a standard laptop. However, the result of [1] suffers from two important downsides:
* it can only produce \(\mathsf{OLE}\)'s over _large enough fields_ (concretely, the field size must be larger than the circuit size). This leaves open the question of designing efficient programmable \(\mathsf{PCG}\)'s for \(\mathsf{OLE}\) over small fields.
* it relies on a relatively new _ring-\(\mathsf{LPN}\) with splittable polynomial_ assumption which states, in essence, that \((a,as+e)\) is hard to distinguish from \((a,b)\), where \(a,b\) are random polynomials from a ring \(\mathcal{R}=\mathbb{F}_{p}[X]/(P(X))\) where \(P\) splits into \(\deg(P)\) linear factors, and \(s,e\) are random _sparse_ polynomials from \(\mathcal{R}\). The ring-\(\mathsf{LPN}\) assumption was introduced a decade ago in [12] to build efficient authentication protocols, and it has received some attention from the cryptography community [1, 2, 13, 1, 1, 1, 1, 1]. However, so far, we lack both a principled understanding of _which_ choices of the underlying polynomial \(P\) yield solid instances (beyond the observation that reducible polynomials seem to enable more efficient attacks [1, 1]), and a general methodology to argue the security of ring-\(\mathsf{LPN}\) assumptions.
At a high level, the construction of \(\mathsf{PCG}\) for \(\mathsf{OLE}\) from [1] proceeds by generating a single large pseudorandom \(\mathsf{OLE}\) correlation over a polynomial ring \(\mathcal{R}=\mathbb{F}_{p}[X]/(P(X))\), assuming the hardness of the ring-\(\mathsf{LPN}\) assumption over \(\mathcal{R}\). When \(P\) splits into \(N=\deg(P)\) linear factors, the Chinese Remainder Theorem allows converting this large \(\mathsf{OLE}\) correlation over \(\mathcal{R}\) into \(N\)\(\mathsf{OLE}\) correlations over \(\mathbb{F}_{p}\) (by reducing it modulo each of the factors of \(P\)). Note that the condition that \(P\) splits requires \(|\mathbb{F}_{p}|\geqslant N\), hence the restriction to large fields. Because the ring-\(\mathsf{LPN}\) assumption with a splittable polynomial is relatively new, the authors also provided a broad overview of its security against standard attacks and provided an ad-hoc analysis of the relation between the choice of the polynomial \(P\) and the security strength of this assumption.
### Our Contributions
In this work, we put forth and analyze a new general family of cryptographic assumptions related to the hardness of decoding codes defined over group algebras, a problem we call _quasi-abelian syndrome decoding_ (\(\mathsf{QA}\)-\(\mathsf{SD}\)). Our family of assumptions builds upon quasi-abelian codes, a well-known family of codes in algebraic coding theory. It generalizes both the ring-\(\mathsf{LPN}\) assumption from [1] (under some conditions on the underlying choice of polynomial) and the quasi-cyclic syndrome decoding assumption. The latter assumption was in particular used in several recent works [1, 2, 3], [1, 2, 1], including prominent submissions to the NIST post-quantum competition. We show that working over group algebras presents several advantages:
1. a broad family of possible instantiations;
2. a rich structure that allows stronger security foundations;
3. a group algebra contains a canonical basis given by the group itself, providing a canonical notion of sparsity.
Building on our new family of assumptions, we overcome both downsides of the recent work of [1] and obtain \(\mathsf{PCG}\)'s for \(\mathsf{OLE}\)'s over _general fields_ with _solid security foundations_. In more details:
**A Template for Building New PCG's.** We revisit and generalize the approach of [BCG\({}^{+}\)20b] for building pseudorandom correlation generators for OLE from ring-LPN. We show that any choice of quasi-abelian code yields a PCG for OLE over a group algebra \(\mathcal{R}\) under the corresponding QA-SD assumption. We identify natural instances of our framework such that the group algebra \(\mathcal{R}\):
1. supports fast operations via generalizations of the Fast Fourier Transform (which allows achieving efficiency comparable to that of [BCG\({}^{+}\)20b]), and
2. is isomorphic to a product \(\mathbb{F}_{q}\times\cdots\times\mathbb{F}_{q}\) of \(N\) copies of \(\mathbb{F}_{q}\) for arbitrarily small \(q>2\) and arbitrarily large \(N\), and therefore yields an efficient PCG for generating \(N\) copies of OLE over \(\mathbb{F}_{q}\) for any \(q>2\).
Therefore, we obtain new constructions of efficient programmable PCG's over small fields, circumventing the main limitation of the work of [BCG\({}^{+}\)20b]. Our PCG's enable for the first time secure computation of arithmetic circuits over fields \(\mathbb{F}\) of any size \(|\mathbb{F}|>2\) in the silent preprocessing model. This holds for two or more parties, in the semi-honest or in the malicious setting. The concrete efficiency of our construction is comparable to that of [BCG\({}^{+}\)20b] (we refer the reader to Table 1 for details on the seed size and stretch of our PCG's). Concretely, our costs are essentially identical, up to the fact that [BCG\({}^{+}\)20b] uses FFT's over cyclotomic rings, while our generalization to arbitrary fields relies on a generic FFT. Because FFT's over cyclotomic rings have been thoroughly optimized in hundreds of papers, we expect that using generic FFT's will be noticeably slower. Still, we identify some concrete FFT-friendly choices of quasi-abelian codes where fast FFT algorithms comparable to cyclotomic FFT's could in principle be designed. We leave the concrete optimization of these FFT algorithms to future work.
#### Strong Security Foundations.
Building upon recent results on the minimum distance of quasi-abelian codes, we give evidence that the assumptions from our family cannot be broken by any attack from the _linear test framework_[BCG\({}^{+}\)20a, CRR21], a broad framework that encompasses essentially all known attacks on LPN and syndrome decoding (including ISD, Gaussian elimination, BKW, and many more). Our approach also sheds light on the security of the ring-LPN assumption. In essence, a conceptual message from our new approach is that some choices of \(P\) in the ring \(\mathbb{F}_{q}[X]/(P(X))\) yield an instance of QA-SD, and as such inherit our arguments of resistance against linear attacks. In contrast, other (seemingly very similar) choices of \(P\) yield instances that are _completely broken_ by linear attacks. This suggests that choosing instantiations of the ring-LPN assumption should be done with care, and our framework yields a way to do it with strong security guarantees.
As a contribution of independent interest, we also complement our security analysis by showing, for all concrete instantiations of our framework that we use in our new PCG constructions, a search-to-decision reduction for the underlying assumption. Therefore, we reduce the security of all our new PCG's to (instances of) the _search_ QA-SD assumption.
#### The Case of \(\mathbb{F}_{2}\).
Perhaps intriguingly, the most natural way to instantiate our framework goes all the way to \(\mathbb{F}_{3}\), but breaks down over \(\mathbb{F}_{2}\). We prove a theorem that states that this is in fact inherent to the approach. Basically, the construction is not adaptable to \(\mathbb{F}_{2}\) because the product ring \(\mathbb{F}_{2}^{N}=\mathbb{F}_{2}\times\cdots\times\mathbb{F}_{2}\) has only one invertible element, and hence can never be realised as a group algebra except in the trivial case \(N=1\). We then discuss a general methodology toward circumventing this limitation over \(\mathbb{F}_{2}\). While our approach falls short of providing a full-fledged solution, it highlights a possible avenue towards the intriguing goal of one day getting an efficient programmable PCG for OLE's over \(\mathbb{F}_{2}\).
#### Applications.
Building upon our new programmable PCG's, we obtain
* (via Beaver triples) secure \(N\)-party computation of arithmetic circuits over \(\mathbb{F}_{q}\), for any \(q>2\), with silent preprocessing and communication \(N^{2}\cdot\mathsf{poly}(\lambda)\cdot\log s\) bits (preprocessing phase) plus \(2Ns\) field elements (online phase), where \(s\) is the number of multiplication gates. The silent preprocessing phase involves \(O(N\mathsf{poly}(\lambda)s\log s)\) work per party. For small numbers of parties, the \(N^{2}\cdot\mathsf{poly}(\lambda)\cdot\log s\) term is dominated by the \(2Ns\) field elements for values of \(s\) as low as \(2^{25}\).
* (via circuit-dependent correlated randomness) secure \(N\)-party computation of a batch of \(T\) arithmetic circuits over \(\mathbb{F}_{q}\), for any \(q>2\), with silent preprocessing and communication \(N^{2}\cdot\mathsf{poly}(\lambda)\cdot s\log T\) bits (preprocessing phase) plus \(NTs\) field elements (online phase), where \(s\) is the number of multiplication gates in each circuit. The silent preprocessing phase involves \(O(N\mathsf{poly}(\lambda)sT\log T)\) work per party.
As in [1], our protocols extend to the malicious setting by generating _authenticated_ correlated randomness instead, which our \(\mathsf{PCG}\)'s allow as well, and using a maliciously secure seed distribution protocol. Since the extension to authenticated correlated randomness and the seed distribution protocols in [1] are oblivious to the concrete choice of underlying ring \(\mathcal{R}\), they directly apply to our new \(\mathsf{PCG}\)'s from \(\mathsf{QA}\)-\(\mathsf{SD}\).
### Related Works
Traditional constructions of \(\mathsf{OLE}\) protocols require communication for each \(\mathsf{OLE}\) produced. The work of [13] requires \(\Omega(\log|\mathbb{F}|)\) string-\(\mathsf{OT}\)'s per \(\mathsf{OLE}\)6. \(\mathsf{OLE}\)'s can also be produced using state-of-the-art protocols based on homomorphic encryption [10, 11], _e.g._ producing 64MB worth of \(\mathsf{OLE}\)'s requires about 2GB of communication with Overdrive [10]. A recent direct construction of \(\mathsf{OLE}\) from \(\mathsf{Ring}\)-\(\mathsf{LWE}\) has also been described in [1]. Using their construction, generating a batch of \(\mathsf{OLE}\)'s has an amortized communication of about 8 elements of \(\mathbb{F}\) over a large enough field.
Footnote 6: This approach crucially requires structured \(\mathsf{OT}\)’s, hence we cannot remove the communication by using pseudorandom \(\mathsf{OT}\)’s.
PCG's for \(\mathsf{OLE}\)'s allow removing most of the communication overhead, by generating a large number of pseudorandom \(\mathsf{OLE}\)'s using sublinear communication. The work of [1], which is our starting point, has a computational cost comparable to that of recent \(\mathsf{OLE}\) protocols [10], but a considerably lower communication; however, it only works over large fields. There have been several attempts to build PCG's for \(\mathsf{OLE}\)'s over small fields, but all suffer from severe downsides. The work of [1] describes a PCG construction that combines BGV-based somewhat homomorphic encryption (under ring-LWE) and a new, ad-hoc variant of the multivariate quadratic assumption with sparse secrets. Their PCG's require very large seed sizes and are only efficient when generating huge batches ([1] estimates about \(7{,}000\)\(\mathsf{OLE}\)'s per second using a 3GB seed size when producing 17GB worth of triples).
In an appendix, the work of [1] shows that the standard variant of syndrome decoding with quasi-cyclic codes yields a \(\mathsf{PCG}\) for \(\mathsf{OLE}\)'s over arbitrary fields (including small fields). At a high level, the construction uses the fact that given two pseudorandom vectors \(\mathbf{x}^{\intercal}=\mathbf{H}\cdot\mathbf{e}_{x}^{\intercal}\) and \(\mathbf{y}^{\intercal}=\mathbf{H}\cdot\mathbf{e}_{y}^{\intercal}\), generating shares of their pointwise products (_i.e._ a batch of pseudorandom \(\mathsf{OLE}\) correlations) reduces to generating shares of the diagonal of \(\mathbf{x}^{\intercal}\cdot\mathbf{y}=\mathbf{H}\cdot\left(\mathbf{e}_{x}^{\intercal}\cdot\mathbf{e}_{y}\right)\cdot\mathbf{H}^{\intercal}\), and the term \((\mathbf{e}_{x}^{\intercal}\cdot\mathbf{e}_{y})\) can be shared efficiently with \(\mathsf{FSS}\) for point functions. However, the computational cost of generating \(n\)\(\mathsf{OLE}\)'s this way scales as \(\Omega(n^{2}\log n)\) (ignoring \(\mathsf{poly}(\lambda)\) factors), which makes it entirely impractical (the sublinearity in these protocols only "kicks in" for values of \(n\) above about \(2^{30}\)).
Finally, two recent works on PCG's [1, BCG\({}^{+}\)22] have introduced new variants of syndrome decoding called respectively _variable-density_ and _expand-accumulate_ \(\mathsf{LPN}\). Each of these variants can actually be used to construct programmable \(\mathsf{PCG}\)'s for \(\mathsf{OLE}\) over small fields (though that was not their primary purpose: \(\mathsf{VDLPN}\) was introduced to construct pseudorandom correlation _functions_, and \(\mathsf{EALPN}\) to obtain more efficient "online-offline" PCG's for \(\mathsf{OT}\)). The intuition is that both assumptions can be formulated as the hardness of distinguishing \(\mathbf{H}\cdot\mathbf{e}^{\intercal}\) from random, where \(\mathbf{H}\) is a _sparse_ matrix, and the noise distribution is such that the term \((\mathbf{e}_{x}^{\intercal}\cdot\mathbf{e}_{y})\) can still be shared efficiently using some appropriate \(\mathsf{FSS}\). In this case, extracting the diagonal of \(\mathbf{H}\cdot\left(\mathbf{e}_{x}^{\intercal}\cdot\mathbf{e}_{y}\right)\cdot\mathbf{H}^{\intercal}\) does not require computing the full square matrix, and scales only as \(\mathsf{poly}(\lambda)\cdot\tilde{O}(n)\). However, the hidden costs remain prohibitively large. Concretely, for both the \(\mathsf{EALPN}\) assumption and the \(\mathsf{VDLPN}\) assumption, the row-weight of \(\mathbf{H}\) must grow as \(\lambda\cdot\log n\) [1, BCG\({}^{+}\)22, CD23] (for some specific security parameter \(\lambda\)), hence the cost of generating \(n\)\(\mathsf{OLE}\)'s boils down to \(\lambda^{2}\cdot\log^{2}n\) invocations of an \(\mathsf{FSS}\) scheme per \(\mathsf{OLE}\), where the concrete security parameter \(\lambda\) must be quite large: the recent analysis of [1] estimates \(\lambda\approx 350\). For \(n=2^{30}\), this translates to around \(10^{8}\) invocations of an \(\mathsf{FSS}\) scheme for _each_ \(\mathsf{OLE}\) produced, which is nowhere near practical.
### Organization
We provide a technical overview of our results in Section 2, and preliminaries in Section 3. Section 4 is devoted to introducing group algebras, quasi-abelian codes, and our new QA-SD family of assumptions. Section 5 uses our new QA-SD assumption to build programmable PCG's, adapting and generalising the template of [2]. Section 6 covers the concrete security analysis of QA-SD against various known attacks, and in particular against _folding attacks_, which exploit the structure of the assumption to reduce the dimension of the instances. Finally, in Section 7 we elaborate on the applications of our new PCG's to secure computation. Appendix A provides more detailed preliminaries on FSS and PCG's. Appendix B complements our study of QA-SD by providing a search-to-decision reduction for the subset of the QA-SD family used to construct our PCG's. Appendix C provides some background on function field theory, which is used in the analysis of some of our results; it also includes background on the Carlitz module, which is at the heart of our (ultimately unsuccessful) attempt to extend our framework to OLE's over \(\mathbb{F}_{2}\). Appendix D covers our approach for building OLE's over \(\mathbb{F}_{2}\) and identifies the missing ingredient.
## 2 Technical Overview
### Generating Pseudorandom Correlations: a Template
A general template to construct PCG's was put forth in [2], and further refined in subsequent works. At a high level, the template combines two ingredients: a method that uses _function secret sharing_ to generate a _sparse_ version of the target correlation, and a carefully chosen linear code for which the syndrome decoding problem is conjectured to be intractable. To give a concrete example, let us consider the task of generating an OLE correlation over a large polynomial ring \(\mathcal{R}=\mathbb{F}_{p}[X]/(P)\), where \(P\) is some degree-\(N\) split polynomial, and \(\mathbb{F}_{p}\) is a field. In a ring-OLE correlation, each party \(P_{\sigma}\) receives \((x_{\sigma},y_{\sigma})\in\mathcal{R}^{2}\) for \(\sigma=0,1\), which are random conditioned on \(x_{0}+x_{1}=y_{0}\cdot y_{1}\).
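To make this target correlation concrete, here is a minimal Python sketch that samples the ideal ring-OLE correlation over a toy ring \(\mathcal{R}=\mathbb{F}_{p}[X]/(P)\); the prime, the modulus, and all helper names are illustrative placeholders rather than parameters of the actual construction (which works over much larger rings).

```python
import random

p = 7                # toy field size (illustrative)
P = [1, 0, 0, 0, 1]  # P(X) = X^4 + 1, coefficients listed constant-first (illustrative)
N = len(P) - 1       # ring elements are polynomials of degree < N

def poly_mul_mod(a, b):
    """Multiply two elements of F_p[X]/(P), for a monic P of degree N."""
    prod = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for deg in range(2 * N - 2, N - 1, -1):  # reduce using X^N = -(P(X) - X^N)
        c, prod[deg] = prod[deg], 0
        for k in range(N):
            prod[deg - N + k] = (prod[deg - N + k] - c * P[k]) % p
    return prod[:N]

def sample_ring_ole():
    """Ideal ring-OLE: (x0, y0), (x1, y1) random subject to x0 + x1 = y0 * y1."""
    y0 = [random.randrange(p) for _ in range(N)]
    y1 = [random.randrange(p) for _ in range(N)]
    x0 = [random.randrange(p) for _ in range(N)]
    prod = poly_mul_mod(y0, y1)
    x1 = [(prod[i] - x0[i]) % p for i in range(N)]
    return (x0, y0), (x1, y1)

(x0, y0), (x1, y1) = sample_ring_ole()
assert [(u + v) % p for u, v in zip(x0, x1)] == poly_mul_mod(y0, y1)
```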
**Sparse correlations from FSS.** Informally, FSS for a function class \(\mathcal{F}\) makes it possible to share functions \(f:\{0,1\}^{\ell}\to\mathbb{G}\) (where \(\mathbb{G}\) is some group) from \(\mathcal{F}\) into \((f_{0},f_{1})\leftarrow\mathsf{Share}(f)\) such that
1. \(f_{\sigma}\) hides \(f\) (computationally), and
2. for any \(x\in\{0,1\}^{\ell}\), \(f_{0}(x)+f_{1}(x)=f(x)\).
Since FSS can always be achieved trivially by sharing the truth table of \(f\), one typically wants the shares to be compact (_i.e._ not much larger than the description of \(f\)). Efficient constructions of FSS from a length-doubling pseudorandom generator are known for some simple function classes, such as _point functions_ (functions \(f_{\alpha,\beta}\) that evaluate to \(\beta\) on \(x=\alpha\), and to \(0\) otherwise). FSS for point functions can be seen as a succinct way to privately share a long unit vector. More generally, FSS for \(t\)-point functions yield a succinct protocol for privately sharing a long \(t\)-sparse vector.
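As a toy illustration of the trivial construction just mentioned, the sketch below (illustrative names and parameters) shares a point function by additively sharing its truth table. Each key individually is uniformly random and thus hides \(f\), but the keys have length \(O(2^{\ell})\) rather than the short keys achieved by PRG-based constructions, so this only illustrates correctness, not succinctness.

```python
import random

p = 101  # toy output group Z_p (illustrative)

def share_point_function(ell, alpha, beta):
    """Trivial FSS for the point function f_{alpha,beta} on {0,1}^ell:
    additively share the full truth table (keys of length 2^ell)."""
    n = 2 ** ell
    table = [beta if x == alpha else 0 for x in range(n)]
    k0 = [random.randrange(p) for _ in range(n)]
    k1 = [(table[x] - k0[x]) % p for x in range(n)]
    return k0, k1

def eval_share(key, x):
    return key[x]

k0, k1 = share_point_function(ell=4, alpha=5, beta=42)
for x in range(16):
    assert (eval_share(k0, x) + eval_share(k1, x)) % p == (42 if x == 5 else 0)
```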
An FSS for multipoint functions immediately gives a strategy to succinctly distribute a _sparse_ ring-OLE correlation: sample two random \(t\)-sparse polynomials \(y_{0},y_{1}\) (_i.e._ polynomials with \(t\) nonzero coefficients in the standard basis), and define \(f\) to be the \(t^{2}\)-point function whose truth table is given by the coefficients of \(y_{0}\cdot y_{1}\) (over \(\mathbb{F}_{p}[X]\)). Each party \(P_{\sigma}\) receives \(\mathsf{k}_{\sigma}=(y_{\sigma},f_{\sigma})\), where \((f_{0},f_{1})=\mathsf{Share}(f)\). With standard constructions of multipoint FSS, the size of \(\mathsf{k}_{\sigma}\) is \(O(t^{2}\cdot\log N)\) (ignoring \(\lambda\) and \(\log p\) terms): whenever \(t\) is small, this is an exponential improvement over directly sharing \(y_{0}\cdot y_{1}\) (which would yield keys of length \(O(N)\)).
**From sparse to pseudorandom using syndrome decoding.** It remains to convert the sparse correlation into a pseudorandom correlation. This step is done non-interactively, by locally _compressing_ the sparse correlation using a suitable linear mapping, viewing the compressed vector as the syndrome of a linear code \(\mathcal{C}\) (the compressive linear mapping being the parity-check matrix of \(\mathcal{C}\)). The mapping must satisfy two constraints: it should be _efficient_ (linear or quasi-linear in its input size), and its output on a sparse vector should be _pseudorandom_. Fortunately, decades of research on coding theory have provided us with many linear mappings which are conjectured to satisfy the latter;
the corresponding assumptions are usually referred to as (variants of) the _syndrome decoding_ (SD) assumption, or as (variants of) the _learning parity with noise_ (LPN) assumption7.
Footnote 7: The name LPN historically refers to the hardness of distinguishing oracle access to samples \((\mathbf{a},\langle\mathbf{a},\mathbf{s}\rangle+e)\) (for a fixed secret \(\mathbf{s}\)) from samples \((\mathbf{a},b)\) where \(\mathbf{a},\mathbf{s}\) are random vectors, \(e\) is a biased random bit, and b is a uniform random bit. This becomes equivalent to the syndrome decoding assumption when the number of calls to the oracle is _a priori bounded_, hence the slight abuse of terminology. Since we will mostly use tools and results from coding theory in this work, we will use the standard coding theoretic terminology “syndrome decoding” to refer to the variant with bounded oracle access, which is the one used in all works on PCG’s.
Going back to our example, we will use two instances \((x_{\sigma}^{0},y_{\sigma}^{0})_{\sigma\in\{0,1\}}\) and \((x_{\sigma}^{1},y_{\sigma}^{1})_{\sigma\in\{0,1\}}\) of a sparse ring-OLE correlation. Fix a random element \(a\stackrel{\$}{\leftarrow}\mathcal{R}\). Each party \(P_{\sigma}\) defines
\[y_{\sigma}\leftarrow(1,a)\cdot(y_{\sigma}^{0},y_{\sigma}^{1})^{\intercal}=y_{\sigma}^{0}+a\cdot y_{\sigma}^{1}\bmod P(X).\]
The assumption that \(y_{\sigma}\) is indistinguishable from random is known in the literature as the _ring-LPN assumption_, and has been studied in several previous works [1] (for an appropriate choice of \(P\), it is also equivalent to the quasi-cyclic syndrome decoding assumption, used in NIST submissions such as BIKE [1] and HQC [2]). Furthermore, using FFT, the mapping can be computed in time \(\tilde{O}(N)\). Then, observe that we have
\[y_{0}y_{1}=(y_{0}^{0}+a\cdot y_{0}^{1})\cdot(y_{1}^{0}+a\cdot y_{1}^{1})=y_{0}^{0}\cdot y_{1}^{0}+a\cdot(y_{0}^{0}\cdot y_{1}^{1}+y_{0}^{1}\cdot y_{1}^{0})+a^{2}\cdot(y_{0}^{1}\cdot y_{1}^{1}),\]
where the polynomials \(y_{0}^{0}\cdot y_{1}^{0}\), \(y_{0}^{0}\cdot y_{1}^{1}\), \(y_{0}^{1}\cdot y_{1}^{0}\), and \(y_{0}^{1}\cdot y_{1}^{1}\) are all \(t^{2}\)-sparse. Hence, each of these four polynomials can be succinctly shared using FSS for a \(t^{2}\)-point function. Therefore, shares of \(y_{0}y_{1}\) can be reconstructed using a local linear combination of shares of sparse polynomials, which can be distributed succinctly using FSS for multipoint functions.
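The identity above, together with the \(t^{2}\)-sparsity of the four cross terms, can be checked mechanically; the following Python sketch does so over \(\mathbb{F}_{p}[X]\) with small illustrative parameters (all names are placeholders).

```python
import random

p, N, t = 97, 64, 4  # illustrative toy parameters

def sparse_poly(deg, weight):
    """Random polynomial of degree < deg with `weight` nonzero coefficients."""
    poly = [0] * deg
    for pos in random.sample(range(deg), weight):
        poly[pos] = random.randrange(1, p)
    return poly

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def add(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = (out[i] + c) % p
    for i, c in enumerate(b):
        out[i] = (out[i] + c) % p
    return out

a = [random.randrange(p) for _ in range(N)]      # public random ring element
y00, y01 = sparse_poly(N, t), sparse_poly(N, t)  # party 0's sparse pair (y_0^0, y_0^1)
y10, y11 = sparse_poly(N, t), sparse_poly(N, t)  # party 1's sparse pair (y_1^0, y_1^1)

lhs = mul(add(y00, mul(a, y01)), add(y10, mul(a, y11)))
rhs = add(add(mul(y00, y10),
              mul(a, add(mul(y00, y11), mul(y01, y10)))),
          mul(mul(a, a), mul(y01, y11)))
assert lhs == rhs

# each of the four cross terms has at most t^2 nonzero coefficients
for cross in (mul(y00, y10), mul(y00, y11), mul(y01, y10), mul(y01, y11)):
    assert sum(c != 0 for c in cross) <= t * t
```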
**Wrapping up.** The final PCG looks as follows: each party \(P_{\sigma}\) gets \((y_{\sigma}^{0},y_{\sigma}^{1})\) together with four FSS shares of \(t^{2}\)-point functions whose domains correspond to these four terms. The PCG key size scales as \(O(t^{2}\log N)\) overall. Expanding the keys amounts to locally computing the shares of the sparse polynomial products (four evaluations of the FSS on their entire domain, in time \(O(N)\)) and a few \(\tilde{O}(N)\)-time polynomial multiplications with \(a\) and \(a^{2}\) (which are public parameters). Observe that when \(P\) splits into \(N\) linear factors over \(\mathbb{F}_{p}[X]\), a single pseudorandom ring-OLE correlation as above can be locally transformed into \(N\) instances of pseudorandom OLE's over \(\mathbb{F}_{p}\): this is essentially the construction of PCG for OLE of [1]. However, this requires \(p\) to be larger than \(N\), restricting the construction to generating OLE's over large fields. Furthermore, the requirement of a splitting \(P\) makes the construction rely on a less-studied variant of ring-LPN.
### Quasi-Abelian Codes to the Rescue
We start by abstracting out the requirement of the construction of [1]. In coding theoretic terms, the hardness of distinguishing \((a,a\cdot e+f)\), with sparse \((e,f)\), from random is an instance of the (decisional) _syndrome decoding problem_ with respect to a code with parity check matrix \((1,a)\). At a high level, and sticking to the coding-theoretic terminology, we need a ring \(\mathcal{R}\) such that
1. the (decisional) syndrome decoding problem with respect to the matrix \((1,a)\) is intractable with high probability over the random choice of \(a\stackrel{\$}{\leftarrow}\mathcal{R}\);
2. given _sparse_ elements \((e,f)\) of \(\mathcal{R}\), it is possible to succinctly share the element \(e\cdot f\in\mathcal{R}\);
3. operations on \(\mathcal{R}\), such as products, can be computed efficiently (_i.e._ in time quasilinear in the description length of elements of \(\mathcal{R}\));
4. finally, \(\mathcal{R}\) is isomorphic to \(\mathbb{F}\times\cdots\times\mathbb{F}\) for some target field \(\mathbb{F}\) of interest.
We identify _quasi-abelian codes_ as a family of codes that simultaneously satisfy all the above criteria. At a high level, a quasi-abelian code of index \(\ell\) has codewords of the form
\[\{(\mathbf{m}\mathbf{\Gamma}_{1},\ldots,\mathbf{m}\mathbf{\Gamma}_{\ell})\mid \mathbf{m}=(m_{1},\ldots,m_{k})\in(\mathbb{F}_{q}[G])^{k}\},\]
where each \(\mathbf{\Gamma}_{i}\) is an element of \(\mathbb{F}_{q}[G]^{k}\). Here, \(\mathbb{F}_{q}[G]\) denotes the _group algebra_:
\[\mathbb{F}_{q}[G]\stackrel{{\text{def}}}{{=}}\left\{\sum_{g\in G}a _{g}g\mid a_{g}\in\mathbb{F}_{q}\right\},\]
where \(G\) is a finite abelian group. Quasi-abelian codes generalise quasi-cyclic codes in a natural way: a quasi-cyclic code is obtained by instantiating \(G\) with \(\mathbb{Z}/n\mathbb{Z}\). We define the _quasi-abelian syndrome decoding problem_ (QA-SD) as the natural generalisation of the syndrome decoding problem to quasi-abelian codes. This encompasses both quasi-cyclic syndrome decoding and plain syndrome decoding. The properties of quasi-abelian codes have been thoroughly studied in algebraic coding theory. We elaborate below on why quasi-abelian codes turn out to be precisely the right choice given our constraints 1-4 above.
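For instance, writing \(g\) for a generator of \(\mathbb{Z}/n\mathbb{Z}\), identifying \(g\) with \(X\) yields the standard isomorphism
\[\mathbb{F}_{q}[\mathbb{Z}/n\mathbb{Z}]\simeq\mathbb{F}_{q}[X]/(X^{n}-1),\qquad\sum_{k=0}^{n-1}a_{k}g^{k}\longmapsto\sum_{k=0}^{n-1}a_{k}X^{k}\bmod(X^{n}-1),\]
which recovers the ring underlying quasi-cyclic codes (and ring-\(\mathsf{LPN}\) with the choice \(P(X)=X^{n}-1\)).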
#### Security Against Linear Tests.
The linear test framework from [3, 1] provides a unified way to study the resistance of LPN-style and syndrome decoding-style assumptions against a wide family of _linear_ attacks, which includes most known attacks on LPN and syndrome decoding. We refer the reader to Section 3.2 for a detailed coverage. At a high level, in our setting, security against linear attacks boils down to proving that \((1,a)\) generates a code with large minimum distance. On the one hand, a recent result of Fan and Lin [11] proves that quasi-abelian codes asymptotically meet the Gilbert-Varshamov bound when the code length goes to infinity and the underlying group is fixed. On the other hand, Gaborit and Zemor [1] prove a similar result when the size of the group goes to infinity, but restricted to the case where the group is cyclic. We conjecture an extension of Gaborit and Zemor's result to arbitrary abelian groups. The latter conjecture entails that the QA-SD problem cannot be broken by any attack from the linear test framework, for any choice of the underlying group \(G\). This is the key to circumventing the restrictions of [3].
#### Distribution of Products of Sparse Elements.
Using quasi-abelian codes, the ring \(\mathcal{R}\) is therefore a group algebra \(\mathbb{F}_{q}[G]\). Now, given any two \(t\)-sparse elements \(e=\sum_{g\in G}e_{g}g\) and \(f=\sum_{g\in G}f_{g}g\) of \(\mathcal{R}\) (that is, such that \((e_{g})_{g\in G}\) and \((f_{g})_{g\in G}\) have Hamming weight \(t\)), the product \(e\cdot f\) can be rewritten as
\[e\cdot f=\sum_{e_{g},f_{h}\neq 0}e_{g}f_{h}\cdot gh,\]
which is a \(t^{2}\)-sparse element of the group algebra. In other words, the product of two sparse elements in a group algebra is always a sparse element. In the context of building PCG's, this implies that we can directly distribute elements \(ef\in\mathcal{R}\) using Sum of Point Function Secret Sharing (SPFSS) for \(t^{2}\)-point functions. This allows us to generalise the template PCG construction of [3] to the setting of arbitrary quasi-abelian code, with essentially the same efficiency (in a sense, the template is "black-box" in the ring: it only relies on the ability to distribute sparse elements via FSS).
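This observation is easy to see in code: representing an element of \(\mathbb{F}_{q}[G]\) as a dictionary mapping group elements (here, tuples in \(G=(\mathbb{Z}/m\mathbb{Z})^{d}\)) to their nonzero coefficients, the support of a product never exceeds the product of the supports. A minimal sketch, with purely illustrative parameters:

```python
import random
from itertools import product

q, m, d, t = 5, 4, 2, 3  # illustrative: F_5[(Z/4Z)^2], t-sparse elements

def random_sparse(t):
    """t-sparse element of F_q[G]: t distinct group elements, nonzero coefficients."""
    support = random.sample(list(product(range(m), repeat=d)), t)
    return {g: random.randrange(1, q) for g in support}

def ga_mul(e, f):
    """Product in the group algebra: (sum_g e_g g)(sum_h f_h h) = sum e_g f_h (g+h)."""
    out = {}
    for g, eg in e.items():
        for h, fh in f.items():
            gh = tuple((gi + hi) % m for gi, hi in zip(g, h))
            out[gh] = (out.get(gh, 0) + eg * fh) % q
    return {g: c for g, c in out.items() if c != 0}

e, f = random_sparse(t), random_sparse(t)
assert len(ga_mul(e, f)) <= t * t  # the product of t-sparse elements is t^2-sparse
```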
We note that our generalised template differs slightly from the approach of [3]: in this work, the authors work over rings of the form \(\mathcal{R}=\mathbb{F}_{p}[X]/(P(X))\), where \(P\) is some polynomial. However, in general, this ring is not a group algebra, and the product of sparse elements of \(\mathcal{R}\) might not be sparse. They circumvented this issue by directly sharing the product over \(\mathbb{F}_{p}[X]\) (where the product of sparse polynomials remains sparse) and letting the parties reduce locally modulo \(P\). Doing so, however, introduces a factor 2 overhead in the expansion (and a slight overhead in the seed size). Our approach provides a cleaner solution, using a structure where sparsity is natively preserved through products inside the ring.
#### Fast Operations on Group Algebras.
We observe that, by folklore results, operations over a group algebra \(\mathbb{F}_{q}[G]\) admit an FFT algorithm (using a general form of the FFT which encompasses both the original FFT of Cooley and Tukey, and the Number Theoretic Transform). When using this general FFT, setting \(G=\mathbb{Z}/2^{t}\mathbb{Z}\) recovers the usual FFT from the literature. In full generality, given any abelian group \(G\) of cardinality \(n\) with \(\gcd(n,q)=1\) and exponent \(d\), if \(\mathbb{F}_{q}\) contains a
primitive \(d\)-th root of unity, then the Discrete Fourier Transform and its inverse can be computed in time \(O(n\cdot\sum_{i}p_{i})\), where the \(p_{i}\) are the prime factors appearing in the Jordan-Hölder series of \(G\); we refer the reader to Section 4.3 for a more detailed coverage. For several groups of interest in our context, this appears to yield very efficient FFT variants. For example, setting \(q=3\) and \(G=(\mathbb{Z}/2\mathbb{Z})^{d}\), the resulting FFT is a \(d\)-dimensional FFT over \(\mathbb{F}_{3}\) and it can be computed in time \(\mathcal{O}(n\cdot\log n)\) (the group algebra \(\mathbb{F}_{3}[(\mathbb{Z}/2\mathbb{Z})^{d}]\) is the one that yields a PCG for \(n\) copies of OLE over \(\mathbb{F}_{3}\)).
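For this concrete choice \(q=3\) and \(G=(\mathbb{Z}/2\mathbb{Z})^{d}\), the characters of \(G\) take values \(\pm 1\in\mathbb{F}_{3}\), and the DFT specialises to a Walsh-Hadamard transform computed modulo 3. Below is a minimal Python sketch (toy parameters, placeholder names, not an optimised implementation) computing it with the usual \(O(n\log n)\) butterflies, and checking that it maps the group-algebra product to a coordinate-wise product, i.e. that it realises \(\mathbb{F}_{3}[(\mathbb{Z}/2\mathbb{Z})^{d}]\simeq\mathbb{F}_{3}^{2^{d}}\).

```python
import random

q, d = 3, 3
n = 2 ** d  # indices encode elements of (Z/2Z)^d as d-bit integers

def wht_mod_q(a):
    """Walsh-Hadamard transform over F_q (all characters take values +/-1)."""
    a = list(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = (x + y) % q, (x - y) % q
        h *= 2
    return a

def ga_mul(a, b):
    """Group-algebra product in F_q[(Z/2Z)^d]: convolution with XOR on indices."""
    out = [0] * n
    for g in range(n):
        for h in range(n):
            out[g ^ h] = (out[g ^ h] + a[g] * b[h]) % q
    return out

a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]

# Convolution theorem: the transform of a product is the pointwise product.
assert wht_mod_q(ga_mul(a, b)) == [(x * y) % q for x, y in
                                   zip(wht_mod_q(a), wht_mod_q(b))]

# Invertibility: applying the transform twice multiplies by n = 2^d.
n_inv = pow(n % q, -1, q)
assert [(c * n_inv) % q for c in wht_mod_q(wht_mod_q(a))] == a
```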
We note that FFT's over cyclotomic rings, such as those used in [BCG\({}^{+}\)20b], have been heavily optimised in hundreds of papers, due to their wide use (among other things) in prominent cryptosystems. As such, it is likely that even over "FFT-friendly" choices of group algebras, such as \(\mathbb{F}_{3}[(\mathbb{Z}/2\mathbb{Z})^{d}]\), the general FFT construction described above will be in practice significantly less efficient than the best known FFT's implementations over cyclotomic rings. Hence, computationally, we expect that state-of-the-art implementations of the PCG of [BCG\({}^{+}\)20b] over large fields \(\mathbb{F}\) using a cyclotomic ring \(\mathcal{R}\) for the ring-LPN assumption will be noticeably faster than state-of-the-art implementations of our approach to generate OLE's over a small field, such as \(\mathbb{F}_{3}\). There is however nothing inherent to this: the efficiency gap stems solely from the years of effort that have been devoted to optimising FFT's over cyclotomic rings, but we expect that FFT's over other FFT-friendly group algebra such as \(\mathbb{F}_{3}[(\mathbb{Z}/2\mathbb{Z})^{d}]\) could be significantly optimised in future works. We hope that our applications to silent secure computation over general fields will motivate such studies in the future.
#### From Quasi-Abelian Codes to OLE's over \(\mathbb{F}_{q}\).
Our general PCG template allows generating a pseudorandom OLE over an arbitrary group algebra \(\mathbb{F}_{q}[G]\). Then, when using \(G=(\mathbb{Z}/(q-1)\mathbb{Z})^{d}\), we have that \(\mathbb{F}_{q}[G]\simeq\mathbb{F}_{q}^{n}\) (with \(n=(q-1)^{d}\)). Therefore, a single pseudorandom OLE over \(\mathbb{F}_{q}[G]\) can be _locally_ converted by the parties into \((q-1)^{d}\) copies of a pseudorandom OLE over \(\mathbb{F}_{q}\). Furthermore, for these concrete choices of \(G\), we complement our security analysis by proving a search-to-decision reduction, showing that the decision QA-SD problem over \(\mathbb{F}_{q}[G]\) with \(G=(\mathbb{Z}/(q-1)\mathbb{Z})^{d}\) is as hard as the _search_ QA-SD problem. This provides further support for the security of our instantiations.
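In the smallest case \(q=3\), \(d=1\), this isomorphism is completely explicit: writing \(g\) for the generator of \(\mathbb{Z}/2\mathbb{Z}\), the map
\[\mathbb{F}_{3}[\mathbb{Z}/2\mathbb{Z}]\longrightarrow\mathbb{F}_{3}\times\mathbb{F}_{3},\qquad a_{0}+a_{1}g\longmapsto(a_{0}+a_{1},\,a_{0}-a_{1})\]
is a ring isomorphism (it evaluates the two characters \(g\mapsto 1\) and \(g\mapsto-1\)), so a single OLE over \(\mathbb{F}_{3}[\mathbb{Z}/2\mathbb{Z}]\) converts into two independent OLE's over \(\mathbb{F}_{3}\); for general \(d\), the same evaluation is applied coordinate-wise.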
In addition, our framework provides a way to investigate different instantiations of the ring-LPN problem through the lens of quasi-abelian codes. This turns out to play an important role in understanding the basis for the security of ring-LPN: seemingly very similar choices of the underlying polynomial can yield secure instances in one case, and completely broken instances in the other case. While the work of [BCG\({}^{+}\)20b] gave a heuristic cryptanalysis of ring-LPN, it did not identify the influence of the choice of the polynomial.
Concretely, consider the ring \(\mathcal{R}=\mathbb{F}_{q}[X]/(P(X))\) with either \(P(X)=X^{q-1}-1\) or \(P(X)=X^{q}-X\). The latter is a natural choice, as it has the largest possible number of factors over \(\mathbb{F}_{q}\) (which controls the number of OLE's produced over \(\mathbb{F}_{q}\)). \(\mathcal{R}=\mathbb{F}_{q}[X]/(P(X))\) with \(P(X)=X^{q-1}-1\) is a group algebra, and the ring-LPN assumption with ring \(\mathcal{R}\) reduces to QA-SD\((\mathcal{R})\). Hence, it is secure against all attacks from the linear test framework (and admits a search-to-decision reduction) by our analysis. On the other hand, ring-LPN over the ring \(\mathcal{R}=\mathbb{F}_{q}[X]/(P(X))\) with \(P(X)=X^{q}-X\) does not fit in our framework, and turns out to be _completely broken_ by a simple linear attack: given \((a,b)\) where \(b\) is either random or equal to \(a\cdot e+f\bmod X^{q}-X\), it holds that \(e(0)=f(0)=0\bmod X^{q}-X\) with high probability, because \(e(0)=f(0)=0\) over \(\mathbb{F}_{q}[X]\) with high probability (since \(e,f\) are sparse, their constant coefficient is likely to be zero), and reduction modulo \(X^{q}-X\) does not change the constant coefficient. Hence, the adversary can distinguish \(b\) from random simply by computing \(b(0)\) (since \(b(0)\) is nonzero with probability \((q-1)/q\) for a random \(b\)).
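The following Python sketch (toy parameters, hypothetical helper names) demonstrates this distinguisher empirically: for \(b=a\cdot e+f\bmod X^{q}-X\), the statistic \(b(0)\) is almost always \(0\), while for a uniform \(b\) it is \(0\) with probability only \(1/q\).

```python
import random

q, t, trials = 101, 4, 1000  # toy parameters: q prime, t-sparse noise

def sparse(t):
    """t-sparse polynomial of degree < q over F_q, constant coefficient first."""
    poly = [0] * q
    for pos in random.sample(range(q), t):
        poly[pos] = random.randrange(1, q)
    return poly

def mul_mod(a, e):
    """Product in F_q[X]/(X^q - X), exploiting the sparsity of e."""
    out = [0] * (2 * q - 1)
    for j, ej in enumerate(e):
        if ej == 0:
            continue
        for i, ai in enumerate(a):
            out[i + j] = (out[i + j] + ai * ej) % q
    for deg in range(2 * q - 2, q - 1, -1):  # reduce: X^deg -> X^(deg - q + 1)
        out[deg - q + 1] = (out[deg - q + 1] + out[deg]) % q
        out[deg] = 0
    return out[:q]

hits_real = hits_unif = 0
for _ in range(trials):
    a = [random.randrange(q) for _ in range(q)]
    b = [(x + y) % q for x, y in zip(mul_mod(a, sparse(t)), sparse(t))]
    hits_real += (b[0] == 0)                 # b(0) is the constant coefficient
    hits_unif += (random.randrange(q) == 0)  # same statistic on a uniform b

print(f"Pr[b(0)=0]: real ~{hits_real/trials:.2f}, uniform ~{hits_unif/trials:.2f}")
```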
The above suggests that settling for \(\mathcal{R}=\mathbb{F}_{q}[X]/(X^{q-1}-1)\) is a conservative choice to instantiate the PCG of [BCG\({}^{+}\)20b] with strong security guarantees. We note that [BCG\({}^{+}\)20b] recommended instead \(\mathcal{R}=\mathbb{F}_{p}[X]/(X^{n}+1)\) with \(n\) being a power of \(2\) and \(p\) a large prime for efficiency reasons (since it is a cyclotomic ring, it admits fast FFT's). We believe that a natural generalisation of our framework should also encompass this ring, and allow proving that it also yields a flavor of ring-LPN which is immune to linear attacks. However, this is beyond the scope of our paper, and we leave it to future work.
#### Considerations on the Case of \(\mathbb{F}_{2}\).
Interestingly, the aforementioned instance allows generating many OLE's over \(\mathbb{F}_{q}\) for any \(q>2\); for \(q=2\), however, the term \(n=(q-1)^{d}\) becomes equal to \(1\)
that is, we only get a single \(\mathsf{OLE}\) over \(\mathbb{F}_{2}\) this way. This is in fact inherent to our approach: the product ring \(\mathbb{F}_{2}^{n}\) has only one invertible element, and therefore can never be realised as a group algebra unless \(n=1\). Hence, somewhat surprisingly, our general approach circumvents the size limitation of [1] and gets us all the way to \(\mathbb{F}_{3}\) or any larger field, but fails to provide a construction in the (particularly interesting) case \(\mathbb{F}_{2}\).
Motivated by this limitation of our framework, we devise a strategy to further generalise our approach through the theory of algebraic function fields (in essence, our generalisation is to quasi-abelian codes what quasi-negacyclic codes are to quasi-cyclic codes; we note that this is also close in spirit to the instance chosen in [1]: for their main candidate, they suggest using the ring \(\mathcal{R}=\mathbb{F}_{p}[X]/(X^{n}+1)\), which is a module over a group algebra and yields a _quasi-negacyclic code_). Alas, we did not manage to get a fully working candidate. At a (very) high level, our generalised framework produces pseudorandom elements \(x=a\odot e_{x}+1\odot f_{x}\) and \(y=a\odot e_{y}+1\odot f_{y}\) where \(e_{x},e_{y},f_{x},f_{y}\) are sparse. However, the product \(\odot\) is now _not_ the same product as the group algebra product \(x\cdot y\). Concretely, to share \(x\cdot y\), we need to share terms of the form \((u\odot e)\cdot(v\odot f)\) (where \(u,v\) can be \(a\) or \(1\)). However, unlike the case of our previous instantiation, this does not rewrite as a term of the form \(uv\cdot ef\) (which we could then share by sharing the sparse term \(ef\), as \(uv\) is public). Still, we believe that our approach could serve as a baseline for future works attempting to tackle the intriguing problem of building efficient programmable \(\mathsf{PCG}\)'s for \(\mathsf{OLE}\) over \(\mathbb{F}_{2}\). In particular, our unsuccessful attempts show that to get such a \(\mathsf{PCG}\), it suffices to find a way to succinctly share terms of the form \((u\odot e)\cdot(v\odot f)\) where \(u,v\) are public, and \(e,f\) are sparse. While \(\mathsf{FSS}\) do not provide an immediate solution to this problem, this reduces the goal to a "pure MPC problem" which could admit an efficient solution.
**Concrete Cryptanalysis.** Finally, we complement our study by a concrete analysis of the security of our assumptions. As in previous works, the bounds derived from the resistance to linear attacks are quite loose, because they cover a _worst-case_ choice of linear attack. We cover standard attacks, such as information set decoding. A particularity of both ring-\(\mathsf{LPN}\) with splittable polynomial and our new family of \(\mathsf{QA}\)-\(\mathsf{SD}\) assumptions is that they grant the adversary some additional freedom: the adversary can, informally, transform a \(\mathsf{QA}\)-\(\mathsf{SD}\) instance into an instance with reduced dimension (in the case of ring-\(\mathsf{LPN}\), by reducing modulo factors of \(P\); for \(\mathsf{QA}\)-\(\mathsf{SD}\), by quotienting by subgroups of \(G\)). This turns out to be equivalent to the concept of _folding attacks_, which have been recently studied both in the context of code-based cryptography [1] and of lattice-based cryptography [1]. We analyse the effect of folding attacks on our instances and discuss the impact on our parameter choices. In particular, the instances of \(\mathsf{QA}\)-\(\mathsf{SD}\) used in our \(\mathsf{PCG}\) construction closely resemble the Multivariate \(\mathsf{LWE}\) assumption (with sparse noise instead of small-magnitude noise), which was shown in [1] to be broken by folding attacks. We note (but this is well-known [1]) that folding attacks are much less devastating on \(\mathsf{LPN}\)- and syndrome decoding-style assumptions, essentially because folding yields a very slight increase of the noise magnitude in the \(\mathsf{LWE}\) setting (the sum of \(\mathsf{LWE}\) error terms has small magnitude), but increases the noise rate very quickly in the coding setting (the sum of sparse noises very quickly becomes dense).
## 3 Preliminaries
**Function Secret Sharing.** Function secret sharing (\(\mathsf{FSS}\)), introduced in [1, 1], allows to succinctly share functions. An \(\mathsf{FSS}\) scheme splits a secret function \(f:I\to\mathbb{G}\), where \(\mathbb{G}\) is some Abelian group, into two functions \(f_{0},f_{1}\), each represented by a key \(K_{0},K_{1}\), such that: (1) \(f_{0}(x)+f_{1}(x)=f(x)\) for every input \(x\in I\), and (2) each of \(K_{0},K_{1}\) individually hides \(f\).
An \(\mathsf{SPFSS}\) is an \(\mathsf{FSS}\) scheme for the class of _sums of point functions_: functions of the form \(f(x)=\sum_{i}f_{s_{i},y_{i}}(x)\) where each \(f_{s_{i},y_{i}}(\cdot)\) evaluates to \(y_{i}\) on \(s_{i}\), and to \(0\) everywhere else. As in previous works, we will use efficient constructions of \(\mathsf{SPFSS}\) in our constructions of PCGs. Such efficient constructions are known from any length-doubling pseudorandom generator [1]. We refer the reader to Appendix A for more details on \(\mathsf{FSS}\) and \(\mathsf{SPFSS}\).
**Pseudorandom Correlation Generators.** A pseudorandom correlation generator (\(\mathsf{PCG}\)) for some target ideal correlation takes as input a pair of short, correlated seeds and outputs long correlated
pseudorandom strings, where the expansion procedure is deterministic and can be applied locally. In slightly more detail, a PCG is a pair \((\mathsf{Gen},\mathsf{Expand})\) such that \(\mathsf{Gen}(1^{\lambda})\) produces a pair of short seeds \((\mathsf{k}_{0},\mathsf{k}_{1})\) and \(\mathsf{Expand}(\sigma,\mathsf{k}_{\sigma})\) outputs a string \(R_{\sigma}\). A PCG is _correct_ if the distribution of the pairs \((R_{0},R_{1})\) output by \(\mathsf{Expand}(\sigma,\mathsf{k}_{\sigma})\) for \(\sigma=0,1\) is indistinguishable from a random sample of the target correlation. It is _secure_ if the distribution of \((\mathsf{k}_{1-\sigma},R_{\sigma})\) is indistinguishable from the distribution obtained by first computing \(R_{1-\sigma}\) from \(\mathsf{k}_{1-\sigma}\), and sampling a uniformly random \(R_{\sigma}\) conditioned on satisfying the target correlation with \(R_{1-\sigma}\) (for both \(\sigma=0\) and \(\sigma=1\)). In this work, we will mostly consider the OLE correlation, where the parties \(P_{0},P_{1}\) receive random vectors \(\mathbf{x}_{0},\mathbf{x}_{1}\in\mathbb{F}^{n}\) respectively, together with random shares of \(\mathbf{x}_{0}*\mathbf{x}_{1}\), where \(*\) denotes the component-wise (_i.e._ Schur) product.
Finally, _programmable_ PCG's allow generating multiple PCG keys such that part of the correlation generated remains the same across different instances. Programmable PCG's are necessary to construct \(n\)-party correlated randomness from the \(2\)-party correlated randomness generated via the PCG. Informally, this is because when expanding \(n\)-party shares (e.g. of Beaver triples) into a sum of \(2\)-party shares, the sum will involve many "cross terms"; using programmable PCG's allows maintaining consistent pseudorandom values across these cross terms. We refer the reader to Appendix A for more details on PCG's and programmable PCG's.
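To fix the syntax, the following sketch records the \(\mathsf{Gen}/\mathsf{Expand}\) interface and the OLE target correlation as (hypothetical) Python signatures; it is a type-level illustration only, not an implementation of any concrete PCG.

```python
from abc import ABC, abstractmethod

class PCG(ABC):
    """Syntax of a pseudorandom correlation generator (illustrative interface only)."""

    @abstractmethod
    def gen(self, security_param: int) -> tuple[bytes, bytes]:
        """Output a pair of short, correlated seeds (k0, k1)."""

    @abstractmethod
    def expand(self, sigma: int, key: bytes) -> tuple[list[int], list[int]]:
        """Deterministically stretch k_sigma into R_sigma = (x_sigma, z_sigma)."""

def is_ole_correlation(r0, r1, p):
    """Check the target OLE correlation over F_p:
    z0 + z1 = x0 * x1 componentwise, for R_sigma = (x_sigma, z_sigma)."""
    (x0, z0), (x1, z1) = r0, r1
    return all((a + b) % p == (u * v) % p for a, b, u, v in zip(z0, z1, x0, x1))
```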
### Syndrome Decoding Assumptions
The syndrome decoding assumption over a field \(\mathbb{F}\) states, informally, that no adversary can distinguish \((\mathbf{H},\mathbf{H}\cdot\mathbf{e}^{\intercal})\) from \((\mathbf{H},\mathbf{b}^{\intercal})\), where \(\mathbf{H}\) is sampled from the set of parity-check matrices of some family of linear codes, and \(\mathbf{e}\) is a _noise vector_ sampled from some distribution over \(\mathbb{F}\)-vectors and typically sparse. The vector \(\mathbf{b}\) is a uniform vector over \(\mathbb{F}^{n-k}\). More formally, we define the \(\mathsf{SD}\) assumption over a ring \(\mathcal{R}\) with dimension \(k\), code length \(n\), w.r.t. a family \(\mathcal{F}\) of linear codes, and a noise distribution \(\mathcal{D}\):
Definition 1 (Syndrome Decoding): Let \(k,n\in\mathbb{N}\), and let \(\mathcal{F}=\mathcal{F}_{n,k}\subset\mathcal{R}^{(n-k)\times n}\) be a family of parity-check matrices of codes over some ring \(\mathcal{R}\). Let \(\mathcal{D}\) be a noise distribution over \(\mathcal{R}^{n}\). The \((\mathcal{D},\mathcal{F},\mathcal{R})\)-\(\mathsf{SD}(k,n)\) assumption states that
\[\{(\mathbf{H},\mathbf{H}\cdot\mathbf{e}^{\intercal})\mid\mathbf{H}\stackrel{\$}{\leftarrow}\mathcal{F},\ \mathbf{e}\stackrel{\$}{\leftarrow}\mathcal{D}\}\stackrel{c}{\approx}\{(\mathbf{H},\mathbf{b}^{\intercal})\mid\mathbf{H}\stackrel{\$}{\leftarrow}\mathcal{F},\ \mathbf{b}\stackrel{\$}{\leftarrow}\mathcal{R}^{n-k}\},\]
where "\(\stackrel{{\leftarrow}}{{\approx}}\)" denotes the computational indistiguishability.
Denoting by \(t\) a parameter which governs the average density of nonzero entries in a random noise vector, common choices of noise distribution are Bernoulli noise (each entry is sampled from a Bernoulli distribution with parameter \(t/n\)), exact noise (the noise vector is uniformly random over the set of vectors of Hamming weight \(t\)), and regular noise (the noise vector is a concatenation of \(t\) random unit vectors). The latter is a very natural choice in the construction of pseudorandom correlation generators, as it significantly improves efficiency [1, 1, 2] without harming security (to the best of our knowledge; the attack of the recent work [1] is only efficient for very low code rates, which is not our setting).
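The three noise distributions can be made concrete as follows (a short sketch with illustrative parameters); note in particular how a regular noise vector is a concatenation of \(t\) unit blocks of length \(n/t\).

```python
import random

def bernoulli_noise(n, t, q):
    """Each entry is nonzero independently with probability t/n."""
    return [random.randrange(1, q) if random.random() < t / n else 0
            for _ in range(n)]

def exact_noise(n, t, q):
    """Uniform over vectors of Hamming weight exactly t."""
    e = [0] * n
    for pos in random.sample(range(n), t):
        e[pos] = random.randrange(1, q)
    return e

def regular_noise(n, t, q):
    """Concatenation of t random unit vectors of length n // t (n divisible by t)."""
    block = n // t
    e = [0] * n
    for i in range(t):
        e[i * block + random.randrange(block)] = random.randrange(1, q)
    return e

e = regular_noise(n=64, t=8, q=3)
assert sum(c != 0 for c in e) == 8
```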
Many codes are widely believed to yield secure instances of the syndrome decoding assumption, such as setting \(\mathbf{H}\) to be a uniformly random matrix over \(\mathbb{F}_{2}\) (the standard SD assumption), the parity-check matrix of an LDPC code [1] (the "Alekhnovich assumption"), a quasi-cyclic code (as used in several recent submissions to the NIST post-quantum competition, see e.g. [1, 2, 3] and in previous works on pseudorandom correlation generators, such as [1]), Toeplitz matrices [1, 2] and more. All these variants generalize naturally to larger fields (and are conjectured to remain secure over arbitrary fields).
In the context of PCG's, different codes enable different applications: advanced PCG constructions, such as PCGs for OLE, require codes with structure. When designing new PCGs, it is common to rely on syndrome decoding for codes which have not been previously analyzed in the literature - hence, unlike the ones listed above, they did not withstand years or decades of cryptanalysis. To facilitate the systematic analysis of new proposals, recent works [1, 2] have put forth a framework to automatically establish the security of new variants of the syndrome decoding assumption against a large class of standard attacks.
### The Linear Test Framework
The linear test framework provides a unified template to analyze the security of variants of the \(\mathsf{LPN}\) or syndrome decoding assumption against the most common attacks. It was first put forth explicitly in [10, 11] (but similar observations were implicit in many older works). Concretely, an attack against syndrome decoding in the linear test framework proceeds in two stages:
1. First, a matrix \(\mathbf{H}\) is sampled from \(\mathcal{F}\), and fed to the (unbounded) adversary \(\mathcal{A}\). The adversary returns a (nonzero) _test vector_\(\mathbf{v}=\mathcal{A}(\mathbf{H})\).
2. Second, a noise vector \(\mathbf{e}\) is sampled. The _advantage_ of the adversary \(\mathcal{A}\) in the linear test game is the bias of the induced distribution \(\mathbf{v}\cdot\mathbf{H}\cdot\mathbf{e}^{\intercal}\).
To formalize this notion, we recall the definition of the bias of a distribution:
Definition 2 (Bias of a Distribution): Given a distribution \(\mathcal{D}\) over \(\mathbb{F}^{n}\) and a vector \(\mathbf{u}\in\mathbb{F}^{n}\), the bias of \(\mathcal{D}\) with respect to \(\mathbf{u}\), denoted \(\mathsf{bias}_{\mathbf{u}}(\mathcal{D})\), is equal to
\[\mathsf{bias}_{\mathbf{u}}(\mathcal{D})=\left|\mathbb{P}_{\mathbf{x}\sim\mathcal{D}}[\mathbf{u}\cdot\mathbf{x}^{\intercal}=0]-\mathbb{P}_{\mathbf{x}\sim\mathcal{U}_{n}}[\mathbf{u}\cdot\mathbf{x}^{\intercal}=0]\right|=\left|\mathbb{P}_{\mathbf{x}\sim\mathcal{D}}[\mathbf{u}\cdot\mathbf{x}^{\intercal}=0]-\frac{1}{\left|\mathbb{F}\right|}\right|,\]
where \(\mathcal{U}_{n}\) denotes the uniform distribution over \(\mathbb{F}^{n}\). The bias of \(\mathcal{D}\), denoted \(\mathsf{bias}(\mathcal{D})\), is the maximum bias of \(\mathcal{D}\) with respect to any nonzero vector \(\mathbf{u}\).
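For intuition, the bias with respect to a fixed test vector can be estimated empirically by Monte Carlo sampling; a short sketch (illustrative only, reusing `exact_noise` from the previous sketch):

```python
import random

def empirical_bias(u, sample, q, trials=200_000):
    # Estimate |Pr[<u, x> = 0] - 1/q| for x drawn by sample().
    hits = sum(
        sum(ui * xi for ui, xi in zip(u, sample())) % q == 0
        for _ in range(trials)
    )
    return abs(hits / trials - 1 / q)

# A weight-d test vector against exact-weight-t noise over F_2:
# the bias should be small (of order e^(-2td/n), see Lemma 4 below).
n, t, q, d = 64, 8, 2, 16
u = [1] * d + [0] * (n - d)
print(empirical_bias(u, lambda: exact_noise(n, t, q), q))
```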
We say that an instance of the syndrome decoding problem is _secure against linear tests_ if, with very high probability over the sampling of \(\mathbf{H}\) in step 1, for any possible adversarial choice of \(\mathbf{v}=\mathcal{A}(\mathbf{H})\), the bias of \(\mathbf{v}\cdot\mathbf{H}\cdot\mathbf{e}^{\intercal}\) induced by the random sampling of \(\mathbf{e}\) is negligible. Intuitively, the linear test framework captures any attack where the adversary is restricted to computing a linear function of the syndrome \(\mathbf{b}^{\intercal}=\mathbf{H}\cdot\mathbf{e}^{\intercal}\), but the choice of the linear function itself can depend arbitrarily on the code. Hence, the adversary is restricted in one dimension (it has to be linear in \(\mathbf{b}^{\intercal}\)), but can run in unbounded time given \(\mathbf{H}\).
The core observation made in [10, 11] (and also implicit in previous works) is that almost all known attacks against syndrome decoding (including, but not limited to, attacks based on Gaussian elimination and the BKW algorithm [1, 12, 13, 14] and variants based on covering codes [1, 1, 15, 16, 17], the ISD family of information set decoding attacks [10, 11, 12, 13, 15, 16], statistical decoding attacks [1, 14, 15, 17], generalized birthday attacks [20, 18], linearization attacks [21, 22], attacks based on finding low weight code vectors [19], or on finding correlations with low-degree polynomials [1, 1]) fit in the above framework. Therefore, provable resistance against linear test implies security against essentially all standard attacks.
Security Against Linear Tests. Resistance against linear tests is a property of both the code distribution (this is the "with high probability over the choice of \(\mathbf{H}\)" part of the statement) and of the noise distribution (this is the "the bias of the distribution induced by the sampling of \(\mathbf{e}\) is low" part of the statement). It turns out to be relatively easy to give sufficient conditions for resistance against linear tests. At a high level, it suffices that
1. the _code generated by \(\mathbf{H}\)_ has large minimum distance, and
2. for any large enough subset \(S\) of coordinates, with high probability over the choice of \(\mathbf{e}\), one of the coordinates of \(\mathbf{e}\) indexed by \(S\) will be nonzero.
The above characterization works for any noise distribution whose nonzero entries are uniformly random over \(\mathcal{R}\setminus\{0\}\), which is the case for all standard choices of noise distributions. To see why these conditions are sufficient, recall that the adversarial advantage is the bias of \(\mathbf{v}\cdot\mathbf{H}\cdot\mathbf{e}^{\intercal}\). By condition (2), if the subset \(S\) of nonzero entries of \(\mathbf{v}\cdot\mathbf{H}\) is sufficiently large, then \(\mathbf{e}\) will "hit" one of these entries with high probability, and the output will be uniformly random. But the condition that \(S\) is sufficiently large translates precisely to the condition that \(\mathbf{v}\cdot\mathbf{H}\) has large Hamming weight for any possible (nonzero) vector \(\mathbf{v}\), which is equivalent to saying that \(\mathbf{H}\) generates a code with large minimum distance. We recall the formalization below:
**Definition 3** (Security against Linear Tests).: _Let \(\mathcal{R}\) be a ring, and let \(\mathcal{D}\) denote a noise distribution over \(\mathcal{R}^{n}\). Let \(\mathcal{F}\subset\mathcal{R}^{(n-k)\times n}\) be a family of (parity-check matrices of) linear codes. Let \(\varepsilon,\eta:\mathbb{N}\mapsto[0,1]\) be two functions. We say that the \((\mathcal{D},\mathcal{F},\mathcal{R})\)-\(\mathsf{SD}(k,n)\) problem is \((\varepsilon,\eta)\)-secure against linear tests if for any (possibly inefficient) adversary \(\mathcal{A}\) which, on input \(\mathbf{H}\) outputs a nonzero \(\mathbf{v}\in\mathcal{R}^{n-k}\), it holds that_
\[\Pr[\mathbf{H}\stackrel{{\$}}{{\leftarrow}}\mathcal{F},\ \mathbf{v}\leftarrow\mathcal{A}(\mathbf{H})\;:\;\mathsf{bias}_{\mathbf{v}}(\mathcal{D}_{\mathbf{H}})\geqslant\varepsilon(\lambda)]\leqslant\eta(\lambda),\]
_where \(\lambda\) denotes the security parameter and \(\mathcal{D}_{\mathbf{H}}\) denotes the distribution which samples \(\mathbf{e}\leftarrow\mathcal{D}\) and outputs \(\mathbf{H}\cdot\mathbf{e}^{\intercal}\)._
The _minimum distance_ of a matrix \(\mathbf{H}\), denoted \(\mathsf{d}(\mathbf{H})\), is the minimum weight of a nonzero vector in its row-span. Then, we have the following straightforward lemma:
**Lemma 4**.: _Let \(\mathcal{D}\) denote a noise distribution over \(\mathcal{R}^{n}\). Let \(\mathcal{F}\subset\mathcal{R}^{(n-k)\times n}\) be a family of parity-check matrices of linear codes. Then for any integer \(d\in\mathbb{N}\), the \((\mathcal{D},\mathcal{F},\mathcal{R})\)-\(\mathsf{SD}(k,n)\) problem is \((\varepsilon_{d},\eta_{d})\)-secure against linear tests, where_
\[\varepsilon_{d}=\max_{\mathrm{wt}(\mathbf{v})\geqslant d}\mathsf{bias}_{\mathbf{v}}(\mathcal{D}),\quad\text{ and }\quad\eta_{d}=\Pr_{\mathbf{H}\stackrel{{\$}}{{\leftarrow}}\mathcal{F}}[\mathsf{d}(\mathbf{H})<d].\]
The proof is folklore, and can be found e.g. in [1]. For example, using either Bernoulli, exact, or regular noise distributions with expected weight \(t\), for any \(\mathbf{v}\) of weight at least \(d\), the bias against \(\mathbf{v}\) is bounded by \(e^{-2td/n}\). Hence, if the code is a good code (_i.e._\(d=\Omega(n)\)), the bias is of the form \(2^{-\Omega(t)}\).
_When security against linear attacks does not suffice._ There are two important cases where security against linear test does not yield security against _all_ attacks.
1. When the code is strongly algebraic. For example, Reed-Solomon codes, which have a strong algebraic structure, have high dual minimum distance, but can be decoded efficiently with the Welch-Berlekamp algorithm, hence they do not lead to a secure syndrome decoding instance (and indeed, Welch-Berlekamp does not fit in the linear test framework).
2. When the noise is structured (e.g. for regular noise) and the code length is at least quadratic in the dimension. This opens the door to algebraic attacks such as the Arora-Ge attack [1] or the recent attack from Briaud and Oygarden [1]. However, when \(n=O(k)\) (which is the case in all our instances), these attacks do not apply.
The above are, as of today, the only known cases where security against linear attacks is known to be insufficient. Algebraic decoding techniques have a long history and are only known for very restricted families of codes, and the aforementioned algebraic attacks typically never apply in the \(n=O(k)\) regime which we usually consider for PCG's. Therefore, a reasonable rule of thumb is that a variant of syndrome decoding yields a plausible assumption if (1) it provably resists linear attacks, and (2) finding an algebraic decoding algorithm is a longstanding open problem.
## 4 Group Algebras and Quasi-Abelian Codes
### Quasi-Abelian Codes
Quasi-abelian codes were first introduced in [20], and, since then, have been extensively studied in coding theory.
**Group Algebras.** Let \(\mathbb{F}_{q}\) denote the finite field with \(q\) elements, and let \(G\) be a finite abelian group of cardinality \(n\). The group algebra of \(G\) with coefficients in \(\mathbb{F}_{q}\) is the free algebra with generators \(G\). More precisely, it is the set \(\mathbb{F}_{q}[G]\) of formal linear combinations
\[\mathbb{F}_{q}[G]\stackrel{{\mathrm{def}}}{{=}}\left\{\sum_{g \in G}a_{g}g\;\Big{|}\;a_{g}\in\mathbb{F}_{q}\right\},\]
endowed with an \(\mathbb{F}_{q}-\)vector space structure in the natural way, and the multiplication is given by the convolution:
\[\left(\sum_{g\in G}a_{g}g\right)\left(\sum_{g\in G}b_{g}g\right)\stackrel{{ \mathrm{def}}}{{=}}\sum_{g\in G}\left(\sum_{h\in G}a_{h}b_{h^{-1}g}\right)g.\]
It is readily seen that \(\mathbb{F}_{q}[G]\) is commutative if and only if the group \(G\) is abelian, which will always be the case in this article.
Once an ordering \(g_{0},\ldots,g_{n-1}\) of the elements of \(G\) is chosen, the group algebra \(\mathbb{F}_{q}[G]\) is isomorphic (as an \(\mathbb{F}_{q}-\)linear space) to \(\mathbb{F}_{q}^{n}\) via \(\varphi\colon\sum_{i=0}^{n-1}a_{i}g_{i}\mapsto(a_{0},\ldots,a_{n-1})\). This isomorphism is not canonical since it depends on the ordering, but changing it only leads to a permutation of the coordinates, and many groups (especially _abelian_ groups) come with a canonical ordering. This isomorphism allows us to endow \(\mathbb{F}_{q}[G]\) with the Hamming metric, making \(\varphi\) an _isometry_. The weight \(\operatorname{wt}(a)\) of \(a\in\mathbb{F}_{q}[G]\) is defined as the Hamming weight of \(\varphi(a)\) (note that changing the ordering of the group does not impact the weight of an element, which is thus well-defined).
Example 5: The simplest example to have in mind is the case of cyclic groups.
* Let \(G=\{1\}\) be the trivial group with one element. Then the group algebra \(\mathbb{F}_{q}[G]\) is isomorphic to the finite field \(\mathbb{F}_{q}\).
* Let \(G=\mathbb{Z}/n\mathbb{Z}\) be the cyclic group with \(n\) elements. Assuming that \(q\) is coprime to \(n\), it is easy to see that the group algebra \(\mathbb{F}_{q}[G]\) is nothing else than the usual polynomial ring \(\mathbb{F}_{q}[X]/(X^{n}-1)\). The isomorphism is given by \(k\mapsto X^{k}\) extended by linearity.
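Concretely, multiplication in \(\mathbb{F}_{q}[\mathbb{Z}/n\mathbb{Z}]\simeq\mathbb{F}_{q}[X]/(X^{n}-1)\) is just cyclic convolution of the coefficient vectors; a short Python sketch (our own toy illustration of the isomorphism in the second bullet above):

```python
def cyclic_mul(a, b, n, q):
    # Product in F_q[Z/nZ] ~ F_q[X]/(X^n - 1): cyclic convolution.
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] = (c[(i + j) % n] + ai * bj) % q
    return c

# X * X^3 = X^4 = 1 in F_5[X]/(X^4 - 1):
assert cyclic_mul([0, 1, 0, 0], [0, 0, 0, 1], 4, 5) == [1, 0, 0, 0]
```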
Remark 6: The above example shows that our framework generalises known constructions. This generality will be crucial though, because all the instances we introduce in the present article and which will be proved to resist linear attacks arise from group algebras.
Example 5 shows that the group algebra of a cyclic group can be seen as a (quotient of a) polynomial ring in one variable. For a general finite abelian group, this is not always so simple, however there is also an explicit nice representation. This uses the following standard fact from the theory of group algebras.
Proposition 7: _Let \(G_{1},G_{2}\) be two finite groups. Then_
\[\mathbb{F}_{q}[G_{1}\times G_{2}]\simeq\mathbb{F}_{q}[G_{1}]\otimes_{\mathbb{ F}_{q}}\mathbb{F}_{q}[G_{2}].\]
Example 8: Let \(G=\mathbb{Z}/n\mathbb{Z}\times\mathbb{Z}/m\mathbb{Z}\). Then, Proposition 7 entails that
\[\mathbb{F}_{q}[G]=\mathbb{F}_{q}[\mathbb{Z}/n\mathbb{Z}]\otimes_{ \mathbb{F}_{q}}\mathbb{F}_{q}[\mathbb{Z}/m\mathbb{Z}] =\mathbb{F}_{q}[X]/(X^{n}-1)\otimes_{\mathbb{F}_{q}}\mathbb{F}_{ q}[X]/(X^{m}-1)\] \[=\mathbb{F}_{q}[X,Y]/(X^{n}-1,Y^{m}-1).\]
This isomorphism can actually be made explicit by \((k,\ell)\mapsto X^{k}Y^{\ell}\) extended by linearity.
Remark 9: More generally, since it is well-known that any finite abelian group \(G\) is a product of cyclic groups \(\mathbb{Z}/d_{1}\mathbb{Z}\times\cdots\times\mathbb{Z}/d_{r}\mathbb{Z}\), the previous statement asserts that the group algebra \(\mathbb{F}_{q}[G]\) is isomorphic to a quotient of a multivariate polynomial ring, namely:
\[\mathbb{F}_{q}[G]=\mathbb{F}_{q}[\mathbb{Z}/d_{1}\mathbb{Z}\times\cdots\times \mathbb{Z}/d_{r}\mathbb{Z}]\simeq\mathbb{F}_{q}[X_{1},\ldots,X_{r}]/(X_{1}^{d_ {1}}-1,\ldots,X_{r}^{d_{r}}-1).\]
Quasi-Abelian Codes.Let \(\ell>0\) be any positive integer, and consider the free \(\mathbb{F}_{q}[G]-\)module of rank \(\ell\):
\[(\mathbb{F}_{q}[G])^{\ell}\stackrel{{\mathrm{def}}}{{=}}\mathbb{ F}_{q}[G]\oplus\cdots\oplus\mathbb{F}_{q}[G]=\Big{\{}(a_{1},\ldots,a_{\ell})\mid a _{i}\in\mathbb{F}_{q}[G]\Big{\}}.\]
Any \(\mathbb{F}_{q}[G]-\)submodule of \((\mathbb{F}_{q}[G])^{\ell}\) is called a _quasi-group code_ of index \(\ell\) of \(G\) (or quasi-\(G\) code). When the group \(G\) is abelian, a quasi-\(G\) code is called _quasi-abelian_. More precisely, given a matrix
\[\mathbf{\Gamma}=\begin{pmatrix}\gamma_{1,1}&\ldots&\gamma_{1,\ell}\\ \vdots&\ddots&\vdots\\ \gamma_{k,1}&\ldots&\gamma_{k,\ell}\end{pmatrix}\in(\mathbb{F}_{q}[G])^{k \times\ell},\]
the quasi-\(G\) code defined by \(\mathbf{\Gamma}\) is
\[\mathcal{C}\stackrel{{\mathrm{def}}}{{=}}\{\mathbf{m}\mathbf{\Gamma}=(\mathbf{m}\mathbf{\Gamma}_{1},\ldots,\mathbf{m}\mathbf{\Gamma}_{\ell})\mid\mathbf{m}=(m_{1},\ldots,m_{k})\in(\mathbb{F}_{q}[G])^{k}\},\]
where \(\mathbf{\Gamma}_{i}\) denotes the column \(\begin{pmatrix}\gamma_{1,i}\\ \vdots\\ \gamma_{k,i}\end{pmatrix}\) and \(\mathbf{m}\mathbf{\Gamma}_{i}=m_{1}\gamma_{1,i}+\cdots+m_{k}\gamma_{k,i}\in\mathbb{F}_{q}[G]\). The matrix \(\mathbf{\Gamma}\) is said to be _systematic_ if it is of the form \(\mathbf{\Gamma}=\left(I_{k}\mid\mathbf{\Gamma}^{\prime}\right)\), where \(\mathbf{\Gamma}^{\prime}\in(\mathbb{F}_{q}[G])^{k\times(\ell-k)}\) and \(I_{k}\in(\mathbb{F}_{q}[G])^{k\times k}\) is the identity matrix, _i.e._ the diagonal matrix with entries \(1_{G}\).
Let \(a\in\mathbb{F}_{q}[G]\) and choose an ordering \(g_{0},\ldots,g_{n-1}\) of the elements of \(G\). Through the aforementioned isomorphism \(\varphi\), the element \(a\) can be represented as a vector \((a_{0},\ldots,a_{n-1})\in\mathbb{F}_{q}^{n}\). Now, consider the matrix
\[\mathbf{A}=\begin{pmatrix}\varphi(a\cdot g_{0})\\ \vdots\\ \varphi(a\cdot g_{n-1})\end{pmatrix}\in\mathbb{F}_{q}^{n\times n},\]
where each row is the vector representation of a shift of \(a\) by some element \(g_{i}\in G\). In short, the matrix \(\mathbf{A}\) is the matrix representing the multiplication-by-\(a\) map \(m\mapsto am\) in \(\mathbb{F}_{q}[G]\) in the basis \((g_{0},\ldots,g_{n-1})\). An easy computation shows that for \(m,a\in\mathbb{F}_{q}[G]\), the vector representation of the product \(m\cdot a\) is the vector-matrix product
\[\varphi(m)\mathbf{A}=(m_{0},\ldots,m_{n-1})\begin{pmatrix}\varphi(a\cdot g_{0 })\\ \vdots\\ \varphi(a\cdot g_{n-1})\end{pmatrix}.\]
In other words, any quasi-group code \(\mathcal{C}\) of index \(\ell\) can be seen as a linear code of length \(\ell\times n\) over \(\mathbb{F}_{q}\). The \(\mathbb{F}_{q}[G]-\)module structure endows \(\mathcal{C}\) with an additional action of the group \(G\) on each block of length \(n\); and \(\mathcal{C}\) (seen as a linear code over \(\mathbb{F}_{q}\)) admits a generator matrix formed of \(k\times\ell\) square blocks of size \(n\).
Example 10: Let us continue with Example 5.
* If \(G=\{1\}\), then any linear code is a quasi-\(G\) code.
* If \(G=\mathbb{Z}/n\mathbb{Z}\) and \(q\) is coprime to \(n\): an element of \(\mathbb{F}_{q}[G]\simeq\mathbb{F}_{q}[X]/(X^{n}-1)\) is a polynomial of degree less than \(n\) which can be represented by the vector of its coefficients, and any product \(m(X)\cdot a(X)\in\mathbb{F}_{q}[G]\) can be represented by the _circulant_ vector-matrix product \[\left(m_{0}\ m_{1}\ \ldots\ m_{n-1}\right)\begin{pmatrix}a_{0}&a_{1}&\ldots&a_{n-1}\\ a_{n-1}&a_{0}&\ldots&a_{n-2}\\ \vdots&&&\vdots\\ a_{1}&a_{2}&\ldots&a_{0}\end{pmatrix}\in\mathbb{F}_{q}^{n}.\] For simplicity, assume that \(k=1\) and \(\ell=2\). Then, a quasi-\(\mathbb{Z}/n\mathbb{Z}\) code of index \(2\) is defined over \(\mathbb{F}_{q}\) by a double-circulant generator matrix \[\left(\begin{array}{cccc|cccc}a_{0}&a_{1}&\ldots&a_{n-1}&b_{0}&b_{1}&\ldots&b_{n-1}\\ a_{n-1}&a_{0}&\ldots&a_{n-2}&b_{n-1}&b_{0}&\ldots&b_{n-2}\\ \vdots&&&\vdots&\vdots&&&\vdots\\ a_{1}&a_{2}&\ldots&a_{0}&b_{1}&b_{2}&\ldots&b_{0}\end{array}\right).\] In other words, a quasi-\(\mathbb{Z}/n\mathbb{Z}\) code is nothing else than a usual _quasi-cyclic_ code with block length \(n\).
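The matrix \(\mathbf{A}\) above is easy to build explicitly; the following sketch (our own illustration, reusing `cyclic_mul` from the earlier sketch) constructs the multiplication matrix for \(G=\mathbb{Z}/n\mathbb{Z}\) and checks the identity \(\varphi(m)\mathbf{A}=\varphi(m\cdot a)\):

```python
def circulant(a, q):
    # Row i is the coefficient vector of a * X^i in F_q[X]/(X^n - 1),
    # i.e. entry (i, j) equals a_{(j - i) mod n}.
    n = len(a)
    return [[a[(j - i) % n] % q for j in range(n)] for i in range(n)]

def vec_mat(m, A, q):
    # Vector-matrix product over F_q.
    n = len(A)
    return [sum(m[i] * A[i][j] for i in range(n)) % q for j in range(n)]

a, m, q = [1, 2, 0, 3], [0, 1, 0, 0], 5
assert vec_mat(m, circulant(a, q), q) == cyclic_mul(m, a, 4, q)
```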
### Duality for Quasi-Abelian Codes
When dealing with codes, it may be easier to use the language of parity-check matrices, especially when considering random codes. In this section, we show that this also extends naturally to quasi-abelian codes.
Let \(G\) be an abelian group. The algebra \(\mathbb{F}_{q}[G]\) is naturally endowed with an inner product \(\langle\cdot,\cdot\rangle\) defined as follows:
\[\left\langle\sum_{g\in G}a_{g}g,\sum_{g\in G}b_{g}g\right\rangle\stackrel{{ \mathrm{def}}}{{=}}\sum_{g\in G}a_{g}b_{g},\]
which is simply the usual inner product over \(\mathbb{F}_{q}^{n}\) (this does not depend on the ordering of \(G\)). This inner product can be naturally extended to \((\mathbb{F}_{q}[G])^{\ell}\):
\[\langle(a_{1},\ldots,a_{\ell}),(b_{1},\ldots,b_{\ell})\rangle\stackrel{{ \mathrm{def}}}{{=}}\sum_{i=1}^{\ell}\langle a_{i},b_{i}\rangle,\]
and the notion of the dual \(\mathcal{C}^{\perp}\) of a code \(\mathcal{C}\) extends to quasi-abelian codes:
\[\mathcal{C}^{\perp}\stackrel{{\mathrm{def}}}{{=}}\left\{x\in( \mathbb{F}_{q}[G])^{\ell}\mid\langle x,c\rangle=0\quad\forall c\in\mathcal{C }\right\}.\]
**Proposition 11**.: _Let \(G\) be a finite abelian group and let \(\mathcal{C}\) be a quasi-\(G\) code of index \(\ell\). Then \(\mathcal{C}^{\perp}\) is also a quasi-\(G\) code of index \(\ell\)._
Proof.: It suffices to prove that \(\mathcal{C}^{\perp}\) is invariant under the action of \(\mathbb{F}_{q}[G]\).
For any \(a=\sum_{g\in G}a_{g}g\in\mathbb{F}_{q}[G]\), define \(\bar{a}\stackrel{{\mathrm{def}}}{{=}}\sum_{g\in G}a_{g}g^{-1}\in \mathbb{F}_{q}[G]\) and \(\sigma(a)\stackrel{{\mathrm{def}}}{{=}}a_{1_{G}}\in\mathbb{F}_{q}\) where \(1_{G}\) denotes the identity element of \(G\). The map \(a\mapsto\bar{a}\) is clearly an automorphism of \(\mathbb{F}_{q}[G]\) of order \(2\), and \(\sigma:\mathbb{F}_{q}[G]\mapsto\mathbb{F}_{q}\) is a linear form. Moreover, for \(a,b\in\mathbb{F}_{q}[G]\), a simple computation shows that \(\langle a,b\rangle=\sigma(a\bar{b})\).
Now, let \(x=(x_{1},\ldots,x_{\ell})\in\mathcal{C}^{\perp}\). For any \(c=(c_{1},\ldots,c_{\ell})\in\mathcal{C}\) and any \(a\in\mathbb{F}_{q}[G]\),
\[\langle x\cdot a,c\rangle=\sum_{i=1}^{\ell}\sigma((x_{i}a)\bar{c_{i}})=\sum_{ i=1}^{\ell}\sigma(x_{i}\overline{(c_{i}\bar{a})})=\langle x,c\cdot\bar{a} \rangle=0,\]
where in the last equality we used the fact that \(c\cdot\bar{a}\in\mathcal{C}\) since \(\mathcal{C}\) is an \(\mathbb{F}_{q}[G]-\)module. This concludes the proof of the proposition.
Example 12: Consider a quasi-abelian code \(\mathcal{C}\) of index \(2\), with a systematic generator matrix \(\mathbf{\Gamma}=(1\mid a)\). Then, \(\mathcal{C}\) admits a parity-check matrix of the form \(\mathbf{H}=(\bar{a}\mid-1)\).
### Fast-Fourier Transform and Encoding
This section recalls Fast Fourier Transform algorithms in a general setting. This encompasses the usual FFT introduced by Cooley and Tukey in 1965 [16]8, or the Number Theoretic Transform (NTT) algorithm with which the reader might be more familiar. For a detailed presentation in the group algebra setting, see [1].
Footnote 8: Although such an algorithm was already probably known by Gauss.
Let \(G\) be a finite abelian group9 of cardinality \(n\), \(\mathbb{F}_{q}\) a finite field with \(q\) elements, and consider the group algebra \(\mathbb{F}_{q}[G]\). As explained above, encoding a quasi-\(G\) code amounts to computing multiplications in \(\mathbb{F}_{q}[G]\), which can be done using Discrete Fourier Transform (DFT) algorithms when \(\gcd(n,q)=1\). Indeed, in this case Maschke's theorem ensures that \(\mathbb{F}_{q}[G]\) is semisimple, _i.e._\(\mathbb{F}_{q}[G]\) is isomorphic to a direct product of finite fields10, where the product is done componentwise. DFT-based algorithms to compute the product of two elements of \(\mathbb{F}_{q}[G]\) always follow the same strategy:
Footnote 9: Recall than in this work we restrict ourselves to the abelian setting, though a Fourier Transform theory exists also for non-abelian group algebras, making use of the theory of characters.
Footnote 10: This uses the abelianity of \(G\), in general \(\mathbb{F}_{q}[G]\) is a direct product of matrix algebras
1. Compute the forward map \(\mathbb{F}_{q}[G]\to\mathbb{F}_{q^{\ell_{1}}}\times\cdots\times\mathbb{F}_{q^{\ell_{r}}}\) (the DFT itself).
2. Compute the componentwise products.
3. Compute the inverse map \(\mathbb{F}_{q^{\ell_{1}}}\times\cdots\times\mathbb{F}_{q^{\ell_{r}}}\to\mathbb{F}_{q}[G]\).
Fast Fourier Transform (FFT) algorithms correspond to the case where steps 1 and 3 can be done efficiently (typically in \(O(n\log(n))\) operations in \(\mathbb{F}_{q}\), compared to a quadratic _naive_ approach). These operations are all the more efficient when \(\ell_{i}=1\) for all \(i\). This happens when \(\mathbb{F}_{q}\) contains a primitive \(d\)-th root of unity, where \(d=\exp(G)\) is the _exponent_ of \(G\), _i.e._ the lcm of the orders of all elements of \(G\). For our applications, this will always be the case.
Recall that any finite group \(G\) has a Jordan-Hölder composition series:
\[\{1_{G}\}=G_{0}\lhd G_{1}\lhd\cdots\lhd G_{r}=G\]
such that the quotients \(G_{i+1}/G_{i}\) (called the _factors_ of the series) are simple groups (_i.e._ in the abelian setting they are isomorphic to some \(\mathbb{Z}/p_{i}\mathbb{Z}\) where \(p_{i}\) is a prime), and this composition series is uniquely defined, up to equivalence (_i.e._ all Jordan-Hölder series have the same length and the same factors up to permutation).
Proposition 13 ([10, Section 5]): _Consider a finite abelian group \(G\) of cardinality \(n\) with \(\gcd(n,q)=1\), and exponent \(d\). Assume that \(\mathbb{F}_{q}\) contains a primitive \(d\)-th root of unity. Let \(p_{1},\ldots,p_{r}\) denote all the primes (possibly non distinct) appearing in the Jordan-Hölder series of \(G\) (in particular \(n=p_{1}\cdots p_{r}\)). Then the Discrete Fourier Transform (and its inverse) in \(\mathbb{F}_{q}[G]\) can be computed in \(O(n\times(p_{1}+\cdots+p_{r}))\) operations in \(\mathbb{F}_{q}\)._
Example 14: Proposition 13 encompasses well-known FFT's from the literature.
* The usual FFT corresponds to \(G=\mathbb{Z}/2^{t}\mathbb{Z}\). In this case, a composition series is given by \[G_{0}=\{0\}\subset\cdots\subset G_{i}=2^{t-i}\mathbb{Z}/2^{t}\mathbb{Z}\subset\cdots\subset G_{t}=G=\mathbb{Z}/2^{t}\mathbb{Z},\] and each factor \(G_{i+1}/G_{i}\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\); with the above proposition we recover the usual complexity \(O(2^{t}\times t)=O(n\log(n))\). However, \(\mathbb{F}_{q}\) needs to be large enough to contain a primitive \(2^{t}\)-th root of unity12. Footnote 12: When the characteristic of \(\mathbb{F}_{q}\) is not too large, an approach based on the Frobenius Fast Fourier Transform can also be exploited to remove this constraint.
* Consider the finite field \(\mathbb{F}_{3}\) and the group \(G=(\mathbb{Z}/2\mathbb{Z})^{t}\). Example 8 entails that \[\mathbb{F}_{3}[G]\simeq\mathbb{F}_{3}[X_{1},\ldots,X_{t}]/(X_{1}^{2}-1,\ldots,X_{t}^{2}-1).\] A composition series of \(G\) is given by \[G_{0}=\{0\}^{t}\subset\cdots\subset G_{i}=(\mathbb{Z}/2\mathbb{Z})^{i}\times \{0\}^{t-i}\subset\cdots\subset G_{t}=G=(\mathbb{Z}/2\mathbb{Z})^{t},\] and the FFT can also be computed in time \(O(2^{t}\times t)=O(n\log(n))\). This is nothing else than a \(t\)-dimensional FFT in \(\mathbb{F}_{3}\).
Remark 15: Proposition 13 is _asymptotic_, although efficient implementations exist for several groups and fields. They are particularly efficient when \(G\) admits a Jordan-Hölder composition series with groups of index 2, such as in the above two examples, which allows a simple divide-and-conquer approach. For a more precise description of multivariate FFT algorithms, see [13, Section 2.2].
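As an illustration of the second example above, multiplication in \(\mathbb{F}_{3}[(\mathbb{Z}/2\mathbb{Z})^{t}]\) can be sketched with a Walsh-Hadamard transform, since \(-1=2\in\mathbb{F}_{3}\) is a primitive square root of unity (a toy sketch under these assumptions, not an optimized implementation):

```python
def hadamard_f3(v):
    # Walsh-Hadamard transform over F_3: the DFT of (Z/2Z)^t, using -1 = 2.
    v, h = list(v), 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = (x + y) % 3, (x - y) % 3
        h *= 2
    return v

def mul_f3_group(a, b):
    # Transform, multiply componentwise, transform back (H^2 = |G| * Id).
    fc = [(x * y) % 3 for x, y in zip(hadamard_f3(a), hadamard_f3(b))]
    n_inv = pow(len(a) % 3, -1, 3)
    return [(x * n_inv) % 3 for x in hadamard_f3(fc)]

# g * g = 1 for the generator g of Z/2Z (case t = 1):
assert mul_f3_group([0, 1], [0, 1]) == [1, 0]
```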
### The Quasi-Abelian Decoding Problem
In this section, we introduce computationally hard problems related to random quasi-abelian codes. They are variants of the Syndrome Decoding Problem, restricted to this class of codes.
Let \(G\) be a finite abelian group and \(\mathbb{F}_{q}\) a finite field with \(q\) elements. Given an integer \(t\in\mathbb{N}\), we denote by \(\mathcal{D}_{t}(\mathbb{F}_{q}[G])\) a noise distribution over \(\mathbb{F}_{q}[G]\) such that \(\mathbb{E}[\mathrm{wt}(x)]=t\) when \(x\stackrel{{\$}}{{\leftarrow}}\mathcal{D}_{t}\), and \(\mathcal{D}_{t,n}(\mathbb{F}_{q}[G])\stackrel{{\mathrm{def}}}{{=}}\mathcal{D}_{t}(\mathbb{F}_{q}[G])^{\otimes n}\) will denote its \(n\)-fold tensorization, _i.e._ a sample \(\mathbf{e}\stackrel{{\$}}{{\leftarrow}}\mathcal{D}_{t,n}(\mathbb{F}_{q}[G])\) is a vector \(\mathbf{e}\in\mathbb{F}_{q}[G]^{n}\) whose coordinates are drawn independently according to \(\mathcal{D}_{t}(\mathbb{F}_{q}[G])\). A _random_ quasi-\(G\) code of index 2, in _systematic form_, is a quasi-\(G\) code whose parity-check matrix \(\mathbf{H}\in(\mathbb{F}_{q}[G])^{1\times 2}\) is of the form \(\mathbf{H}=(\mathbf{1}\mid\mathbf{a})\), where \(\mathbf{a}\) is uniformly distributed over \(\mathbb{F}_{q}[G]\). Equivalently, it is the dual of the code generated by \(\mathbf{H}\). The search Quasi-Abelian Syndrome Decoding problem is defined as follows:
**Definition 16** ((Search) QA-SD problem).: _Given \(\mathbf{H}=(\mathbf{1}\mid\mathbf{a})\) a parity-check matrix of a random systematic quasi-abelian code, a target weight \(t\in\mathbb{N}\) and a syndrome \(\mathbf{s}\in\mathbb{F}_{q}[G]\), the goal is to recover an error \(\mathbf{e}=(\mathbf{e}_{1}\mid\mathbf{e}_{2})\) with \(\mathbf{e}_{i}\stackrel{{\$}}{{\leftarrow}}\mathcal{D}_{t}(\mathbb{F}_{q}[G])\) such that \(\mathbf{H}\mathbf{e}^{\intercal}=\mathbf{s}\), i.e. \(\mathbf{e}_{1}+\mathbf{a}\cdot\mathbf{e}_{2}=\mathbf{s}\)._
The problem also has a decisional version.
**Definition 17** ((Decisional) QA-SD problem).: _Given a target weight \(t\), the goal of the decisional QA-SD problem is to distinguish, with a non-negligible advantage, between the distributions_

\[\begin{array}{ll}\mathcal{D}_{0}:\ (\mathbf{a},\mathbf{s})&\text{where }\mathbf{a},\mathbf{s}\stackrel{{\$}}{{\leftarrow}}\mathbb{F}_{q}[G],\\ \mathcal{D}_{1}:\ (\mathbf{a},\mathbf{e}_{1}+\mathbf{a}\cdot\mathbf{e}_{2})&\text{where }\mathbf{a}\stackrel{{\$}}{{\leftarrow}}\mathbb{F}_{q}[G]\text{ and }\mathbf{e}_{1},\mathbf{e}_{2}\stackrel{{\$}}{{\leftarrow}}\mathcal{D}_{t}(\mathbb{F}_{q}[G]).\end{array}\]
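For concreteness, a sample of either distribution is easy to generate over, say, the quasi-cyclic instance \(\mathcal{R}=\mathbb{F}_{q}[\mathbb{Z}/n\mathbb{Z}]\); the sketch below (illustrative only, reusing `cyclic_mul` and `exact_noise` from earlier sketches) outputs \((\mathbf{a},\mathbf{e}_{1}+\mathbf{a}\cdot\mathbf{e}_{2})\) or a uniform pair:

```python
import random

def qa_sd_sample(n, q, t, real=True):
    # One decisional QA-SD sample over F_q[Z/nZ] with exact-weight-t noise.
    a = [random.randrange(q) for _ in range(n)]
    if not real:
        return a, [random.randrange(q) for _ in range(n)]
    e1, e2 = exact_noise(n, t, q), exact_noise(n, t, q)
    return a, [(x + y) % q for x, y in zip(e1, cyclic_mul(a, e2, n, q))]
```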
Both assumptions above generalize immediately to the case of parity-check matrices with more columns and/or rows of blocks. When \(\mathbf{H}=(\mathbf{1}\mid\mathbf{a}_{1}\mid\cdots\mid\mathbf{a}_{\mathbf{c}- \mathbf{1}})\), for some parameter \(c\), this corresponds to what has been called Module-LPN in the literature. This corresponds to the hardness of syndrome decoding for a quasi-abelian code of larger rate \((c-1)/c\). We call (search, decisional) QA-SD\((c,\mathcal{R})\) this natural generalization of QA-SD.
The QA-SD assumption states that the above decisional problem should be hard (for appropriate parameters). When the group \(G\) is the trivial group, this is the usual _plain_ SD assumption, while when the group \(G\) is cyclic13, this is the QC-SD assumption at the core of the Round 4 NIST submissions BIKE and HQC. Those problems, especially their search version, have been studied for over 50 years by the coding theory community, and to this day, no efficient algorithm is known to decode a random quasi-abelian code. This is even listed as an open research problem in the most recent Encyclopedia of Coding Theory (from 2021) [21, Problem 16.10.5].
Footnote 13: and \(\gcd(q,|G|)=1\)
_Remark 18_.: In Definition 17, we consider quasi-abelian codes with a parity-check matrix in systematic form. Indeed, assume \(\mathbf{H}=(\mathbf{a}_{1}\mid\mathbf{a}_{2})\in\mathbb{F}_{q}[G]^{1\times 2}\). A syndrome of \(\mathbf{H}\) will be of the form \(\mathbf{s}=\ \mathbf{a}_{1}\mathbf{e}_{1}+\mathbf{a}_{2}\mathbf{e}_{2}\), and therefore is contained in the ideal \(\mathcal{I}=(\mathbf{a}_{1},\mathbf{a}_{2})\) of \(\mathbb{F}_{q}[G]\) generated by \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\)14. Therefore, when this ideal is _not_ the full ring, there is an obvious bias. When working over a large field \(\mathbb{F}_{q}\), elements of \(\mathbb{F}_{q}[G]\) are invertible with high probability, and therefore \(\mathcal{I}=\mathbb{F}_{q}[G]\) with overwhelming probability. On the other hand, this is not true anymore when working over small fields. Using parity-check matrices in systematic form ensures that \(1_{G}\in\mathcal{I}\), which removes the bias. This is a standard definition (see for instance [1, 1]), though not always formulated like that in the literature.
Footnote 14: Beware that \(\mathbb{F}_{q}[G]\) is not necessarily principal.
### Security Analysis
In this paragraph, we provide evidence for the QA-SD assumption. Note first that for \(G=\{1\}\) it is nothing but the SD assumption, which is well established. Moreover, we argue for security of QA-SD against linear tests (Definition 3). With Lemma 4 in hand, it suffices to show that given the parity-check matrix \(\mathbf{H}\) of a quasi-\(G\) code \(\mathcal{C}\), the minimum distance of the code _generated_ by \(\mathbf{H}\), _i.e._ the _dual_ of \(\mathcal{C}\), is large with high probability (over the choice of \(\mathbf{H}\)). Note that when \(G=\{1\}\), it is well-known that random codes are good, _i.e._ meet the Gilbert-Varshamov (GV) bound (see for instance [14, 15, 16]).
**Proposition 19** (Gilbert-Varshamov).: _Let \(0<\delta<1-\frac{1}{q}\). Let \(\varepsilon>0\), and let \(\mathcal{C}\) be a random code of rate \(\frac{k}{n}\leqslant(1-h_{q}(\delta)-\varepsilon)\). Then,_
\[\mathbb{P}\left(d_{min}(\mathcal{C})>\delta n\right)\geqslant 1-q^{- \varepsilon n},\]
_where the probability is taken over the uniform choice of a generator matrix of \(\mathcal{C}\), and \(h_{q}\) denotes the \(q\)-ary entropy function_
\[h_{q}(x)\stackrel{{\text{def}}}{{=}}-x\log_{q}\left(\frac{x}{q-1} \right)-(1-x)\log_{q}(1-x).\]
For the past 50 years, there has been a long trend of research in coding theory to extend such a result to more general quasi-abelian codes. For the class of quasi-cyclic codes which are, by far, the most used quasi-abelian codes in cryptography, a GV-like bound was introduced by Kasami in [14]. Gaborit and Zémor even showed in [15] that various families of random double-circulant codes asymptotically satisfy a logarithmic improvement on this bound. More recently, this state of affairs was extended by Fan and Lin in [13] to _any_ quasi-abelian code, even in the modular case where \(\mathrm{char}(\mathbb{F}_{q})\) is _not_ coprime to \(|G|\). The proof of this result makes use of the theory of representations of finite abelian groups in \(\mathbb{F}_{q}\).
Theorem 20 ([13, Theorem 2.1]): _Let \(G\) be a finite abelian group, and let \(\left(\mathcal{C}_{\ell}\right)_{\ell}\) be a sequence of random quasi-\(G\) codes of length \(\ell\in\mathbb{N}\) and rate \(r\in(0,1)\). Let \(\delta\in(0,1-\frac{1}{q})\). Then,_
\[\lim_{\ell\to\infty}\mathbb{P}\left(\frac{d_{min}(\mathcal{C}_{\ell})}{|G|}> \delta\ell\right)=\left\{\begin{array}{ll}1&\mbox{if }r<1-h_{q}(\delta);\\ 0&\mbox{if }r>1-h_{q}(\delta);\end{array}\right.\]
_and both limits converge exponentially fast. The above probability is taken over the uniform choice of a generator matrix \(\mathbf{G}_{\ell}\in\mathbb{F}_{q}[G]^{k\times\ell}\) of \(\mathcal{C}_{\ell}\)._
As is often the case in coding theory, this result is stated asymptotically, but the convergence speed could be made more precise; the exponent depends on \(|G|\): the larger the group \(G\), the higher this probability. Actually, to establish the resistance of QA-SD against linear attacks, it would be more relevant to consider the regime where \(k,\ell\) are constant and \(|G|\) goes to infinity, as is done in [15], but such a development is out of reach of this article and we leave it as a conjecture. There is a caveat though. Indeed, as noticed in Remark 18, in the case of constant \(k,\ell\) and growing \(|G|\) there is a bias in the QA-SD distribution when the ideal generated by the blocks of the input parity-check matrix is not the full ring. This corresponds to the parity-check matrix not being _full-rank_ when seen as a matrix over \(\mathbb{F}_{q}[G]\). In this case, the minimum distance could drop, but heuristically a random quasi-\(G\) code will have a minimum distance linear in its length as long as this bias is removed, which is the case in our setting since we enforce the systematic form.
Example 21: In order to produce OLE's over the field \(\mathbb{F}_{p}\), [2] proposed to use a ring \(\mathcal{R}\) of the form \(\mathbb{F}_{p}[X]/(F(X))\) where \(F(X)\) is totally split in \(\mathbb{F}_{p}\).
* The choice of polynomial \(F\) that maximizes the number of OLE's would be \(F(X)=X^{p}-X\), which has precisely all its roots in \(\mathbb{F}_{p}\) (this is _not_ the choice recommended by the authors, but is still allowed in their framework). However, this ring does not fit in our setting, and in fact the SD problem in this ring is vulnerable to a very simple linear attack: given \((a,b)\) where \(b\) is either random or equal to \(a\cdot e+f\bmod X^{p}-X\), it holds that \(e(0)=f(0)=0\) with high probability (since \(e,f\) are sparse, their constant coefficients are likely to be zero, and reduction modulo \(X^{p}-X\) does not change the constant coefficient). Hence, the adversary can distinguish \(b\) from random simply by computing \(b(0)=a(0)e(0)+f(0)\), since \(b(0)\) is nonzero with probability \((p-1)/p\) for a random \(b\), while it is zero with high probability in the real case.
* However, by simply removing the \(X\) factor and setting \(F(X)=X^{p-1}-1\), which yields \(p-1\) copies of \(\mathbb{F}_{p}\) instead of \(p\), the ring \(\mathcal{R}=\mathbb{F}_{p}[X]/(X^{p-1}-1)\) is nothing else than the group ring \(\mathbb{F}_{p}[\mathbb{F}_{p}^{\times}]\) and fits entirely within our framework. In particular, it resists linear attacks. Note that the previous evaluation at \(0\) no longer makes sense, since \(0\) is not a root of \(X^{p-1}-1\).
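The attack in the first bullet is simple enough to run; below is a toy Python sketch (hypothetical parameters \(p=11\), \(t=2\); `sparse_poly` and `mul_mod` are our own helpers) comparing the frequency of \(b(0)=0\) on real samples against uniform ones:

```python
import random

def sparse_poly(p, t):
    # t random monomials of degree < p with nonzero coefficients.
    f = [0] * p
    for i in random.sample(range(p), t):
        f[i] = random.randrange(1, p)
    return f

def mul_mod(a, b, p):
    # Multiply in F_p[X]/(X^p - X), using X^k = X^(k-(p-1)) for k >= p.
    c = [0] * (2 * p)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    for k in range(2 * p - 1, p - 1, -1):
        c[k - (p - 1)] = (c[k - (p - 1)] + c[k]) % p
        c[k] = 0
    return c[:p]

def distinguish(p=11, t=2, trials=2000):
    real = unif = 0
    for _ in range(trials):
        a = [random.randrange(p) for _ in range(p)]
        b = mul_mod(a, sparse_poly(p, t), p)
        b = [(x + y) % p for x, y in zip(b, sparse_poly(p, t))]
        real += (b[0] == 0)                  # b(0) = a(0)e(0) + f(0)
        unif += (random.randrange(p) == 0)   # uniform b(0) = 0 w.p. 1/p
    return real / trials, unif / trials
```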
## 5 Pseudorandom Correlation Generators from QA-SD
In the following we always consider \(\mathcal{R}=\mathbb{F}_{q}[G]=\left\{\sum_{g\in G}a_{g}g\mid a_{g}\in\mathbb{F}_{q}\right\},\) with \(G\) an abelian group. We write \(\mathcal{R}_{t}\) for the set of ring elements of \(\mathcal{R}\) of weight at most \(t\).
### A Template for Programmable \(\mathsf{PCG}\) for OLE from QA-SD
Theorem 22: _Let \(\mathcal{R}=\mathbb{F}_{q}[G]\). Assume that \(\mathsf{SPFSS}\) is a secure \(\mathsf{FSS}\) scheme for sums of point functions, and that the \(\mathsf{QA}\mbox{-}\mathsf{SD}(c,\mathcal{R})\) assumption holds. Then there exists a generic construction of a \(\mathsf{PCG}\) producing one \(\mathsf{OLE}\) correlation (described in Fig. 3). If the \(\mathsf{SPFSS}\) is based on a \(\mathsf{PRG}:\{0,1\}^{\lambda}\to\{0,1\}^{2\lambda+2}\) via the \(\mathsf{PRG}\)-based construction from [1], we obtain:_
* _Each party's seed has size at most around:_ \((ct)^{2}\cdot((\log|G|-\log t+1)\cdot(\lambda+2)+\lambda+\log q)+ct(\log|G|+\log q)\) _bits._
* _The computation of_ Expand _can be done with at most_ \((2+\lfloor(\log q)/\lambda\rfloor)|G|c^{2}t\) PRG _operations, and_ \(O(c^{2}|G|\log|G|)\) _operations in_ \(\mathbb{F}_{q}\)_._
The protocol, adapted from the work of Boyle et al. [2], is described in Fig. 3. We first present an overview. Recall that an instance of the OLE correlation consists in giving a random value \(x_{\sigma}\in\mathcal{R}\) to party \(P_{\sigma}\), as well as an additive secret sharing of \(x_{0}\cdot x_{1}\in\mathcal{R}\) to both. Formally:
\[\left\{((x_{0},z_{0}),(x_{1},z_{1}))|x_{0},x_{1},z_{0}\stackrel{{ \$}}{{\leftarrow}}\mathcal{R},z_{1}+z_{0}=x_{0}\cdot x_{1}\right\}.\]
The core idea of the protocol is to give the two parties a random vector \(\mathbf{e_{0}}\) or \(\mathbf{e_{1}}\in\mathcal{R}_{t}^{c}\), where each element of the vector is sparse. In addition, parties have access to a vector \(\mathbf{a}=(1,\mathbf{\dot{a}})\), with \(\mathbf{\dot{a}}=(a_{1},\cdots,a_{c-1})\), a vector of random elements of \(\mathcal{R}\). We see the vector \(\mathbf{e_{\sigma}}\) of party \(P_{\sigma}\) as an error vector. Using the vector \(\mathbf{a}\), parties can locally extend their error vector and construct \(x_{\sigma}=\langle\mathbf{a},\mathbf{e_{\sigma}}\rangle\), which is pseudorandom under QA-SD.
We want to give the parties shares of \(x_{0}\cdot x_{1}\). Note that \(x_{0}\cdot x_{1}\) is a degree 2 function in \((\mathbf{e_{0}},\mathbf{e_{1}})\); therefore, it suffices to share \(\mathbf{e_{0}}\otimes\mathbf{e_{1}}\). We underline a property of the sparse elements in \(\mathcal{R}_{t}\). Let \(e,f\) be sparse elements. This means that there exist sets \(S_{e},S_{f}\subset G\) such that \(e=\sum_{g\in S_{e}}e_{g}g,f=\sum_{g\in S_{f}}f_{g}g\) with \(e_{g},f_{g}\in\mathbb{F}_{q}\) and \(|S_{e}|=|S_{f}|=t\leqslant|G|\). It follows that the product \(e\cdot f\) can be expressed using only \(S_{e}\cdot S_{f}\stackrel{{\text{def}}}{{=}}\{gh\mid g\in S_{e},\ h\in S_{f}\}\) as support. Since \(|S_{e}\cdot S_{f}|\leqslant|S_{e}|\cdot|S_{f}|=t^{2}\), the product of sparse elements of \(\mathcal{R}\) is again sparse (with sparsity \(t^{2}\) instead of \(t\)). We note that here, we deviate from the original construction of [2]: over a ring of the form \(\mathbb{F}_{q}[X]/P(X)\) where \(P\) is some polynomial, it is not generally true that the product of sparse elements remains sparse. This is circumvented in [2] by sharing the product over \(\mathbb{F}_{q}[X]\) instead, and reducing locally. When using group algebras as we do, the product preserves sparsity and we can share the product directly within \(\mathbb{F}_{q}[G]\), which is slightly more efficient.
This result enables us to express each element of \(\mathbf{e_{0}}\otimes\mathbf{e_{1}}\) as a sum of \(t^{2}\) point functions. Then, we rely on SPFSS (Definition 36). Recall that an SPFSS takes as input a sequence of points as well as a vector of values, and produces two keys that can be use to find shares of the sum of the implicit point functions. When a party evaluates its key at each point in the domain, it obtains a pseudorandom secret sharing of the coefficients of the sparse element in \(\mathcal{R}_{t}\). The protocol uses \(c^{2}\) elements of \(\mathcal{R}_{t}\) as a result of the tensor product. This means that we need \(c^{2}\) instances of SPFSS for \(t^{2}\) point functions. This gives us a seed size of \(O(\lambda(ct)^{2}\log|G|)=O(\lambda^{3}\log|G|)\).
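To make the template concrete, here is a toy end-to-end check of the correctness equation (our own illustration over \(\mathbb{F}_{7}[\mathbb{Z}/8\mathbb{Z}]\), reusing `cyclic_mul` and `exact_noise` from earlier sketches; the SPFSS is replaced by plain additive sharing of \(\mathbf{e_{0}}\otimes\mathbf{e_{1}}\), so this demonstrates correctness only, not the seed compression):

```python
import random

def toy_pcg_expand(n=8, q=7, c=2, t=2):
    add = lambda u, v: [(x + y) % q for x, y in zip(u, v)]
    # a = (1, a_1, ..., a_{c-1}) with a_i uniform in R = F_q[Z/nZ].
    a = [[1] + [0] * (n - 1)] + \
        [[random.randrange(q) for _ in range(n)] for _ in range(c - 1)]
    e0 = [exact_noise(n, t, q) for _ in range(c)]
    e1 = [exact_noise(n, t, q) for _ in range(c)]

    def dot(vec_e):  # <a, e> in R
        acc = [0] * n
        for ai, ei in zip(a, vec_e):
            acc = add(acc, cyclic_mul(ai, ei, n, q))
        return acc

    x0, x1 = dot(e0), dot(e1)
    # Additively share each a_i * a_j * e0_i * e1_j (the SPFSS's job).
    z0, z1 = [0] * n, [0] * n
    for i in range(c):
        for j in range(c):
            prod = cyclic_mul(cyclic_mul(a[i], a[j], n, q),
                              cyclic_mul(e0[i], e1[j], n, q), n, q)
            r = [random.randrange(q) for _ in range(n)]
            z0 = add(z0, r)
            z1 = add(z1, [(p - s) % q for p, s in zip(prod, r)])
    assert add(z0, z1) == cyclic_mul(x0, x1, n, q)  # z0 + z1 = x0 * x1
    return (x0, z0), (x1, z1)
```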
Proof (of Theorem 22).: First, we argue the correctness of the protocol. The coefficient vectors \(\mathbf{b_{\sigma}^{i}},\mathbf{A_{\sigma}^{i}}\) define a random element in \(\mathcal{R}_{t}\). We can rewrite the product of two of these elements as follows:
\[e_{0}^{i}\cdot e_{1}^{j}=\sum_{k,l\in[0..t)}\mathbf{b_{0}^{i}}[k]\cdot\mathbf{ b_{1}^{j}}[l]\mathbf{A_{0}^{i}}[k]\mathbf{A_{1}^{j}}[l].\]
This can indeed be described by a sum of point functions. Setting \(\mathbf{u}=\mathbf{u_{0}}+\mathbf{u_{1}}\), we have \(\mathbf{u}=\mathbf{e_{0}}\otimes\mathbf{e_{1}}\), each entry being equal to one of these \(e_{0}^{i}\cdot e_{1}^{j}\). Each party obtains \(z_{\sigma}\) as output, and we can verify:
\[z_{0}+z_{1}=\langle\mathbf{a}\otimes\mathbf{a},\mathbf{u_{0}}+\mathbf{u_{1}}\rangle=\langle\mathbf{a}\otimes\mathbf{a},\mathbf{e_{0}}\otimes\mathbf{e_{1}}\rangle=\langle\mathbf{a},\mathbf{e_{0}}\rangle\cdot\langle\mathbf{a},\mathbf{e_{1}}\rangle=x_{0}\cdot x_{1}.\]
The next-to-last equality is straightforward to check. Note that here, \(\langle\mathbf{a},\mathbf{e_{\sigma}}\rangle\) is a QA-SD sample, with fixed random \(\mathbf{a}\) and independent secret \(\mathbf{e}_{\sigma}\). We now briefly sketch security (the analysis is essentially identical to [2] since the construction is "black-box" in the ring \(\mathcal{R}\); we include it for completeness). As the two cases are symmetrical, we assume \(\sigma=1\). Let \((k_{0},k_{1})\stackrel{{\$}}{{\leftarrow}}\mathsf{PCG.Gen}(1^{\lambda})\) with associated expanded outputs \((x_{0},z_{0})\) and \((x_{1},z_{1})\); we need to show that
\[\left\{(k_{1},x_{0},z_{0})\right\}\equiv\left\{(k_{1},\tilde{x_{0}},\tilde{z_{ 0}})\mid\tilde{x_{0}}\stackrel{{\$}}{{\leftarrow}}\mathcal{R}, \tilde{z_{0}}=\tilde{x_{0}}\cdot x_{1}-z_{1}\right\}.\]
To show this, we use a sequence of hybrid distributions.
* Replace \(z_{0}\) by \(x_{0}\cdot x_{1}-z_{1}\).
2. Step by step, replace each FSS key \(K_{1}^{i,j}\) in \(k_{1}\) by a simulated key generated only from the range and the domain of the function. Due to the correctness and security properties of the FSS scheme, this distribution is indistinguishable from the original distribution.
* Replace \(x_{0}\) by a fresh \(\tilde{x}_{0}\). It is also impossible to distinguish this distribution from the previous one, since the \(K_{1}^{i,j}\) are now completely independent of \(x_{0}\), and we can rely on the QA-SD assumption.
* Reverse step 2 by using the FSS security property once again.
Regarding the size of the different parameters, we use the optimizations suggested in [1], such as assuming that the QA-SD assumption holds also for _regular error distributions_ (we note that our proof of resistance against linear tests holds for very general noise distributions, and in particular for the regular noise distribution). We can thus reduce the seed size to \((ct)^{2}\cdot((\log|G|-\log t+1)\cdot(\lambda+2)+\lambda+\log q)+ct(\log|G|+\log q)\) bits, and the number of PRG calls in Expand down to \((2+\lfloor(\log q)/\lambda\rfloor)|G|c^{2}t\). Note that to achieve security, choosing \(ct=O(\lambda)\) is sufficient. The number of PRG calls can be further reduced to \(O(|G|c^{2})\) using batch codes to implement the SPFSS.
Theorem 23: _The_ PCG _construction for_ OLE _from Fig. 3 is programmable._
Proof: In order to show that our PCG is programmable, we have to transform it slightly, as the Gen functionality takes additional inputs \((\rho_{0},\rho_{1})\) in the programmability definition. In our case, we can choose \(\rho_{\sigma}=\left\{\mathbf{A}_{\sigma}^{\mathbf{i}},\mathbf{b}_{\sigma}^{\mathbf{i}}\right\}\). In this way, as explained in the description of the protocol, the additional input of the players can be seen as a vector of elements in \(\mathcal{R}_{t}\), \(\mathbf{e}_{\sigma}=(\mathbf{e}_{\sigma}^{\mathbf{0}},\cdots,\mathbf{e}_{\sigma}^{\mathbf{c}-\mathbf{1}})\). Because \(x_{\sigma}=\langle\mathbf{a},\mathbf{e}_{\sigma}\rangle\), the players can compute their first input locally, after expanding their \(\rho_{\sigma}\) into \(\mathbf{e}_{\sigma}\). This defines functions \(\phi_{\sigma}\), and proves the programmability property. The proof of the correctness property is the same as in the proof of Theorem 22. The programmable security property can be proven with a sequence of hybrid distributions as in the proof of Theorem 22, using the reduction to the FSS scheme and the QA-SD assumption.
### Distributed Seed Generation
The protocol described in Fig. 3 assumes that a trusted dealer has given the parties their seeds. In practice, we want to realize the Gen phase via a distributed setup protocol.
Theorem 24 (From [1]): _There exists a protocol securely realizing the functionality_ QA-SD\({}_{\mathsf{OLE-Setup}}\) _of Fig. 1 against malicious adversaries, with complexity:_
* _Communication costs per party dominated by_ \((ct)^{2}\cdot((2\lambda+3)\log 2|G|+(9t+2)\log(q-1))\)_._
* _Computation is dominated by_ \(2|G|\)__PRG _evaluations._
Taking \(ct=O(\lambda)\) is enough to achieve exponential security. With this we can conclude a general result:
Theorem 25: _Let \(G\) be a group, and \(\mathcal{R}=\mathbb{F}_{q}[G]\). Suppose that SPFSS is a secure FSS scheme for sums of point functions, and that the QA-SD\((c,\mathcal{R})\) assumption holds. Then there exists a protocol securely realizing the QA-SD\({}_{\mathsf{OLE-All}}\) functionality over the ring \(\mathcal{R}\) with the following parameters:_
* _Communication costs and size of the seed :_ \(O(\lambda^{3}\log|G|)\)_._
* _Computation costs:_ \(O(\lambda|G|)\) PRG _evaluations and_ \(O(c^{2}|G|\log|G|)\) _operations in_ \(\mathbb{F}_{q}\)_._
Figure 1: Generic functionality for the distributed setup of OLE PCG seeds
Figure 2: OLE Functionality with Corruption

Figure 3: PCG for OLE over \(\mathcal{R}\), based on QA-SD

### Instantiating the Group Algebra

In this section we instantiate our general result with a concrete construction of a PCG for the OLE correlation over \(\mathbb{F}_{q}\). Recall that \(G=\prod_{i=1}^{n}\mathbb{Z}/q_{i}\mathbb{Z}\), with \(q_{i}\geqslant 2\). Using Proposition 7 from the previous section:
\[\mathbb{F}_{q}[G]=\mathbb{F}_{q}\left[\prod_{i=1}^{n}\mathbb{Z}/q_{i}\mathbb{Z}\right]\simeq\mathbb{F}_{q}[\mathbb{Z}/q_{1}\mathbb{Z}]\otimes_{\mathbb{F}_{q}}\cdots\otimes_{\mathbb{F}_{q}}\mathbb{F}_{q}[\mathbb{Z}/q_{n}\mathbb{Z}]\simeq\bigotimes_{i=1}^{n}\mathbb{F}_{q}[X_{i}]/(X_{i}^{q_{i}}-1)\simeq\mathbb{F}_{q}[X_{1},\ldots,X_{n}]/(X_{1}^{q_{1}}-1,\ldots,X_{n}^{q_{n}}-1).\]
Batch-OLE over \(\mathbb{F}_{q}\). In the following we let all the \(q_{i}\) be equal to \(q-1\). We therefore use \(\mathcal{R}=\mathbb{F}_{q}[G]\simeq\mathbb{F}_{q}[X_{1},\ldots,X_{n}]/(X_{1}^{q-1}-1,\ldots,X_{n}^{q-1}-1)\). Remark that the elements of \(\mathbb{F}_{q}^{*}\) are exactly the roots of the polynomial \(X_{i}^{q-1}-1\). Therefore, we can write \(X_{i}^{q-1}-1=\prod_{a\in\mathbb{F}_{q}^{*}}(X_{i}-a)\) for all \(1\leqslant i\leqslant n\) and, by the Chinese Remainder Theorem, we get
\[\mathbb{F}_{q}[X_{1},..,X_{n}]/(X_{1}^{q-1}-1,..,X_{n}^{q-1}-1)\simeq\prod_{i =1}^{T}\mathbb{F}_{q}.\]
where \(T=(q-1)^{n}\) is the number of elements in the group. We can apply our protocol to construct a \(\mathsf{PCG}\) for the \(\mathsf{OLE}\) correlation in \(\mathcal{R}\). This single \(\mathsf{OLE}\) over \(\mathcal{R}\) can be transformed into \(T\) different instances of \(\mathsf{OLE}\) over \(\mathbb{F}_{q}\). We get:
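The isomorphism is realized by evaluating at all points of \((\mathbb{F}_{q}^{*})^{n}\); since evaluation is a ring homomorphism, an OLE over \(\mathcal{R}\) splits coordinatewise into \(T\) OLE's over \(\mathbb{F}_{q}\). A sketch (illustrative only, with our own sparse-dictionary representation of elements of \(\mathcal{R}\)):

```python
from itertools import product

def evaluate_all(coeffs, q, n):
    # coeffs: dict {(i_1,...,i_n): coefficient} with 0 <= i_j < q-1,
    # representing an element of F_q[X_1,...,X_n]/(X_j^(q-1) - 1).
    evals = []
    for pt in product(range(1, q), repeat=n):   # all points of (F_q^*)^n
        v = 0
        for mono, coef in coeffs.items():
            term = coef
            for x, e in zip(pt, mono):
                term = (term * pow(x, e, q)) % q
            v = (v + term) % q
        evals.append(v)
    return evals

# Evaluation is multiplicative, so z0 + z1 = x0 * x1 over R yields, at each
# of the T = (q-1)^n points, one OLE over F_q between the two parties.
```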
**Theorem 26**.: _Suppose that \(\mathsf{SPFSS}\) is a secure \(\mathsf{FSS}\) scheme for sums of point functions and that the \(\mathsf{QA}\)-\(\mathsf{SD}\) assumption holds. Let \(\mathcal{R}=\mathbb{F}_{q}[X_{1},\ldots,X_{n}]/(X_{1}^{q-1}-1,\ldots,X_{n}^{q-1}-1)\), and \(T=(q-1)^{n}\). We can construct a \(\mathsf{PCG}\) producing \(T\) instances of \(\mathsf{OLE}\) over \(\mathbb{F}_{q}\), using the \(\mathsf{QA}\)-\(\mathsf{SD_{OLE}}\) construction. The parameters we obtain are the following._
* _Each party's seed has size at most:_ \((ct)^{2}\cdot((n\log(q-1)-\log t+1)\cdot(\lambda+2)+\lambda+\log q)+ct(n\log(q -1)+\log q)\) _bits_
* _The computation of_ \(\mathsf{Expand}\) _can be done with at most_ \((2+\lfloor(\log q)/\lambda\rfloor)(q-1)^{n}c^{2}t\) \(\mathsf{PRG}\) _operations, and_ \(O(c^{2}(q-1)^{n}n\log(q-1))\) _operations in_ \(\mathbb{F}_{q}\)_._
Concrete Parameters. We report in Table 1 a set of concrete parameters for our new programmable PCGs from \(\mathsf{QA}\)-\(\mathsf{SD}\), when generating \(T\) instances of a pseudorandom OLE over \(\mathbb{F}_{q}\), chosen according to the analysis of Section 6. We note that our concrete security parameters are very close to the parameters of [BCG\({}^{+}\)20b]. This stems from two points:
First, [BCG\({}^{+}\)20b] conservatively chose security bounds based on existing attacks over \(\mathbb{F}_{2}\), even though their instantiation is over \(\mathbb{F}_{p}\) with \(\log p\approx 128\) (and known attacks on syndrome decoding are less efficient over larger fields). One of the reasons for this was to get conservative estimates (syndrome decoding over large fields has been less investigated, and attacks could improve in the future); another motivation is that over \(\mathbb{F}_{2}\), tools have been implemented to automatically evaluate the resistance against various flavors of ISD (whose exact cost can be quite tedious to analyze). Because our PCGs can handle fields as small as \(\mathbb{F}_{3}\), and to avoid having to pick different parameters for each field size, we also based our analysis on known bounds for \(\mathbb{F}_{2}\).
Second, the main difference between our analysis and that of [BCG\({}^{+}\)20b] is that we must consider folding attacks, which are considerably more diverse in our setting (since an attacker can construct a reduced instance by quotienting with _any_ subgroup \(G^{\prime}\), of which there are many). Yet, the _effect_ of folding on security does not depend on the fine details of the subgroup \(G^{\prime}\), but only on the _size_ of \(G^{\prime}\), which allows to compute the new dimension and the reduced noise weight (via a generalized piling-up lemma). This does not differ significantly from the case of ring-LPN over cyclotomic rings considered in [BCG\({}^{+}\)20b], since there the adversary could reduce the dimension to any power of two of their choice: our setting allows the adversary to be slightly more fine grained in its dimension reduction (_i.e._ the adversary is not restricted to a power of two), but this does not make a significant difference on the concrete attack cost (essentially because close dimensions yield near-identical noise reduction via the piling-up lemma, and do not have significantly different impact on the concrete attack cost beyond that).
As our table illustrates, our \(\mathsf{PCG}\)'s offer a non-trivial stretch (computed as the ratio between the size of storing the output \(\mathsf{OLE}\)'s and the seed size) from a target number \(T=2^{25}\) of \(\mathsf{OLE}\)'s.
**Discussions on Efficient FFTs.** Operations over the group algebra can be accelerated using the generalized FFT. Here, we briefly remark that some specific values of \(q\) yield "FFT-friendly" instances, where the generalized FFT algorithm is extremely efficient (and could even be competitive with the more well-known FFT over cyclotomic rings, with proper optimizations): this is the case whenever \(q-1\) is a power of \(2\), since it enables a very efficient divide and conquer algorithm. For example, this is the case over \(\mathbb{F}_{3}[(\mathbb{Z}/2\mathbb{Z})^{2^{n}}]\), where the FFT reduces to a \(2^{n}\)-dimensional FFT over \(\mathbb{F}_{3}\).
**From Decision-QA-SD to Search-QA-SD.** In Appendix B, we give a reduction from the search version of QA-SD to the decision version for all instances over \(\mathcal{R}=\mathbb{F}_{q}[G]\) where \(G=(\mathbb{Z}/(q-1)\mathbb{Z})^{n}\), which is the group we use to obtain PCG's for OLE's over \(\mathbb{F}_{q}^{(q-1)^{n}}\). This provides further support for the security of our PCG schemes, by showing that their security reduces to the _search_ QA-SD assumption. More precisely, we prove the following theorem:
**Theorem 27**.: _Let \(q,t\) be two integers, and let \(G\stackrel{{\text{def}}}{{=}}(\mathbb{Z}/(q-1)\mathbb{Z})^{t}\). Let \(n\stackrel{{\text{def}}}{{=}}|G|=(q-1)^{t}\) and \(w\in\{0,\ldots,n\}\) be an admissible weight, and let \(\psi\) be an error distribution over \(\mathcal{R}\stackrel{{\text{def}}}{{=}}\mathbb{F}_{q}[G]\) such that \(\mathbb{E}[\text{wt}(\mathbf{x})]=w\) when \(\mathbf{x}\) is sampled according to \(\psi\). Let \(\mathbf{s}\in\mathbb{F}_{q}[G]\) be a fixed secret._
_Suppose that there exists a distinguisher \(\mathcal{A}\) between \((\mathbf{a},\mathbf{y}^{\text{unif}})\) and \((\mathbf{a},\mathbf{a}\cdot\mathbf{s}+\mathbf{e})\) where \(\mathbf{a},\mathbf{y}^{\text{unif}}\leftarrow\mathcal{R}\) and \(\mathbf{e}\leftarrow\psi\). Denote by \(\tau\) its running time and \(\varepsilon\) its distinguishing advantage. Then, there exists an algorithm that recovers \(\mathbf{s}\in\mathcal{R}\) (with an overwhelming probability in \(n\)) in time_
\[O\left(n^{4}\times\frac{1}{\varepsilon^{2}}\times q\times\tau\right).\]
## 6 Concrete Cryptanalysis
In this section, we discuss the concrete security of QA-SD. Most of the attacks we discuss in this section fit in the framework of linear tests, and are therefore asymptotically ruled out by our proof of resistance against linear tests. However, while the concrete bounds of the proof are reasonable (in the sense that choosing parameters from these bounds would yield instances that can be reasonably used in practice), they are overly pessimistic. This stems from the fact that the linear test framework rules out _all linear attacks_ (even inefficient ones); equivalently, it considers that the adversary can always find a vector \(\mathbf{v}\) that minimizes \(\operatorname{wt}(\mathbf{v}\cdot\mathbf{H})\). However, in practice, _finding_ the vector \(\mathbf{v}\) that minimizes \(\operatorname{wt}(\mathbf{v}\cdot\mathbf{H})\) is a hard problem. Indeed, this problem, when instantiated with arbitrary codes, is known to be NP-complete [13]; it is commonly assumed to be hard on average, and the best known algorithms to solve this search problem are precisely the algorithms solving SD, _i.e._ all the known variants of ISD.
| \(T\) | \(c\) | \(t\) | \((k,t^{\prime})\) | Seed size | Stretch | #\(\mathcal{R}\)-mults | #PRG calls |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(2^{25}\) | 2 | 152 | \((2^{8},121)\) | \(2^{26.0}/\log q\) | \(\log q\) | 4 | \(2^{28.2}\cdot\log q\) |
| \(2^{25}\) | 4 | 64 | \((3\cdot 2^{8},60)\) | \(2^{23.6}/\log q\) | \(5.3\log q\) | 16 | \(2^{28.0}\cdot\log q\) |
| \(2^{30}\) | 2 | 152 | \((2^{8},121)\) | \(2^{26.3}/\log q\) | \(26\log q\) | 4 | \(2^{33.2}\cdot\log q\) |
| \(2^{30}\) | 4 | 64 | \((3\cdot 2^{8},60)\) | \(2^{24.0}/\log q\) | \(128\log q\) | 16 | \(2^{33.0}\cdot\log q\) |
| \(2^{35}\) | 2 | 152 | \((2^{8},121)\) | \(2^{26.6}/\log q\) | \(676\log q\) | 4 | \(2^{38.2}\cdot\log q\) |
| \(2^{35}\) | 4 | 64 | \((3\cdot 2^{8},60)\) | \(2^{24.3}/\log q\) | \(3327\log q\) | 16 | \(2^{38.0}\cdot\log q\) |

Table 1: Concrete parameters and seed sizes (per party, counted in bits) for our PCG for OLE over \(\mathbb{F}_{q}\) from QA-SD(\(\mathcal{R}\)), using \(\mathcal{R}=\mathbb{F}_{q}[(\mathbb{Z}/(q-1)\mathbb{Z})^{n}]\), \(\lambda=128\), target number \(T=(q-1)^{n}\) of OLE’s, syndrome compression factor \(c\in\{2,4\}\), and number of noisy coordinates \(t\). ‘Stretch’, computed as \(2T\)/(seed size), is the ratio between storing a full random OLE (i.e., \(2T\) field elements) and the smaller PCG seed. The parameter \(k\) denotes the dimension of the SD instance after folding, and \(t^{\prime}\) the (expected) noise weight of the folded instance (when heuristically choosing the best possible folding for the adversary). \(\#\textsf{PRG}\) calls is computed as \(4\cdot Tct\). Parameters are chosen to achieve \(\lambda\) bits of security against known attacks, according to the analysis of Section 6.
When choosing concrete parameters, all previous works that rely on LPN or SD choose instead to use parameters derived using the _best possible_ \(\mathbf{v}\) which can be obtained using existing linear attacks, such as ISD. For all known concrete linear attacks, two codes whose duals have the same minimum distance will yield the same resistance (measured as \(\operatorname{wt}(\mathbf{v}\cdot\mathbf{H})\)) against these attacks. In other words, these attacks, which are combinatorial in nature, only rely at their core on the distance properties of the code and _not_ on its general structure. To get an apples-to-apples efficiency comparison with the state of the art, the natural rule of thumb is therefore to choose parameters similar to those chosen for variants of syndrome decoding with the same minimum distance property: this heuristic was explicitly advocated in [1, Section 1.4]. In our setting, since quasi-abelian codes meet the GV bound (_i.e._ they typically have the same minimum distance as random linear codes), this translates to choosing parameters comparable to those of the standard syndrome decoding problem with random codes.
In our context, this would however be too aggressive, since there are known ways in which an attacker _can_ exploit the structure of the code. First, our codes are quasi-abelian and hence, according to Remark 9, can be regarded as codes over a quotient of a multivariate polynomial ring. Therefore, an attacker can reduce the word modulo some ideal of the ring, in order to generate an instance of a "smaller" decoding problem. This approach has been considered in [10] in the code-based setting, in [1] in the lattice setting, and in [1] when studying the security of OLE's generated using instances of Ring-LPN.
The parameters should therefore be chosen such that any such "reduced instance" remains intractable. Second, due to the quasi-abelian structure of our codes, one can apply the DOOM attack from [13] to obtain a speedup by a factor \(\sqrt{|G|}\), where \(G\) denotes the underlying abelian group of the group algebra.
**Our setting.** In the following, we focus on linear attacks against the \(\mathsf{QA}\)-\(\mathsf{SD}(n,k)\) assumption instantiated over a ring \(\mathcal{R}=\mathbb{F}_{q}[X_{1},\ldots,X_{d}]/(X_{1}^{q-1}-1,\ldots,X_{d}^{q-1}-1)\). Our goal is to distinguish pairs \(((a_{1},\ldots,a_{c}),a_{1}s_{1}+\cdots+a_{c}s_{c}+e)\) (with possibly \(c=1\)), where \(a_{i}\stackrel{{\$}}{{\leftarrow}}\mathcal{R}\) and \(s_{1},\ldots,s_{c},e\in\mathcal{R}\) are sparse with respect to the basis of monomials. As already mentioned in Section 4.4, the search version of the problem is equivalent to solving the QA-SD problem, that is to say, solving a decoding problem of the form
\[(\ \mathbf{A}_{1}\ |\ \cdots\ |\ \mathbf{A}_{c}\ |\ 1\ )\begin{pmatrix} \mathbf{s}_{1}\\ \vdots\\ \mathbf{s}_{c}\\ \mathbf{e}\end{pmatrix}=0,\]
where the \(\mathbf{A}_{i}\)'s are the matrix representations in the basis of monomials of the multiplication-by-\(a_{i}\) maps in \(\mathcal{R}\) and the \(\mathbf{s}_{i}\)'s and \(\mathbf{e}\) are the unknown vector representations of the \(s_{i}\)'s and \(e\) in this basis, _i.e._ are unknown sparse vectors.
In terms of code parameters, the group codes have length \(n=(c+1)\dim_{\mathbb{F}_{q}}\mathcal{R}\) and dimension \(k=c\dim_{\mathbb{F}_{q}}\mathcal{R}\). Therefore, we always have \(k\geqslant\frac{n}{2}\), with equality when \(c=1\). In this setting, attacks such as Arora-Ge [1] (which requires \(n=\Omega(k^{2})\)) or BKW (which requires \(n\) to be subexponential in \(k\), or \(n=\Omega(k^{1+\varepsilon})\) using the sample-efficient variant of [13]) do not apply. Furthermore, our codes have rate \(c/(c+1)\) with \(c\geqslant 1\). In particular, this implies that the recent results on Statistical Decoding 2.0 [1], which improves over ISD when the code rate is sufficiently small, will not yield an efficient attack in our setting (for rates above \(1/2\), SD 2.0 is always outperformed by ISD).
### Instance Projection via Quotient
As previously mentioned, one way to solve the problem is to solve the search \(\mathsf{QA}\)-\(\mathsf{SD}(\mathcal{R})\) problem, where \(\mathcal{R}=\mathbb{F}_{q}[X_{1},\ldots,X_{d}]/(X_{1}^{q-1}-1,\ldots,X_{d}^{q-1}-1)\). Given an instance \((a,b)\) of \(\mathsf{QA}\)-\(\mathsf{SD}(\mathcal{R})\), an attacker may construct a new decoding instance with smaller length and dimension. In full generality, the attacker can pick any ideal \(I\subseteq\mathbb{F}_{q}[X_{1},\ldots,X_{d}]\) containing \((X_{1}^{q-1}-1,\ldots,X_{d}^{q-1}-1)\) and represented by a Gröbner basis, and construct a new instance \((a^{\prime},b^{\prime})\leftarrow(a,b)\bmod I\), where the mod operation is the reduction modulo \(I\) with respect to the chosen Gröbner basis. For instance,
one can choose a sequence \((F_{1}(X_{1}),\ldots,F_{d}(X_{d}))\) of factors of \(X_{1}^{q-1}-1,\ldots,X_{d}^{q-1}-1\) and reduce modulo them.
However, in general, the projection modulo an arbitrary ideal \(I\) can significantly increase the noise. The way the noise increases depends heavily on the density of the generators of \(I\). For example, if \(\mathcal{R}=\mathbb{F}_{q}[X_{1},X_{2}]/(X_{1}^{q-1}-1,X_{2}^{q-1}-1)\) and the attacker reduces modulo \(I=(F_{1}(X_{1}),F_{2}(X_{2}))\) where \(F_{1},F_{2}\) are respective factors of \(X_{1}^{q-1}-1\) and \(X_{2}^{q-1}-1\) of respective Hamming weight, say, \(3\) and \(5\), the noise rate can increase by a factor up to \((3-1)\cdot(5-1)=8\). Therefore, we expect this approach to be useful (to the attacker) only when the noise increase is very small.
Heuristically, the best possible projections of \(\mathcal{R}\), regarded as the group algebra \(\mathbb{F}_{q}[G]\), seem to be the projections arising from quotients of \(G\). Namely, given a subgroup \(H\) of \(G\), the canonical map \(G\to G/H\) induces a morphism of algebras
\[\pi_{H}:\left\{\begin{array}{ccc}\mathbb{F}_{q}[G]&\longrightarrow&\mathbb{F}_{q}[G/H]\\ \sum_{g\in G}a_{g}\,g&\longmapsto&\sum_{\bar{g}\in G/H}\Big(\sum_{h\in H}a_{gh}\Big)\bar{g}.\end{array}\right.\]
From a coding-theoretic point of view, this operation is nothing but summing up the entries of a codeword whose indices lie in the same orbit under the action of \(H\). This operation sends a code of length \((c+1)|G|\) and dimension \(c|G|\) onto a code of length \((c+1)|G/H|\) and dimension \(c|G/H|\). Moreover, a noisy codeword \(c+e\) is sent onto \(\pi_{H}(c)+\pi_{H}(e)\), and the weight of \(\pi_{H}(e)\) is bounded from above by the weight of \(e\). In summary, the length and dimension of the code are divided by \(|H|\) while the weight of the error is preserved or slightly reduced, since some entries of \(e\) may sum up to \(0\).
Such projections seem optimal in terms of limiting the growth of the noise.
Remark 28: From the ring theoretic point of view, the map \(\pi_{H}\) can be regarded as a quotient map of \(\mathcal{R}=\mathbb{F}_{q}[G]\) modulo the ideal generated by all the elements \((h-e_{G})\) where \(h\in H\) and \(e_{G}\) denotes the unit element of the group \(G\).
Example 29: Following the spirit of [1] consider the case
\[\mathcal{R}=\mathbb{F}_{q}[X]/(X^{q-1}-1)\simeq\mathbb{F}_{q}[\mathbb{Z}/(q-1 )\mathbb{Z}].\]
In this situation, for any \(\ell|(q-1)\), one can consider the subgroup \(H=\ell\mathbb{Z}/(q-1)\mathbb{Z}\). The corresponding projection can be made explicit as
\[\pi_{H}:\left\{\begin{array}{ccc}\mathbb{F}_{q}[X]/(X^{q-1}-1)&\longrightarrow&\mathbb{F}_{q}[X]/(X^{\ell}-1)\\ \sum_{i=0}^{q-2}a_{i}X^{i}&\longmapsto&\sum_{i=0}^{\ell-1}\Big(\sum_{j\equiv i\bmod\ell}a_{j}\Big)X^{i}.\end{array}\right.\]
In short, we sum up the entries of the codeword whose indexes are congruent modulo \(\ell\).
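To make the folding operation concrete, here is a minimal Python sketch of the map \(\pi_{H}\) of Example 29 acting on coefficient vectors; the function names and the toy parameters are ours and purely illustrative.

```python
# Folding a codeword of F_q[X]/(X^(q-1)-1) onto F_q[X]/(X^ell-1): coefficients whose
# indices are congruent modulo ell are summed; the weight can only stay equal or drop.
import random

def fold_univariate(coeffs, q, ell):
    """coeffs has length q-1 (one entry per monomial); returns the length-ell image."""
    assert len(coeffs) == q - 1 and (q - 1) % ell == 0
    folded = [0] * ell
    for j, a in enumerate(coeffs):
        folded[j % ell] = (folded[j % ell] + a) % q
    return folded

def weight(v):
    return sum(1 for x in v if x != 0)

q, ell, t = 17, 4, 6
e = [0] * (q - 1)
for _ in range(t):                      # error = sum of t random monomials
    i = random.randrange(q - 1)
    e[i] = (e[i] + random.randrange(1, q)) % q
assert weight(fold_univariate(e, q, ell)) <= weight(e)
```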
Example 30: This example is in the spirit of the attacks on multivariate Ring-LWE[1]. Consider the ring \(\mathcal{R}=\mathbb{F}_{q}[\mathbb{Z}/n\mathbb{Z}\times\mathbb{Z}/n\mathbb{Z }]\simeq\mathbb{F}_{q}[X,Y]/(X^{n}-1,Y^{n}-1)\) and consider the subgroup
\[H\stackrel{{\mathrm{def}}}{{=}}\{(x,x)\ |\ x\in\mathbb{Z}/n\mathbb{Z }\}\subseteq G=\mathbb{Z}/n\mathbb{Z}\times\mathbb{Z}/n\mathbb{Z}.\]
Here the projection map can be made explicit as
\[\pi_{H}:\left\{\begin{array}{ccc}\mathbb{F}_{q}[X,Y]/(X^{n}-1,Y^{n}-1)&\longrightarrow&\mathbb{F}_{q}[X]/(X^{n}-1)\\ \sum_{i,j=0}^{n-1}a_{ij}X^{i}Y^{j}&\longmapsto&\sum_{i=0}^{n-1}\Big(\sum_{u-v\equiv i\bmod n}a_{uv}\Big)X^{i}.\end{array}\right. \tag{1}\]
This approach was considered in [1] to attack multivariate Ring-LWE. In the coding-theoretic context, it is analysed in depth in [15], where the projection map is called _folding_.
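For completeness, a matching sketch (ours) of the bivariate map (1): quotienting by the diagonal subgroup identifies \(Y\) with \(X^{-1}\), so the coefficient matrix \((a_{uv})\) collapses onto the vector of sums over \(u-v\bmod n\).

```python
# Folding F_q[X,Y]/(X^n-1, Y^n-1) by H = {(x,x)}: in the quotient XY = 1, so the
# monomial X^u Y^v maps to X^((u-v) mod n); we sum the corresponding coefficients.
def fold_diagonal(a, q):
    n = len(a)                          # a is an n x n coefficient matrix over F_q
    out = [0] * n
    for u in range(n):
        for v in range(n):
            out[(u - v) % n] = (out[(u - v) % n] + a[u][v]) % q
    return out
```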
**Computing the new noise weight.** Following [3], we consider an instance \((a,ae+f)\) where each sparse vector \(e,f\) has been sampled as a sum of \(t/2\) random monomials. This distribution is very close to the original distribution, and its choice significantly simplifies the analysis. It also slightly favors the attacker (since the expected number of noisy entries will now be slightly below \(t\) due to possible collisions). In this setting, the expected noise weight \(t^{\prime}\) can be computed fairly simply. Let \(R_{m,\ell}\) be the random variable counting the number of nonzero coefficients in a polynomial with \(m\) coefficients over \(\mathbb{F}_{q}\) computed as the sum of \(\ell\) random monomials. Note that \(t^{\prime}=\mathbb{E}[R_{n,\ell}]\), where \(n\) is the code length. Then, we have
\[\mathbb{E}[R_{m,\ell+1}]=\left(1-\frac{\mathbb{E}[R_{m,\ell}]}{m}\right)\cdot \left(\mathbb{E}[R_{m,\ell}]+1\right)+\frac{\mathbb{E}[R_{m,\ell}]}{m}\cdot \left(\mathbb{E}[R_{m,\ell}]-\frac{1}{q-1}\right),\]
since adding a new random monomial increases the number of nonzero coefficients by \(1\) if it falls in a position with a zero coefficient, and decreases the expected number of nonzero coefficients by \(1/(q-1)\) otherwise (since this is the probability, when summing two random elements of \(\mathbb{F}_{q}^{*}\), to get \(0\)). Solving the recurrence relation gives
\[t^{\prime}=\frac{n\cdot(q-1)}{q}\cdot\left(1-\left(1-\frac{q}{n\cdot(q-1)} \right)^{\ell}\right).\]
In the rest of the analysis, we will cover standard attacks on syndrome decoding on instances of a given noise rate and dimension. Then, when choosing concrete parameters, we will estimate the attacker cost as the minimum cost of solving any instance obtained by reducing \(\mathbb{F}_{q}[G]\) to \(\mathbb{F}_{q}[G/H]\), estimating the reduced noise parameter \(t^{\prime}\) using the formula above. We note that this approach ignores the possibility that for a given instance, \(t^{\prime}\) ends up being much smaller than its expected value, which would yield some weak instances of the problem. As in [3], we observe that this can be avoided by changing the structure of the noise using rejection sampling: one can resample the noise vectors until the weight \(t^{\prime}\) of the reduced instance over \(\mathbb{F}_{q}[G/H]\) (using the best possible choice of \(|H|\) for the attacker with the attacks covered below) is at least its expected value (on average, since the probability of having \(\mathbb{E}[t^{\prime}]\leqslant t^{\prime}\) is \(1/2\), this reduces by at most a single bit the entropy of the noise vector).
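The closed form above is easy to evaluate; the following Python helper (ours) cross-checks it against the recurrence and scores the expected folded noise weight \(t^{\prime}\) for a few candidate folded lengths, the way an attacker would when choosing \(|H|\). The parameters in the demo are illustrative only.

```python
# Expected number of nonzero coefficients when summing ell random monomials into a
# length-m vector over F_q: closed form vs. the recurrence from the text.
def expected_weight(m, ell, q):
    return m * (q - 1) / q * (1 - (1 - q / (m * (q - 1))) ** ell)

def expected_weight_rec(m, ell, q):
    e = 0.0
    for _ in range(ell):
        e = (1 - e / m) * (e + 1) + (e / m) * (e - 1 / (q - 1))
    return e

q, n, t = 3, 2 ** 20, 152                         # illustrative values
assert abs(expected_weight(n, t, q) - expected_weight_rec(n, t, q)) < 1e-6
for m in (n, n // 16, n // 256):                  # candidate folded lengths
    print(m, round(expected_weight(m, t, q), 1))  # t' barely drops until m is small
```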
### Information Set Decoding
In this section, we cover standard linear attacks against syndrome decoding. The most advanced attacks in this category are the information set decoding (ISD) attacks, initially introduced by Prange [10] and subsequently refined in a long sequence of works [11, 12, 13, 14]. Evaluating precisely the effect of each attack on a given instance is complex and tedious, but a general lower bound on the attack cost was derived in [1], based on similar analysis given in [10, 12, 13]. These lower bounds build upon the common structure of most ISD variants. In general, the cost of modern ISD algorithms for a code with parity-check matrix \(\mathbf{H}\) over \(\mathbb{F}_{2}\), with dimension \(k\), code length \(n\), and \(t\) noisy coordinates, is lower bounded by
\[\min_{p_{1},p_{2}}\left\{\frac{\min\left\{2^{k},\binom{n}{t}\right\}}{\binom{ k-p_{2}}{t-p_{1}}}\cdot\left(\frac{K_{1}+K_{2}}{\binom{k+p_{2}}{p_{1}}}+ \frac{t\cdot(k-p_{2})}{2^{p_{2}}}\right)\right\},\]
where \((p_{1},p_{2})\) satisfy \(0\leqslant p_{2}\leqslant k/2\) and \(0\leqslant p_{1}\leqslant k+p_{2}\), \(K_{1}\) denotes the cost of Gaussian elimination on a submatrix of \(\mathbf{H}\) with \(n-p_{2}\) columns, and \(K_{2}\) denotes the running time of a specific sub-algorithm, which varies across different attacks. As in [3], we assume that performing Gaussian elimination on the submatrix of \(\mathbf{H}\) can be done in time \(K_{1}\approx(k-p_{2})^{2}\log(k-p_{2})\), because \(\mathbf{H}\) is a structured matrix. According to the analysis of [10], \(K_{2}\) can be lower bounded by \(K_{2}\geqslant\binom{(k+p_{2})^{2}/2}{p_{1}/8}\) for the algorithm of [1]. As in [3], [1] seems to provide the best efficiency in our setting (more recent algorithms have large hidden constants that render them less practical, or improve over [1] only for very high noise rates).
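As a rough aid for parameter selection, the following log-domain Python sketch (ours) evaluates the displayed lower bound with the stated choices of \(K_{1}\) and \(K_{2}\); sums inside the bound are approximated by maxima (a further lower bound, loose by at most one bit), and only a small grid of \((p_{1},p_{2})\) is scanned. The demo instance is illustrative.

```python
# log2 of the ISD lower bound over F_2, transcribing the displayed formula.
from math import lgamma, log, log2

def lbinom(n, k):                        # log2 of binomial(n, k)
    if k < 0 or k > n:
        return float("-inf")
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2)

def isd_lower_bound(n, k, t, pmax=40, step=4):
    best = float("inf")
    for p2 in range(0, pmax + 1, step):
        for p1 in range(0, pmax + 1, step):
            if t - p1 < 0:
                continue
            outer = min(k, lbinom(n, t)) - lbinom(k - p2, t - p1)
            k1 = 2 * log2(k - p2) + log2(log2(k - p2))     # Gaussian elimination cost
            k2 = lbinom(((k + p2) ** 2) // 2, p1 // 8)     # bound on the sub-algorithm
            inner = max(max(k1, k2) - lbinom(k + p2, p1),
                        log2(t) + log2(k - p2) - p2)
            best = min(best, outer + inner)
    return best

print(round(isd_lower_bound(n=3 * 2 ** 19, k=2 ** 20, t=152), 1))  # rate-2/3 toy instance
```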
The above analysis is restricted to the case of \(\mathbb{F}_{2}\), which is the easiest to attack using ISD. Over larger fields, one can use the above costs as a lower bound for the true cost of the attack, but as the field size grows, this lower bound becomes quite loose. Indeed, this bound was used to pick
concrete parameters in [BCG\({}^{+}\)20b], but a recent preprint [LWYY22] estimates that the parameters recommended in [BCG\({}^{+}\)20b] for 80 bits of security actually achieve 92-112 bits of security, while the parameters recommended for 128 bits of security actually achieve 133-171 bits of security. In our setting, however, our PCG's can be instantiated over fields as small as \(\mathbb{F}_{3}\), for which the costs should be much closer to the lower bounds used in [BCG\({}^{+}\)20b].
We note that a detailed analysis of ISD over larger fields was given in a recent paper [BCDL19]. However, to avoid computing different QA-SD parameters for each possible field size \(\mathbb{F}_{q}\), we stick in this paper to the conservative lower bound that stems from the analysis over \(\mathbb{F}_{2}\).
In [CT19], the combination of ISD with the _folding_ operation is studied, making precise how folding improves the complexity of the decoder. It turns out that for small error rates, which is precisely our setting, folding does not represent a significant improvement.
### Prange and statistical decoding (Low-Weight Parity-Check)
We also consider other standard linear attacks, such as Prange's decoding algorithm [Pra62] and low-weight parity checks [Zic17, AJ01, FKI06, Ove06, DT17], which lead to the so-called _statistical decoding_. The former, which is just the original ISD algorithm, consists in guessing \(k\) noise-free equations and solving the resulting system. It has the advantage over more recent ISD algorithms that it does not depend on the field size. The latter is also often more efficient than ISD in our setting. This is because ISD is a search attack, and executing the attack involves solving a linear system in each iteration. Since typical PCG applications have huge dimensions (e.g. \(k\approx 2^{30}\)), this polynomial cost turns out to have a significant impact on the overall runtime of the attack (even though ISDs have the lowest exponent in the exponential part of the attack). Low-weight parity checks, however, work by directly finding many \(\mathbf{v}\) such that \(\mathbf{v}\cdot\mathbf{H}\) has low weight, and declare \(\mathbf{b}\) to be a syndrome decoding instance if the set \(\{\mathbf{v}\cdot\mathbf{b}^{\intercal}\}\) contains too many zeroes. In other words, these attacks directly target the decision variant of syndrome decoding (on which our PCG's rely) and require computing only an inner product per iteration, rather than solving a large linear system. Concretely, the cost of Prange (when \(\mathbf{H}\) is a structured matrix) is given by \(O\left(1/(1-\frac{t}{n})^{k}\cdot k^{2}\log k\right)\) arithmetic operations, and the cost of the low-weight parity check attack is \(O\left(n/(k-1)^{t}\cdot k\right)\) arithmetic operations (see [BCGI18, BCG\({}^{+}\)20b]).
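A small log-domain Python helper (ours) for the Prange cost just quoted; the demo parameters are illustrative and do not correspond to a specific table row.

```python
# log2 cost of Prange's algorithm on a structured instance: (1 - t/n)^(-k)
# iterations, each costing about k^2 log k arithmetic operations.
from math import log2

def prange_log2_cost(n, k, t):
    iterations = -k * log2(1 - t / n)
    per_iteration = 2 * log2(k) + log2(log2(k))
    return iterations + per_iteration

# e.g. a rate-2/3 instance with small relative noise (illustrative numbers):
print(round(prange_log2_cost(n=3 * 2 ** 19, k=2 ** 20, t=152), 1))
```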
### Algebraic Decoding Attacks
An important line of work in code-based cryptography consists in recovering a hidden algebraic structure of a code which permits to decode. See for instance [Wie10, CGG\({}^{+}\)14, COT17, CMP17, CLT19]. In general, such attacks rest on the fact that the public code \(\mathcal{C}\), or some of its subcodes, has a peculiar behaviour with respect to the componentwise product: namely, the "square of \(\mathcal{C}\)", _i.e._ the span of the componentwise products of any two words of \(\mathcal{C}\), has small dimension compared to the square of a random code.
Note that codes sharing this feature of having a "small square" benefit from an efficient decoding algorithm usually referred to as the _Error Locating Pairs decoder_[Pel92]. See [Cou21, Section 4] for further details. Therefore, if a random quasi-group code had a small square compared to random codes, then one could deduce an algebraic decoder for quasi-group codes, which would settle a longstanding open question: the question is open even when restricting to the case of cyclic codes!
Algebraic attacks exploit the structure of the underlying code to decode it efficiently. Many such algebraic decoding attacks have been devised in the literature, and they fall in a unified framework developed in [Pel92, Kot92] based on the componentwise product of codes. Examples of such attacks include [PMMM11, MP12, FGO\({}^{+}\)13, CGGU\({}^{+}\)13, MMP14] (and many more), and they were often used to break some variants of the McEliece cryptosystem. In our context, though, algebraic decoding of quasi-group codes is a well-known and long-standing open problem: it has been studied for over 50 years in the coding theory community, and to this day no efficient algorithm is known to decode a random quasi-abelian code. This is listed as an open research problem in the most recent Encyclopedia of Coding Theory (from 2021) [Wil21, Problem 16.10.5].
### Attacks on Multivariate LWE
As already mentioned in Example 30, an attack on multivariate Ring-LWE is presented in [1]. This attack is based on a projection of the form \(\mathbb{F}_{q}[G]\to\mathbb{F}_{q}[G/H]\) as described in Section 6.1. The attack is particularly efficient since applying a map of the form (1) has a very limited impact on the Euclidean norm, and hence on the noise term. In the coding-theoretic setting, the situation is very different: the Hamming weight of the error is more or less preserved, but then the relative weight, _i.e._ the ratio \(\frac{t}{n}\), is more or less multiplied by a factor \(|H|\). Therefore, reducing with respect to a too large subgroup \(H\) leads to shorter codes but provides intractable instances of the decoding problem.
### Decoding One-Out-Of Many
For a code equipped with a non-trivial permutation group, which is an obvious feature of quasi-abelian codes, the decoding problem can be made easier using Sendrier's _Decoding One Out of Many_ (DOOM) paradigm [11]. Indeed, consider a quasi-abelian code \(\mathcal{C}\subseteq\mathbb{F}_{q}[G]^{\ell}\) and a noisy codeword \(y=c+e\) with \(c\in\mathcal{C}\) and \(e\in\mathbb{F}_{q}[G]^{\ell}\) of low weight. Then, for any \(g\in G\), we get another instance of the decoding problem with an error term of the same weight:
\[g\cdot y=g\cdot c+g\cdot e.\]
Here \(g\cdot c\in\mathcal{C}\) and \(\operatorname{wt}(g\cdot e)=\operatorname{wt}(e)\). Therefore, given a single instance of QA-SD we naturally deduce \(|G|\) instances, and solving one of them immediately solves the others. Thus, from [11], solving one out of \(|G|\) instances of SD permits dividing the work factor of any decoder by \(\sqrt{|G|}\). Therefore the cost of ISD should be divided by \(\sqrt{|G|}\), and the cost of the composition of a projection \(\mathbb{F}_{q}[G]\to\mathbb{F}_{q}[G/H]\) with ISD should be divided by \(\sqrt{|G/H|}\).
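Putting the last two observations together, a hedged back-of-the-envelope Python estimate (ours): fold to \(\mathbb{F}_{q}[G/H]\), recompute the decoder cost on the folded parameters, and subtract the DOOM discount of \(\tfrac{1}{2}\log_{2}|G/H|\) bits. All numbers in the demo are illustrative.

```python
# Security estimate sketch: decoder cost on the folded instance minus the DOOM
# discount.  `decoder_log2_cost` stands for any cost model, e.g. a Prange estimate.
from math import log2

def doom_adjusted(decoder_log2_cost, n, k, t_folded, quotient_order):
    return decoder_log2_cost(n, k, t_folded) - 0.5 * log2(quotient_order)

def prange(n, k, t):
    return -k * log2(1 - t / n) + 2 * log2(k) + log2(log2(k))

print(round(doom_adjusted(prange, 3 * 2 ** 19, 2 ** 20, 121, 2 ** 20), 1))
```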
## 7 Applications to Secure Computation
In this part, we explain some of the main applications of our new PCG's to secure computation. To provide bounds, we will use the following restatement of Theorem 25 in the case \(\mathcal{R}=\mathbb{F}_{q}[X_{1},..,X_{n}]/(X_{1}^{q-1}-1,..,X_{n}^{q-1}-1)\).
**Theorem 31**.: _Suppose that \(\mathsf{SPFSS}\) is a secure \(\mathsf{FSS}\) scheme for sums of point functions and that the_ QA-SD _assumption holds. Let \(\mathcal{R}=\mathbb{F}_{q}[X_{1},..,X_{n}]/(X_{1}^{q-1}-1,..,X_{n}^{q-1}-1)\), and \(T=(q-1)^{n}\). We can construct a_ PCG _producing \(T\) instances of_ OLE _over \(\mathbb{F}_{p}\), using the_ QA-SD_OLE _construction with the following parameters:_
* _Communication cost and seed size:_ \(O(\lambda^{3}\log T)\)_._
* _Computation cost:_ \(O(\lambda T)\) \(\mathsf{PRG}\) _evaluations and_ \(O(c^{2}T\log T)\) _operations in_ \(\mathbb{F}_{q}\)_._
**Extension to multiplication triples.** The OLE correlation gives a secret to each party \(P_{0}\) and \(P_{1}\), together with an additive secret-sharing of the product of the two secrets. The OLE correlation is interesting in its own right and can be used directly in some applications, but in general, the multiplication triple correlation is used. A (2-party) multiplication triple gives the parties additive shares of random elements \(a\) and \(b\), and shares of the product \(a\cdot b\). The main advantage of multiplication triples is their usefulness in the setting of 2-party computation of arithmetic circuits over \(\mathbb{F}_{q}\).
In this setting, each multiplication gate can be evaluated by consuming a single multiplication triple, with a communication cost of two \(\mathbb{F}_{q}\) elements per party (additions are free in this setting). Using two instances of an OLE correlation, we can obtain an instance of a multiplication triple correlation. Let \(a=a_{0}+a_{1}\), \(b=b_{0}+b_{1}\) and \(c=ab=a_{0}b_{0}+a_{0}b_{1}+a_{1}b_{0}+a_{1}b_{1}\). We distribute \(a_{\sigma},b_{\sigma}\) to party \(P_{\sigma}\) and run two independent OLE instances to obtain secret shares of the cross terms \(a_{0}b_{1}\) and \(a_{1}b_{0}\). As party \(P_{\sigma}\) can locally compute \(a_{\sigma}b_{\sigma}\), it gets a correct sharing of \(ab\). Note that we obtain the correlation in a black-box way. Thus, as explained in [1], a PCG generating \(N\) multiplication triples can be derived from a PCG generating \(2N\) OLE.
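The derivation above is easy to sanity-check in code. The following toy Python simulation (ours) uses an ideal OLE functionality over a small prime field; the field size and all names are illustrative.

```python
# Beaver multiplication triple from two ideal OLE instances, over a toy prime field.
import random

P = 2 ** 31 - 1  # illustrative prime field

def ole(x, y):
    """Ideal OLE functionality: additive shares (z0, z1) with z0 + z1 = x*y mod P."""
    z0 = random.randrange(P)
    return z0, (x * y - z0) % P

a0, a1, b0, b1 = (random.randrange(P) for _ in range(4))
u0, u1 = ole(a0, b1)            # shares of the cross term a0*b1
v0, v1 = ole(a1, b0)            # shares of the cross term a1*b0
c0 = (a0 * b0 + u0 + v0) % P    # P0's share of c = a*b
c1 = (a1 * b1 + u1 + v1) % P    # P1's share of c = a*b
a, b = (a0 + a1) % P, (b0 + b1) % P
assert (c0 + c1) % P == a * b % P
```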
### Application: (N-party) multiplication triple generation for arithmetic circuits
**Theorem 32**.: _Assume the existence of oblivious transfers and the \(\mathsf{QA}\mbox{-}\mathsf{SD}(\mathcal{R})\) assumption, where \(\mathcal{R}=\mathbb{F}_{q}[X_{1},\cdots,X_{n}]/(X_{1}^{q-1}-1,\cdots,X_{n}^{q-1}-1)\simeq\mathbb{F}_{q}\times\cdots\times\mathbb{F}_{q},\) with \(q\geqslant 3.\) Let \(T=(q-1)^{n}\). There exists a semi-honest \(N\)-party protocol for securely evaluating an arithmetic circuit \(C\) over \(\mathbb{F}_{q}\) with \(T\) multiplication gates, in the preprocessing model, such that:_
* _The preprocessing phase has communication cost_ \(\tilde{c}(N,\lambda,T)=O(\lambda^{3}\cdot N^{2}\cdot\log(2T))\)_, and computation cost_ \(\dot{c}(N,\lambda,T)=O(N^{2}\cdot\lambda\cdot 2T)\) \(\mathsf{PRG}\) _calls and_ \(O(N^{2}\cdot 2T\log(2T))\) _operations in_ \(\mathbb{F}_{q}\)_._
* _The online phase is non-cryptographic, with a communication cost of_ \(2\cdot N\cdot T\) _elements of_ \(\mathbb{F}_{q}\)_._
Proof.: Consider the parties \(P_{1},\cdots,P_{N}\). First, remark that programmability enables parties to generate "correlated" (2-party) multiplication triples, which can be used to obtain \(N\)-party multiplication triples in the following way.
* The party \(P_{i}\) gets two random values \((x_{i},y_{i})\). We define \(X=\sum_{i}x_{i}\) and \(Y=\sum_{j}y_{j}\).
* Each pair of parties \((P_{i},P_{j})_{1\leqslant i,j\leqslant N,i\neq j}\) performs the programmable protocol for (2-party) multiplication triples with programmable inputs \((x_{i},y_{j})\), and obtains shares of \(x_{i}\cdot y_{j}\). We indicate the share of \(P_{i}\) as \(\langle x_{i}\cdot y_{j}\rangle_{i}\).
* Let \(K_{i}=\sum_{j\neq i}\left(\langle x_{i}\cdot y_{j}\rangle_{i}+\langle x_{j}\cdot y_{i}\rangle_{i}\right)+x_{i}\cdot y_{i}\). The \(K_{i}\) are shares of the product \[X\cdot Y=\sum_{1\leqslant i,j\leqslant N}x_{i}\cdot y_{j}=\sum_{i=1}^{N}K_{i}.\]
The parties use the \(\mathsf{QA}\mbox{-}\mathsf{SD}_{\mathsf{OLE}}\) construction to generate short seeds for each of the \(N\cdot(N-1)\) (2-party) multiplication triples they need. In the online phase, they locally expand the seeds to obtain \(T\) instances of (\(N\)-party) multiplication triples. The parties can then execute the (\(N\)-party) GMW protocol using the multiplication triples, and evaluate the circuit.
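A quick Python check (ours) of the share-combination step in the proof: we simulate the pairwise 2-party triples by ideal additive sharings of each cross term and verify that the \(K_{i}\) indeed sum to \(X\cdot Y\).

```python
# N-party multiplication triple from pairwise shared cross terms, over a toy field.
import random

P, N = 2 ** 31 - 1, 5                    # illustrative field and number of parties
x = [random.randrange(P) for _ in range(N)]
y = [random.randrange(P) for _ in range(N)]
K = [x[i] * y[i] % P for i in range(N)]  # each P_i starts from its local x_i * y_i
for i in range(N):
    for j in range(N):
        if i == j:
            continue
        s = random.randrange(P)                   # <x_i * y_j>_i held by P_i
        K[i] = (K[i] + s) % P
        K[j] = (K[j] + (x[i] * y[j] - s)) % P     # <x_i * y_j>_j held by P_j
X, Y = sum(x) % P, sum(y) % P
assert sum(K) % P == X * Y % P
```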
Using Theorem 31, we obtain the preprocessing cost for generating the \(2T\)\(\mathsf{OLE}\) over \(\mathbb{F}_{p}\) per pair of parties, namely \(O(\lambda^{3}\cdot\log(2T))\) in communication; over all pairs this yields \(\dot{c}(N,\lambda,T)=O(N^{2}\cdot\lambda\cdot 2T)\)\(\mathsf{PRG}\) calls and \(O(N^{2}\cdot 2T\log(2T))\) operations in \(\mathbb{F}_{q}\) in computation cost.
The cost of communication in the online phase is simply derived from the GMW algorithm using the multiplication triples. For each multiplication gate, each party must send two field elements, resulting in a cost of \(2\cdot N\cdot T\).
### Secure Computation with Circuit-Dependent Preprocessing
Circuit-dependent preprocessing is a variation of Beaver's standard circuit randomization technique with multiplication triples. It has been investigated in recent works, such as [16, 10]. The idea is to preprocess multiplications in a way that depends on the structure of the circuit, leading to an online phase that requires just _one opening per multiplication gate_, instead of two when using multiplication triples. \(\mathsf{PCG}\)'s for \(\mathsf{OLE}\)'s do not directly enable reducing the preprocessing phase of secure computation with circuit-dependent correlated randomness: at a high level, this stems from the fact that since the correlated randomness depends on the topology of the circuit, it cannot be compressed beyond the description size of this topology. Nevertheless, \(\mathsf{PCG}\)'s enable _batch_ secure computation (_i.e._ securely computing many copies of the same circuit on different inputs) with silent preprocessing in the circuit-dependent correlated randomness setting, by using \(\mathsf{PCG}\)'s to compress a batch of correlations for a given gate across all circuits.
**Theorem 33**.: _Assume the existence of oblivious transfer and the \(\mathsf{QA}\mbox{-}\mathsf{SD}(\mathcal{R})\) assumption, where \(\mathcal{R}=\mathbb{F}_{q}[X_{1},\cdots,X_{n}]/(X_{1}^{q-1}-1,\cdots,X_{n}^{q-1 }-1)\simeq\mathbb{F}_{q}\times\cdots\times\mathbb{F}_{q}\), with \(q\geqslant 3\). Let \(T=(q-1)^{n}\). There exists a semi-honest 2-party protocol for securely evaluating \(T\) copies of an arithmetic circuit \(C\) over \(\mathbb{F}\) with \(S\) multiplication gates, in the preprocessing model, such that:_
* _The preprocessing phase has communication cost_ \(c(T,\lambda,S)=O(\lambda^{3}\cdot S\cdot\log(2T))\) _and computation cost_ \(\dot{c}(T,\lambda,S)=O(\lambda\cdot S\cdot 2T)\) \(\mathsf{PRG}\) _calls and_ \(O(S\cdot 2T\log(2T))\) _operations in_ \(\mathbb{F}_{q}\)_._
* _The online phase is non-cryptographic, with a communication cost of_ \(2\cdot S\cdot T\) _elements of_ \(\mathbb{F}\)_._
Proof: Let \(C\) be an arithmetic circuit over \(\mathbb{F}\) consisting of fan-in two addition and multiplication gates. Each wire \(w\) is assigned a mask \(r_{w}\) during the offline phase. The masks are designed as follows.
* if \(w\) is an input wire, \(r_{w}\) is chosen at random.
* if \(w\) is the output wire of a multiplication gate, \(r_{w}\leftarrow\mathbb{F}\) is chosen at random.
* if \(w\) is the output wire of an addition gate with input wires \(u\) and \(v\), then \(r_{w}=r_{u}+r_{v}\).
* for each multiplication gate, we assign a value \(s_{u,v}\) such that, on input wires \(u\) and \(v\), \(s_{u,v}=r_{u}\cdot r_{v}\).
The masks are not known by the parties, but the parties obtain random additive shares of each \(r_{w}\) for the input and output wires of multiplication gates, as well as of \(s_{u,v}\) for the multiplication gates.
When the online phase begins, both parties hide their secret values with random masks. For a given input wire \(w\), the party that does not provide the input sends its share \(\langle r_{w}\rangle\) to the other party. The invariant of the online phase is that, throughout the protocol, for each wire the parties know in clear the value \(x+r_{x}\), where \(r_{x}\) is the mask of this wire (of which the parties hold an additive sharing) and \(x\) is the real value computed by the circuit on the wire \(w\). The invariant is preserved through each gate because of the following:
* For an addition gate, parties know \(x+r_{x}\) and \(y+r_{y}\). The parties then locally add these values to obtain \(x+y+r_{x}+r_{y}\), with \(r_{x}+r_{y}\) indeed being the output mask of the addition gate.
* For a multiplication gate with \(r_{w}\) denoting its output wire's mask, parties know \(x+r_{x}\) and \(y+r_{y}\). The parties can locally compute their share \(\langle(x+r_{x})\cdot r_{y}+(y+r_{y})\cdot r_{x}+r_{x}\cdot r_{y}+r_{w}\rangle\) (the formula is slightly different if we are not in \(\mathbb{F}_{2}\)). By each sending one field element, they reconstruct that value. Adding up \((x+r_{x})\cdot(y+r_{y})\), they obtain in clear \(x\cdot y+r_{w}\), where \(r_{w}\) is the mask of the output wire of this multiplication gate.
In the end, we have to perform \(2S\) different calls to our PCG to create the (2-party) multiplication triple seeds. Again we use Theorem 31 to estimate the communication and space costs per instance, and we multiply by \(S\). In the online phase, we gain a factor 2 in communication because each party only has to send one field element per multiplication gate. As there are \(S\cdot T\) multiplication gates in total, the communication cost of the online phase is \(2\cdot S\cdot T\).
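To illustrate the online phase of the proof, here is a toy 2-party Python walkthrough (ours) of a single multiplication gate over a prime field; over \(\mathbb{F}_{2}\) the minus signs below all become plus signs, matching the formula in the proof. Field size and names are illustrative.

```python
# One multiplication gate with circuit-dependent preprocessing: masked wire values
# are public, the masks and s_{u,v} = r_x * r_y are additively shared in advance.
import random

P = 2 ** 31 - 1

def share(v):
    """Random 2-party additive sharing of v."""
    s = random.randrange(P)
    return (s, (v - s) % P)

x, y = random.randrange(P), random.randrange(P)       # true wire values
rx, ry, rw = (random.randrange(P) for _ in range(3))  # preprocessed masks
RX, RY, S, RW = share(rx), share(ry), share(rx * ry % P), share(rw)

mx, my = (x + rx) % P, (y + ry) % P                   # masked values known in clear
d = [(mx * RY[i] + my * RX[i] - S[i] - RW[i]) % P for i in range(2)]
opened = (d[0] + d[1]) % P                            # the one opening for this gate
assert (mx * my - opened) % P == (x * y + rw) % P     # output is x*y masked by rw
```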
|
2301.10335 | Multilingual Multiaccented Multispeaker TTS with RADTTS | We work to create a multilingual speech synthesis system which can generate
speech with the proper accent while retaining the characteristics of an
individual voice. This is challenging to do because it is expensive to obtain
bilingual training data in multiple languages, and the lack of such data
results in strong correlations that entangle speaker, language, and accent,
resulting in poor transfer capabilities. To overcome this, we present a
multilingual, multiaccented, multispeaker speech synthesis model based on
RADTTS with explicit control over accent, language, speaker and fine-grained
$F_0$ and energy features. Our proposed model does not rely on bilingual
training data. We demonstrate an ability to control synthesized accent for any
speaker in an open-source dataset comprising of 7 accents. Human subjective
evaluation demonstrates that our model can better retain a speaker's voice and
accent quality than controlled baselines while synthesizing fluent speech in
all target languages and accents in our dataset. | Rohan Badlani, Rafael Valle, Kevin J. Shih, João Felipe Santos, Siddharth Gururani, Bryan Catanzaro | 2023-01-24T22:39:04Z | http://arxiv.org/abs/2301.10335v1 | # Multilingual Multiaccented Multispeaker Tts with Radits
###### Abstract
We work to create a multilingual speech synthesis system which can generate speech with the proper accent while retaining the characteristics of an individual voice. This is challenging to do because it is expensive to obtain bilingual training data in multiple languages, and the lack of such data results in strong correlations that entangle speaker, language, and accent, resulting in poor transfer capabilities. To overcome this, we present a multilingual, multiaccented, multispeaker speech synthesis model1 based on RADTTS with explicit control over accent, language, speaker and fine-grained \(F_{0}\) and energy features. Our proposed model does not rely on bilingual training data. We demonstrate an ability to control synthesized accent for any speaker in an open-source dataset comprising of 7 accents. Human subjective evaluation demonstrates that our model can better retain a speaker's voice and accent quality than controlled baselines while synthesizing fluent speech in all target languages and accents in our dataset.
Rohan Badlani, Rafael Valle, Kevin J. Shih, Joao Felipe Santos, Siddharth Gururani, Bryan Catanzaro NVIDIA
## 1 Introduction
Recent progress in Text-To-Speech (TTS) has achieved human-like quality in synthesized mel-spectrograms [1, 2, 3, 4] and waveforms[5, 6]. Most models support speaker selection during inference by learning a speaker embedding table[1, 2, 3] during training, while some support zero-shot speaker synthesis by generating a speaker conditioning vector from a short audio sample[7]. However, most models support only a single language. This work focuses on factorizing out speaker and accent as controllable attributes, in order to synthesize speech for any desired combination of speaker, language and accent present in the training dataset.
It is very expensive to obtain bilingual datasets because most speakers are unilingual. Hence, speaker, language, and accent attributes are highly correlated in most TTS datasets. Training models with such entangled data can result in poor language, accent and speaker transferability. Notably, every language has its own alphabet and most TTS systems use different symbol sets for each language, sometimes even separate encoders[8], severely limiting representational sharing across languages. This aggravates entanglement of speaker, language and text, especially in datasets with very few speakers per language. Approaches like [9] introduce an adversarial loss to curb this dependence of text representations on speaker. Other approaches use a union of linguistic feature sets of all languages[10] to simplify text processing for multi-language training. However, these solutions don't support code-switching situations where words from multiple languages appear in mixed order in the synthesis prompt.
Recently, there has been an interest in factorizing out fine-grained speech attributes[11, 12, 13] like \(F_{0}\) and energy. We extend this fine-grained control by additionally factorizing out accent and speaker with an ability to predict frame-level \(F_{0}\) and energy for a desired combination of accent, speaker and language. We analyze the effects of such explicit conditioning on fine-grained speech features on the synthesized speech when transferring a voice to other languages.
Our goal is to synthesize speech for a target speaker in any language with a specified accent. Related methods include YourTTS[14], with a focus on zero-shot multilingual voice conversion. Although promising results are presented for a few language combinations, it shows limited success on transferring from languages with limited speakers. Moreover, it uses a curriculum learning approach to extend the model to new languages, making the training process cumbersome. Closest to our work is [9], which describes a multilingual and multispeaker TTS model without requiring individual speakers with multiple language samples.
In this work, we **(1)** demonstrate effective scaling of single language TTS to multiple languages using a shared alphabet set and alignment learning framework[4, 15]; **(2)** introduce explicit accent conditioning to control the synthesized accent; **(3)** propose and analyze several strategies to disentangle attributes (speaker, accent, language and text) without relying on parallel training data (multilingual speakers); and **(4)** explore fine-grained control of speech attributes such as \(F_{0}\) and energy and its effects on speaker timbre retention and accent quality.
## 2 Methodology
We build upon RADTTS[4, 13] as deterministic decoders tend to produce oversmoothed mels that require vocoder fine-tuning. Our model synthesizes mels (\(X\in\mathbb{R}^{C_{mel}\times F}\)) using encoded text (\(\Phi\in\mathbb{R}^{C_{txt}\times T}\)), accent (\(A\in\mathbb{R}^{D_{accent}}\)) and speaker (\(S\in\mathbb{R}^{D_{speaker}}\)) as conditioning variables, with optional conditioning on fundamental frequency (\(F_{0}\in\mathbb{R}^{1\times F}\)) and energy (\(\mathcal{E}\in\mathbb{R}^{1\times F}\)), where \(F\) is the number of mel frames, \(T\) is the text length, and energy is the per-frame mel energy average. We propose the following novel modifications:
### Shared text token set
Our goal is to train a single model with the ability to synthesize a target language with the desired accent for any speaker in the dataset. We represent phonemes with the International Phonetic Alphabet (IPA) to enforce a shared textual representation. A shared alphabet across languages reduces the dependence of text on speaker identity, especially in low-resource settings (e.g. 1 speaker per language), and supports code-switching.
### Scalable Alignment Learning
We utilize the alignment learning framework in[4, 15] to learn speech-text alignments \(\Lambda\in\mathbb{R}^{T\times F}\) without external dependencies. A shared alphabet set simplifies this since alignments are learnt on a single token set instead of distinct sets. However, when the speech has a strong accent, the same token can be spoken in different ways by speakers with different accents, and hence alignments can become brittle. To curb this multi-modality, we learn alignments between (text, accent) and mel-spectrograms using accent \(A\) as a conditioning variable.
### Disentangling Factors
We focus on non-parallel data with each speaker speaking 1 language, which typically has text \(\Phi\), accent \(A\) and speaker \(S\) entangled. We evaluate strategies to disentangle these attributes: **Speaker-adversarial loss** In TTS datasets, speakers typically read different text and have different prosody. Hence, there can be entanglement between speaker \(S\), text \(\Phi\) and prosody. Following[9], we employ domain adversarial training to disentangle \(S\) and \(\Phi\) by using a gradient reversal layer. We use a speaker classification loss, and backpropagate the classifier's negative gradients through the text encoder and token embeddings.
\[L_{adv}=\sum_{i=1}^{N}P(s_{i}|\phi_{i};\theta_{spkclassifier}) \tag{1}\]
**Data Augmentation** Disentangling accent and speaker is challenging, as a speaker typically has a specific way of pronouncing words and phonemes, causing a strong association between speaker and accent. Straightforward approaches to learning from non-parallel data learn entangled representations because a speaker's language and accent can be trivially learned from the dataset. Since our goal is to synthesize speech for a speaker in a target language with a desired accent, disentangling speaker \(S\) and accent \(A\) is essential; otherwise, either speaker identity is not preserved in the target language or the generated speech retains the speaker's accent from the source language. To overcome this problem, we use data augmentations like formant, \(F_{0}\), and duration scaling to promote disentanglement between speaker and accent. For a given speech sample \(x_{i}\) with speaker identity \(s_{i}\) and accent \(a_{i}\), we apply a fixed transformation \(t\in\{1,2,...\tau\}\) to construct a transformed speech sample \(x_{i}^{t}\) and assign speaker identity as \(s_{i}+t\cdot N_{speakers}\) and accent as the original accent \(a_{i}\), where \(\tau\) is the number of augmentations. This creates speech samples with variations in speaker identity and accent in order to orthogonalize these attributes.
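A minimal Python sketch (ours) of the relabeling scheme just described; `transform` stands for any of the fixed augmentations, and the dictionary schema is hypothetical.

```python
# Relabel augmented copies as new speakers (s_i + t*N_speakers), keep the accent a_i.
def augment_metadata(entries, transforms, n_speakers):
    """entries: dicts with keys 'audio', 'speaker', 'accent' (hypothetical schema)."""
    out = list(entries)
    for t, transform in enumerate(transforms, start=1):
        for e in entries:
            out.append({
                "audio": transform(e["audio"]),            # fixed formant/F0/duration scaling
                "speaker": e["speaker"] + t * n_speakers,  # fresh speaker identity
                "accent": e["accent"],                     # accent label unchanged
            })
    return out

# usage sketch: six fixed transforms give six extra "speakers" per original speaker
# augmented = augment_metadata(dataset, [f1, f2, f3, f4, f5, f6], n_speakers=7)
```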
**Embedding Regularization** Ideally, the information captured by the speaker and accent embeddings should be uncorrelated. To promote disentanglement between accent and speaker embeddings, we aim to decorrelate the following variables: (1) random variables in accent embeddings; (2) random variables in speaker embeddings; (3) random variables in speaker and accent embeddings from _each other_. While truly decorrelating the information is difficult, we can promote something close by using the constraints from VICReg[16]. We denote \(E^{A}\in\mathbb{R}^{D_{a}\times N_{a}}\), \(E^{S}\in\mathbb{R}^{D_{s}\times N_{s}}\) as the accent and speaker embedding tables respectively. Column vector \(e^{j}\in E\) denotes the \(j\)'th embedding in either table. Let \(\mu_{E}\) and \(Cov(E)\) be the means and covariance matrices. By using VICReg, we constrain standard deviations to be at least \(\gamma\) and suppress the off-diagonal elements of the covariance matrix (\(\gamma=1,\epsilon=1e-4\)):
\[L_{var} =\frac{1}{D}\sum_{i=j}\max\left(0,\gamma-\sqrt{Cov(E)_{i,j}+ \epsilon}\right) \tag{2}\] \[L_{covar} =\sum_{i\neq j}Cov(E)_{i,j}^{2} \tag{3}\]
Next, we attempt to decorrelate accent and speaker variables from _each other_ by minimizing the cross-correlation matrix from batch statistics. Let \(\tilde{E}^{A}\) and \(\tilde{E}^{S}\) be the sampled column matrices of accent and speaker embedding vectors sampled within a batch of size \(B\). We compute the batch cross-correlation matrix \(R^{AS}\) as follows (\(\mu_{E^{A}}\) and \(\mu_{E^{S}}\) are computed from the embedding tables):
\[R^{AS} =\frac{1}{B-1}(\tilde{E}^{A}-\mu_{E^{A}})(\tilde{E}^{S}-\mu_{E^{S }})^{T} \tag{4}\] \[L_{xcorr} =\frac{1}{D_{a}D_{s}}\sum_{i,j}{(R_{i,j}^{AS})^{2}} \tag{5}\]
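The three regularizers of Eqs. (2)-(5) are straightforward to express; below is a NumPy sketch (ours, framework-agnostic rather than the actual training code) with toy shapes matching the 7-accent setting.

```python
# Variance/covariance constraints on an embedding table and the batch
# cross-correlation loss between accent and speaker embeddings.
import numpy as np

def var_covar_losses(E, gamma=1.0, eps=1e-4):
    """E: D x N embedding table (columns are embeddings); Eqs. (2)-(3)."""
    C = np.cov(E)                                         # D x D covariance
    l_var = np.mean(np.maximum(0.0, gamma - np.sqrt(np.diag(C) + eps)))
    l_covar = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)    # off-diagonal squares
    return l_var, l_covar

def xcorr_loss(Ea_batch, Es_batch, mu_a, mu_s):
    """Eqs. (4)-(5): cross-correlation between batch accent/speaker embeddings."""
    B = Ea_batch.shape[1]
    R = (Ea_batch - mu_a[:, None]) @ (Es_batch - mu_s[:, None]).T / (B - 1)
    return np.mean(R ** 2)

Ea, Es = np.random.randn(16, 7), np.random.randn(16, 7)   # 7 accents / 7 speakers
idx = np.random.randint(7, size=32)                       # a batch of 32 samples
print(var_covar_losses(Ea),
      xcorr_loss(Ea[:, idx], Es[:, idx], Ea.mean(1), Es.mean(1)))
```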
### Accent conditioned speech synthesis
We introduce an extra conditioning variable for accent \(A\) to RADTTS [4] to allow for accent-controllable speech synthesis. We call this model RADTTS-ML, a multilingual version of RADTTS. The following equation describes the model:
\[P_{radts}(X,\Lambda)=P_{mel}(X|\Phi,\Lambda,A,S)P_{dur}(\Lambda|\Phi,A,S) \tag{6}\]
We refer to our conditioning as accent instead of language, because we consider language to be _implicit in the phoneme sequence_. The information captured by the accent embedding should explain the fine-grained differences between how phonemes are pronounced in different languages.
### Fine-grained frame-level control of speech attributes
Fine-grained control of speech attributes like \(F_{0}\) and energy \(\mathcal{E}\) can provide high-quality controllable speech synthesis[13]. We believe conditioning on such attributes can help improve accent and language transfer. During training, we condition our mel decoder on ground truth frame-level \(F_{0}\) and energy. Following [13], we train deterministic attribute predictors to predict phoneme durations \(\Lambda\), \(F_{0}\), and energy \(\mathcal{E}\) conditioned on speaker \(S\), encoded text \(\Phi\), and accent \(A\). We standardize \(F_{0}\) using the speaker's \(F_{0}\) mean and standard deviation to remove speaker-dependent information. This allows us to predict speech attributes for any speaker, accent, and language
and control mel synthesis with such features. We refer to this model as RADMMM, which is described as:
\[P_{radmm}(X,\Lambda)=P_{mel}(X|\Phi,\Lambda,A,S,F_{0},\mathcal{E})\] \[P_{F_{0}}(F_{0}|\Phi,A,S)P_{\mathcal{E}}(\mathcal{E}|\Phi,A,S)P_{ dur}(\Lambda|\Phi,A,S) \tag{7}\]
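A small Python sketch (ours) of the speaker-wise \(F_{0}\) standardization described above: attribute predictors operate in standardized space, and synthesis de-standardizes with the target speaker's statistics. The numbers are toy values.

```python
# Speaker-independent F0: remove the source speaker's statistics, re-apply the
# target speaker's statistics at synthesis time.
import numpy as np

def standardize_f0(f0, speaker_mean, speaker_std):
    """Map an F0 track into speaker-independent units."""
    return (f0 - speaker_mean) / speaker_std

def destandardize_f0(z, target_mean, target_std):
    """Render a predicted track with the target speaker's F0 statistics."""
    return z * target_std + target_mean

f0 = np.array([210.0, 215.0, 198.0])                  # toy voiced-frame F0 (Hz)
z = standardize_f0(f0, speaker_mean=205.0, speaker_std=20.0)
print(destandardize_f0(z, target_mean=120.0, target_std=15.0))
```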
## 3 Experiments
We conduct our experiments on an open source dataset2 with a sampling rate of 16kHz. It contains 7 different languages (American English, Spanish, German, French, Hindi, Brazilian Portuguese, and South American Spanish). This dataset emulates low-resource scenarios with only 1 speaker per accent with strong correlation between speaker, accent, and language. We use HiFiGAN vocoders trained individually on selected speakers in the evaluation set. We focus on the task of transferring the voice of 7 speakers in the dataset to the 6 _other_ language and accent settings in the dataset. Herein we refer to RADTTS-ML as RT and RADMMM as RM for brevity.
Footnote 2: Dataset source, metadata and filelists will be released with source code.
### Ablation of Disentanglement Strategies
We evaluate the effects of disentanglement strategies on the transfer task by measuring speaker timbre retention using the cosine similarity (Cosine Sim) of synthesized samples to the source speaker's reference speaker embeddings obtained from the speaker recognition model Titanet[17]. We measure character error rate (CER) with transcripts obtained from Conformer[18] models trained for each language. Table 1 and Figure 1 demonstrate the overall and accent-grouped effects of various disentanglement strategies. The RT baseline uses the shared text token set, accent-conditioned alignment learning, and no additional constraints to disentangle speaker, text, and accent. The RM baseline uses this setup with \(F_{0}\) and energy conditioning. **Speaker Adversarial Loss (\(L_{adv}\))** We observe that the addition of the \(L_{adv}\) loss to RT and RM does not affect speaker retention when synthesizing the speaker for a target language. However, we observe an increase in character error rate. We believe the gradients from the speaker classifier tend to remove speaker and accent information from the encoded text \(\Phi\), which affects the encoded text representation, leading to worse pronunciation.
**Data Augmentation** We use Praat[19] to apply six augmentations: formant scaling down (\(\times[0.875-1.0]\)) and up (\(\times[1.0-1.25]\)), scaling \(F_{0}\) down (\(\times[0.9-1.0]\)) and up (\(\times[1.0-1.1]\)), and scaling durations to make samples faster (\(\times[0.9-1.0]\)) or slower (\(\times[1.0-1.1]\)). We augment the dataset with transformed audio, defining a new speaker identifier but retaining the original accent. In RT, this leads to a significant boost in speaker retention. We believe that creating more speakers per accent enhances disentanglement of accent and speaker. However, in RM, where \(F_{0}\) is predicted and the model is explicitly conditioned on augmented \(F_{0}\), we observe a significant drop in both speaker retention and CER performance with augmentations, likely due to conditioning on noisy augmented features.
**Embedding Regularization** We conduct three ablations with regularization: one that adds variance (\(L_{var}\)) and covariance (\(L_{covar}\)) constraints to the baseline, and two more involving all three constraints (\(L_{var}\), \(L_{covar}\), \(L_{xcorr}\)) with small (0.1) and large weights (10.0). We observe an improvement in speaker similarity with the best speaker retention with all three constraints in both RT and RM. Moreover, we observe similar CER to the baselines suggesting similar pronunciation quality.
Our final models include regularization constraints, but we don't use augmentation and \(L_{adv}\) due to worse pronunciation quality and limited success on speaker timbre retention.
### Comparing proposed models with existing methods
We compare our final RT and RM with the Tacotron 2-based model described in [9], which we call T2, on the transfer task. We reproduced the model to the best of our ability, noting that training on our data was unstable, possibly due to data quality, and that results may not be representative of the original implementation. We tune denoising parameters[20] to reduce audio artifacts from T2's over-smoothed generated mels[3, 21]. We attempted to implement YourTTS[14] but ran into issues reproducing the results on our dataset, and hence we don't make a direct comparison to it.
**Speaker timbre retention** Table 2 shows the speaker cosine similarity of our proposed models and T2. We observe that both RT and RM perform similarly in terms of speaker retention and achieve better speaker timbre retention than T2. However, our subjective human evaluation below shows that RM samples are overall better than RT in both timbre preservation and pronunciation.
### Subjective human evaluation
We conducted an internal study with native speakers to evaluate accent quality and speaker timbre retention. Raters were pre-screened with a hearing test based on sinusoid counting. Since MOS is not suited for finer differences, we use comparative mean opinion scores (CMOS) with a 5 point scale (-2 to 2) as the evaluation metric. Given a reference sample and pairs of synthesized samples from different models, the raters use the 5 point scale to indicate which sample, if any, they believe is
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Disentanglement Strategy**} & \multicolumn{2}{c}{**RADTTS-ML**} & \multicolumn{2}{c}{**RADMMM**} \\ & **Cosine Sim** & **CER** & **Cosine Sim** & **CER** \\ \hline Baseline (B) & 0.3062 \(\pm\) 0.0176 & 17.7 & 0.3483 \(\pm\) 0.0138 & 5.1 \\ (B) + unnormalized \(F_{0}\) pred & NA & NA & 0.2946 \(\pm\) 0.0114 & 5.3 \\ (B) + \(L_{adv}\) & 0.3027 \(\pm\) 0.0174 & 39.9 & NA & NA \\ (B) + augmentation & 0.3858 \(\pm\) 0.0145 & 44.3 & 0.2174 \(\pm\) 0.0131 & 41.7 \\ (B) + \(L_{var}\) and \(L_{covar}\) & 0.4029 \(\pm\) 0.0144 & 13.7 & NA & NA \\ (B) + low weight \(L_{var}\), \(L_{covar}\) and \(L_{xcorr}\) & 0.3784 \(\pm\) 0.0112 & 17.0 & 0.2329 \(\pm\) 0.0154 & 96.6 \\ (B) + \(L_{var}\), \(L_{covar}\) and \(L_{xcorr}\) & 0.4217 \(\pm\) 0.0105 & 12.4 & 0.4188 \(\pm\) 0.0148 & 5.5 \\ (B) + \(L_{var}\), \(L_{covar}\), \(L_{xcorr}\) and \(L_{adv}\) & NA & NA & 0.4182 \(\pm\) 0.0157 & 7.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation results comparing disentanglement strategies using Cosine Sim and CER as defined in Sec. 3.1
\begin{table}
\begin{tabular}{l c} \hline \hline
**Model** & **Cosine Similarity** \\ \hline RADTTS-ML (RT) & \(0.4186\pm 0.0154\) \\ RADMMM (RM) & \(0.4197\pm 0.0149\) \\ Tacotron2 (T2) & \(0.145\pm 0.0119\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Speaker timbre retention using Cosine Sim (Sec 3.1)
more similar, in terms of accent or speaker timbre, to the target-language pronunciation or the speaker timbre in the reference audio.
**Accent evaluation:** We conduct accent evaluation with native speakers of every language. Fig 2 shows the preference scores of native speakers with \(95\%\) confidence intervals in each language for model pairs under consideration. Positive mean scores imply that the top model was preferred over the bottom model within the pair. Given limited access to native speakers, we show results for only 5 languages. We observe that there is no strong preference between RT final and its baseline in terms of accent quality. We find similar results for RM final and its baseline, suggesting that accent and pronunciation are not compromised by our suggested disentanglement strategies. To evaluate controllable accent, we synthesize samples from our best model (RM final) for every speaker in languages other than the speaker's native language. Samples using non-native language and accent are referred to as RM final, and samples with the new language but native accent are referred to as RM accented. Raters preferred samples using the target accent (RM final) over the source speaker's accent (RM accented), indicating the effectiveness of accent transfer. Finally, RM final is preferred over T2 in terms of accent pronunciation.
**Speaker timbre evaluation:** Table 3 shows CMOS scores with \(95\%\) confidence intervals. First, we observe that in both RT and RM, the final models with disentanglement strategies applied are preferred over baseline models in terms of speaker timbre retention. RM accented synthesis (RM accented) is rated as having similar speaker timbre as native accent synthesis with RM (RM final), indicating that changing accent doesn't change speaker timbre in RM, thus showcasing the disentangled nature of accent and speaker. Finally, RM final is preferred over T2 in terms of speaker timbre retention on transferring speaker's voice to target language.
**Effects of control with \(F_{0}\) and \(\mathcal{E}\):** Comparing RM final with RT final, we see that RM is preferred for most languages except German, indicating that explicit conditioning on \(F_{0}\) and energy results in better pronunciation and accent. Moreover, as illustrated in Table 1, RM final achieves a better CER than RT final. Table 3 demonstrates that explicit conditioning on \(F_{0}\) and energy in RM results in much better speaker timbre retention compared to RT. RM results in the best speaker retention, accent quality and pronunciation among our models.
## 4 Conclusion
We present a multilingual, multiaccented and multispeaker TTS model based on RADTTS with novel modifications. We propose and explore several disentanglement strategies resulting in a model that improves speaker, accent and text disentanglement, allowing for synthesis of a speaker with closer to native fluency in a desired language without multilingual speakers. Internal ablation studies indicate that explicitly conditioning on fine-grained features (\(F_{0}\) and \(\mathcal{E}\)) results in better speaker retention and pronunciation according to human evaluators. Our model provides an ability to predict such fine-grained features for any desired combination of speaker, accent and language and user studies show that under limited data constraints, it improves pronunciation in novel languages. Scaling the model to large-resource conditions with more speakers per accent remains the subject of future work.
Figure 1: Comparing speaker cosine similarity and CER of considered disentanglement strategies for every accent.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Model Pair** & **CMOS** \\ \hline RT Final vs RT Base & \(0.300\pm 0.200\) \\ RM Final vs RM Base & \(0.750\pm 0.189\) \\ RM Final vs RT Final & \(0.733\pm 0.184\) \\ RM Final vs RM Accented & \(0.025\pm 0.199\) \\ RM Final vs T2 & \(1.283\pm 0.144\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: CMOS for speaker timbre similarity.
Figure 2: CMOS per accent for model pairs under consideration. |
2302.09334 | Eco-evolutionary Dynamics of Non-episodic Neuroevolution in Large
Multi-agent Environments | Neuroevolution (NE) has recently proven a competitive alternative to learning
by gradient descent in reinforcement learning tasks. However, the majority of
NE methods and associated simulation environments differ crucially from
biological evolution: the environment is reset to initial conditions at the end
of each generation, whereas natural environments are continuously modified by
their inhabitants; agents reproduce based on their ability to maximize rewards
within a population, while biological organisms reproduce and die based on
internal physiological variables that depend on their resource consumption;
simulation environments are primarily single-agent while the biological world
is inherently multi-agent and evolves alongside the population. In this work we
present a method for continuously evolving adaptive agents without any
environment or population reset. The environment is a large grid world with
complex spatiotemporal resource generation, containing many agents that are
each controlled by an evolvable recurrent neural network and locally reproduce
based on their internal physiology. The entire system is implemented in JAX,
allowing very fast simulation on a GPU. We show that NE can operate in an
ecologically-valid non-episodic multi-agent setting, finding sustainable
collective foraging strategies in the presence of a complex interplay between
ecological and evolutionary dynamics. | Gautier Hamon, Eleni Nisioti, Clément Moulin-Frier | 2023-02-18T13:57:27Z | http://arxiv.org/abs/2302.09334v3 | # Eco-evolutionary Dynamics of Non-episodic Neuroevolution in Large Multi-agent Environments
###### Abstract.
Neuroevolution (NE) has recently proven a competitive alternative to learning by gradient descent in reinforcement learning tasks. However, the majority of NE methods and associated simulation environments differ crucially from biological evolution: the environment is reset to initial conditions at the end of each generation, whereas natural environments are continuously modified by their inhabitants; agents reproduce based on their ability to maximize rewards within a population, while biological organisms reproduce and die based on internal physiological variables that depend on their resource consumption; simulation environments are primarily single-agent while the biological world is inherently multi-agent and evolves alongside the population. In this work we present a method for continuously evolving adaptive agents without any environment or population reset. The environment is a large grid world with complex spatiotemporal resource generation, containing many agents that are each controlled by an evolvable recurrent neural network and locally reproduce based on their internal physiology. The entire system is implemented in JAX, allowing very fast simulation on a GPU. We show that NE can operate in an ecologically-valid non-episodic multi-agent setting, finding sustainable collective foraging strategies in the presence of a complex interplay between ecological and evolutionary dynamics. 1
Footnote 1: We provide videos that show the real-time behavior of our system in a companion website ([https://sites.google.com/view/non-episodic-neuroevolution-in](https://sites.google.com/view/non-episodic-neuroevolution-in)) as well as a repository ([https://github.com/flowersteam/EcoEvolax](https://github.com/flowersteam/EcoEvolax)) containing code for reproducing our experiments.
## 1. Introduction
There are striking differences in how adaptation operates in biological versus artificial systems. In Artificial Intelligence (AI), the most common approach is _performance-driven_. The main assumption is that intelligence must be implemented in a structured cognitive architecture (integrating e.g. control, learning and memory mechanisms) which is optimized (using machine learning methods) through pre-defined objective functions (Safar et al., 2016; Goyal et al., 2017; Goyal et al., 2017). The proposed methods are evaluated in benchmarks designed to capture various aspects of intelligence. For example, Chollet (Chollet, 2015) defines intelligence as _a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty_ and proposes a benchmark to evaluate it inspired by psychometric intelligence tests and called the _The Abstraction and Reasoning Corpus (ARC)_. The rise of deep neural networks as powerful function approximators has strongly revived this approach by allowing key advances in e.g. representation learning and reinforcement learning in high-dimensional spaces (Hamilton et al., 2017).
In contrast, biological adaptation seems to be better characterized by the notion of open-endedness (the continual generation of increasingly diverse organisms) than by the notion of performance. While the popular concept of the _survival of the fittest_ suggests that biological evolution is driven by performance, this concept actually has little grounding in evolutionary theory (Nakamura et al., 2016). An important paradigm shift is gaining traction in evolutionary biology, recognizing the crucial role of eco-evolutionary feedbacks as a main driver of evolution. The _extended evolutionary synthesis_(Goyal et al., 2017; Goyal et al., 2017) considers that, in the standard evolution theory as well as its modern synthesis, _too much causal significance is afforded to genes and selection, and not enough to the developmental processes that create novel variants, contribute to heredity, generate adaptive fit, and thereby direct the course of evolution_. It recognizes important feedback effects in terms of _constructive development_, i.e. the ability of an organism to shape its own developmental trajectory by constantly responding to, and altering, internal and external states, as well as of _reciprocal causation_, i.e. that developing organisms are not solely products but, by modifying their niche and therefore its associated fitness landscape, are also causes of evolution.
Following a similar paradigm shift, a recent trend in AI is increasingly recognizing the importance of the reciprocal influence between adaptation and environmental dynamics (Goyal et al., 2017; Goyal et al., 2017; Nakamura et al., 2016). This approach, which we can qualify as _complexity-driven_ (in opposition to the _performance-driven_ approach mentioned above), considers intelligence as the emergent product of adaptive systems interacting with complex environmental dynamics. There are two main propositions here. Some contributions study how competition and cooperation pressures in populations of co-adapting agents can result in a behavioral arms race where each agent has to continuously improve its skills against those of other agents, an approach called _multi-agent autocurriculum_(Goyal et al., 2017). Other contributions study how learning algorithms themselves can be meta-learned for operating in a diversity of environments. Clune (Goyal et al., 2017) calls this approach _AI-Generating Algorithms (AI-GA)_, with three main pillars: _(1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments_. In both propositions (autocurriculum and AI-GA), it is the complexity of the environment (either through the presence of other co-adapting agents or through its intrinsic diversity) that drives the ability to continuously acquire new skills and generalize them in novel environments. In other words, if the performance-driven approach attempts to reverse-engineer the brain (or at least its main functions), the complexity-driven approach instead attempts to reverse-engineer the environmental conditions that lead to intelligence.
Both performance-driven and complexity-driven approaches, however, still differ crucially from how adaptation occurs in the natural world. Even complexity-driven approaches are almost exclusively evaluated in terms of their convergence toward efficient policies on evaluation tasks (e.g. their performance in board games against humans (Sundundhi et al., 2017), or in generalization to novel test environments (Sundhi et al., 2017)). This is especially due to the fact that they mostly rely on the reinforcement learning (RL) framework. A central assumption in RL is that the objective of an artificial agent must be to learn a task (or a set of tasks), that these tasks should be defined as reward functions, and that those rewards are provided by the environment. From a biological perspective, however, the environment does not contain any reward whatsoever. Rewards instead, if at all, result from the agent's own physiology and self-regulation and have emerged from evolution as a way to guide learning and exploration (see e.g. the metaphor of evolved stick/carrot mechanisms in (Sundhi et al., 2017)). Second, the standard training paradigm in RL is episodic, i.e. the environment is regularly reset to its initial conditions. While this procedure has the benefit of facilitating training from a machine learning perspective (Sundhi et al., 2017), it strongly differs from natural settings where environments are persistent, i.e. where the behavior of agents affects the environment in which the next generations will further evolve and learn (see however (Beng et al., 2017) for a recent attempt at studying ecologically valid non-episodic RL settings). Episodic training in RL prevents the study of both niche construction and eco-evolutionary feedback effects, which require that populations alter their environment and that those changes in the environment influence the subsequent evolution of the population (Sundhi et al., 2017). Contributions in automatic curriculum learning (Sundhi et al., 2017) study such feedback effects but focus on how to adaptively sample novel environments of increasing complexity, using episodic training, with the explicit objective of improving an agent's learning performance.
The main objective of this paper is to propose a method for studying large-scale eco-evolutionary dynamics in agent-based simulations with the following properties:
_Non-episodic learning._ We prevent any environment or population reset during a simulation, which leads to continuous environmental and population dynamics.
_Bi-level adaptation._ Agents' behavior is controlled by neural networks whose weights are optimized using neuroevolution (Sundhi et al., 2017). Each network contains a memory component (LSTM), which enables adaptation within the agent's lifetime in the absence of weight updates. Thus the evolutionary process can be viewed as an outer loop that optimizes the ability of agents to adapt.
_Physiology-driven death and reproduction._ There is no notion of rewards; agents are instead equipped with a physiological system modulating their energy level according to the resources they consume, in a non-linear way. At the evolutionary scale, agents reproduce as long as they are able to maintain their energy level within a reasonable range and die if they do not manage to maintain this level above a certain threshold. The population size can therefore vary with time. This feature also brings the selection mechanism closer to minimal criterion selection (Beng et al., 2017), where agents are selected as long as their fitness is above a certain threshold, than to fitness-based selection, where agents are selected based on their ability to maximize a performance metric.
_Environment with complex intrinsic dynamics._ Due to the lack of resets, it is important that the environment exhibits dynamics that will foster learning independently of the behavior of agents. For this aim we model our environment after common-pool resource (CPR) appropriation problems, where a group of agents competes for finite resources. We extend an existing environment model of CPR appropriation (Sundhi et al., 2017) with the presence of multiple niches. In our proposed environment, resources regrow proportionally to the density of nearby resources, with different regrowth rates in different regions of the environment.
Leveraging the GPU parallelization allowed by the JAX programming framework (Beng et al., 2017), we run large-scale continual simulations in environments with approximately 100,000 cells and thousands of agents (notably, training such a population until complex dynamics and adaptive behavior can be observed requires about 10 minutes).
From the perspective of neuroevolution, our empirical study aims at answering the following questions: a) _can we realistically apply neuroevolution in multi-agent environments with thousands of agents?_ b) _does a selection mechanism that allows agents to reproduce locally, without requiring generational resets, based on a minimal criterion suffice?_ c) _does evolving networks in a multi-agent setting lead to the emergence of adaptation mechanisms?_ From the perspective of multi-agent cooperation, our study targets the questions: a) _can we simulate systems with complex eco-evolutionary dynamics where populations solving a CPR problem exhibit realistic behaviors?_ b) _does evolving under a minimal criterion enable sustainability?_ In Section 4 we answer these questions in the affirmative. Before that, in Section 2, we explain why these questions matter in our review of related fields and, in Section 3, describe how we modeled our system.
## 2. Background
### Neuroevolution
Neuroevolution draws inspiration from natural evolution to create agents that learn to adapt through an evolutionary process rather than gradient-based optimization (Sundhi et al., 2017). In a surprise to many, this simple process of selection and random mutations has recently performed competitively with the state of the art in RL for playing Atari games (Sundhi et al., 2017; Sundhi et al., 2017), and proven powerful in applications such as architecture search, where the non-differentiable nature of the search space prohibits gradient-based methods (Sundhi et al., 2017), and meta-learning, where the evolutionary process is conceived as an outer optimization loop that controls the intra-life learning plasticity of agents (Sundhi et al., 2017). Multi-agent environments, which are particularly promising for neuroevolution as they naturally entail the concept of a population, have been identified as a frontier for this family of methods (Sundhi et al., 2017), arguably due to their computational complexity and challenging multi-agent learning dynamics.
Neuroevolution methods are classically performance-driven: solutions are selected based on their ability to solve a pre-determined task. Complexity-driven approaches, on the other hand, where solutions are chosen based on criteria not directly related to performance, such as novelty, have proven powerful in tasks for which
the objective function is unknown to humans (Srivastava et al., 2017). For a given criterion, neuroevolution methods can also differ on whether solutions survive only if they are ranked high within the population (survival of the fittest) or if their fitness is above a threshold (minimum criterion). The latter category is the least explored (Beng et al., 2015), but has the potential to preserve a larger phenotypic diversity within the population and is believed to be closer to biological evolution.
Finally, neuroevolution methods almost exclusively consider discrete, overlapping generations, at the beginning of which solutions experience mutation and selection simultaneously and the environment is reset to its initial conditions. We refer to this paradigm as episodic, borrowing terminology from RL, where it has recently been proposed to remove environmental resets, as they may introduce the need for human supervision (Beng et al., 2015) and are implausible from a biological perspective (Beng et al., 2015). This setting, termed non-episodic or continuous in RL, is harder to envision in evolution under survival-of-the-fittest, where dividing time into non-overlapping generations ensures that agents compete based on the same time budget.
### Common-pool resource appropriation
CPR tasks abound in natural and human ecosystems: fisheries, grazing pastures and irrigation systems are examples of multi-agent systems where self-interested agents need to reach a sustainable resource appropriation strategy that does not exploit the finite resources. They belong to a class of game-theoretic tasks termed social dilemmas, which exhibit a tension between individual and collective motives: the optimal collective strategy is to forage sustainably, but self-interested agents will cooperate only if others cooperate as well; otherwise they will consume resources until they deplete them, a situation called the Tragedy of the Commons (Cummas, 2010). Ecological properties of these complex systems, such as the spatiotemporal variability of resources and organisms, are believed to play a big part in shaping solutions to CPR problems (Beng et al., 2015). From an ecological perspective, such settings give rise to scramble competition, where organisms of the same species appropriate resources at a rate contingent on their foraging ability, often leading to population bursts and crashes (Srivastava et al., 2017).
With recent advances in RL, computational studies of social dilemmas have managed to operate in simulation environments resembling the ones used in human lab studies, where agents can navigate a grid-world consuming resources (Srivastava et al., 2017; Srivastava et al., 2017). RL agents embody the self-interested trial-and-error learning paradigm and have confirmed our intuition that, when acting in a group, they cannot avoid a Tragedy of the Commons unless they employ some auxiliary mechanism for guarding against exploiters, such as learning to incur punishment (Srivastava et al., 2017) and reputation mechanisms (Beng et al., 2015). These studies, however, remain far from approaching the complexity of real ecosystems, which may comprise thousands of organisms that do not necessarily follow the reward-maximization paradigm.
## 3. Methods
### The environment
Our simulation environment is an extension of the CPR environment (Srivastava et al., 2017; Srivastava et al., 2017) that the AI community has been using to study the emergence of cooperation in groups of self-interested agents: a two-dimensional grid-world where some cells contain resources (in green) that the agents (in red) can collect. Resources grow depending on the presence of other resources around them, which means that there is a positive feedback loop, with reductions in resources leading to further reductions. In addition to resources, the environment may contain walls (in blue) that kill agents trying to traverse them (see Figure 1 for an illustration of our environment).
At each time step \(t\) of the simulation a resource may grow in a cell of the environment at location \((x,y)\) based on the following three processes:
* a neighborhood-dependent probability \(p_{I}(x,y)\) determines the probability of regrowth in a cell based on the number of resources in its neighborhood, \(I\)
* a niche-dependent scaling factor \(c(x)\) is used to scale \(p_{I}\). We employ a latitudinal niching model used in previous studies (Ballall et al., 2017; Ball et al., 2018): the world is divided into \(N\) niches, each one having the form of a horizontal stripe of pixels, so that a cell's niche depends only on its vertical position \(x\). We refer to \(c(x)\) as the climate value of niche \(x\).
* independently of its neighbors and niche, a resource grows with a constant low probability \(c\)

Figure 1. Illustration of our environment: resources (green) may grow in every cell based on a probability that depends on the number of surrounding resources and the niche the cell is placed in. Agents (red) navigate around, consume a resource upon stepping on it and may reproduce or die at any given time step based on their energy level.

Figure 2. An agent’s cognitive architecture: pixel values in its neighborhood are passed through a convolutional neural network with two layers, a memory cell (LSTM) and a feed-forward layer that outputs log probabilities for each of the five actions (a no-op action is included).
By modeling resource generation in this way we ensure that the resource distribution follows the CPR model, that it exhibits additional spatio-temporal variability due to the presence of niches and that resources do not disappear too easily, which can be problematic in reset-free environments. Thus, the combined regrowth rate for a resource \(r\) is:
\[p(x,y)=p_{I}(x,y)\cdot c(x)+c \tag{1}\]
A niche’s climate value is determined by the equation \(c(x)=(\alpha^{x}+1)/(\alpha+1)\), which returns values from 0 to 1 and allows us to control the relationship between niche location and climate, from linear to exponential.
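To make the regrowth rule concrete, here is a minimal JAX sketch of one regrowth step on a binary resource grid. The 8-cell neighborhood kernel, the linear form used for \(p_{I}\) (a `p_max * neighbors` term) and all numeric values are illustrative assumptions on our part; the constant term is named `eps` in the code to avoid clashing with the climate function \(c(x)\).

```python
import jax
import jax.numpy as jnp
from jax.scipy.signal import convolve2d

def regrow(key, resources, alpha=10.0, eps=1e-4, p_max=0.005):
    # resources: (H, W) binary grid, 1 = cell currently holds a resource.
    H, W = resources.shape
    kernel = jnp.ones((3, 3)).at[1, 1].set(0.0)        # count the 8 surrounding cells
    neighbors = convolve2d(resources.astype(jnp.float32), kernel, mode="same")
    p_neigh = p_max * neighbors                        # neighborhood-dependent term p_I(x, y)
    x = jnp.linspace(0.0, 1.0, H)[:, None]             # latitude of each horizontal stripe
    climate = (alpha ** x + 1.0) / (alpha + 1.0)       # climate value c(x) of each niche
    p = p_neigh * climate + eps                        # Eq. (1): combined regrowth probability
    grown = jax.random.uniform(key, (H, W)) < p
    return jnp.maximum(resources, grown.astype(resources.dtype))
```

Since the update is a pure function of the grid and a PRNG key, it can be wrapped in `jax.jit` and scanned over time steps, which is what makes grids of this size affordable on a GPU.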
### The agents
At each time step there is a variable number of agents \(K_{t}\) in the environment, each one characterized by its sensorimotor ability, cognitive capacity and physiology.
_Sensorimotor ability._ An agent observes pixel values at each time step within its visual range (a square of size \([w_{o},w_{o}]\) centered around the agent, as illustrated in Figure 2). The pixel values contain information about the resources, other agents (including their number) and walls. At each time step an agent can choose to stay inactive or execute an action to navigate up, down, right or left.
_Cognitive capacity._ An agent is equipped with an artificial neural network that outputs the action to undertake based on the current observation and whose weights are initialized randomly once at the start of the simulation. Its architecture (illustrated in Figure 2) is minimal: a convolutional neural network, an LSTM cell that equips the agent with memory by enabling policies conditioned on a trajectory of observations, and a linear layer that transforms hidden states to actions.
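A pure-JAX sketch of this policy is shown below. The layer sizes, the absence of bias terms and the three-channel observation encoding (resources/agents/walls) are our own assumptions; the paper only fixes the overall conv-LSTM-linear structure.

```python
import jax
import jax.numpy as jnp

def init_params(key, w_o=5, ch=8, hid=32, n_act=5):
    # Shapes are illustrative; w_o must match the agent's visual range.
    ks = jax.random.split(key, 5)
    rnd = lambda k, shape: 0.1 * jax.random.normal(k, shape)
    return {
        "conv1": rnd(ks[0], (ch, 3, 3, 3)),            # OIHW; 3 input channels
        "conv2": rnd(ks[1], (ch, ch, 3, 3)),
        "lstm_x": rnd(ks[2], (ch * w_o * w_o, 4 * hid)),
        "lstm_h": rnd(ks[3], (hid, 4 * hid)),
        "out": rnd(ks[4], (hid, n_act)),
    }

def policy_step(params, obs, carry):
    # obs: (3, w_o, w_o) local observation; carry: (h, c) LSTM state.
    x = obs[None]                                                   # NCHW with batch of 1
    x = jax.nn.relu(jax.lax.conv(x, params["conv1"], (1, 1), "SAME"))
    x = jax.nn.relu(jax.lax.conv(x, params["conv2"], (1, 1), "SAME"))
    x = x.reshape(-1)
    h, c = carry
    gates = x @ params["lstm_x"] + h @ params["lstm_h"]             # a manual LSTM cell
    i, f, g, o = jnp.split(gates, 4)
    c = jax.nn.sigmoid(f) * c + jax.nn.sigmoid(i) * jnp.tanh(g)
    h = jax.nn.sigmoid(o) * jnp.tanh(c)
    logits = h @ params["out"]                                      # 5 actions incl. no-op
    return jax.nn.log_softmax(logits), (h, c)
```

In the full system one would `jax.vmap` this step over the whole population, so that every agent's forward pass runs as a single batched GPU call.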
_Physiology._ An agent is equipped with a simple physiological model modulating its level of energy: the agent is born with an initial energy value \(E_{0}\) which, at every time step, experiences a linear decrease and, if the agent consumes a resource, is increased by one (see Figure 3 for an illustrative example of how the energy level may change within the lifetime of a hypothetical agent). The energy is also clipped to a max value \(E_{max}\).
### Non-episodic neuroevolution
In neuroevolution (NE) a population of neural networks adapts its weights through random mutations and a selection mechanism that promotes well-performing policies. Under a classical NE paradigm training time is divided into generations, at the end of which agents reproduce to form the next generation (Sandel, 2018; Sandel, 2018).
Our proposed system deviates from this paradigm in two respects:
* agents do not reproduce according to their fitness but according to a minimal criterion on their energy level;
* evolution is non-episodic: upon satisfying certain criteria an agent reproduces locally (the offspring appears on the same cell as its parent), so that agents are added in an online fashion to the population, removing the need for a concept of generation.
_Reproduction._ In order to reproduce an agent needs to maintain its energy level above a threshold \(E_{\text{min}}\) for at least \(T_{\text{repr}}\) time steps. Once this happens the agent produces an offspring and is a candidate for reproduction again. Thus, agents may have a variable number of offspring and do not die upon reproduction. We illustrate this relationship between energy level and reproduction in Figure 3. Reproduction is asexual: an agent's weights are mutated by adding noise sampled from \(\mathcal{N}(0,\sigma)\).
_Death._ An agent dies once its energy level has been below a threshold \(E_{\text{min}}\) for at least \(T_{\text{death}}\) time steps or if its age exceeds a certain value \(L_{\text{max}}\). Once this happens, the agent is removed from the population forever.
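The energy, reproduction and death rules above reduce to a few lines of branch-free JAX, which is convenient because the same function applies to the whole population at once. All numeric defaults below are placeholders (the actual settings are in Appendix A), and the counter-based bookkeeping is our own implementation choice.

```python
import jax
import jax.numpy as jnp

def physiology_step(energy, ate, steps_low, steps_ok, age,
                    decay=0.01, e_max=10.0, e_min=1.0,
                    t_death=50, t_repr=100, l_max=5000):
    energy = jnp.minimum(energy - decay + ate, e_max)   # linear decay, +1 per consumed resource
    below = energy < e_min
    steps_low = jnp.where(below, steps_low + 1, 0)      # consecutive steps below E_min
    steps_ok = jnp.where(below, 0, steps_ok + 1)        # consecutive steps above E_min
    dies = (steps_low >= t_death) | (age + 1 >= l_max)
    reproduces = steps_ok >= t_repr
    steps_ok = jnp.where(reproduces, 0, steps_ok)       # candidate for reproduction again
    return energy, steps_low, steps_ok, age + 1, dies, reproduces

def mutate(key, params, sigma=0.01):
    # Asexual reproduction: offspring weights = parent weights + N(0, sigma) noise.
    leaves, tree = jax.tree_util.tree_flatten(params)
    keys = jax.random.split(key, len(leaves))
    noisy = [w + sigma * jax.random.normal(k, w.shape) for w, k in zip(leaves, keys)]
    return jax.tree_util.tree_unflatten(tree, noisy)
```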
### Evaluation methodology
The classical performance-driven evaluation paradigm in machine learning separates an experiment into two distinct phases: during a _training phase_ the agents learn a policy and during an _evaluation phase_ the agents act without learning in pre-determined tasks. In RL, these tasks were traditionally identical to the ones used in training, as RL agents were too brittle to generalize to unseen conditions (Kleiner et al., 2017). Recent advances in meta-learning have enabled evaluation in a wide diversity of tasks, but require extensive training (Sandel, 2018).
Evaluation in a complexity-driven paradigm is however more nuanced: as we are interested in the system's ability to produce interesting behaviors that hint at open-ended dynamics, evaluating it on a pre-defined set of tasks would defeat our purpose. For this reason we have structured our simulation methodology as follows: we let the population of agents evolve for a long time in a single environment and then study its behavior at a large scale, by monitoring population-wide and terrain-wide metrics, and at a small scale, by focusing on local, interesting patterns of behavior such as individual agents that move in a consistent way or collective migration and foraging patterns. We then form specific hypotheses about the potential drivers of these behaviors and design environments that enable testing these hypotheses. These environments differ from the one used for learning behaviors: they are much smaller and exhibit vastly different population and resource dynamics (we illustrate examples of such environments on the right of Figure 4). This evaluation methodology should be familiar to the ALife and ecology communities and, we anticipate, will become more prevalent in AI research studying open-ended skill acquisition. Borrowing terminology from ecology, we henceforth refer to the large-scale environment as a _natural environment_ and the small-scale ones used for hypothesis-testing as _lab environments_.

Figure 3. Illustration of how the energy evolves within the lifetime of an agent: the agent is born with maximum energy, which is linearly decreased at every time step and increased by one once the agent consumes a resource. The energy level determines reproduction and death: if it falls below \(E_{min}\) for \(T_{death}\) time steps the agent dies, and if it stays above \(E_{min}\) for \(T_{reproduce}\) time steps the agent produces an offspring and may reproduce again.
## 4. Results
We will now study the evolution of a population in our proposed system and probe certain quantities during evolution. Note that this system required some tuning of the hyperparameters in order to find a stable environment, as exponential growth of both food and population can easily lead to collapse (and indeed did, after several generations, in 3 out of the 5 seeds launched). We will first present a more detailed analysis of seed 1 and then explain some key differences with seed 2.
Details on the hyperparameters characterizing the natural environment can be found in Appendix A, and an explanation of how the metrics were implemented and how statistical significance was tested for in Appendix B. Simulating the environment with a large number of agents for \(10^{6}\) time steps took 20 minutes on a single GPU, thanks to JAX parallelization.
### Eco-evolutionary dynamics
In this simulation, the evolution of the population is deeply interconnected with the evolution of resources. For example, we can see that the population size at early generations seems to follow, with some delay, the amount of resources present in the environment (Fig 5). This follows the intuition that when resources are more abundant, the agents survive more easily and so reproduce more easily, which in turn leads to an increase of the population. This increase in population leads to overconsumption of resources in the environment, which in turn makes it harder to survive, decreasing the population size. Lastly, the population decrease leads again to an increase in the amount of resources (as there are fewer agents eating), beginning a new cycle. This dynamic is highlighted in the huge drop in population size at time step 55000 in Figure 5.
a lot of diversity which might survive for some time (as the environment is easier).
### Large-scale trends
At the large scale several phases can be seen in the evolution of agents and the environment.
#### 4.2.1. Population size and life expectancy rise and plateau
At the very beginning, in the first phase A (fig 6.A), the environment contains plenty of resources, which leads to an increase in the population. In a second phase (fig 6.B), the population seems to start to plateau while the amount of resources is still decreasing. This decrease in resources stops in phase C.
During phases A, B and C, the life expectancy of the agents increases (fig 6.a), suggesting that the agents are becoming better even though the environment is changing. Life expectancy starts to plateau in phase D, where the environment seems to reach a more or less stable state on some metrics.
#### 4.2.2. Decrease in the amount of resources: A near tragedy-of-the-commons
The decrease in the amount of resources in the environment at the beginning (fig 6.B) seems to indicate that the evolving population as a whole depletes the resources in a greedy way, even though more resources mean a higher respawn of resources. The evolutionary path therefore seems to start by evolving a population that heads towards the tragedy of the commons (which is here dampened by the fact that there is spontaneous regrowth of resources). This is confirmed by looking at the environment after some time (fig 7.a), where we can see that there are only a few patches of resources in some corners of the map, while the majority of the map is constantly depleted. This suggests that at least a local tragedy of the commons happens in our simulation.
#### 4.2.3. Coexistence of agents with different movement dynamics
During phase A, we can see that the beginning of learning with a lot of resources leads to agents learning to move a lot, as most of the population adopts this strategy (fig 6.d.A). Then, in phase C, where the amount of resources seems to plateau, we can see an increase in the number of low-movement individuals (fig 6.C), which might exploit certain spots of resources, even though agents that move a lot are still present.
From this point on, these two extreme strategies coexist in the environment (fig 6.d). These extreme behaviors seem to correspond to two distinct types of agents: high-movement individuals are agents with an "opportunistic traveler" strategy, as they travel mostly in straight lines but exploit resource spots (especially from spontaneous regrowth) as soon as they see them. On the other hand, the low-movement individuals seem to be agents that exploit the regrowth of resources by staying at the same interesting place (with resources around) and waiting for resources to spread. We qualify this waiting for resources as a sustainable strategy, as agents do not consume resources directly but rather keep them as a reliable source of new resource respawns for longer-term survival. We refer to video 1.a of the companion website for a visualization of these behaviors and to the next subsection for a more detailed and controlled analysis of the behavior of agents.
Figure 6. Metrics on seed 1: a) life expectancy of agents, b) total number of resources present in the environment, c) size of the population and d) percentage of individuals with different amounts of movement
#### 4.2.4. Diversity of eco-evolutionary paths
We will now study some differences between seeds 1 and 2.
In seed 1, sustainable agents and opportunistic travelers coexist during the whole evolution (fig 6.D), while seed 2 has a majority of opportunistic travelers and some sparse periods with sustainable behavior (fig 7.C). This may be explained by differences in the environment induced by the agents' behavior. In fact, seed 1 displays some areas with big patches of resources (especially in the corners) (fig 7.a) that sustainable agents can easily take advantage of. In seed 2 (fig 7.b), on the other hand, the map is completely depleted of resource patches, which only allows agents to sustain themselves on spontaneous regrowth at random spots of the map; this might explain why there are so many opportunistic travelers and nearly no sustainable behavior. The peaks of sustainable behavior we can see in seed 2 might be explained by time steps where some spots of food were left alone for some time, so that bigger patches of resources emerged, which might have favored some switches in behavior. See Appendix C.2 for more details on seed 2 and Videos 1.a and 1.b of the companion website for a better visualization of the dynamics and behavior of agents.
The (small) diversity of evolutionary and environmental paths between the two seeds we present is also an interesting feature of such eco-evolutionary simulations.
### Evaluation in lab environments
How do the agents adapt their foraging behavior at an evolutionary and intra-life timescale to maximize their reproduction rate? In the natural environment we saw that both population size and the spontaneous regrowth of resources may contribute to avoiding resource depletion. At an evolutionary scale the population may adapt by regulating its size and updating its weights. But is it possible that the agents learned to adapt to different conditions they encounter in their lifetime in order to forage both efficiently and sustainably? This is the question the following simulations in the lab environments we discussed in Section 3 aim to address.
#### 4.3.1. Does the density of resources affect agents' greediness?
_Set-up_. There are three lab environments, with a single agent that cannot reproduce and resource regeneration deactivated, that differ in the amount of initial resources: the low-resources environment has 10 resources, the medium-resources environment 20 and the high-resources environment 60 (see on the right of Figure 4 for an illustration of the low- and high-resources environments). To measure the greediness of agents we introduce the following metric \(G\): we divide the simulation time into fixed windows of 20 time steps and count the windows in which the agent has at least one resource in its field of view (let us denote this number by \(T_{r}\)) and the windows during which the agent consumed at least one resource (denoted by \(C_{r}\)), so that \(G=C_{r}/T_{r}\). This metric makes comparisons of the greediness of agents across the three tasks fairer, as it disentangles lack of consumption due to lack of resources from lack of consumption due to sustainable behavior. We compute this metric by randomly sampling 50 agents out of the natural-environment population close to the end of the simulation and performing 10 random trials for each sampled agent in each of the three environments. To quantify the effect of the density of resources we perform statistical tests comparing the greediness of each agent in the three tasks.
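Computed from per-time-step logs, the metric looks as follows; the two boolean arrays and their names are our own logging convention, the paper only defines \(G\) itself.

```python
import jax.numpy as jnp

def greediness(saw_resource, consumed, window=20):
    # saw_resource, consumed: boolean arrays of shape (T,), one entry per time step.
    T = (saw_resource.shape[0] // window) * window            # drop a trailing partial window
    saw = saw_resource[:T].reshape(-1, window).any(axis=1)    # windows with a visible resource
    ate = consumed[:T].reshape(-1, window).any(axis=1)        # windows with a consumption
    T_r = saw.sum()
    C_r = (ate & saw).sum()
    return jnp.where(T_r > 0, C_r / T_r, jnp.nan)             # G = C_r / T_r
```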
Our analysis showed that agents exhibit different qualitative behaviors that can be grouped into two types: a) agents for which no statistically significant differences appear between tasks. By monitoring their behavior in real time we see that some of these agents initially consume a resource and then move away towards the environment's border, where they stay (see an example video in the supplementary site). These agents correspond to the _opportunistic travelers_ that we encountered in the natural environment and do not exhibit resource-dependent adaptation. b) agents for which there are statistically significant differences between the low-resources and high-resources environments, with greediness in low-resource environments being higher. Overall, 9 out of the 50 agents exhibited this behavior (we illustrate greediness across tasks for one of these agents in Figure 8), which we refer to as _sustainable foragers_. These agents have learned not to over-consume resources when these are abundant, but stay close to them to consume them later and take advantage of their higher spread rate. But, having grown in a competitive environment where resources may disappear, these agents consume them immediately when they are scarce. Even in this case, however, they eventually stop consuming, which prevents a complete extinction of resources (see videos for examples of how the behavior of the agent analyzed in Figure 8 differs between the low-resources and high-resources environments).
#### 4.3.2. Does peer pressure lead to greediness?
_Set-up_. We employ the same three environments but now allow agents to reproduce. This means that, after \(T_{repr}=20\) time steps, new agents will appear that will compete for resources with the original agent. Our hypothesis here is that the presence of others will make agents more greedy. To test this, we measure efficiency \(E\) as the average amount of resources the group consumes during an evaluation trial, averaged across 50 agents and 10 trials. We then compare the difference in performance between the previous set-up (no reproduction) and the current one, to observe whether there is a population-wide effect.
Figure 7. Left: diversity of environments between the two seeds (at time step 600,000, zoomed on the bottom-right corner); Right: percentage of the population with different amounts of movement for seed 2

As Figure 9 illustrates, we observed a large change in the foraging efficiency of the agents when reproduction was on. In all three tasks, efficiency increased by a statistically significant amount, which indicates that the sustainable foragers increased their greediness under peer pressure. However, we observed that, after an initial increase in resource consumption at the appearance of new agents, the group slows down again and its members tend to disperse and stay close to resources without consuming them (see videos 7 and 8 for an illustration of this behavior).
## 5. Discussion
Our empirical study demonstrates that neuroevolution can operate in large multi-agent environments, lead to efficient behaviors even in the absence of episodic survival-of-the-fittest, and help evolve agents that exhibit adaptation within their lifetime without requiring weight updates. Specifically, with regard to the latter, we identified agents that change their policy depending on resource density and the presence of other agents. From an ecological perspective, our computational study shows that agents selected based on a minimal criterion learn sustainable behaviors and that the population exhibits dynamics that resemble those of natural populations, such as population size oscillations. We observed many interesting emerging examples of collective and individual adaptation:
* Population size exhibits bursts and crashes that are correlated with the density of resources.
* The system goes through phases related to the sustainability of the agents' foraging behavior: resources and population size initially grow until over-population leads to near-extinction of resources which creates a drive for agents to forage sustainably.
* The sustainable population exhibits diversity in individual behaviors: some agents specialize in long-distance travel, opportunistically consuming resources they find on their way, while others forage locally, staying close to resources to take advantage of the spread of resources and consuming sporadically to avoid death.
* Agents influence each other's behavior: agents that forage sustainably when alone temporarily increase their consumption when others enter their field of view and then revert back to consuming less.
Our empirical study has not addressed all possible research questions one can ask about our system, due to its inherent complexity. For example, future hypotheses can focus on whether the memory component of the agent's cognitive architecture contributes to the observed intra-life adaptation. Also, our current model is limited in some respects that we plan to expand on in future work: a) movement does not incur energy costs, which could lead to more sustainable behavior by limiting long-distance traveling; b) the ecological dynamics of the environment can be further complexified by investigating more niching models, types of resources and sources of danger (see e.g. (Srivastava et al., 2017)); c) we could study the evolution of intrinsic reward functions that may lead to more explorative behavior (Srivastava et al., 2017).
In the past, ecologists have hinted at the limitations of an anthropocentric view of intelligence (Bradner et al., 2016): if we search for intelligence by looking at performance metrics only in tasks that we excel at, then we will inevitably miss a big part of the natural kingdom. Our study hints at a similar conclusion for artificial agents: evolving agents in natural environments with complex spatiotemporal dynamics in the absence of rewards, and examining their behavior in toy lab environments, may bring us closer in our quest for open-ended behavior in artificial systems.
Figure 8. Greediness of a sustainable forager across evaluation environments that differ in the amount of resources.

Figure 9. Average efficiency across the population for different density levels with reproduction activated and deactivated. Activating reproduction leads to increased resource consumption.

_Acknowledgments._ This research was partially funded by the French National Research Agency (https://anr.fr/, project ECOCURL, Grant ANR-20-CE23-0006). This work also benefited from access to the HPC resources of IDRIS under the allocation 2020-[A0091011996] made by GENCI.
|
2310.01579 | Titration in Canonical and Grand-Canonical Ensembles | We discuss problems associated with the notion of pH in heterogeneous
systems. For homogeneous systems, standardization protocols lead to a well
defined quantity, which although different from Sørensen's original idea of
pH, is well reproducible and has become accepted as the measure of the
``hydrogen potential". On the other hand, for heterogeneous systems, pH defined
in terms of the chemical part of the electrochemical activity is
thermodynamically inconsistent and runs afoul of the Gibbs-Guggenheim principle
that forbids splitting of the electrochemical potential into separate chemical
and electrostatic parts -- since only the sum of two has any thermodynamic
meaning. The problem is particularly relevant for modern simulation methods
which involve charge regulation of proteins, polyelectrolytes, nanoparticles,
colloidal suspensions etc. In this paper we show that titration isotherms
calculated using semi-grand canonical simulations can be very different from
the ones obtained using canonical reactive Monte Carlo simulations. | Amin Bakhshandeh, Yan Levin | 2023-10-02T19:16:20Z | http://arxiv.org/abs/2310.01579v1 | # Titration in Canonical and Grand-Canonical Ensembles
###### Abstract
We discuss problems associated with the notion of pH in heterogeneous systems. For homogeneous systems, standardization protocols lead to a well defined quantity, which although different from Sorensen's original idea of pH, is well reproducible and has become accepted as the measure of the "hydrogen potential". On the other hand, for heterogeneous systems, pH defined in terms of the chemical part of the electrochemical activity is thermodynamically inconsistent and runs afoul of the Gibbs-Guggenheim principle that forbids splitting of the electrochemical potential into separate chemical and electrostatic parts - since only the sum of two has any thermodynamic meaning. The problem is particularly relevant for modern simulation methods which involve charge regulation of proteins, polyelectrolytes, nanoparticles, colloidal suspensions etc. In this paper we show that titration isotherms calculated using semi-grand canonical simulations can be very different from the ones obtained using canonical reactive Monte Carlo simulations.
## I Introduction
The concept of pH was first introduced by Sorensen [1] in 1909. The original definition referred to pH as \(-\log_{10}[c_{\text{H}^{+}}/c^{\odot}]\), where \(c^{\odot}=\)1M is the standard concentration. Since in practice pH is measured using electrodes, Sorensen later redefined pH in terms of the activity of hydronium ions, pH\(=-\log_{10}\left[a_{+}/c^{\odot}\right]\), which was thought to be related to the electromotive force (EMF) measured by the system of electrodes through the Nernst equation. Later, Linderstrom-Lang recognized that the experimental procedure used to measure pH did not lead exactly to pH\(=-\log_{10}[c_{\text{H}^{+}}/c^{\odot}]\), nor to pH\(=-\log_{10}\left[a_{+}/c^{\odot}\right]\), but to some other quantity which, due to its convenience, became widely accepted as the measure of the hydrogen potential [2]. The problem with a direct measurement of pH is that the separation of the electrochemical potential into chemical and electric potentials is purely arbitrary, since only the sum of the two has any physical meaning. The Gibbs-Guggenheim principle states that the difference of electrostatic potential between two points located in regions of different chemical composition can not be measured [3; 4]. As early as 1899 Gibbs wrote in a letter [5]: "Again, the consideration of the electrical potential in the electrolyte, and especially the consideration of the difference of potential in electrolyte and electrode, involves the consideration of quantities of which we have no apparent means of physical measurement, while the difference of potential in pieces of metal of the same kind attached to the electrodes is exactly one of the things which we can and do measure". In 1929, Guggenheim [6] formalized the observation of Gibbs by stating that "the decomposition of the electrochemical potential, into the sum of a chemical term \(\mu\) and an electrical term \(e\psi\) is quite arbitrary and without physical significance. In other words the chemical potential, or the activity of a single ion, and the electric potential difference between two points in different media are conceptions without any physical significance." [7]
The confusion between exactly what can and is being measured has led to a proliferation of "local" pH measurements in the soft matter and biophysics literature. The problem has become particularly acute, since the modern simulation methods employed to study charge regulation of protein and polyelectrolyte solutions often rely on constant pH (cpH) algorithms, which are intrinsically semi-grand canonical [8; 9; 10]. In such a procedure, pH is specified inside a reservoir of acid and salt, and the protonation state of a protein, polyelectrolyte, or colloidal suspension is calculated using a suitably constructed Monte Carlo algorithm that must respect detailed balance. Since only microions are exchanged between the simulation box and the reservoir, the two must be at different electrostatic potentials. For an experimental system in which a colloidal suspension is separated from an external reservoir of acid and salt, this is known as the Donnan potential. Traditionally, pH is defined in terms of the chemical part of the electrochemical potential. However, since the Gibbs-Guggenheim principle forbids us from breaking up the electrochemical potential into separate electrostatic and chemical contributions, such a definition appears to be thermodynamically unacceptable.
In practice, pH is measured using the EMF between a glass or hydrogen electrode and a saturated calomel (reference) electrode. Consider a colloidal suspension separated from a reservoir by a semipermeable membrane that allows free movement of ions, but restricts colloidal particles to the system's interior, see Fig. 1. If the calomel reference electrode is placed in the reservoir and the EMF is measured between it and the hydrogen electrode, one finds a constant EMF independent of the position of the hydrogen electrode, either in the reservoir or in the system's interior, see Fig. 1, panels (a) and (b). This is a clear indication of a constant electrochemical potential of hydronium ions across both the system and the reservoir. On the other hand, if the two electrodes are placed inside the colloidal suspension, the EMF will depend on the distance of the reference electrode from the membrane, see Fig. 1, panel (c). Clearly, such a measurement would result in a thermodynamically ill-defined "local" pH.
Such experimental measurements were performed by Teorell et al. [11] more than 85 years ago. Already at that time he noted the difficulties with the usual definition of pH when applied to heterogeneous systems. One can argue that if both electrodes are placed deep into the system, far away from the membrane, the resulting EMF will stabilize and will allow us to define the system pH, which will be different from that of the reservoir. This is correct, but does not resolve the underlying problem arising from the violation of the Gibbs-Guggenheim principle [11]. For example, consider now a colloidal suspension in a gravitational field [12; 13; 14]. Because of their finite buoyant mass, the colloidal column will become progressively rarefied with height - the characteristic gravitational length of colloidal particles is between micrometers and millimeters. On the other hand, on experimental length scales, the ionic buoyant mass is negligible. Therefore, the top part of the suspension will be composed of a pure acid-salt electrolyte, with a well defined pH, since according to the Gibbs-Guggenheim principle, this is a region of uniform chemical composition in which one can measure the electrostatic potential difference between two points. In the present case, the gravitational field plays the role of a membrane that establishes the inhomogeneity of the suspension. This results in a height dependent Donnan potential \(\varphi_{D}(z)\) along the column, which in turn leads to different ionic concentrations at each \(z\). Nevertheless, if we place our reference electrode in the top (colloid-free) portion of the suspension, we will get exactly the same EMF (and consequently the same pH) independent of the placement of the hydrogen electrode inside the colloidal column. On the other hand, if the reference electrode is moved into the colloid-dense region, each different position will lead to a different EMF and, consequently, a different pH. One might argue that if both hydrogen and calomel electrodes are placed at _exactly_ the same height \(z\), the pH obtained using such a measurement will have some physical meaning. Such a proposition, however, once again seems untenable in view of the Gibbs-Guggenheim principle, since only the full electrochemical potential has any thermodynamic meaning. The confusion in the literature is such that, in a paper published some years back, Brezinski wrote: "the uncertainty regarding interpretation of pH readings for colloids has led to the opinion that the pH value of neither the sediment nor the supernatant is very meaningful or useful for characterizing colloids" [15]. Based on the preceding discussion, such a view seems overly pessimistic. While pH in the homogeneous supernatant of a suspension is well defined thermodynamically, in the interior of a highly inhomogeneous suspension it runs afoul of the Gibbs-Guggenheim principle. On the other hand, from a purely theoretical perspective, knowledge of pH in the inhomogeneous part of the suspension is completely irrelevant. Specification of pH and salt concentration in the _homogeneous_ reservoir (supernatant) should be sufficient to calculate the state of protonation of colloidal particles and their density profile, both of which are easily accessible to experimental measurements. In theory - or simulation - one could even calculate the hydronium density profile inside an inhomogeneous suspension; there is, however, no clear connection between this local density of hydronium ions and an extra-thermodynamic quantity such as the "local" pH of an inhomogeneous suspension [14].
When performing classical charge regulation simulations, one has two options - either a _semi-grand canonical_ constant pH (cpH) simulation in which the system is placed in contact with an implicit reservoir of acid and salt [8; 9], or a _canonical_ simulation in which a fixed number of polyelectrolytes, protons, ions, and water molecules are placed inside a simulation box [16; 17]. The two approaches are very different, requiring distinct implementations of the Monte Carlo algorithm to take into account protonation/deprotonation moves [17]. When performing cpH simulations, insertion of a proton into the system is accompanied by a simultaneous insertion of an anion, to preserve the overall charge neutrality. On the other hand, in a canonical simulation, a proton is transferred from a hydronium molecule inside the simulation box to a polyelectrolyte monomer, so that the charge neutrality is always preserved. This requires a completely different implementation of the MC algorithm. Furthermore, in a canonical simulation pH is not an input parameter, and can only be calculated _a posteriori_, after the system has equilibrated. The consistency between the two simulation methods can be tested _a posteriori_. For example, we can run a cpH simulation for a given pH and salt concentration in the reservoir. This will provide us with the average number of protonated groups on the polyelectrolytes, as well as with the average number of ions of each type inside the simulation cell. We can then isolate the system from the reservoir (canonical ensemble), keeping exactly the same number of ions inside the simulation cell as the averages obtained in the cpH simulation. We then strip all the associated protons from the polyelectrolyte and place them randomly (in the form of hydronium ions), together with the other ions, into the simulation cell. We can then run a canonical reactive MC algorithm. Equivalence between ensembles then requires that we obtain _exactly_ the same number of protonated groups as was previously found using the cpH simulation. This is precisely what is observed, showing the consistency of the two simulation methods [17].
The cpH simulations start with a specified value of pH\({}_{gc}\) and salt concentration inside the reservoir. On the other hand, in canonical simulations pH\({}_{c}\) has to be determined _a posteriori_ using the Widom insertion method. If we define pH in the semi-grand canonical system in terms of the total electrochemical potential - corresponding to keeping the calomel electrode inside the reservoir, while the hydrogen electrode is "placed" into the simulation cell - then the system pH\({}_{sys}\) will be the same as that of the reservoir, pH\({}_{gc}\), and will, in general, be different from pH\({}_{c}\) in the canonical system. On the other hand, if we disregard the Gibbs-Guggenheim principle and separate the Donnan potential from the rest of the electrostatic potential, then the pH inside the system will be _different_ from pH\({}_{gc}\) and the _same_ as the canonical pH\({}_{c}\). This situation corresponds to "placing" both the hydrogen and the reference electrode inside the simulation cell of a semi-grand canonical system. In practice, a calculation of the electrochemical potential in a canonical simulation is quite complicated, in particular if pH is large, since the simulation box will then have only very few hydronium ions, resulting in very poor statistics. This has led to the popularization of the thermodynamically poorly defined "local" \(\mathrm{pH}(\mathbf{r})=-\log_{10}[c_{\mathrm{H^{+}}}(\mathbf{r})/c^{\odot}]\) [18]. To avoid these difficulties, and to clearly demonstrate the effect of ensembles on titration isotherms, in this paper we will use a recently developed theory, which was shown to be in excellent agreement with explicit-ion cpH simulations [19].
## II Theory
### Semi-Grand Canonical Titration Theory
To explore the difference between canonical and grand canonical titration, we will use a cell model, first introduced by S. Lifson and A. Katchalsky, and R. A. Marcus [20; 21], to study polyelectrolyte and colloidal systems at finite volume fractions. The model consists of a colloidal particle of radius \(a=60\) Å, placed at the center \(r=0\) of a spherical cell of radius \(R\), which is determined by the volume fraction of the colloidal suspension, \(\eta_{c}=a^{3}/R^{3}\). The cell is assumed to be in contact with a reservoir of acid and 1:1 salt at concentrations \(c_{a}\) and \(c_{s}\), respectively. All ions are treated as hard spheres of diameter \(d=4\) Å with a point charge located at the center. The nanoparticle has \(Z=600\) carboxylic groups of pK\({}_{a}=5.4\), uniformly distributed over its surface. Ref. [19] showed that the average number of deprotonated groups of a colloidal particle is given by:
\[Z_{eff}=\frac{Z}{1+10^{-\mathrm{pH}_{gc}+\mathrm{pK}_{a}}\,e^{-\beta(q\varphi_{0}+\varphi_{disc}-\mu_{sol})}}, \tag{1}\]
where \(q\) is the proton charge. The pH in the reservoir is determined by pH\({}_{gc}=-\log_{10}\left[a_{\mathrm{H^{+}}}/c^{\odot}\right]\), with the activity of hydronium ions in the reservoir \(a_{\mathrm{H^{+}}}=c_{\mathrm{H^{+}}}\exp(\beta\mu_{ex})\), where \(\mu_{ex}=\mu_{CS}+\mu_{MSA}\) is the excess chemical potential. The non-ideality effects due to Coulomb interactions are taken into account at the mean spherical approximation (MSA) level, while the hard core contribution is calculated using the Carnahan-Starling equation of state [22; 23; 24; 25; 26; 27; 28; 29; 30; 31]:
\[\beta\mu_{MSA}=\frac{\lambda_{B}\left(\sqrt{1+2\kappa d}-\kappa d-1\right)}{d^{2}\kappa},\qquad\beta\mu_{CS}=\frac{8\eta-9\eta^{2}+3\eta^{3}}{\left(1-\eta\right)^{3}}, \tag{2}\]
Figure 1: Colloidal crystal separated from a reservoir of acid and salt by a semi-permeable membrane. Panels (a), (b) and (c) show the different locations of the (reference) calomel (C) electrode and the hydrogen (H) electrode. Note that the EMF readings in panels (a) and (b) are the same, while in panel (c) it is different.
Figure 2: Colloidal suspension in a gravitational field. The top portion is a homogeneous, colloid free electrolyte solution, where pH has a well defined thermodynamic meaning.
where \(\eta=\frac{\pi d^{3}}{3}c_{t}\), \(c_{t}=c_{s}+c_{a}\) is the total concentration of salt and acid, \(\lambda_{B}=q^{2}/\epsilon_{w}k_{B}T=7.2\) Å is the Bjerrum length, and \(\kappa=\sqrt{8\pi\lambda_{B}c_{t}}\) is the inverse Debye length. The surface groups are characterized by \(\mathrm{pK}_{a}=-\log_{10}[\mathrm{K}_{a}/c^{\odot}]\), where \(\mathrm{K}_{a}\) is the acid dissociation constant of the surface groups and \(\varphi_{0}\) is the mean-field electrostatic potential at the surface titration sites. The ion concentration inside the cell, for \(r\geq a+d/2\), is given by the Boltzmann distribution
\[\rho_{i}(\mathbf{r})=c_{i}e^{-\beta q_{i}\varphi(\mathbf{r})}\,, \tag{3}\]
where \(c_{i}\) is the concentration of ions of type \(i\) in the reservoir. The mean field potential, \(\varphi(r)\), satisfies the Poisson-Boltzmann equation for \(r\geq a+d/2\),
\[\nabla^{2}\varphi(r)=\frac{8\pi q}{\epsilon_{w}}\left(c_{a}+c_{s}\right) \sinh[\beta q\varphi(r)], \tag{4}\]
and the Poisson equation for \(a<r<a+d/2\). The discreteness of surface sites is taken into account self-consistently using the electrostatic potential [19]
\[\beta\varphi_{disc}=-\frac{\lambda_{B}MZ_{eff}}{a\sqrt{Z}}, \tag{5}\]
where \(M\) is the Madelung constant of the two dimensional One Component Plasma (OCP) in a hexagonal crystal state [19; 32]. Finally, \(\mu_{sol}\) is the electrostatic solvation free energy of an isolated charged site:
\[\beta\mu_{sol}=\frac{\lambda_{B}}{2}\int_{0}^{\infty}\frac{k-\sqrt{\kappa^{2}+ k^{2}}}{k+\sqrt{\kappa^{2}+k^{2}}}e^{-kd}dk. \tag{6}\]
Solving numerically the non-linear PB equation, with the boundary condition of vanishing electric field at the cell boundary (charge neutrality) and the colloidal charge determined self-consistently by Eq. (1), we obtain the number of protonated groups for a given \(\mathrm{pH}_{gc}\). Note that at the surface of the cell there is a jump in the electrostatic potential - the reservoir is taken to be at zero potential, while at the cell boundary, \(r=R\), the electrostatic potential is calculated to have a finite value \(\varphi_{D}\). This is the Donnan potential of a suspension that is in contact with a reservoir of acid and salt through a semi-permeable membrane. The titration curves, for a fixed concentration of 1:1 salt inside the reservoir, are presented by the dashed red lines in Figs. 3 and 4 as a function of pH _in the reservoir_.
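For concreteness, the algebraic ingredients of this theory (Eqs. (1), (2) and (6)) can be evaluated with a few lines of JAX. The sketch below assumes concentrations given as number densities in Å\(^{-3}\) (1 M \(\approx 6.022\times 10^{-4}\) Å\(^{-3}\)) and takes the surface and discreteness potentials as inputs in units of \(k_{B}T\); in the full theory those potentials must of course be obtained self-consistently with the PB equation, which is not reproduced here.

```python
import jax.numpy as jnp

LAMBDA_B, D_ION = 7.2, 4.0              # Bjerrum length and ion diameter, in Angstroms

def kappa(c_t):
    return jnp.sqrt(8 * jnp.pi * LAMBDA_B * c_t)        # inverse Debye length

def beta_mu_MSA(c_t):                                   # first half of Eq. (2)
    k = kappa(c_t)
    return LAMBDA_B * (jnp.sqrt(1 + 2 * k * D_ION) - k * D_ION - 1) / (D_ION**2 * k)

def beta_mu_CS(c_t):                                    # second half of Eq. (2)
    eta = jnp.pi * D_ION**3 * c_t / 3
    return (8 * eta - 9 * eta**2 + 3 * eta**3) / (1 - eta)**3

def beta_mu_sol(c_t, n=20000, k_max=50.0):              # Eq. (6), via the trapezoid rule
    kap = kappa(c_t)
    k = jnp.linspace(1e-6, k_max, n)
    f = (k - jnp.sqrt(kap**2 + k**2)) / (k + jnp.sqrt(kap**2 + k**2)) * jnp.exp(-k * D_ION)
    return 0.5 * LAMBDA_B * jnp.sum((f[1:] + f[:-1]) / 2) * (k[1] - k[0])

def z_eff(pH_gc, pKa, beta_q_phi0, beta_phi_disc, c_t, Z=600):
    # Eq. (1), with all potentials already expressed in units of k_B T.
    arg = beta_q_phi0 + beta_phi_disc - beta_mu_sol(c_t)
    return Z / (1 + 10**(pKa - pH_gc) * jnp.exp(-arg))
```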
### Canonical Titration Theory
Suppose we run a cpH simulation from which we calculate the average number of deprotonated groups \(Z_{eff}\), the average number of free hydronium ions, and the average number of sodium and chloride ions inside the cell. We then isolate the cell from the reservoir (canonical ensemble), keeping exactly this number of free ions inside the cell and fixing the colloidal charge at \(Q=-qZ_{eff}\). Since the cell is no longer connected with the external reservoir, there is no Donnan potential at the cell boundary, and the electrostatic potential must be continuous between inside and outside the cell. Since outside the cell \(\phi=0\), we conclude that \(\phi(R)=0\).
At the level of approximation of the present theory, the distribution of hydronium ions inside the cell of a _canonical system_ is given by:
\[\rho_{\mathrm{H^{+}}}\left(\mathbf{r}\right)=\frac{N_{\mathrm{H^{+}}}\,\mathrm{e}^{-\beta q\phi(\mathbf{r})}}{4\pi\int_{a+\frac{d}{2}}^{R}r^{2}\,\mathrm{d}r\,\mathrm{e}^{-\beta q\phi(r)}} \tag{7}\]
where \(N_{\mathrm{H^{+}}}\) is the number of free hydronium ions inside the cell and \(\phi(r)\) is the mean field electrostatic potential.
The electrochemical potential of hydronium inside the cell is:
\[\beta\mu_{c}=\ln\left[\rho_{\mathrm{H^{+}}}(\mathbf{r})\right]+\beta q\phi( \mathbf{r})+\mu_{ex} \tag{8}\]
where \(\mu_{ex}\) is the excess chemical potential due to the electrostatic and steric interactions between the ions, which at the level of the present theory we take to be constant and equivalent to \(\mu_{CS}+\mu_{MSA}\) in the reservoir. Clearly, the fact that the system became disconnected from the reservoir after equilibration does not affect the distribution of hydronium ions inside the cell, which must remain exactly the same as before. The only difference is that the canonical electrostatic potential is shifted from its grand canonical value by the Donnan potential \(\phi(r)=\varphi(r)-\varphi_{D}\), which does not affect the distribution given by Eq.(7). Therefore, the hydronium density profile, Eq.(7), can also be written in terms of the acid concentration \(c_{a}\) in the original reservoir, see Eq. (3), and the Donnan potential:
\[\rho_{\mathrm{H^{+}}}\left(\mathbf{r}\right)=c_{a}\mathrm{e}^{-\beta q\phi( \mathbf{r})-\beta q\varphi_{D}}. \tag{9}\]
Substituting this expression into Eq. (8), we obtain the relation between canonical and semi-grand canonical electrochemical potentials:
\[\mu_{c}=\mu_{gc}-q\varphi_{D}. \tag{10}\]
The activity of hydronium ions inside an isolated suspension is then \(a_{H^{+}}=\exp[\beta\mu_{c}]/c^{\odot}\), so that canonical and semi-grand canonical pH are found to be related by:
\[\mathrm{pH}_{c}=\mathrm{pH}_{gc}+\frac{\beta q\varphi_{D}}{\ln 10} \tag{11}\]
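As a quick numeric check of Eq. (11): at room temperature, a Donnan potential of about \(-59\) mV shifts the pH inside the suspension by roughly one unit relative to the reservoir value; the illustrative value below is an assumption.

```python
import numpy as np

kT_over_q = 0.0257   # thermal voltage at T = 298 K [V]
phi_D = -0.059       # illustrative Donnan potential [V]
print(phi_D / (kT_over_q * np.log(10.0)))   # beta*q*phi_D / ln(10) ~ -1.0
```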
## III Results and Discussion
In Figs. 3 and 4 we present the titration isotherms for colloidal suspensions of various volume fractions and salt concentrations. The red dashed curves correspond to systems in which colloidal particles are separated from the acid-salt reservoir of \(\mathrm{pH}_{gc}\) and \(c_{s}\), by a semi-permeable membrane. On the other hand, the black solid curves
correspond to titrations performed in isolated colloidal suspensions containing a fixed salt concentration \(c_{s}\), indicated in the figures. To calculate these canonical titration curves, the concentration of salt in the reservoir is adjusted to get the desired concentration of salt inside the system, while solving the PB equation with the boundary conditions of vanishing electric field at \(r=R\) and the nanoparticle charge determined by Eq. (1). The pH\({}_{c}\) is then obtained using Eq. (11). We see that for suspensions of high volume fractions and low salt content, the canonical titration curves are very different from their semi-grand canonical counterparts. If the salt content of the suspension increases or the volume fraction of colloidal particles decreases, the difference between the titration isotherms vanishes. This explains why these problems were not previously observed in cpH simulations used to study biologically relevant proteins. Such simulations are usually conducted at physiological concentrations of electrolyte, when the difference between canonical and grand canonical pH vanishes.
## IV Conclusions
After 115 years, the measure of "hydrogen potential" still causes conceptual and practical difficulties. The original idea of Sørensen was to relate pH directly to the concentration of hydronium ions. The complexity of measuring local concentrations of protons led him later to redefine pH in terms of the activity of hydronium ions. This, however, resulted in a whole new set of difficulties, since measurements of individual activity coefficients are not possible by any electrochemical means. What is in fact being measured using hydrogen and calomel electrodes is neither the activity nor the concentration of hydronium ions, but some other quantity [33]. Nevertheless, due to the standardization of such measurements, they have become well accepted by the scientific community [2]. For homogeneous, single-phase systems, the situation, therefore, appears to be reasonably well understood. The difficulties arise when one tries to extend such measurements to heterogeneous systems, such as colloidal suspensions in gravitational fields, or even colloidal lattices in which translational symmetry is broken, and the electrostatic potential is a strongly inhomogeneous function of position. The Gibbs-Guggenheim principle forbids us from splitting the electrochemical potential into separate chemical and electrostatic parts, since only the sum of the two has any thermodynamic meaning [34]. This suggests that the correct definition of activity should involve the full electrochemical potential \(a_{H^{+}}=\exp[\beta\mu]/c^{\odot}\), where \(\mu=\mu_{chem}(\mathbf{r})+q\varphi(\mathbf{r})\). Although both the chemical potential \(\mu_{chem}(\mathbf{r})\) and the _total_ electrostatic potential \(\varphi(\mathbf{r})\) are local functions of position, their sum is constant throughout an inhomogeneous system in which protons are in equilibrium. Such a definition, however, would make the activity of protons - and pH - in heterogeneous systems with Donnan equilibrium the same on both sides of a semi-permeable membrane transparent to H\({}^{+}\), even though the concentrations of hydronium ions on the two sides of such a membrane are different. The price of such a definition would, therefore, be to move the notion of pH even farther from Sørensen's original idea of measuring the local hydronium concentration. The gain, however, would be to make pH a true thermodynamic variable, directly related to the electrochemical potential. The current state of affairs seems to be untenable for heterogeneous systems.
Figure 4: The titration isotherms for canonical and semi-grand-canonical systems of different colloidal volume fractions \(\eta\). Salt concentration is \(c_{s}=1\) mM. For dilute suspensions (low colloidal volume fraction) the difference between ensembles vanishes. Colloidal surface charge density \(\sigma\) is measured in millicoulombs per m\({}^{2}\).
Figure 3: The titration isotherms for canonical and semi-grand canonical ensembles. For an open system the pH refers to the pH in the reservoir, while for a closed system it is for the interior of suspension. Similarly for an open system \(c_{s}\) refers to salt content in the reservoir, while for a closed system it is salt concentration inside the system. At higher salt concentrations the difference between canonical and semi-grand canonical titration curves vanishes. Colloidal volume fraction is \(\eta=11\%\), in all cases. Colloidal surface charge density \(\sigma\) is measured in millicoulombs per m\({}^{2}\). |
2310.16005 | MLFMF: Data Sets for Machine Learning for Mathematical Formalization | We introduce MLFMF, a collection of data sets for benchmarking recommendation
systems used to support formalization of mathematics with proof assistants.
These systems help humans identify which previous entries (theorems,
constructions, datatypes, and postulates) are relevant in proving a new theorem
or carrying out a new construction. Each data set is derived from a library of
formalized mathematics written in proof assistants Agda or Lean. The collection
includes the largest Lean~4 library Mathlib, and some of the largest Agda
libraries: the standard library, the library of univalent mathematics
Agda-unimath, and the TypeTopology library. Each data set represents the
corresponding library in two ways: as a heterogeneous network, and as a list of
s-expressions representing the syntax trees of all the entries in the library.
The network contains the (modular) structure of the library and the references
between entries, while the s-expressions give complete and easily parsed
information about every entry. We report baseline results using standard graph
and word embeddings, tree ensembles, and instance-based learning algorithms.
The MLFMF data sets provide solid benchmarking support for further
investigation of the numerous machine learning approaches to formalized
mathematics. The methodology used to extract the networks and the s-expressions
readily applies to other libraries, and is applicable to other proof
assistants. With more than $250\,000$ entries in total, this is currently the
largest collection of formalized mathematical knowledge in machine learnable
format. | Andrej Bauer, Matej Petković, Ljupčo Todorovski | 2023-10-24T17:00:00Z | http://arxiv.org/abs/2310.16005v1 | # MLFMF: Data Sets for Machine Learning for Mathematical Formalization
###### Abstract
We introduce _MLFMF_, a collection of data sets for benchmarking recommendation systems used to support formalization of mathematics with proof assistants. These systems help humans identify which previous entries (theorems, constructions, datatypes, and postulates) are relevant in proving a new theorem or carrying out a new construction. Each data set is derived from a library of formalized mathematics written in proof assistants Agda or Lean. The collection includes the largest Lean 4 library Mathlib, and some of the largest Agda libraries: the standard library, the library of univalent mathematics Agda-unimath, and the TypeTopology library. Each data set represents the corresponding library in two ways: as a heterogeneous network, and as a list of s-expressions representing the syntax trees of all the entries in the library. The network contains the (modular) structure of the library and the references between entries, while the s-expressions give complete and easily parsed information about every entry. We report baseline results using standard graph and word embeddings, tree ensembles, and instance-based learning algorithms. The MLFMF data sets provide solid benchmarking support for further investigation of the numerous machine learning approaches to formalized mathematics. The methodology used to extract the networks and the s-expressions readily applies to other libraries, and is applicable to other proof assistants. With more than \(250\,000\) entries in total, this is currently the largest collection of formalized mathematical knowledge in machine learnable format.
## 1 Introduction
Applications of artificial intelligence to automation of mathematics have a long history, starting from early approaches based on a collection of hand-crafted heuristics for formalizing new mathematical concepts and conjectures related to them [Lenat, 1977]. In the last decade, there has been a growing interest in formalization of mathematics with _proof assistants_, which verify the formal correctness of
mathematical proofs and constructions, and help automate the tedious parts. The trend is correlated with the interest of machine learning community in aiding formalization efforts with its expertise.
Machine learning methods are often used to address _premise selection_, i.e., recommendation of theorems that are useful for proving a given statement. DeepMath (Irving et al., 2016) proposes using convolutional and recurrent neural networks to predict the relevance of a premise for proving the given statement. While many other approaches (Polu and Sutskever, 2020; Welleck et al., 2022) use transformers and general language models, Paliwal et al. (2020) have shown that taking into account the higher-order structure of logical expressions used in formalizing mathematics can greatly improve the performance of premise selection and automated proving. Indeed, many approaches use graph neural networks to learn from the higher-order structures, e.g., (Wang et al., 2017). More recently, graph neural networks have also been proven useful for explorative, unsupervised approaches to automated theorem proving with reinforcement learning (Bansal et al., 2020; Lample et al., 2022). Some of these approaches address alternative tasks, such as recommending or automatically selecting suitable _proof tactics_, i.e., routines for performing a series of proof steps, applying a decision procedure, or for carrying out proof search.
Data sets of different origins have been used to evaluate the proposed approaches. Welleck et al. (2022) evaluate their approach on a selection of three hundred proofs included in the ProofWiki (pro) library of mathematical proofs written in a combination of natural language and LaTeX. Polu and Sutskever (2020) use a standard library of the Metamath proof assistant. Lample et al. (2022) combine proofs from the Metamath library with proofs from the Mathlib library (Mathlib) of the Lean proof assistant. The latter has also been used for evaluating the approaches in (Han et al., 2022). Wang et al. (2017); Paliwal et al. (2020); Bansal et al. (2020) evaluate their models within the HOL Light proof assistant based on higher-order logic (Harrison, 2009). The formalized proofs in standard HOL libraries have been transformed into a HOLStep data set for machine learning, where examples correspond to more than 2 million steps from \(11\,400\) proofs (Kaliszyk et al., 2017). The training set includes proof steps in context (local hypotheses and the current statement being proved) and the library entry used in the step. Descriptions of the library entries are included in human-readable and machine-readable, tokenized versions. The data set has been recently upgraded to the interactive benchmark environment HoList for training automated proof systems with reinforcement learning (Bansal et al., 2019).
We present a collection of data sets, MLFMF, based on libraries of formalized mathematics encoded in two proof assistants, Agda and Lean. It supports evaluation and benchmarking of machine learning approaches for recommendation systems in the context of formalized mathematics.
We transform each library into a directed multi-graph whose nodes represent library entries (theorems, lemmas, axioms, and definitions), while edges represent the references between them. Consider the example in Table 1. It starts with a definition of the set of natural numbers with two simple constructors that define the first natural number 0 and constructs all the others inductively by asserting that a successor suc(n) of a natural number n is also a natural number. The definition of the addition of natural numbers follows their definition by asserting two simple rules for the left addition of 0 and the left addition of a successor. Note that the definition of \(+\) references the definition of \(\mathbb{N}\). Next, the first lemma establishes the rule for the right addition of zero as the first simple commutativity case.
\begin{table}
\begin{tabular}{l|l}
ID & entry \\
\hline
\(\mathbb{N}\) & \(\mathbb{N}\): \\
 & \(\mathtt{zero}\): \(\mathbb{N}\) \\
 & \(\mathtt{suc}(n)\): \(\mathbb{N}\rightarrow\mathbb{N}\) \\
\hline
\(+\) & \(0+n=n\), for all \(n\in\mathbb{N}\) \\
 & \(\mathtt{suc}(m)+n=\mathtt{suc}(m+n)\), for all \(m,n\in\mathbb{N}\) \\
\hline
Lemma 1 (L1) & \(m+0=m\), for all \(m\in\mathbb{N}\). \\
 & This is proved by induction on \(m\). \\
\hline
Lemma 2 (L2) & \(m+\mathtt{suc}(n)=\mathtt{suc}(m+n)\), for all \(m,n\in\mathbb{N}\). \\
 & This is proved by induction on \(m\). \\
\hline
Theorem (T) & \(m+n=n+m\), for all \(m,n\in\mathbb{N}\). \\
 & This is proved by induction on \(m\). In the base case (\(m=0\)), we need L1. In the induction step (\(m=\mathtt{suc}(\ell)\)), we need L2. \\
\end{tabular}
\end{table}
Table 1: An example formalization of the proof that the addition of natural numbers is commutative.
The second lemma establishes the right addition of a successor as the second case. The theorem at the end references the two lemmas to show (and prove) the commutativity of adding natural numbers.
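For concreteness, the five entries of Table 1 can be sketched in Lean 4 syntax as follows; the identifiers (MyNat, add, add_zero, add_suc, add_comm) are illustrative and do not correspond to actual library names.

```lean
-- A minimal sketch of Table 1 in Lean 4 syntax; names are illustrative.
inductive MyNat where
  | zero : MyNat
  | suc  : MyNat → MyNat

def add : MyNat → MyNat → MyNat
  | .zero,  n => n                 -- 0 + n = n
  | .suc m, n => .suc (add m n)    -- suc(m) + n = suc(m + n)

-- Lemma 1: m + 0 = m, by induction on m.
theorem add_zero (m : MyNat) : add m .zero = m := by
  induction m with
  | zero => rfl
  | suc m ih => exact congrArg MyNat.suc ih

-- Lemma 2: m + suc(n) = suc(m + n), by induction on m.
theorem add_suc (m n : MyNat) : add m (.suc n) = .suc (add m n) := by
  induction m with
  | zero => rfl
  | suc m ih => exact congrArg MyNat.suc ih

-- Theorem: m + n = n + m; the base case uses L1, the induction step uses L2.
theorem add_comm (m n : MyNat) : add m n = add n m := by
  induction m with
  | zero => exact (add_zero n).symm
  | suc m ih => rw [add_suc]; exact congrArg MyNat.suc ih
```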
The entries from Table 1 are transformed into a multi-graph depicted in Figure 1(a). It contains five nodes, each corresponding to a table row. The multi-graph includes an edge from the node \(+\) to the node \(\mathbb{N}\), indicating the reference to the set of natural numbers in the definition of addition. It also contains the self-reference of \(+\), since the second case of this definition is recursive. Similarly, there are four edges from the theorem node to the two lemma nodes and the two nodes defining natural numbers and addition thereof. The obtained data allows us to approach premise selection as a standard edge prediction machine learning task.
Furthermore, we transform each formalized entry into a directed acyclic graph that retains complete information about the entry, see Figure 1(b). By including the entire entry structures in the data sets, we make them suitable for further exploration of the utility of the state-of-the-art approaches to graph-based machine learning. A detailed description of the format is given in Sections 3.3 and 3.4.
Our approach is general and can be applied to other proof assistants based on type theory. Moreover, even though Agda and Lean have quite different internal representations, the corresponding data sets use a common format that requires little or no knowledge about the inner workings of proof assistants. Thus our collection provides the machine learning community with easy access to a large amount of formalized mathematics in familiar formats that allow immediate application of machine learning algorithms. To our knowledge, MLFMF is the first and most extensive collection of data sets featuring more than one proof assistant and providing access to the higher-order structured representation of more than \(250\,000\) mathematical formalization entries.
## 2 Formalized Mathematics
Formalized mathematics is mathematics written in a format that allows algorithmic checking of correctness of mathematical proofs and constructions. The programs that perform such checking are called _proof assistants_ or _proof checkers_. An early proof checker was AUTOMATH [14], while today the most prominent assistants are Isabelle/HOL [12], Coq [3], Agda [1] and Lean [1]. They are all _interactive_: As the user develops a piece of formalized mathematics the assistant keeps track of unfinished proof goals, displays information about the current goal, checks the input on the fly, and provides search and automation facilities.
The level of automation varies between different proof assistants. In Agda, which supports little automation, the user directly writes down proofs and constructions in abridged type-theoretic syntax that Agda checks and algorithmically elaborates to fully formal constructions. On the other end of the spectrum are Isabelle/HOL and Lean, where the user relies heavily on _tactics_, which are routines that automatically perform various tasks, such as running a domain-specific decision procedure, applying a heuristic, or carrying out proof search.
Figure 1: The two-part representation of a library. Library as a whole is represented as a network of references (a). Additionally, every entry is represented as a DAG which is shown here in its textual s-expression format (b). Note that some nodes of DAG were replaced by (...) for better readability.
The mathematical formalism most commonly used as the underpinning of a proof assistant is type theory, of which there are many variants (Church, 1940; Martin-Löf, 1975; Coquand and Huet, 1988). The proof assistant processes the user input by disambiguating mathematical notations, applying tactics and other meta-level processing commands, and internally stores the resulting proofs, theorems, constructions, definitions, and types as expressions, or syntax trees, of the chosen type theory. These are typically quite verbose, so that checking their correctness is straightforward, but contain many more details than a user may wish to look at.
Libraries of formalized mathematics comprise units, organized hierarchically with a module system or namespaces, each of which contains a number of entries: definitions of types, constructions of elements of types, statements and proofs of theorems, unproved postulates (axioms), as well as meta-level content, such as embedded documentation, definitions of tactics, hints for heuristics, and other automation mechanisms.
In the last decade the libraries of formalized mathematics have grown considerably, most recently with the rise of the popularity of the Lean proof assistant and the Mathlib library (Mathlib community, 2019), around which a mathematical community of several thousand mathematicians has formed. Such growth presents its own challenges, many of which are of the software engineering kind and can be addressed as such. In our work we addressed the specific problem of _recommendation_: given a large body of formalized mathematical knowledge, how can the proof assistant competently recommend theorems or constructions that are likely useful in solving the current goal? There are two typical scenarios: the user knows which theorem they would like to use but has a hard time finding it in the library, or the user is not aware of the existence of a potentially useful theorem that is already available. Both are obvious targets for machine-learning methods.
## 3 MLFMF Data Sets
In this section we describe our data sets in detail. We first explain the semantic content of the data extracted from libraries of formalized mathematics, describe the format and information content of the data sets, continue by reviewing the machine learning tasks for which the data sets were built, and finish with an overview of the technical aspects of the library-to-data-set transformation process.
### The Extracted Data
Formalized mathematics is written by the user in a domain-specific language, often called the _vernacular_ or the _meta-language_. The proof assistant evaluates the source code, which involves executing tactics, decision procedures, etc., verifies that the proofs and constructions so generated are mathematically valid, and stores the results using an internal type-theoretic format. One may apply machine learning techniques directly on the vernacular, as written by the user, or on the formal representation of mathematics. The former approach roughly corresponds to learning how to _do_ formalized mathematics, and the latter what formalized mathematics _is_.
We took the latter approach, namely learning on the formalized mathematics itself, for two reasons. First, because we aimed at a uniform approach that is applicable to most popular proof assistants, it made sense to use the internal type-theoretic representations, which are much more uniform across proof assistants than the vernaculars. Second, the vernacular contains meta-level information, such as what tactics to use, from which one cannot discern directly which theorems are actually used in a given proof. Without this information, one can hardly expect a recommendation system to work well.
Every data set that we prepared is generated from a library of formalized mathematics. Most libraries, and all that we incorporated, are organized hierarchically into modules and sub-modules, each of which is a unit of vernacular code that, once evaluated by the proof assistant, results in a list of _entries_: definitions of types, constructions of elements of types, theorems and their proofs, and unproved postulates. The entries refer to each other and across modules, possibly cyclically in case of mutually recursive definitions.
The internal representations of entries vary across assistants, but all have certain common features:
1. Each entry has a _qualified name_\(M_{1}.M_{2}\ldots M_{k}.N\) by which it is referred to, where \(M_{1}.M_{2}\ldots M_{k}\) is a reference to a module in the module hierarchy and \(N\) is the local name of the entry, for example Algebra.Group.FirstIsomorphismTheorem.
2. Each entry has an associated _type_\(T\), which specifies the information content of the body of the entry. For example, the type \(\mathtt{List}(\mathbb{N})\) specifies that the entry is a list of natural numbers. Importantly, logical statements are just a special sort of types, so that the type of a proof is the logical statement that it proves. (This is to be contrasted with first-order logic, where logical statements are strictly separated from types.)
3. An entry has a _body_, which is an expression of the given entry type. In some cases the body may be missing, for instance if the user declares an axiom.
4. Depending on the proof assistant, various _meta-level information_ is included, such as which arguments to functions are implicit (need not be provided by the user).
### Data Description
In this section, we describe the data sets. We start with a brief description of the data transformation process and continue with a detailed description of the two resulting representations: the computational graphs of the entries in the library and the directed multi-graph of references in the library (see Sections 3.3 and 3.4).
Every data set consists of two parts. The first part is a set \(\mathcal{T}\) of abstract syntax trees (ASTs) that correspond to the entries in the library, while the second is a directed multi-graph \(G(V,E)\), where \(V\) is a set of library entries and \(E\) includes the references among them. The ASTs are true trees in the case of Agda libraries. In Lean, however, they are directed acyclic graphs (DAGs) due to memory optimization and node sharing: all the parents that would potentially reference their own copy of a node (or a subtree) instead reference the same node. For this reason, we refer to them as computational graphs. They provide the full information about every entry in the library and are given in the s-expression format, which is much easier to parse than the typically very flexible syntax of proof assistants that allows for implicit arguments, mix-fix notation, etc. For example, the function if_then_else x y z in Agda can be called as if x then y else z. Learning from the source code would put an additional burden on the machine learning algorithm. Learning directly from computational graphs, on the other hand, is much easier.
### The Computational Graphs
During compile time, the full type of every entry in the library is computed and the source code of the entry is converted to a (directed acyclic) computational graph. We intercept this procedure and export every entry as a Lisp _s-expression_, which is defined recursively as:
Figure 3: The DAG representing an entry has a single root node with three children: a node containing the entry name, a DAG containing the entry declaration, and a DAG representing the entry body.
Figure 2: The two stages of the data transformation. First, a language-dependent (i.e., Agda or Lean) command line tool is used to transform the library entries into s-expressions. In the second stage, Python scripts are used to explicitly construct the directed multi-graph, which contains library modules, entries, and references among them.
1. A literal is an s-expression, and
2. A list of s-expressions is an s-expression.
For example, the literals 12 and "foo" are s-expressions, and the list ("foo" ("bar" 12) "baz") is also an s-expression with three elements: "foo", ("bar" 12) (which contains two s-expressions) and "baz". Every s-expression that is obtained from the entries in a library is three-part, as shown in Fig. 3. It consists of the name of the entry, the s-expression that describes the declaration, and the s-expression that describes the body of the entry.
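To illustrate how easily this format is parsed, a minimal recursive reader might look as follows (a sketch in Python; it assumes literals contain no whitespace or parentheses):

```python
def parse_sexp(text):
    # Tokenize by isolating parentheses, then read the token stream recursively.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            lst, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = read(pos)
                lst.append(node)
            return lst, pos + 1      # skip the closing ")"
        return tokens[pos], pos + 1  # a literal

    tree, _ = read(0)
    return tree

print(parse_sexp('("foo" ("bar" 12) "baz")'))
# ['"foo"', ['"bar"', '12'], '"baz"']
```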
Even though the entries were manually encoded and mostly take at most a few kilobytes of space, their computational graphs can be much larger (more than a gigabyte), mostly due to the type checking and the expansion of the declared type of the entry.
### The Multi-Graph
For simplicity reasons, we will refer to the directed multi-graph \(G(V,E)\) simply as a graph. Its meta-structure is shown in Fig. 4(a). In the description below, we follow this structure (in a bottom-up manner) and the concrete example of a subgraph for Agda's standard library in Fig. 4(b).
Entry nodes. Every module in a library defines at least one entry (shown as green circles), e.g., Bijection DEFINES id, Bijection DEFINES Bijection (these are two different nodes), and Injection DEFINES injective. We further differentiate between different kinds of entries, as shown in Tab. 2. Most of the nodes in the graph are entries (and most of them are functions), and most of the edges are of type REFERENCE FROM DECLARATION/BODY.
Library and module nodes. The only nodes with no incoming edges (root nodes) are the library nodes (shown as blue squares). Every graph contains at least one library node--the one that corresponds to the library itself. In Fig. 4(b), this is the node stdlib. However, the graph might contain
Figure 4: A meta-graph of libraries (a) and a subgraph of the graph that was created from Agda’s standard library (b) that follows the prescribed meta-structure.
an additional node outer library if any of the library entries reference some external entries that are not part of the library (for example, built-in types). The library nodes are directly connected to the nodes representing modules (shown as blue diamonds) via the edges of the type CONTAINS, e.g., stdlib CONTAINS Function. Every module can contain zero or more (sub)modules, e.g., Function CONTAINS Bijection.
In the case of Agda libraries, the module nodes correspond to the modules that are actually present in the library and resemble the file system of the library. Lean, however, supports the use of namespaces. If the file a/b/c.lean defines an entry foo.bar.F, and the file d/e.lean defines an entry foo.bar.G, those two entries are part of the same namespace foo.bar and the exact location in the file system where these two entries were defined, is irrelevant. Therefore, module nodes for Lean's library Mathlib4 correspond to namespaces in the library. Following the previous example, we create module nodes foo and bar, together with the edge foo CONTAINS bar.
### Machine Learning Tasks
The main motivation for the creation of the data set was the development of machine learning algorithms that would enhance current proof assistants and help mathematicians using them. This translates to the following two machine learning tasks.
Link prediction. Given the current state of the multi-graph of references among the entries, learn a model that predicts the future, novel links (references) among the library entries. Formally, we learn a model \(M:(u,v)\mapsto M(u,v)\in[0,1]\) that, given two nodes \(u\) and \(v\), outputs the model confidence in the presence of the edge \((u,v)\). The (current) computational graphs of the entries can be used as additional information for learning such a model. If learning from the multi-graph only, one can use standard node- or edge-embedding approaches as well as graph neural networks.
Recommendation. The problem of predicting the future references among the entries could be understood as a recommendation task as well. Given a specific unfinished entry (possibly with some additional context, such as the list of lemmas/claims that were used last), the task is to recommend the candidates that could be referenced in the current computational graph of the entry to complete it.
Note that the two tasks are equivalent, i.e., solving one solves the other. A link prediction model \(M\) (see above) can be converted into a recommendation system by fixing the entry \(u\) and recommending the entries \(v\in V\) with the highest confidence levels \(M(u,v)\). Vice versa, given a recommendation model \(M^{\prime}:u\mapsto M^{\prime}(u)\subseteq V\), we can define a corresponding link-prediction model \(M\) as \(M(u,v)=1\) if \(v\in M^{\prime}(u)\), and \(M(u,v)=0\) otherwise.
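Both conversions can be written down directly; a minimal sketch:

```python
def recommend(M, u, candidates, k=5):
    # Link predictor -> recommender: rank candidates v by confidence M(u, v).
    return sorted(candidates, key=lambda v: M(u, v), reverse=True)[:k]

def as_link_predictor(M_prime):
    # Recommender M'(u) (a subset of V) -> 0/1 link-prediction model.
    return lambda u, v: 1 if v in M_prime(u) else 0
```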
Since the essential part of the MLFMF data set is a directed multi-graph (which represents a heterogeneous network), other standard learning tasks for graphs/ networks might also be interesting. Here, we mention two example instances of the common node classification task.
Entry class detection. A straightforward instance of the node classification task would be classifying the entries into their types from Table 2, e.g., function or axiom. This does not require additional manual labeling and should not be too hard, especially when computational graphs are taken into account, since the structure of a function is quite different from the structure of, e.g., a record.
\begin{table}
\begin{tabular}{l|l}
kind & description \\
\hline
:data & inductive data type (natural numbers, lists and trees) \\
:constructor & data-type constructor (successor, cons) \\
:function & function (including constants as nullary functions) \\
:record & record type (a structure with named fields or attributes) \\
:axiom & postulated type or statement (no inhabitant or proof given) \\
:primitive & built-in (primitive) function \\
:sort & the sort of a type (proposition, universe at a given level) \\
:recursor & the induction/recursion principle associated with an inductive data type \\
:abstract & entry whose body is hidden \\
\end{tabular}
\end{table}
Table 2: Tags of the nodes in s-expressions.
Claim detection. A more challenging instance of node classification is predicting whether a function entry is a claim (e.g., a lemma, corollary, theorem, etc.) or not, since some of the entries are simply definitions of, for example, the addition of natural numbers. Approaching this task, however, would require additional (manual) labeling of the entries.
### License
We make MLFMF publicly available under the Creative Commons Attribution 4.0 International1 (CC BY 4.0) license at [https://github.com/ul-fmf/mlfmf-data](https://github.com/ul-fmf/mlfmf-data).
Footnote 1: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)
## 4 Experiments and Results
In this section, we first introduce the experimental setup for the baseline experiments (how to prepare the train and test part of the data set, and which standard metrics can be used), and then, after briefly introducing the baseline methods, we report the experimental results.
### Train-test split
When splitting the graph into train and test data sets, we should split the multi-graph \(G(V,E)\) and the set of computational graphs of the entries. In the case of the link prediction and recommendation tasks, we should focus on function nodes, since these are the only nodes that correspond to a computational graph whose body contains a proof of a claim formalized in the declaration part of the computational graph.
In our baseline experiments from Sec. 4.4, we follow a generic approach to creating a train-test split. The approach takes two parameters: \(p_{\text{test}}\in(0,1)\), \(p_{\text{body}}\in[0,1)\). First, we randomly choose the proportion \(p_{\text{test}}\) of function nodes. We assume that those correspond to partially written entries whose computational graphs have completely specified type, i.e., the user knew how to formalize a claim, and _partially_ known body, i.e., the proof of the claim is not finished yet. Note that, often, proofs are not written linearly and might contain so-called holes at problematic parts where the right lemmas are yet to be applied (possibly with already known arguments). Thus, we need to modify the computational graphs of the test nodes to reflect the changes in the multi-graph.
We simulate the applications of the missing lemmas by keeping only a proportion \(p_{\text{body}}\) of the references in the body. Since our graph contains a weighted edge u REFERENCE FROM BODY v, which we either remove or keep intact, we remove all references to \(v\) from the body of \(u\) or none of them. Then, the unfinished proofs are simulated by keeping a proportion \(p_{\text{body}}\) of the body of \(u\), which is done by iterative pruning of the leaves of the body. At each iteration, a leaf is chosen uniformly at random. If the chosen leaf is a reference that we have to keep, the leaf is not pruned and we continue with the next iteration.
The removed edges represent positive test examples, and the negative test examples for learning predictive models need to be sampled. In the baseline experiments, the negative test examples were sampled uniformly at random.
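A sketch of this splitting procedure on the multi-graph is given below; it assumes a networkx MultiDiGraph whose edges carry a "type" attribute (the attribute name and value are hypothetical), and it omits the leaf-pruning of the computational graphs.

```python
import random
import networkx as nx

def make_split(graph, function_nodes, p_test=0.2, p_body=0.1, seed=0):
    rng = random.Random(seed)
    test_nodes = rng.sample(function_nodes, int(p_test * len(function_nodes)))
    all_nodes, positive, negative = list(graph.nodes), [], []
    for u in test_nodes:
        body_refs = {v for _, v, d in graph.out_edges(u, data=True)
                     if d.get("type") == "REFERENCE_FROM_BODY"}
        keep = set(rng.sample(sorted(body_refs), int(p_body * len(body_refs))))
        for v in body_refs - keep:
            # Drop all parallel edges u -> v (a real implementation would
            # filter by edge type); the removed reference is a positive example.
            while graph.has_edge(u, v):
                graph.remove_edge(u, v)
            positive.append((u, v))
            negative.append((u, rng.choice(all_nodes)))  # uniform negative
    return graph, positive, negative
```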
### Evaluation metrics
For link prediction, one can use standard classification metrics, such as accuracy, precision, recall, and \(F_{1}\)-score. If the model returns its confidence \(M(u,v)\in[0,1]\) instead of the class value (\(M(u,v)\in\{0,1\}\)), one could additionally consider the area under the receiver-operating-characteristic curve. The same goes for the recommendation models: one can use precision and recall.
If the recommendation model returns the relevance score of a candidate entry to the current context, we can rank candidates according to the score values, with the top recommendation having a rank of one (1). We can then compute the minimal (and the mean) rank of the actual references and average them over the testing examples. This is an important metric, since it counts the number of false recommendations with better ranks than any of the actual references. Ideally, the minimal rank is close to one, i.e., the top-ranked recommendation mostly matches the missing entry to be referenced.
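The minimal-rank metric can be computed per test entry as follows (a sketch; `scores` and `true_refs` are assumed inputs):

```python
def min_rank(scores, true_refs):
    # `scores`: dict candidate -> relevance for one test entry;
    # `true_refs`: the entry's actual (removed) references; rank 1 = best.
    ranking = sorted(scores, key=scores.get, reverse=True)
    rank_of = {v: i + 1 for i, v in enumerate(ranking)}
    return min(rank_of[v] for v in true_refs)

# Mean minimal rank over the test set:
# sum(min_rank(s, refs) for s, refs in test_cases) / len(test_cases)
```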
### Baseline Methods
Dummy recommender. This recommender ignores the current context and always recommends the \(k\) nodes of the multi-graph with the highest in-degree.
Bag of Words/TFIDF recommenders. The Bag of Words (BoW) recommender converts every computational graph \(g(u)\) of an entry \(u\) in a library into a bag of words \(\mathrm{BoW}(u)\). We compute the relevance \(M(u,v)\) of the candidate entry \(v\) for the current context \(u\) using the Jaccard similarity between the corresponding bags of words:
\[J(\mathrm{BoW}(u),\mathrm{BoW}(v))=\frac{|\,\mathrm{BoW}(u)\cap\mathrm{BoW}(v )|}{|\,\mathrm{BoW}(u)\cup\mathrm{BoW}(v)|}.\]
Similarly, the TFIDF recommender embeds \(g(u)\) into a term-frequency-inverse-document-frequency (TFIDF) vector (obtained from the corresponding bag of words), as implemented in Scikit-Learn 1.2.2 (Pedregosa et al., 2011). The relevance of the candidate entry \(v\) is computed as a Manhattan or a cosine distance between the TFIDF vectors of \(u\) and \(v\).
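A sketch of both recommenders; the `bags` mapping and `token_lists` corpus are assumed to be built from the tokenized s-expressions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard(a, b):
    # Jaccard similarity of two bags of words (here: Python sets).
    return len(a & b) / len(a | b) if (a | b) else 0.0

def bow_recommend(u, candidates, bags, k=5):
    # `bags` maps an entry to the token set of its computational graph.
    return sorted(candidates, key=lambda v: jaccard(bags[u], bags[v]),
                  reverse=True)[:k]

# TFIDF variant with cosine relevance (a Manhattan-distance variant is analogous):
docs = [" ".join(tokens) for tokens in token_lists]
X = TfidfVectorizer().fit_transform(docs)
relevance = cosine_similarity(X[0], X).ravel()   # relevance of entry 0 to all
```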
FastText embedding recommender. It embeds every computational graph \(g(u)\) into a vector \(\vec{\varphi}(g(u))=\sum_{\text{word}\in g(u)}w(\text{word},g(u))\cdot\varphi_ {\text{cc}}(\text{word})\), where \(\varphi_{\text{cc}}(\text{word})\) is the vector of word obtained from the fastText model trained on Common Crawl (Mikolov et al., 2018), and \(w(\text{word},g(u))\) is the TFIDF weight of the word in the entry \(u\).
Recommendations via analogies. We design a recommender that is based on the fastText analogy property, i.e., the fact that \(x=\) queen is one of the approximate solutions of \(\varphi_{\text{cc}}(\text{king})-\varphi_{\text{cc}}(x)=\varphi_{\text{cc}}( \text{man})-\varphi_{\text{cc}}(\text{woman})\). The _analogy recommender_ recommends, for a given entry \(u\), the nodes \(v\) for which a good analogy \(u^{\prime}\to v^{\prime}\) of the edge \(u\to v\) can be found. The relevance of the candidate entry \(v\) in a given context \(u\) is defined in terms of the Manhattan distance as
\[r(u,v)=\frac{1}{\min_{u^{\prime}\to v^{\prime}\in E(G)}\big{\|}[\varphi_{\text{cc}}(u)-\varphi_{\text{cc}}(v)]-[\varphi_{\text{cc}}(u^{\prime})-\varphi_{\text{cc}}(v^{\prime})]\big{\|}_{1}}.\]
Node2vec-based link prediction. We train a node2vec (Grover and Leskovec, 2016) model (as implemented in Gensim 4.3.1 (Rehurek and Sojka, 2011)) on the multi-graph to obtain node embeddings. We obtain the embedding of the edge \((u,v)\) by concatenating the node embeddings of \(u\) and \(v\). A tree-bagging classifier \(M:\varphi(u\to v)\mapsto M(\varphi(u\to v))\in[0,1]\) is trained on the tabular data obtained by using the edge embeddings as inputs and the edge presence as the target to be predicted. We selected the bagged-trees ensemble since it is a robust classifier that works well on tabular data when using the recommended setting of 100 fully grown (not pruned) classification trees.
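A sketch of this baseline using the third-party node2vec package (one possible implementation; the paper uses Gensim directly); `pos_edges` and `neg_edges` are assumed to come from the train-test split described above:

```python
import numpy as np
import networkx as nx
from node2vec import Node2Vec
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

train_graph = nx.DiGraph()   # the pruned reference graph; population omitted
model = Node2Vec(train_graph, dimensions=64, walk_length=30, num_walks=10).fit()

def edge_embedding(u, v):
    # Edge embedding = concatenation of the two node embeddings
    # (node ids are stored as strings by the node2vec package).
    return np.concatenate([model.wv[str(u)], model.wv[str(v)]])

X = np.stack([edge_embedding(u, v) for u, v in pos_edges + neg_edges])
y = np.array([1] * len(pos_edges) + [0] * len(neg_edges))
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100).fit(X, y)
# clf.predict_proba(...)[:, 1] gives the confidence M(u, v) in an edge.
```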
We selected baseline methods that are not computationally expensive and are robust to hyper-parameter settings: if not mentioned otherwise, the methods use the default parameter settings. We can combine multiple embeddings (e.g., those from node2vec together with those from fastText) as the input to the similarity measure of the recommender or classifier for the link prediction task. However, as noted in the next section, this did not improve the best results. In all the experiments, we generated the train-test split with the parameters \(p_{\text{test}}=0.2\) and \(p_{\text{body}}=0.1\). The results here are reported for \(k=5\) recommended items and the threshold \(\vartheta=0.5\) for classification.
### Results
The experiments on Lean were run on a computer with 4 Intel Core i7-6700K CPU cores and 64 GB of RAM. The experiments on Agda were run on a smaller machine (2 Intel Core i7-5600U CPU cores, 12 GB of RAM). Experiments that would last more than a week were not carried out (analogies on the TypeTopology and Mathlib4 libraries, and fastText on the Mathlib4 library).
Tab. 3 reports the results of the experiments; for an extended report including other evaluation metrics (accuracy@k, area under the ROC curve, etc.) and an ablation study of node2vec on Agda stdlib, see the supplementary material. For the algorithms that were run with more than one parameter setting, the best results are reported (for example, TFIDF was run with cosine- and Manhattan-based similarity measures). The best-performing baseline method is node2vec. It is the only one that ranks on average at least one actual reference among the ten most relevant candidate references for the three
Agda libraries. However, it fails to do so for Lean Mathlib4, and this can be only partially explained by the size of Mathlib4. Note that node2vec is also the only one that explicitly learns from the multi-graph and, apparently, humans writing proofs in Agda structured the references better than the computers in Lean, where built-in search heuristics (tactics) are used. The multi-graph is partially used by the analogies recommender as well, since the candidate recommendations are evaluated by considering the existing references \(u^{\prime}\to v^{\prime}\) in the library. This might be the reason for its good performance on the Agda stdlib data set.
Surprisingly, TFIDF embeddings perform no worse (or even better) than FastText embeddings. The reason for this might be that many words, such as group, ring, etc. have different meanings in mathematics than in general texts. Note that we tried to run additional experiments with the combination of node2vec and TFIDF/fastText embeddings, but accuracy and minRank were both worse, as compared to the node2vec results.
In sum, the baseline results show that the information on the structure of the multi-graph is crucial for obtaining classifiers with performance beyond the default performance of the dummy baseline. The recommendation performance, measured as mean minimal rank, is valuable enough (less than five recommendations to be checked to find the right one) for two Agda libraries. Developing sound recommendation systems for the other two libraries remains a challenge to be addressed by machine learning methods beyond the baselines considered here.
## 5 Conclusion
We introduced MLFMF, a suite of four data sets corresponding to three libraries in Agda and one library in Lean proof assistants. It includes almost \(250\,000\) entries, i.e., definitions, axioms, and theorems with accompanying proofs. References between entries are included in a multi-graph, where nodes are entries, edges represent references among the entries, and each entry is represented with a directed acyclic graph reflecting the structure of the entry source code encoding. Such a structure provides machine learning researchers with an opportunity to address the task of recommending relevant entries for the goal at hand as a standard edge prediction task. Such a representation of the entries allows for the use of graph-based methods that can exploit the structural and semantic information stored in the multi-graphs. The report on the results of the baseline methods establishes a benchmark for comparative evaluation of future developments of machine learning for mathematical formalization that goes beyond a single proof assistant.
A notable limitation of our data sets is the lack of information on the development history of the libraries. If the latter could be followed, the more realistic test nodes \(V_{\text{test}}\) could be defined as the _latest_\(|V_{\text{test}}|\) nodes encoded in the library. However, even with version control information (available since the libraries are stored in GitHub repositories but currently not included), determining the entries' chronological order might be computationally expensive. An approximation of the chronological order might be obtained by computing a topological ordering on the function nodes and selecting nodes from the tail of the ordered list. However, existing definitions in a library might be rewritten, so the accuracy of such an approximation is questionable.
Finally, we plan to include other, more recent libraries in our data set collection. The newly incorporated libraries might include references to earlier, standard libraries, providing further opportunities for real-world testing scenarios.
\begin{table}
\begin{tabular}{l|r r|r r|r r|r r}
 & \multicolumn{2}{c|}{Agda stdlib} & \multicolumn{2}{c|}{Agda unimath} & \multicolumn{2}{c|}{Agda TypeTopology} & \multicolumn{2}{c}{Lean Mathlib4} \\
method & acc & minRank & acc & minRank & acc & minRank & acc & minRank \\
\hline
Dummy & 0.51 & 218 & 0.53 & 2134 & 0.50 & 4556 & 0.51 & 26065 \\
BoW & 0.50 & 1608 & 0.50 & 1571 & 0.50 & 4496 & 0.50 & 15458 \\
TFIDF & 0.51 & 144 & 0.52 & 112 & 0.51 & 552 & 0.51 & 443 \\
fastText & 0.51 & 132 & 0.52 & 394 & 0.50 & 1292 & NA & NA \\
analogies & 0.52 & 37 & 0.51 & 158 & NA & NA & NA & NA \\
node2vec & **0.96** & **4.37** & **0.96** & **3.24** & **0.98** & **5.81** & **0.95** & **195** \\
\end{tabular}
\end{table}
Table 3: The accuracy (acc) and minimal rank of the true reference for the MLFMF data sets. The best results (bold) are obtained with a combination of node2vec and a tree-bagging classifier.
Acknowledgments
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0024. The authors also acknowledge the financial support of the Slovenian Research Agency via the research core funding No. P2-0103 and No. P1-0294.
|
2304.05680 | Multipath-based SLAM for Non-Ideal Reflective Surfaces Exploiting
Multiple-Measurement Data Association | Multipath-based simultaneous localization and mapping (SLAM) is a promising
approach to obtain position information of transmitters and receivers as well
as information regarding the propagation environments in future mobile
communication systems. Usually, specular reflections of the radio signals
occurring at flat surfaces are modeled by virtual anchors (VAs) that are mirror
images of the physical anchors (PAs). In existing methods for multipath-based
SLAM, each VA is assumed to generate only a single measurement. However, due to
imperfections of the measurement equipment such as non-calibrated antennas or
model mismatch due to roughness of the reflective surfaces, there are
potentially multiple multipath components (MPCs) that are associated to one
single VA. In this paper, we introduce a Bayesian particle-based sum-product
algorithm (SPA) for multipath-based SLAM that can cope with
multiple-measurements being associated to a single VA. Furthermore, we
introduce a novel statistical measurement model that is strongly related to the
radio signal. It introduces additional dispersion parameters into the
likelihood function to capture additional MPCs-related measurements. We
demonstrate that the proposed SLAM method can robustly fuse multiple
measurements per VA based on numerical simulations. | Lukas Wielandner, Alexander Venus, Thomas Wilding, Erik Leitinger | 2023-04-12T08:06:44Z | http://arxiv.org/abs/2304.05680v4 | Multipath-based SLAM for Non-Ideal Reflective Surfaces Exploiting Multiple-Measurement Data Association
###### Abstract
Multipath-based simultaneous localization and mapping (SLAM) is a promising approach to obtain position information of transmitters and receivers as well as information regarding the propagation environments in future mobile communication systems. Usually, specular reflections of the radio signals occurring at flat surfaces are modeled by virtual anchors (VAs) that are mirror images of the physical anchors (PAs). In existing methods for multipath-based SLAM, each VA is assumed to generate only a single measurement. However, due to imperfections of the measurement equipment such as non-calibrated antennas or model mismatch due to roughness of the reflective surfaces, there are potentially multiple multipath components (MPCs) that are associated to one single VA. In this paper, we introduce a Bayesian particle-based sum-product algorithm (SPA) for multipath-based SLAM that can cope with multiple-measurements being associated to a single VA. Furthermore, we introduce a novel statistical measurement model that is strongly related to the radio signal. It introduces additional dispersion parameters into the likelihood function to capture additional MPCs-related measurements. We demonstrate that the proposed SLAM method can robustly fuse multiple measurements per VA based on numerical simulations.
## I Introduction
Multipath-based simultaneous localization and mapping (SLAM) is a promising approach to obtain position information of transmitters and receivers as well as information regarding their propagation environments in future mobile communication systems. Usually, specular reflections of radio signals at flat surfaces are modeled by virtual anchors (VAs) that are mirror images of the physical anchors (PAs) [1, 2, 3, 4]. The positions of these VAs are unknown. Multipath-based SLAM algorithms can detect and localize VAs and jointly estimate the time-varying position of mobile agents [3, 4, 5]. The availability of VA location information makes it possible to leverage multiple propagation paths of radio signals for agent localization and can thus significantly improve localization accuracy and robustness. In non-ideal scenarios with rough reflective surfaces [6, 7] and limitations in the measurement equipment, such as non-calibrated antennas [8], those standard methods are prone to fail since multiple measurements can originate from the same PA or VA. This shows the need for developing new methods to cope with these limitations.
### _State of the Art_
The proposed algorithm follows the feature-based SLAM approach [9, 10], i.e., the map is represented by an unknown number of _features_, whose unknown positions are estimated in a sequential (time-recursive) manner. Existing multipath-based SLAM algorithms consider VAs [3, 4, 11, 12, 13] or master VAs (MVAs) [14, 15, 16] as features to be mapped. Most of these methods use estimated parameters related to multipath components (MPCs) contained in the radio signal, such as distances (which are proportional to delays), angle-of-arrivals (AOAs), or angle-of-departures (AODs) [17]. These parameters are estimated from the signal in a preprocessing stage [17, 18, 19, 20] and are used as "measurements" available to the SLAM algorithm. A complicating factor in feature-based SLAM is measurement origin uncertainty, i.e., the unknown association of measurements with features [3, 4, 11, 20, 21]. In particular, (i) it is not known which map feature was generated by which measurement, (ii) there are missed detections due to low signal-to-noise-ratio (SNR) or occlusion of features, and (iii) there are false positive measurements due to clutter. Thus, an important aspect of multipath-based SLAM is _data association_ between these measurements and the VA or the MVA. Probabilistic data association can increase the robustness and accuracy of multipath-based SLAM but introduces additional unknown parameters. State-of-the-art methods for multipath-based SLAM are Bayesian estimators that perform the sum-product algorithm (SPA) on a factor graph [3, 4, 11] to avoid the curse of dimensionality related to the high-dimensional estimation problems.
In these existing methods for multipath-based SLAM, each feature is assumed to generate only a single measurement [22, 23]. However, due to imperfections of the measurement equipment or model mismatch due to non-ideal reflective surfaces (such as rough surfaces characterized by diffuse multipath [6, 7]), there are potentially multiple MPCs that need to be associated to a single feature (VAs or MVAs) to accurately represent the environment. This is related to the multiple-measurement-to-object data association in extended object tracking (EOT) [24, 25, 26, 21]. In EOT, the point object assumption is no longer valid, hence one single object can potentially generate more than one measurement resulting in a particularly challenging data association due to the large number of possible association events [25, 27, 28]. In [26, 21], an innovative approach to this multiple-measurements-to-object data association problem is presented. It is based
on the framework of graphical models [29]. In particular, a SPA was proposed with computational complexity that scales only quadratically in the number of objects and the number of measurements avoiding suboptimal clustering of spatially close measurements.
### _Contributions_
In this paper, we introduce a Bayesian particle-based SPA for multipath-based SLAM that can cope with multiple-measurements associated to a single VA. The proposed method is based on a factor graph designed for scalable probabilistic multiple-measurement-to-feature association proposed in [21, 26]. We also introduce a novel statistical measurement model that is strongly related to the radio signal. It introduces additional dispersion parameters into the likelihood function to capture additional MPCs-related measurements. The key contributions of this paper are as follows.
* We introduce the multiple-measurement-to-feature data association proposed in [21] to multipath-based SLAM [3, 11].
* We use this multiple-measurement data association to incorporate additional MPC-related measurements originating from non-ideal effects such as rough reflective surfaces or non-calibrated antennas.
* We introduce a novel likelihood function model that is augmented with dispersion parameters to capture these additional MPC-related measurements that are associated to a single VA.
* We demonstrate based on synthetically generated measurements that the proposed SLAM method robustly associates multiple measurements per VA and that it is able to significantly outperform state-of-the-art multipath-based SLAM methods [3, 11] in case additional MPC-related measurements occur.
This paper advances over the preliminary account of our method provided in the conference publication [30] by (i) presenting a detailed derivation of the factor graph, (ii) providing additional simulation results, and (iii) demonstrating performance advantages compared to the classical multipath-based SLAM [3, 11].
_Notation_: Random variables are displayed in sans serif, upright fonts; their realizations in serif, italic fonts. Vectors and matrices are denoted by bold lowercase and uppercase letters, respectively. For example, a random variable and its realization are denoted by \(\mathsf{x}\) and \(x\), respectively, and a random vector and its realization by \(\mathbf{\mathsf{x}}\) and \(\mathbf{x}\), respectively. Furthermore, \(\|\mathbf{x}\|\) and \(\mathbf{x}^{\mathsf{T}}\) denote the Euclidean norm and the transpose of vector \(\mathbf{x}\), respectively, and \(\langle\mathbf{x},\mathbf{y}\rangle\) denotes the inner product between the vectors \(\mathbf{x}\) and \(\mathbf{y}\); \(\propto\) indicates equality up to a normalization factor; \(f(\mathbf{x})\) denotes the probability density function (PDF) of random vector \(\mathbf{\mathsf{x}}\) (this is a short notation for \(f_{\mathbf{\mathsf{x}}}(\mathbf{x})\)); \(f(\mathbf{x}|\mathbf{y})\) denotes the conditional PDF of random vector \(\mathbf{\mathsf{x}}\) conditioned on random vector \(\mathbf{\mathsf{y}}\) (this is a short notation for \(f_{\mathbf{\mathsf{x}}|\mathbf{\mathsf{y}}}(\mathbf{x}|\mathbf{y})\)). The cardinality of a set \(\mathcal{X}\) is denoted as \(|\mathcal{X}|\). \(\delta(\cdot)\) denotes the Dirac delta function. Furthermore, \(1_{\mathbb{A}}(\mathbf{x})\) denotes the indicator function that is \(1_{\mathbb{A}}(\mathbf{x})=1\) if \(\mathbf{x}\in\mathbb{A}\) and 0 otherwise, for \(\mathbb{A}\) being an arbitrary set, and \(\mathbb{R}^{+}\) is the set of positive real numbers. Finally, \(\delta_{\mathbf{e}}\) denotes the indicator function of the event \(\mathbf{e}=\mathbf{0}\) (i.e., \(\delta_{\mathbf{e}}=1\) if \(\mathbf{e}=\mathbf{0}\) and \(0\) otherwise).
We define the following PDFs with respect to \(\mathsf{x}\): The Gaussian PDF is
\[f_{\mathsf{N}}(x;\mu,\sigma)=\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^{2} }{2\,\sigma^{2}}} \tag{1}\]
with mean \(\mu\) and standard deviation \(\sigma\)[31]. The truncated Rician PDF is [32, Ch. 1.6.7]
\[f_{\text{TRice}}(x;s,u,\lambda)=\frac{1}{Q_{1}\big(\frac{u}{s},\frac{\lambda}{s}\big)}\frac{x}{s^{2}}\,e^{\frac{-(x^{2}+u^{2})}{2\,s^{2}}}I_{0}\Big(\frac{x\,u}{s^{2}}\Big)1_{\mathbb{R}^{+}}(x-\lambda) \tag{2}\]
with non-centrality parameter \(u\), scale parameter \(s\) and truncation threshold \(\lambda\). \(I_{0}(\cdot)\) is the 0th-order modified first-kind Bessel function and \(Q_{1}(\cdot,\cdot)\) denotes the Marcum Q-function [31]. The truncated Rayleigh PDF is [32, Ch. 1.6.7]
\[f_{\text{TRay}}(x;s,\lambda)=\frac{x}{s^{2}}\,e^{\frac{-(x^{2}-\lambda^{2})}{2\,s^{2}}}1_{\mathbb{R}^{+}}(x-\lambda) \tag{3}\]
with scale parameter \(s\) and truncation threshold \(\lambda\). This PDF corresponds to the so-called Swerling I model [32]. The Gamma PDF is defined as \(\mathcal{G}(x;\alpha,\beta)\), where \(\alpha\) is the shape parameter and \(\beta\) is the scale parameter. Finally, we define the uniform PDF \(f_{\text{U}}(x;a,b)=1/(b-a)\,1_{[a,b]}(x)\).
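For reference, the following is a minimal numerical sketch of the PDFs (1)-(3); it is illustrative only and not part of the proposed algorithm. The Marcum Q-function is expressed through the noncentral chi-squared survival function, and the exponentially scaled Bessel function keeps the evaluation overflow-free.

```python
import numpy as np
from scipy.special import ive
from scipy.stats import ncx2

def marcum_q1(a, b):
    # Q_1(a, b) via the noncentral chi-squared distribution:
    # if X ~ ncx2(df=2, nc=a^2), then P(X > b^2) = Q_1(a, b).
    return ncx2.sf(b**2, df=2, nc=a**2)

def pdf_trice(x, s, u, lam):
    # Truncated Rician PDF (2): support x > lam, normalized by Q_1(u/s, lam/s).
    x = np.asarray(x, dtype=float)
    z = x * u / s**2
    # ive(0, z) = I_0(z) * exp(-z) for z >= 0, so the exponents combine safely.
    val = (x / s**2) * np.exp(-(x**2 + u**2) / (2 * s**2) + z) * ive(0, z)
    return np.where(x > lam, val / marcum_q1(u / s, lam / s), 0.0)

def pdf_tray(x, s, lam):
    # Truncated Rayleigh PDF (3), Swerling I model: support x > lam.
    x = np.asarray(x, dtype=float)
    val = (x / s**2) * np.exp(-(x**2 - lam**2) / (2 * s**2))
    return np.where(x > lam, val, 0.0)
```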
## II Geometrical Relations
At each time \(n\), we consider a mobile agent at position \(\mathbf{p}_{n}\) equipped with a single antenna and \(J\) base stations, called PAs, each equipped with a single antenna and located at known positions \(\mathbf{p}_{\text{pa}}^{(j)}=\left[p_{1,\text{pa}}^{(j)}\,p_{2,\text{pa}}^{(j)}\right]^{\text{T}}\in\mathbb{R}^{2}\), \(j\in\{1,\ldots,J\}\), where \(J\) is assumed to be known, in an environment described by reflective surfaces. Specular reflections of radio signals at flat surfaces are modeled by VAs that are mirror images of PAs. In particular, VA positions associated to single-bounce reflections are given by
\[\mathbf{p}_{l,\text{vx}}^{(j)}=\mathbf{p}_{\text{pa}}^{(j)}+2\big{(}\mathbf{u}_{l}^{\mathsf{ T}}\mathbf{e}_{l}-\mathbf{u}_{l}^{\mathsf{T}}\mathbf{p}_{\text{pa}}^{(j)}\big{)}\mathbf{u}_{l} \tag{4}\]
where \(\mathbf{u}_{l}\) is the normal vector of the corresponding reflective surface and \(\mathbf{e}_{l}\) is an arbitrary point on this surface. The second summand in (4) is the vector along the normal direction \(\mathbf{u}_{l}\) with a length of two times the distance between PA \(j\) at position \(\mathbf{p}_{\text{pa}}^{(j)}\) and the foot point of the normal on the reflective surface, i.e., \(2\big{(}\mathbf{u}_{l}^{\mathsf{T}}\mathbf{e}_{l}-\mathbf{u}_{l}^{\mathsf{T}}\mathbf{p}_{\text{pa}}^{(j)}\big{)}\). An example is shown in Fig. 1a. VA positions associated to multiple-bounce reflections are determined by applying (4) multiple times. The current number of _visible_ VAs1 within the scenario (associated with single-bounce and higher-order-bounce reflections) is \(L_{n}^{(j)}\) for each PA \(j\).
Footnote 1: A VA does not exist at time \(n\), when the reflective surface corresponding to this VA is obstructed with respect to the agent.
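The mirror-image computation in (4) is a one-line operation; a minimal sketch (function and variable names are illustrative) is:

```python
import numpy as np

def va_position(p_pa, u, e):
    """Mirror a PA position at a reflective surface, cf. (4).

    p_pa : PA position (2,); u : normal vector of the surface (2,);
    e : an arbitrary point on the surface (2,).
    """
    u = u / np.linalg.norm(u)  # ensure the normal vector is normalized
    return p_pa + 2.0 * (u @ e - u @ p_pa) * u

# Example: PA at the origin, wall x = 3 with normal along the x-axis.
p_va = va_position(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([3.0, 0.0]))
# -> [6., 0.]: the mirror image of the PA behind the wall. Higher-order
# bounces are obtained by applying va_position repeatedly.
```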
## III Radio Signal Model
At each time \(n\), the mobile agent transmits a signal \(s(t)\) from a single antenna and each PA \(j\in\{1,\ldots,J\}\) acts as a receiver having a single antenna. The received complex baseband signal at the \(j\)th PA is sampled \(N_{\text{s}}\) times with sampling frequency \(f_{\text{s}}=1/T_{\text{s}}\) yielding an observation period
of \(T=N_{\text{s}}\,T_{\text{s}}\). By stacking the samples, we obtain the discrete-time received signal vector
\[\mathbf{s}_{\text{rx},n}^{(j)}\!=\!\sum_{l=1}^{L_{n}^{(j)}}\alpha_{l,n}^{(j)}\Big{(}\boldsymbol{s}\big{(}\tau_{l,n}^{(j)}\big{)}\!+\!\sum_{i=1}^{S_{l}^{(j)}}\beta_{l,i,n}^{(j)}\boldsymbol{s}\big{(}\tau_{l,n}^{(j)}\!+\!\nu_{l,i,n}^{(j)}\big{)}\Big{)}\!+\!\mathbf{w}_{n}^{(j)} \tag{5}\]
where \(\boldsymbol{s}(\tau)\triangleq[s(-(N_{\text{s}}\!-\!1)/2\,T_{\text{s}}\!-\!\tau)\;\;\cdots\;\;s((N_{\text{s}}\!-\!1)/2T_{\text{s}}\!-\!\tau)]^{\text{T}}\in\mathbb{C}^{N_{\text{s}}\times 1}\) is the discrete-time transmit pulse. The first term contains the sum over the line-of-sight (LOS) component (\(l=1\)) and \(L^{(j)}_{n}-1\) specular MPCs (for \(l\in\{2,\ldots,L^{(j)}_{n}\}\)). While the first term inside the parentheses represents the actual signal components with their corresponding amplitudes \(\alpha_{l,n}^{(j)}\in\mathbb{C}\) and delays \(\tau_{l,n}^{(j)}\), the second term is a summation of \(S^{(j)}_{l}\) additional sub-components with amplitudes \(\alpha_{l,n}^{(j)}\beta_{l,i,n}^{(j)}\), where \(\beta_{l,i,n}^{(j)}\in\mathbb{R}\) is a relative dampening variable, and delays \(\tau_{l,n}^{(j)}+\nu_{l,i,n}^{(j)}\) with \(\nu_{l,i,n}^{(j)}\) being the excess delay. The delays \(\tau_{l,n}^{(j)}\) are proportional to the distances (ranges) between the agent and either the \(j\)th PA (for \(l=1\)) or the associated VAs (for \(l\in\{2,\ldots,L^{(j)}_{n}\}\)). That is, \(\tau_{l,n}^{(j)}=d_{l,n}^{(j)}/c=\big{\|}\boldsymbol{p}_{n}-\boldsymbol{p}_{l,\text{va}}^{(j)}\big{\|}/c\), where \(c\) is the speed of light. The measurement noise vector \(\mathbf{w}_{n}^{(j)}\in\mathbb{C}^{N_{\text{s}}\times 1}\) is a zero-mean, circularly-symmetric complex Gaussian random vector with covariance matrix \(\sigma^{(j)\,2}\boldsymbol{I}_{N_{\text{s}}}\) and noise variance \(\sigma^{(j)\,2}=N_{0}^{(j)}/T_{\text{s}}\). The component SNR of MPC \(l\) is \(\mathrm{SNR}_{l,n}^{(j)}=|\alpha_{l,n}^{(j)}|^{2}\|\boldsymbol{s}\big{(}\tau_{l,n}^{(j)}\big{)}\|^{2}/\sigma^{(j)\,2}\). The corresponding normalized amplitudes are \(\mathsf{u}_{l,n}^{(j)}\triangleq\mathrm{SNR}_{l,n}^{(j)\frac{1}{2}}\) for the MPCs and \(\mathsf{u}_{l,i,n}^{(j)}\triangleq\mathrm{SNR}_{l,i,n}^{(j)\frac{1}{2}}=\beta_{l,i,n}^{(j)}\mathsf{u}_{l,n}^{(j)}\) for the sub-components, respectively. Details about the signal model given in (5) are provided in Appendix A.
To capture effects such as non-calibrated antennas, the scattering from a user-body as well as non-ideal reflective surfaces, we introduce the dispersion parameters \(\psi_{\mathrm{d},l,n}^{(j)}\) and \(\psi_{\mathrm{u},l,n}^{(j)}\). In this work, we assume the _following restrictions to this model:_ (i) the excess delays of the additional sub-components after each MPC \(l\) have the same support, i.e., \(\nu_{l,i,n}^{(j)}\in[0,\psi_{\mathrm{d},l,n}^{(j)}]\) with \(\psi_{\mathrm{d},l,n}^{(j)}\triangleq\psi_{\mathrm{d},n}\), and (ii) the corresponding relative dampening variables are constant with the same value for each MPC \(l\), i.e., \(\beta_{l,i,n}^{(j)}\triangleq\psi_{\mathrm{u},l,n}^{(j)}\) with \(\psi_{\mathrm{u},l,n}^{(j)}\triangleq\psi_{\mathrm{u},n}\). This model can be applied to ultra-wideband systems with non-calibrated antennas that introduce delay dispersion or to environments containing moderate non-ideal reflective surfaces that are approximately similar in behavior and do not change significantly over the explored area. An exemplary signal as well as the dispersion model is shown in Fig. 1b.2
Footnote 2: Note that the proposed algorithm can be reformulated in line with [21] for the general case with individual delay supports \(\psi_{\mathrm{d},l,n}^{(j)}\) and more complex amplitude distributions for \(\beta_{l,i,n}^{(j)}\), especially for multiple-antenna systems providing multiple MPC parameters (delay, AOA, AOD) [4, 11, 16].
### _Parametric Channel Estimation_
By applying at each time \(n\) a channel estimation and detection algorithm (CEDA) [18, 19, 20] to the observed discrete signal vector \(\boldsymbol{s}_{\text{rx},n}^{(j)}\), one obtains, for each anchor \(j\), a number of \(M_{n}^{(j)}\) measurements denoted by \(\boldsymbol{z}_{m,n}^{(j)}\) with \(m\in\mathcal{M}_{n}^{(j)}\triangleq\{1,\ldots,M_{n}^{(j)}\}\). Each \(\boldsymbol{z}_{m,n}^{(j)}=[z_{\mathrm{d}m,n}^{(j)}\;z_{\mathrm{u}m,n}^{(j)}]^{\mathsf{T}}\), representing a potential MPC parameter estimate, contains a distance measurement \(z_{\mathrm{d}m,n}^{(j)}\in[0,d_{\text{max}}]\) and a normalized amplitude measurement \(z_{\mathrm{u}m,n}^{(j)}\in[\gamma,\infty)\), where \(\gamma\) is the detection threshold. The CEDA decomposes the signal \(\boldsymbol{s}_{\text{rx},n}^{(j)}\) into individual, decorrelated components according to (5), reducing the number of dimensions (as \(M_{n}^{(j)}\) is usually much smaller than \(N_{\text{s}}\)). It thus compresses the information contained in \(\boldsymbol{s}_{\text{rx},n}^{(j)}\) into \(\boldsymbol{z}_{n}^{(j)}=[\boldsymbol{z}_{1,n}^{(j)\mathsf{T}}\cdots\boldsymbol{z}_{M_{n}^{(j)},n}^{(j)\mathsf{T}}]^{\mathsf{T}}\). The stacked vector \(\boldsymbol{z}_{n}=[\boldsymbol{z}_{n}^{(1)\mathsf{T}}\cdots\boldsymbol{z}_{n}^{(J)\mathsf{T}}]^{\mathsf{T}}\) is used by the proposed algorithm as a noisy measurement.
## IV System Model
At each time \(n\), the state \(\mathbf{x}_{n}=[\boldsymbol{p}_{n}^{\mathsf{T}}\;\mathbf{v}_{n}^{\mathsf{T}}]^{\mathsf{T}}\) of the agent consists of its position \(\boldsymbol{p}_{n}\) and velocity \(\mathbf{v}_{n}\). We also introduce the augmented agent state \(\tilde{\mathbf{x}}_{n}=[\mathbf{x}_{n}^{\mathsf{T}}\;\boldsymbol{\psi}_{n}^{\mathsf{T}}]^{\mathsf{T}}\) that contains the dispersion parameters \(\boldsymbol{\psi}_{n}=[\psi_{\mathrm{d},n}\;\psi_{\mathrm{u},n}]^{\mathsf{T}}\). In line with [11, 20, 23], we account for the unknown number of VAs by introducing for each PA \(j\) potential VAs (PVAs) \(k\in\mathcal{K}_{n}^{(j)}\triangleq\{1,\ldots,K_{n}^{(j)}\}\). The number of PVAs \(K_{n}^{(j)}\) is the maximum possible number of VAs of PA \(j\) that produced measurements so far [23] (i.e., \(K_{n}^{(j)}\) increases with time). The state of PVA \((j,k)\) is denoted as \(\mathbf{y}_{k,n}^{(j)}\triangleq\big{[}\mathbf{x}_{k,n}^{(j)\mathsf{T}}\;\mathbf{r}_{k,n}^{(j)}\big{]}^{\mathsf{T}}\) with \(\mathbf{x}_{k,n}^{(j)}=\big{[}\mathbf{p}_{k,\mathrm{va}}^{(j)\mathsf{T}}\;\mathsf{u}_{k,n}^{(j)}\big{]}^{\mathsf{T}}\), which includes the normalized amplitude \(\mathsf{u}_{k,n}^{(j)}\) [11, 20]. The existence/nonexistence of PVA \(k\) is modeled by the existence variable \(\mathbf{r}_{k,n}^{(j)}\in\{0,1\}\) in the sense that PVA \(k\) exists if and only if \(r_{k,n}^{(j)}\!=\!1\). The PVA state is considered formally also if PVA \(k\) is nonexistent, i.e., if \(r_{k,n}^{(j)}\!=\!0\).
Fig. 1: Exemplary indoor environment (a) and representative realization of a received signal (b). The floor plan in (a) includes an agent at position \(\boldsymbol{p}_{n}\), a PA at position \(\boldsymbol{p}_{\mathrm{pa}}^{(j)}\), and two VAs at positions \(\boldsymbol{p}_{l,\mathrm{va}}^{(j)}\) for the corresponding surfaces. Non-ideal antennas or reflective surfaces, indicated in (a) by the generic transfer functions \(h_{\mathrm{ant},n}^{(j)}(\tau)\) and \(h_{\mathrm{surf},n}^{(j)}(\tau)\), lead to the received signal \(\boldsymbol{s}_{\text{rx},n}^{(j)}\) shown in (b) (cf. the received signal without dispersion). The resulting measurements (MPC parameter estimates) \(\boldsymbol{z}_{m,n}^{(j)}\) are indicated in (b) alongside the proposed dispersion model.
Since a part of the PA state is unknown, we also consider the PA itself a PVA. Hence, we distinguish between the PVA \(k=1\) that explicitly represents the PA, which is a-priori existent and has known and fixed position \(\mathbf{p}_{1,\mathrm{va}}^{(j)}=\mathbf{p}_{\mathrm{pa}}^{(j)}\), and all other PVAs \(k\in\{2,\ldots,K_{n}^{(j)}\}\) whose existence and position are a-priori unknown. Note that the PVA state representing the PA still contains the normalized amplitude \(\mathsf{u}_{1,n}^{(j)}\) as well as the existence variable \(\mathsf{r}_{1,n}^{(j)}\). The states \(\mathbf{x}_{k,n}^{(j)}\) of nonexistent PVAs are obviously irrelevant. Therefore, all PDFs defined for PVA states, \(f(\mathbf{y}_{k,n})=f(\mathbf{x}_{k,n},r_{k,n})\), are of the form \(f(\mathbf{x}_{k,n}^{(j)},0)=f_{k,n}^{(j)}f_{d}(\mathbf{x}_{k,n}^{(j)})\), where \(f_{d}(\mathbf{x}_{k,n}^{(j)})\) is an arbitrary "dummy PDF" and \(f_{k,n}^{(j)}\in[0,1]\) is a constant. We also define the stacked vectors \(\mathbf{y}_{n}^{(j)}\triangleq\big{[}\mathbf{y}_{1,n}^{(j)\mathsf{T}}\cdots\mathbf{y}_{K_{n}^{(j)},n}^{(j)\mathsf{T}}\big{]}^{\mathsf{T}}\) and \(\mathbf{y}_{n}\triangleq\big{[}\mathbf{y}_{n}^{(1)\mathsf{T}}\cdots\mathbf{y}_{n}^{(J)\mathsf{T}}\big{]}^{\mathsf{T}}\). Note that according to the model introduced in Section III, \(\boldsymbol{\psi}_{n}\) is common for all PVAs. However, this model can be extended to individual dispersion parameters for each PVA (see [21]).
### _State Evolution_
For each PVA with state \(\mathbf{y}_{k,n-1}^{(j)}\), \(k\in\mathcal{K}_{n-1}^{(j)}\triangleq\{1,\ldots,K_{n-1}^{(j)}\}\), at time \(n-1\) and PA \(j\), there is one "legacy" PVA with state \(\underline{\mathbf{y}}_{k,n}^{(j)}\triangleq\big{[}\underline{\mathbf{x}}_{k,n}^{(j)\mathsf{T}}\;\underline{\mathbf{r}}_{k,n}^{(j)}\big{]}^{\mathsf{T}}\), \(k\in\mathcal{K}_{n-1}^{(j)}\), at time \(n\) and PA \(j\). We also define the joint states \(\underline{\mathbf{y}}_{n}^{(j)}\triangleq\big{[}\underline{\mathbf{y}}_{1,n}^{(j)\mathsf{T}}\cdots\underline{\mathbf{y}}_{K_{n-1}^{(j)},n}^{(j)\mathsf{T}}\big{]}^{\mathsf{T}}\) and \(\underline{\mathbf{y}}_{n}\triangleq\big{[}\underline{\mathbf{y}}_{n}^{(1)\mathsf{T}}\cdots\underline{\mathbf{y}}_{n}^{(J)\mathsf{T}}\big{]}^{\mathsf{T}}\). Assuming that the augmented agent state as well as the PVA states of all PAs evolve independently across \(k\), \(n\), and \(j\), the joint state-transition PDF factorizes as [3, 23]
\[f\big{(}\tilde{\mathbf{x}}_{n},\underline{\mathbf{\mathbf{y}}}_{n}| \tilde{\mathbf{x}}_{n-1},\mathbf{\mathbf{y}}_{n-1}\big{)}=f(\mathbf{x}_{n}|\mathbf{x}_{n-1})f( \mathbf{\psi}_{n}|\mathbf{\psi}_{n-1})\\ \times\prod_{j=1}^{J}\prod_{k=1}^{K_{n-1}^{(j)}}f\big{(}\underline {\mathbf{\mathbf{y}}}_{k,n}^{(j)}\big{|}\mathbf{y}_{k,n-1}^{(j)}\big{)} \tag{6}\]
where \(f(\underline{\mathbf{y}}_{k,n}^{(j)}|\mathbf{y}_{k,n-1}^{(j)})\triangleq f\big{(}\underline{\mathbf{x}}_{k,n}^{(j)},\underline{\mathbf{r}}_{k,n}^{(j)}|\mathbf{x}_{k,n-1}^{(j)},r_{k,n-1}^{(j)}\big{)}\) is the legacy PVA state-transition PDF. If a PVA did not exist at time \(n-1\), i.e., \(r_{k,n-1}^{(j)}\!=\!0\), it cannot exist as a legacy PVA at time \(n\) either. Thus,
\[f\big{(}\underline{\mathbf{\mathbf{x}}}_{k,n}^{(j)},r_{k,n}^{(j)}\big{|}\mathbf{x}_{k,n-1}^{(j)},0\big{)}=\begin{cases}f_{d}\big{(}\underline{\mathbf{\mathbf{x}}}_{k,n }^{(j)}\big{)},&\underline{\mathbf{\mathbf{r}}}_{k,n}^{(j)}\!=\!0\\ 0,&\underline{\mathbf{\mathbf{r}}}_{k,n}^{(j)}\!=\!1.\end{cases} \tag{7}\]
If PVA existed at time \(n\!-\!1\), i.e., \(r_{k,n-1}^{(j)}\!=\!1\), it either dies, i.e., \(\underline{\mathbf{\mathbf{r}}}_{k,n}^{(j)}\!=\!0\), or survives, i.e., \(\underline{\mathbf{\mathbf{r}}}_{k,n}^{(j)}\!=\!1\) with survival probability denoted as \(p_{\mathrm{s}}\). If it does survive, its new state \(\underline{\mathbf{\mathbf{y}}}_{k,n}^{(j)}\) is distributed according to the state-transition PDF \(f\big{(}\underline{\mathbf{\mathbf{x}}}_{k,n}^{(j)}\big{|}\mathbf{x}_{k,n-1}^{(j)} \big{)}\triangleq\delta\big{(}\underline{\mathbf{\mathbf{p}}}_{k,\mathrm{va}}^{(j)} -\mathbf{\mathbf{p}}_{k,\mathrm{va}}^{(j)}\big{)}f\big{(}u_{k,n}^{(j)}\big{|}u_{k,n- 1}^{(j)}\big{)}\)[3, 11]. Thus,
\[f\big{(}\underline{\mathbf{\mathbf{x}}}_{k,n}^{(j)},\underline{\mathbf{ \mathbf{r}}}_{k,n}^{(j)}|\mathbf{x}_{k,n-1}^{(j)},1\big{)}\\ =\begin{cases}(1\!-\!p_{\mathrm{s}})f_{d}\big{(}\underline{\mathbf{ \mathbf{x}}}_{k,n}^{(j)}\big{)},&\underline{\mathbf{\mathbf{r}}}_{k,n}^{(j)}\!=\!0\\ p_{\mathrm{s}}\delta\big{(}\underline{\mathbf{\mathbf{p}}}_{k,\mathrm{va}}^{(j)}- \mathbf{\mathbf{p}}_{k,\mathrm{va}}^{(j)}\big{)}f\big{(}u_{k,n}^{(j)}\big{|}u_{k,n- 1}^{(j)}\big{)},&\underline{\mathbf{\mathbf{r}}}_{k,n}^{(j)}\!=\!1\end{cases}. \tag{8}\]
The agent state \(\mathbf{x}_{n}\) with state-transition PDF \(f(\mathbf{x}_{n}|\mathbf{x}_{n-1})\) is assumed to evolve in time according to a 2-dimensional, constant-velocity and stochastic-acceleration model [33] (linear movement) given as \(\mathbf{x}_{n}=\mathbf{A}\,\mathbf{x}_{n-1}+\mathbf{B}\,\mathbf{w}_{n}\), with the acceleration process \(\mathbf{w}_{n}\) being i.i.d. across \(n\), zero-mean, and Gaussian with covariance matrix \(\sigma_{\mathrm{w}}^{2}\,\mathbf{I}_{2}\); \(\sigma_{\mathrm{w}}\) is the acceleration standard deviation, and \(\mathbf{A}\in\mathbb{R}^{4\times 4}\) and \(\mathbf{B}\in\mathbb{R}^{4\times 2}\) are defined according to [33, p. 273], with observation period \(\Delta T\). The state-transition PDFs of the dispersion parameter states \(f(\boldsymbol{\psi}_{n}|\boldsymbol{\psi}_{n-1})=f(\psi_{\mathrm{d},n}|\psi_{\mathrm{d},n-1})f(\psi_{\mathrm{u},n}|\psi_{\mathrm{u},n-1})\) are assumed to evolve independently across \(n\). Since both dispersion parameters are strictly positive and independent, we model the individual state-transition PDFs by Gamma PDFs given respectively by \(f(\psi_{\mathrm{d},n}|\psi_{\mathrm{d},n-1})=\mathcal{G}(\psi_{\mathrm{d},n};q_{\mathrm{d}},\psi_{\mathrm{d},n-1})\) and \(f(\psi_{\mathrm{u},n}|\psi_{\mathrm{u},n-1})=\mathcal{G}(\psi_{\mathrm{u},n};q_{\mathrm{u}},\psi_{\mathrm{u},n-1})\), where \(q_{\mathrm{d}}\) and \(q_{\mathrm{u}}\) represent the respective state noise parameters [21, 24]. Note that a small \(q\) implies a large state transition uncertainty. The state-transition PDF of the normalized amplitude \(\underline{\mathsf{u}}_{k,n}^{(j)}\) is modeled by a Rician PDF, i.e., \(f(\underline{u}_{k,n}^{(j)}|u_{k,n-1}^{(j)})=f_{\text{TRice}}(\underline{u}_{k,n}^{(j)};\sigma_{\mathrm{u},k},u_{k,n-1}^{(j)},0)\).
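A minimal sketch of these state-transition models follows. The constant-velocity matrices are the standard ones from [33]; the Gamma scale \(\psi/q\), which keeps the transition mean at the previous value, is an assumption on the parameterization of \(\mathcal{G}(\cdot;q,\cdot)\) and not spelled out in the text.

```python
import numpy as np

def cv_matrices(dT):
    # Constant-velocity model [33]: x_n = A x_{n-1} + B w_n, x = [p1 p2 v1 v2]^T.
    A = np.array([[1, 0, dT, 0],
                  [0, 1, 0, dT],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    B = np.array([[dT**2 / 2, 0],
                  [0, dT**2 / 2],
                  [dT, 0],
                  [0, dT]], dtype=float)
    return A, B

def predict_agent(particles, dT, sigma_w, rng):
    # Draw from f(x_n | x_{n-1}) for each agent particle (shape (N, 4)).
    A, B = cv_matrices(dT)
    w = rng.normal(0.0, sigma_w, size=(particles.shape[0], 2))
    return particles @ A.T + w @ B.T

def predict_dispersion(psi, q, rng):
    # Gamma state transition with shape q; scale psi/q keeps the mean at
    # the previous value (assumed parameterization).
    return rng.gamma(shape=q, scale=psi / q)
```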
### _Measurement Model_

In case measurement \(\boldsymbol{z}_{m,n}^{(j)}\) originates from PVA \(k\), its likelihood function conditioned on the dispersion variables \(\nu_{k,n}^{(j)}\) and \(\beta_{k,n}^{(j)}\) (introduced below) factorizes as \(f(\boldsymbol{z}_{m,n}^{(j)}|\boldsymbol{x}_{n},\nu_{k,n}^{(j)},\beta_{k,n}^{(j)},\boldsymbol{x}_{k,n}^{(j)})=f(z_{\mathrm{d}m,n}^{(j)}|\boldsymbol{p}_{n},\nu_{k,n}^{(j)},\boldsymbol{x}_{k,n}^{(j)})\,f(z_{\mathrm{u}m,n}^{(j)}|\beta_{k,n}^{(j)},u_{k,n}^{(j)})\). The distance measurement is modeled by the Gaussian PDF \(f(z_{\mathrm{d}m,n}^{(j)}|\boldsymbol{p}_{n},\nu_{k,n}^{(j)},\boldsymbol{x}_{k,n}^{(j)})\triangleq f_{\mathrm{N}}\big(z_{\mathrm{d}m,n}^{(j)};\,d(\boldsymbol{p}_{k,\mathrm{va}}^{(j)},\boldsymbol{p}_{n})+\nu_{k,n}^{(j)},\,\sigma_{\mathrm{d}}^{2}(u_{k,n}^{(j)})\big)\) with \(d(\boldsymbol{p}_{k,\mathrm{va}}^{(j)},\boldsymbol{p}_{n})\triangleq\|\boldsymbol{p}_{n}-\boldsymbol{p}_{k,\mathrm{va}}^{(j)}\|\) and with variance determined from the Fisher information given by \(\sigma_{\text{d}}^{2}(u)=c^{2}/(8\,\pi^{2}\,\beta_{\text{bw}}^{2}\,u^{2})\) with \(\beta_{\text{bw}}\) being the root mean squared bandwidth [35, 36] (see Section VI). The likelihood function of the corresponding normalized amplitude measurement \(z_{\mathrm{u}m,n}^{(j)}\) is obtained as4
Footnote 4: The proposed model describes the distribution of the amplitude estimates of the radio signal model given in (5) [37, 38, 39, 20].
\[f(z_{\mathrm{u}m,n}^{(j)}|\beta_{k,n}^{(j)},u_{k,n}^{(j)})\triangleq f_{\text{TRice}}(z_{\mathrm{u}m,n}^{(j)};\sigma_{\mathrm{u}}(\beta_{k,n}^{(j)}u_{k,n}^{(j)}),\beta_{k,n}^{(j)}u_{k,n}^{(j)},\gamma) \tag{11}\]
with scale parameter \(\sigma_{\mathrm{u}}(\beta_{k,n}^{(j)}u_{k,n}^{(j)})\) and non-centrality parameter \(\beta_{k,n}^{(j)}u_{k,n}^{(j)}\), which is cut off at the detection threshold of the CEDA \(\gamma\) [39, 20]. As for the distance likelihood function, the scale parameter is determined from the Fisher information given as \(\sigma_{\mathrm{u}}^{2}(u)=1/2+u\,/(4N_{\text{s}})\). Note that this expression reduces to \(1/2\) if the additive white Gaussian noise (AWGN) variance \(\sigma^{(j)\,2}\) is assumed to be known or \(N_{\text{s}}\) grows indefinitely (see [20, Appendix D] for a detailed derivation). The proposed normalized amplitude likelihood function (11) implies a probability of detection \(p_{\text{D}}(\beta_{k,n}^{(j)}u_{k,n}^{(j)})\), which corresponds to the probability mass of the corresponding Rician distribution above the CEDA detection threshold \(\gamma\), i.e., \(p_{\text{D}}(u)\triangleq Q_{1}(u/\sigma_{\mathrm{u}}(u),\gamma/\sigma_{\mathrm{u}}(u))\) [20].
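The detection probability \(p_{\text{D}}(u)\) can be evaluated numerically as sketched below; the identity between the Marcum Q-function and the noncentral chi-squared survival function is used again. The SNR value and the number of samples in the example are placeholders.

```python
import numpy as np
from scipy.stats import ncx2

def sigma_u(u, Ns):
    # Fisher-information based scale of the amplitude likelihood.
    return np.sqrt(0.5 + u / (4 * Ns))

def p_detect(u, gamma, Ns):
    # p_D(u) = Q_1(u / sigma_u(u), gamma / sigma_u(u)), with
    # Q_1(a, b) = ncx2.sf(b^2, df=2, nc=a^2).
    s = sigma_u(u, Ns)
    return ncx2.sf((gamma / s)**2, df=2, nc=(u / s)**2)

# Example (placeholder values): component SNR of 15 dB, threshold gamma = 2.
u = 10**(15 / 20)            # normalized amplitude = SNR^(1/2)
print(p_detect(u, 2.0, 161)) # close to 1 for a strong component
```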
From the radio signal model (5) the joint PDF of the dispersion variables can be determined as
\[f(\nu_{k,n}^{(j)},\beta_{k,n}^{(j)}|\boldsymbol{\uppsi}_{n}) =\delta(\nu_{k,n}^{(j)})\,\delta(\beta_{k,n}^{(j)}-1)\] \[\qquad+f_{\text{U}}(\nu_{k,n}^{(j)};0,\psi_{\text{d},n})\delta( \beta_{k,n}^{(j)}-\psi_{\text{u},n}). \tag{12}\]
The PDF of a single measurement \(\boldsymbol{z}_{m,n}^{(j)}\) can now be obtained by integrating out the dispersion variables as
\[f(\boldsymbol{z}_{m,n}^{(j)}|\tilde{\boldsymbol{x}}_{n},\boldsymbol{x}_{k,n}^{(j)})=f(\boldsymbol{z}_{m,n}^{(j)}|\boldsymbol{x}_{n},\boldsymbol{\psi}_{n},\boldsymbol{x}_{k,n}^{(j)})\] \[=\int f(\boldsymbol{z}_{m,n}^{(j)}|\boldsymbol{x}_{n},\nu_{k,n}^{(j)},\beta_{k,n}^{(j)},\boldsymbol{x}_{k,n}^{(j)})\,f(\nu_{k,n}^{(j)},\beta_{k,n}^{(j)}|\boldsymbol{\psi}_{n})\,\mathrm{d}\nu_{k,n}^{(j)}\,\mathrm{d}\beta_{k,n}^{(j)}\] \[=f(z_{\mathrm{d}m,n}^{(j)}|\boldsymbol{p}_{n},\boldsymbol{x}_{k,n}^{(j)})f(z_{\mathrm{u}m,n}^{(j)}|u_{k,n}^{(j)})+f(z_{\mathrm{d}m,n}^{(j)}|\boldsymbol{p}_{n},\boldsymbol{\psi}_{n},\boldsymbol{x}_{k,n}^{(j)})f(z_{\mathrm{u}m,n}^{(j)}|u_{k,n}^{(j)},\psi_{\mathrm{u},n}) \tag{13}\]
with the main-component distance PDF
\[f(z_{dm,n}^{(j)}|\boldsymbol{p}_{n},\boldsymbol{x}_{k,n}^{(j)})=f_{\text{N}} (z_{dm,n}^{(j)};\,d(\boldsymbol{p}_{k,\text{va}}^{(j)},\boldsymbol{p}_{n}),\, \sigma_{\text{d}}^{2}(u_{k,n}^{(j)})) \tag{14}\]
and the main-component amplitude PDF
\[f(z_{\mathrm{u}m,n}^{(j)}|u_{k,n}^{(j)})=f_{\text{TRice}}(z_{\mathrm{u}m,n}^{(j)};\sigma_{\mathrm{u}}(u_{k,n}^{(j)}),u_{k,n}^{(j)},\gamma) \tag{15}\]
as well as the additional sub-component distance PDF
\[f(z_{\mathrm{d}m,n}^{(j)}|\boldsymbol{p}_{n},\boldsymbol{\psi}_{n},\boldsymbol{x}_{k,n}^{(j)})=\frac{1}{\psi_{\mathrm{d},n}}\int_{0}^{\psi_{\mathrm{d},n}}f_{\mathrm{N}}\big(z_{\mathrm{d}m,n}^{(j)};\,d(\boldsymbol{p}_{k,\mathrm{va}}^{(j)},\boldsymbol{p}_{n})+\nu,\,\sigma_{\mathrm{d}}^{2}(\psi_{\mathrm{u},n}u_{k,n}^{(j)})\big)\,\mathrm{d}\nu \tag{16}\]

and the additional sub-component amplitude PDF \(f(z_{\mathrm{u}m,n}^{(j)}|u_{k,n}^{(j)},\psi_{\mathrm{u},n})\) is obtained from (11) with \(\beta_{k,n}^{(j)}=\psi_{\mathrm{u},n}\). Measurements not originating from any PVA, i.e., false alarms, are assumed to be statistically independent of the PVA states and are modeled by a Poisson point process with mean \(\mu_{\text{fa}}\) and PDF \(f_{\text{fa}}(\boldsymbol{z}_{m,n}^{(j)})\).
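Since (16) is the convolution of a uniform PDF with a Gaussian PDF, it evaluates in closed form in terms of the Gaussian CDF; a minimal sketch (illustrative only):

```python
import numpy as np
from scipy.stats import norm

def subcomp_distance_pdf(z, d, psi_d, sigma_d):
    # Closed form of the uniform-Gaussian convolution in (16):
    # (1/psi_d) * [Phi((z - d)/sigma_d) - Phi((z - d - psi_d)/sigma_d)],
    # with z, d, psi_d, sigma_d all in distance units.
    return (norm.cdf((z - d) / sigma_d)
            - norm.cdf((z - d - psi_d) / sigma_d)) / psi_d
```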
At each time \(n\), the measurements are subject to data association uncertainty: it is not known which measurement originated from which PVA, and it is possible that a measurement \(\mathbf{z}^{(j)}_{m,n}\) did not originate from any PVA (_false alarm_) or that a PVA did not give rise to any measurement (_missed detection_). The associations between measurements \(\mathbf{z}^{(j)}_{m,n}\) and the PVAs at time \(n\) are described by the binary PVA-oriented association variables with entries [21, 26]
\[\mathsf{a}^{(j)}_{km,n}\triangleq\begin{cases}1,&\text{if measurement $m$ was } \text{generated by PVA $k$}\\ 0,&\text{otherwise}.\end{cases}\]
We distinguish between legacy and new PVA association vectors given, respectively, as \(\underline{\boldsymbol{a}}^{(j)}_{k,n}\triangleq[\underline{a}^{(j)}_{k1,n}\ \cdots\ \underline{a}^{(j)}_{kM^{(j)}_{n},n}]^{\mathsf{T}}\) with \(k\in\mathcal{K}^{(j)}_{n-1}\) and \(\overline{\boldsymbol{a}}^{(j)}_{m,n}\triangleq[\overline{a}^{(j)}_{m1,n}\ \cdots\ \overline{a}^{(j)}_{mm,n}]^{\mathsf{T}}\) with \(m\in\mathcal{M}^{(j)}_{n}\), and define \(\boldsymbol{a}^{(j)}_{k,n}\triangleq[\underline{\boldsymbol{a}}^{(j)\mathsf{T}}_{k,n}\ \overline{\boldsymbol{a}}^{(j)\mathsf{T}}_{k,n}]^{\mathsf{T}}\) [26]. We also define \(\boldsymbol{a}^{(j)}_{n}\triangleq[\boldsymbol{a}^{(j)\mathsf{T}}_{1,n}\ \cdots\ \boldsymbol{a}^{(j)\mathsf{T}}_{K^{(j)}_{n},n}]^{\mathsf{T}}\) and \(\boldsymbol{a}_{n}\triangleq[\boldsymbol{a}^{(1)\mathsf{T}}_{n}\ \cdots\ \boldsymbol{a}^{(J)\mathsf{T}}_{n}]^{\mathsf{T}}\). To reduce computational complexity, following [3, 22, 23], we use a redundant description of the association variables, i.e., we introduce the measurement-oriented association variable
\[\mathsf{b}^{(j)}_{m,n}\triangleq\begin{cases}k\in\{1,\ldots,K^{(j)}_{n}\},& \text{if measurement $m$ was }\\ &\text{generated by PVA $k$}\\ 0,&\text{otherwise}\end{cases}\]
and define the measurement-oriented association vector \(\boldsymbol{b}^{(j)}_{n}=[\mathsf{b}^{(j)}_{1,n}\ \cdots\ \mathsf{b}^{(j)}_{M^{(j)}_{n},n}]^{\mathsf{T}}\). We also define \(\boldsymbol{b}_{n}\triangleq[\boldsymbol{b}^{(1)\mathsf{T}}_{n}\ \cdots\ \boldsymbol{b}^{(J)\mathsf{T}}_{n}]^{\mathsf{T}}\). Note that any data association event that can be expressed by both random vectors \(\mathbf{a}_{n}\) and \(\mathbf{b}_{n}\) is a valid event, i.e., any measurement can be generated by at most one PVA. This redundant representation of events makes it possible to develop scalable SPAs [3, 22, 23, 20].
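The redundancy of the two representations can be made concrete with a small sketch that expands a measurement-oriented vector \(\boldsymbol{b}\) into the corresponding PVA-oriented variables (illustrative only):

```python
import numpy as np

def b_to_a(b, K):
    # Convert the measurement-oriented vector b (length M, entries in
    # {0, ..., K}) into the PVA-oriented matrix a (K x M, entries in {0, 1}).
    M = len(b)
    a = np.zeros((K, M), dtype=int)
    for m, k in enumerate(b):
        if k > 0:
            a[k - 1, m] = 1
    return a

# b = [2, 0, 1]: measurement 1 from PVA 2, measurement 2 is a false alarm,
# measurement 3 from PVA 1. Each column of a has at most one nonzero entry,
# so any event expressible in both representations is automatically valid.
print(b_to_a([2, 0, 1], K=3))
```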
### _Joint Posterior PDF_
By using common assumptions [3, 20, 23], and for fixed and thus observed measurements \(\mathbf{z}_{1:n}\), it can be shown that the joint posterior PDF of \(\tilde{\mathbf{x}}_{1:n}\) (\(\tilde{\mathbf{x}}_{1:n}\triangleq[\widetilde{\mathbf{x}}_{1}^{\mathsf{T}} \cdots\widetilde{\mathbf{x}}_{n}^{\mathsf{T}}]^{\mathsf{T}}\)), \(\mathbf{y}_{1:n}\), \(\mathbf{a}_{1:n}\), and \(\mathbf{b}_{1:n}\), conditioned on \(\mathbf{z}_{1:n}\) for all time steps \(n^{\prime}\in\{1,\ldots,n\}\) is given by
\[f(\tilde{\boldsymbol{x}}_{1:n},\boldsymbol{y}_{1:n},\boldsymbol{a}_{1:n},\boldsymbol{b}_{1:n}|\boldsymbol{z}_{1:n})\] \[\propto f(\boldsymbol{x}_{1})f(\boldsymbol{\psi}_{1})\Bigg(\prod_{j^{\prime}=1}^{J}\prod_{k^{\prime}=1}^{K_{1}^{(j^{\prime})}}f\big(\underline{\boldsymbol{y}}_{k^{\prime},1}^{(j^{\prime})}\big)\Bigg)\prod_{n^{\prime}=2}^{n}f(\boldsymbol{x}_{n^{\prime}}|\boldsymbol{x}_{n^{\prime}-1})\,f(\boldsymbol{\psi}_{n^{\prime}}|\boldsymbol{\psi}_{n^{\prime}-1})\] \[\times\prod_{j=1}^{J}\Bigg(\prod_{k=1}^{K_{n^{\prime}-1}^{(j)}}g\big(\underline{\boldsymbol{y}}_{k,n^{\prime}}^{(j)},\boldsymbol{\psi}_{n^{\prime}}\big|\boldsymbol{y}_{k,n^{\prime}-1}^{(j)}\big)\prod_{m^{\prime}=1}^{M_{n^{\prime}}^{(j)}}q\big(\tilde{\boldsymbol{x}}_{n^{\prime}},\underline{\boldsymbol{y}}_{k,n^{\prime}}^{(j)},\underline{a}_{km^{\prime},n^{\prime}}^{(j)};\boldsymbol{z}_{m^{\prime},n^{\prime}}^{(j)}\big)\,\underline{\Psi}\big(\underline{a}_{km^{\prime},n^{\prime}}^{(j)},b_{m^{\prime},n^{\prime}}^{(j)}\big)\Bigg)\] \[\times\Bigg(\prod_{m=1}^{M_{n^{\prime}}^{(j)}}f(\overline{\boldsymbol{y}}_{m,n^{\prime}}^{(j)}|\tilde{\boldsymbol{x}}_{n^{\prime}})\,v\big(\tilde{\boldsymbol{x}}_{n^{\prime}},\overline{\boldsymbol{y}}_{m,n^{\prime}}^{(j)},\overline{a}_{mm,n^{\prime}}^{(j)};\boldsymbol{z}_{m,n^{\prime}}^{(j)}\big)\prod_{h=1}^{m-1}q\big(\tilde{\boldsymbol{x}}_{n^{\prime}},\overline{\boldsymbol{y}}_{m,n^{\prime}}^{(j)},\overline{a}_{mh,n^{\prime}}^{(j)};\boldsymbol{z}_{h,n^{\prime}}^{(j)}\big)\,\overline{\Psi}\big(\overline{a}_{mh,n^{\prime}}^{(j)},b_{h,n^{\prime}}^{(j)}\big)\Bigg) \tag{19}\]
where \(q\big{(}\tilde{\mathbf{x}}_{n},\underline{\mathbf{y}}^{(j)}_{k,n},a^{(j)}_{km,n};\bm {z}^{(j)}_{m,n}\big{)}\), \(f(\overline{\mathbf{y}}_{m,n})\), \(\Psi(a^{(j)}_{km,n},b^{(j)}_{m,n})\), and \(v\big{(}\tilde{\mathbf{x}}_{n^{\prime}},\overline{\mathbf{y}}^{(j)}_{m,n^{\prime}}, \overline{a}^{(j)}_{mm,n^{\prime}};\mathbf{z}^{(j)}_{m,n^{\prime}}\big{)}\) are explained in what follows. The _pseudo state-transition function_ is given by
\[g(\underline{\boldsymbol{y}}_{k,n}^{(j)},\boldsymbol{\psi}_{n}|\boldsymbol{y}_{k,n-1}^{(j)})\triangleq\begin{cases}e^{-\mu_{\mathrm{m}}\big(\boldsymbol{\psi}_{n},\underline{u}_{k,n}^{(j)}\big)}f\big(\underline{\boldsymbol{x}}_{k,n}^{(j)},1\big|\boldsymbol{x}_{k,n-1}^{(j)},r_{k,n-1}^{(j)}\big),&\underline{r}_{k,n}^{(j)}\!=\!1\\ f(\underline{\boldsymbol{x}}_{k,n}^{(j)},0|\boldsymbol{x}_{k,n-1}^{(j)},r_{k,n-1}^{(j)}),&\underline{r}_{k,n}^{(j)}\!=\!0\end{cases} \tag{20}\]

where \(\mu_{\mathrm{m}}(\boldsymbol{\psi}_{n},u)\) denotes the mean number of measurements generated by a PVA with normalized amplitude \(u\),
and the _pseudo prior distribution_ as
\[f(\overline{\mathbf{y}}^{(j)}_{k,n}|\tilde{\mathbf{x}}_{n})\triangleq\begin{cases}\mu_{ n}f_{n}\big{(}\overline{\mathbf{x}}^{(j)}_{k,n}|\tilde{\mathbf{x}}_{n}\big{)}e^{-\mu_{n} \big{(}\mathbf{\psi}_{n},\overline{\mathbf{u}}^{(j)}_{k,n}\big{)}},\ \overline{\mathbf{r}}^{(j)}_{k,n}\!\!=\!1\\ f_{d}\big{(}\overline{\mathbf{x}}^{(j)}_{k,n}\big{)},\ \
The pseudo likelihood functions \(q(\cdot)\) and \(v(\cdot)\) involve the likelihood function (13), the mean number \(\mu_{\mathrm{m}}(\cdot)\) of measurements generated by a PVA, and the false alarm PDF \(f_{\text{fa}}(\cdot)\) with mean \(\mu_{\text{fa}}\) (their explicit forms can be read off the messages (34) and (37) below). The factors \(\underline{\Psi}(\underline{a}_{kl,n}^{(j)},b_{l,n}^{(j)})\) and \(\overline{\Psi}(\overline{a}_{ml,n}^{(j)},b_{l,n}^{(j)})\) are indicator functions that enforce consistency of the two association representations, i.e.,

\[\underline{\Psi}\big(\underline{a}_{kl,n}^{(j)},b_{l,n}^{(j)}\big)\triangleq\begin{cases}0,&\underline{a}_{kl,n}^{(j)}\!=\!1,\,b_{l,n}^{(j)}\!\neq\!k\ \ \text{or}\ \ \underline{a}_{kl,n}^{(j)}\!=\!0,\,b_{l,n}^{(j)}\!=\!k\\ 1,&\text{otherwise}\end{cases} \tag{24}\]

\[\overline{\Psi}\big(\overline{a}_{ml,n}^{(j)},b_{l,n}^{(j)}\big)\triangleq\begin{cases}0,&\overline{a}_{ml,n}^{(j)}\!=\!1,\,b_{l,n}^{(j)}\!\neq\!\underline{K}\!+\!m\ \ \text{or}\ \ \overline{a}_{ml,n}^{(j)}\!=\!0,\,b_{l,n}^{(j)}\!=\!\underline{K}\!+\!m\\ 1,&\text{otherwise}\end{cases} \tag{25}\]

with \(\underline{K}\triangleq K_{n-1}^{(j)}\).

### _State Estimation_

We aim at estimating the augmented agent state \(\tilde{\mathbf{x}}_{n}\) from all measurements \(\boldsymbol{z}_{1:n}\) collected up to time \(n\) by the minimum mean-square error (MMSE) estimator [40, Ch. 4]

\[\tilde{\boldsymbol{x}}_{n}^{\text{MMSE}}\triangleq\int\tilde{\boldsymbol{x}}_{n}\,f(\tilde{\boldsymbol{x}}_{n}|\boldsymbol{z}_{1:n})\,\mathrm{d}\tilde{\boldsymbol{x}}_{n} \tag{26}\]

where \(\tilde{\boldsymbol{x}}_{n}^{\text{MMSE}}=[\boldsymbol{x}_{n}^{\text{MMSE}\,\mathsf{T}}\,\boldsymbol{\psi}_{n}^{\text{MMSE}\,\mathsf{T}}]^{\mathsf{T}}\). The map of the environment is represented by reflective surfaces described by PVAs. Therefore, the states \(\boldsymbol{x}_{k,n}^{(j)}\) of the detected PVAs \(k\in\{1,\ldots,K_{n}^{(j)}\}\) must be estimated. This relies on the marginal posterior existence probabilities \(p(r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})=\int f(\boldsymbol{x}_{k,n}^{(j)},r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})\,\mathrm{d}\boldsymbol{x}_{k,n}^{(j)}\) and the marginal posterior PDFs \(f(\boldsymbol{x}_{k,n}^{(j)}|r_{k,n}^{(j)}\!=\!1,\boldsymbol{z}_{1:n})\!=\!f(\boldsymbol{x}_{k,n}^{(j)},r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})/p(r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})\). A PVA \(k\) is declared to exist if \(p(r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})>p_{\text{de}}\), where \(p_{\text{de}}\) is a detection threshold [40, Ch. 2]. To avoid that the number of PVA states grows indefinitely, PVA states with \(p(r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})\) below a threshold \(p_{\text{pr}}\) are removed from the state space ("pruned"). The number \(\hat{K}_{n}^{(j)}\) of PVA states that are considered to exist is the estimate of the total number \(L_{n}^{(j)}\) of VAs visible at time \(n\). For existing PVAs, an estimate of the state \(\boldsymbol{x}_{k,n}^{(j)}\) can again be calculated by the MMSE [40, Ch. 4]
\[\mathbf{x}_{k,n}^{(j)\text{MMSE}}\,\triangleq\int\mathbf{x}_{k,n}^{(j)}\,f(\mathbf{x}_{k, n}^{(j)}\,|\,r_{k,n}^{(j)}\!=\!1,\mathbf{z}_{1:n})\,\mathrm{d}\mathbf{x}_{k,n}^{(j)}. \tag{27}\]
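The detection and pruning logic is straightforward; a sketch with a hypothetical PVA container (a list of dictionaries holding the posterior existence probability) is:

```python
def detect_and_prune(pvas, p_de=0.5, p_pr=1e-3):
    # pvas: list of dicts with key 'p_exist' = p(r_k = 1 | z_{1:n}).
    # Keep a PVA in the state space if p_exist >= p_pr ("pruning");
    # declare it an estimated VA if p_exist > p_de ("detection").
    kept = [p for p in pvas if p['p_exist'] >= p_pr]
    detected = [p for p in kept if p['p_exist'] > p_de]
    return kept, detected
```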
The calculation of \(f(\tilde{\boldsymbol{x}}_{n}|\boldsymbol{z}_{1:n})\), \(p(r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})\), and \(f(\boldsymbol{x}_{k,n}^{(j)}|r_{k,n}^{(j)}\!=\!1,\boldsymbol{z}_{1:n})\) from the joint posterior \(f(\tilde{\boldsymbol{x}}_{1:n},\boldsymbol{y}_{1:n},\boldsymbol{a}_{1:n},\boldsymbol{b}_{1:n}|\boldsymbol{z}_{1:n})\) by direct marginalization is not feasible. By performing sequential particle-based message passing using the SPA rules [3, 11, 41, 42, 43, 16] on the factor graph in Fig. 2, approximations ("beliefs") \(\tilde{f}(\tilde{\boldsymbol{x}}_{n})\) and \(\tilde{f}(\boldsymbol{y}_{k,n}^{(j)})\) of the marginal posterior PDFs \(f(\tilde{\boldsymbol{x}}_{n}|\boldsymbol{z}_{1:n})\), \(p(r_{k,n}^{(j)}\!=\!1|\boldsymbol{z}_{1:n})\), and \(f(\boldsymbol{x}_{k,n}^{(j)}|r_{k,n}^{(j)}\!=\!1,\boldsymbol{z}_{1:n})\) can be obtained in an efficient way for the agent state as well as all legacy and new PVA states.
## V Proposed Sum-Product Algorithm
The factor graph in Fig. 2 has cycles; therefore, we have to decide on a specific order of message computation [41, 44]. We choose the order according to the following rules: (i) messages are only sent forward in time; (ii) messages are updated and processed in parallel for each time step; (iii) along an edge connecting the augmented agent state variable node and a new PVA, messages are only sent from the former to the latter. This implies that messages from new PVAs do not contribute to the augmented agent state update. The corresponding messages are shown in Fig. 2. Note that this scheduling is suboptimal since the extrinsic messages of the augmented agent state are neglected; this calculation order is chosen solely to reduce the computational demand. With these rules, the message passing equations of the SPA [41] yield the following operations at each time step.
### _Prediction Step_
A prediction step is performed for the augmented agent state and all legacy VAs \(k\in\mathcal{K}_{n-1}^{(j)}\). It has the form of
\[\alpha(\tilde{\mathbf{x}}_{n}) =\int f(\tilde{\mathbf{x}}_{n}|\tilde{\mathbf{x}}_{n-1})b(\tilde{\mathbf{x}}_ {n-1})\mathrm{d}\tilde{\mathbf{x}}_{n-1} \tag{28}\] \[\alpha(\underline{\mathbf{x}}_{k,n}^{(j)},r_{k,n}^{(j)}) =\sum_{r_{k,n-1}^{(j)}\in\{0,1\}}\int g(\underline{\mathbf{x}}_{k,n}^ {(j)},\underline{r}_{k,n}^{(j)}|\mathbf{x}_{k,n-1}^{(j)},r_{k,n-1}^{(j)})\] \[\times b(\mathbf{x}_{k,n-1}^{(j)},r_{k,n-1}^{(j)})\mathrm{d}\mathbf{x}_{k,n -1}^{(j)} \tag{29}\]
with \(b(\tilde{\boldsymbol{x}}_{n-1})\) and \(b(\boldsymbol{x}_{k,n-1}^{(j)},r_{k,n-1}^{(j)})\) denoting the beliefs of the augmented agent state and of legacy PVA \(k\) calculated at the previous time step, respectively. The summation in (29) can be further written as
\[\alpha(\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)}\!=\!1)=p_{\text{s}}\,e^{-\mu_{\mathrm{m}}\big(\boldsymbol{\psi}_{n},\underline{u}_{k,n}^{(j)}\big)}\!\int\!f(\underline{\boldsymbol{x}}_{k,n}^{(j)},1|\boldsymbol{x}_{k,n-1}^{(j)},1)\,b(\boldsymbol{x}_{k,n-1}^{(j)},1)\,\mathrm{d}\boldsymbol{x}_{k,n-1}^{(j)} \tag{30}\]
and \(\alpha(\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)}\!=\!0)=\underline{\alpha}_{k}^{\mathrm{n},(j)}f_{\mathrm{d}}(\underline{\boldsymbol{x}}_{k,n}^{(j)})\) with

\[\underline{\alpha}_{k}^{\mathrm{n},(j)}\triangleq(1-p_{\text{s}})\!\int\!b(\boldsymbol{x}_{k,n-1}^{(j)},1)\,\mathrm{d}\boldsymbol{x}_{k,n-1}^{(j)}+\tilde{b}_{k,n-1} \tag{31}\]
where \(\tilde{b}_{k,n-1}=\int b(\boldsymbol{x}_{k,n-1}^{(j)},0)\,\mathrm{d}\boldsymbol{x}_{k,n-1}^{(j)}\) approximates the probability of non-existence of legacy PVA \(k\). For the new PVAs, \(\alpha(\overline{\boldsymbol{x}}_{m,n}^{(j)},\overline{r}_{m,n}^{(j)})\triangleq f(\overline{\boldsymbol{y}}_{m,n}^{(j)}|\tilde{\boldsymbol{x}}_{n})\) with \(m\in\mathcal{M}_{n}^{(j)}\).
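A particle-based sketch of the legacy-PVA prediction (29)-(31) is given below. The Poisson factor \(e^{-\mu_{\mathrm{m}}(\cdot)}\) of (30) is omitted for brevity (an illustrative simplification), and the regularization noise on the VA positions follows the choice made in Section VI.

```python
import numpy as np

def predict_legacy_pva(particles, weights, p_exist, p_s, sigma_a, rng):
    # particles: (N, 2) VA position particles with normalized weights;
    # p_exist: previous existence probability of this legacy PVA.
    # VA positions are static up to a small regularization noise.
    particles = particles + rng.normal(0.0, sigma_a, size=particles.shape)
    alpha_exist = p_s * p_exist                           # mass of alpha(x, r=1), cf. (30)
    alpha_nonexist = (1 - p_s) * p_exist + (1 - p_exist)  # cf. (31)
    return particles, weights, alpha_exist, alpha_nonexist
```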
### _Measurement Evaluation_
The messages \(\varepsilon^{[p]}(a_{kl,n}^{(j)})\) sent from factor nodes \(q(\tilde{\boldsymbol{x}}_{n},\boldsymbol{y}_{k,n}^{(j)},a_{kl,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\) to variable nodes \(a_{kl,n}^{(j)}\) at message passing (MP) iteration \(p\), with \(k\in\{1,\ldots,K_{n}^{(j)}\}\) and \(l\in\{1,\ldots,M_{n}^{(j)}\}\), are defined as

\[\varepsilon^{[p]}(a_{kl,n}^{(j)})=\iint\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{l}^{[p]}(\boldsymbol{y}_{k,n}^{(j)})\,q(\tilde{\boldsymbol{x}}_{n},\boldsymbol{y}_{k,n}^{(j)},a_{kl,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\,\mathrm{d}\tilde{\boldsymbol{x}}_{n}\,\mathrm{d}\boldsymbol{y}_{k,n}^{(j)}. \tag{32}\]
The messages from factor nodes \(v(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{y}}_{m,n}^{(j)},\overline{a}_{mm,n}^{(j)};\boldsymbol{z}_{m,n}^{(j)})\) to variable nodes \(\overline{a}_{mm,n}^{(j)}\), \(m\in\{1,\ldots,M_{n}^{(j)}\}\), are given as
\[\varepsilon^{[p]}(\mathbf{\overline{a}}_{mm,n}^{(j)})= \iint\tilde{\beta}_{mn}^{[p]}(\tilde{\mathbf{x}}_{n})\alpha_{m}^{[p]} (\mathbf{\overline{y}}_{m,n}^{(j)})\] \[\times v(\tilde{\mathbf{x}}_{n},\mathbf{\overline{y}}_{m,n}^{(j)},\mathbf{ \overline{a}}_{mm,n}^{(j)},\mathbf{z}_{m,n}^{(j)})\mathrm{d}\tilde{\mathbf{x}}_{n} \mathrm{d}\mathbf{\overline{y}}_{m,n}^{(j)} \tag{33}\]
Note that for \(p=1\), \(\alpha_{l}^{[1]}(\boldsymbol{y}_{k,n}^{(j)})\triangleq\alpha(\boldsymbol{x}_{k,n}^{(j)},r_{k,n}^{(j)})\). For \(p>1\), \(\alpha_{l}^{[p]}(\boldsymbol{y}_{k,n}^{(j)})\) is calculated according to Section V-C. The message \(\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})\) will be defined in Section V-D. Using (32), \(\varepsilon^{[p]}(a_{kl,n}^{(j)})\) can be evaluated further. For the messages containing information about legacy PVAs, this results in
\[\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}\!=\!1)=\iint\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{l}^{[p]}(\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)}\!=\!1)\,\frac{\mu_{\mathrm{m}}\big(\tilde{\boldsymbol{x}}_{n},\underline{\boldsymbol{x}}_{k,n}^{(j)}\big)f(\boldsymbol{z}_{l,n}^{(j)}|\tilde{\boldsymbol{x}}_{n},\underline{\boldsymbol{x}}_{k,n}^{(j)})}{\mu_{\text{fa}}f_{\text{fa}}(\boldsymbol{z}_{l,n}^{(j)})}\,\mathrm{d}\underline{\boldsymbol{x}}_{k,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n}\] \[\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}\!=\!0)=\iint\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})\Big(\alpha_{l}^{[p]}(\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)}\!=\!1)+\alpha_{l}^{[p]}(\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)}\!=\!0)\Big)\,\mathrm{d}\underline{\boldsymbol{x}}_{k,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n}. \tag{34}\]
This can be further simplified by dividing both messages by \(\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}=0)\), which results in \(\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}\!=\!0)=1\).
The messages \(\varepsilon^{[p]}(\overline{a}_{kl,n}^{(j)})\) can be obtained similarly by using (32) and (33), yielding

\[\varepsilon^{[p]}(\overline{a}_{kl,n}^{(j)}\!=\!1)=\iint\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{l}^{[p]}(\overline{\boldsymbol{x}}_{k,n}^{(j)},\overline{r}_{k,n}^{(j)}\!=\!1)\,\frac{\mu_{\mathrm{m}}\big(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{x}}_{k,n}^{(j)}\big)f(\boldsymbol{z}_{l,n}^{(j)}|\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{x}}_{k,n}^{(j)})}{\mu_{\text{fa}}f_{\text{fa}}(\boldsymbol{z}_{l,n}^{(j)})}\,\mathrm{d}\overline{\boldsymbol{x}}_{k,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n} \tag{35}\]

\[\varepsilon^{[p]}(\overline{a}_{kl,n}^{(j)}\!=\!0)=\iint\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})\Big(\alpha_{l}^{[p]}(\overline{\boldsymbol{x}}_{k,n}^{(j)},\overline{r}_{k,n}^{(j)}\!=\!1)+\alpha_{l}^{[p]}(\overline{\boldsymbol{x}}_{k,n}^{(j)},\overline{r}_{k,n}^{(j)}\!=\!0)\Big)\,\mathrm{d}\overline{\boldsymbol{x}}_{k,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n} \tag{36}\]

\[\varepsilon^{[p]}(\overline{a}_{mm,n}^{(j)}\!=\!1)=\iint\tilde{\beta}_{mm}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{m}^{[p]}(\overline{\boldsymbol{x}}_{m,n}^{(j)},\overline{r}_{m,n}^{(j)}\!=\!1)\,\frac{\mu_{\mathrm{m}}\big(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{x}}_{m,n}^{(j)}\big)f(\boldsymbol{z}_{m,n}^{(j)}|\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{x}}_{m,n}^{(j)})}{\mu_{\text{fa}}f_{\text{fa}}(\boldsymbol{z}_{m,n}^{(j)})}\,\mathrm{d}\overline{\boldsymbol{x}}_{m,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n} \tag{37}\]

\[\varepsilon^{[p]}(\overline{a}_{mm,n}^{(j)}\!=\!0)=\iint\tilde{\beta}_{mm}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{m}^{[p]}(\overline{\boldsymbol{x}}_{m,n}^{(j)},\overline{r}_{m,n}^{(j)}\!=\!0)\,\mathrm{d}\overline{\boldsymbol{x}}_{m,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n} \tag{38}\]
The expressions can be simplified by dividing all messages by \(\varepsilon(\mathbf{\overline{a}}_{kl,n}^{(j)}=0)\). This results in \(\varepsilon(\mathbf{\overline{a}}_{kl,n}^{(j)}=0)=1\). For \(\varepsilon^{[p]}(\mathbf{\overline{a}}_{mm,n}^{(j)})\), the messages differ, which leads to
\[\varepsilon^{[p]}(\overline{a}_{mm,n}^{(j)}\!=\!1)=\frac{\displaystyle\iint\tilde{\beta}_{mm}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{m}^{[p]}(\overline{\boldsymbol{x}}_{m,n}^{(j)},1)\,\frac{\mu_{\mathrm{m}}\big(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{x}}_{m,n}^{(j)}\big)f(\boldsymbol{z}_{m,n}^{(j)}|\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{x}}_{m,n}^{(j)})}{\mu_{\text{fa}}f_{\text{fa}}(\boldsymbol{z}_{m,n}^{(j)})}\,\mathrm{d}\overline{\boldsymbol{x}}_{m,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n}}{\displaystyle\iint\tilde{\beta}_{mm}^{[p]}(\tilde{\boldsymbol{x}}_{n})\,\alpha_{m}^{[p]}(\overline{\boldsymbol{x}}_{m,n}^{(j)},0)\,\mathrm{d}\overline{\boldsymbol{x}}_{m,n}^{(j)}\,\mathrm{d}\tilde{\boldsymbol{x}}_{n}}\,,\qquad\varepsilon^{[p]}(\overline{a}_{mm,n}^{(j)}\!=\!0)=1\,. \tag{39}\]
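In a particle-based implementation, the double integrals above are approximated by weighted sums over the particle representations. A sketch for the ratio in (34) is given below; the amplitude factors, the dispersion model, and the \(\tilde{\beta}\) message are omitted (an illustrative simplification), and all names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def eps_ratio(z_d, agent_pos, agent_w, va_pos, va_w,
              p_exist, sigma_d, mu_m, mu_fa_ffa):
    """Monte Carlo approximation of eps(a=1)/eps(a=0) in (34) for one
    legacy PVA and one distance measurement z_d (weights normalized)."""
    # pairwise distances between agent particles (N,2) and PVA particles (M,2)
    d = np.linalg.norm(agent_pos[:, None, :] - va_pos[None, :, :], axis=-1)
    lik = norm.pdf(z_d, loc=d, scale=sigma_d)   # f(z_d | p_n, x_k)
    w = agent_w[:, None] * va_w[None, :]        # joint particle weights
    return p_exist * mu_m * np.sum(w * lik) / mu_fa_ffa
```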
### _Extrinsic Information_
For each legacy PVA, the messages sent from variable node \(\underline{\mathbf{y}}_{k,n}^{(j)}\) to the factor nodes \(q(\tilde{\boldsymbol{x}}_{n},\underline{\boldsymbol{y}}_{k,n}^{(j)},\underline{a}_{kl,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\) with \(k\in\mathcal{K}_{n-1}^{(j)}\), \(l\in\mathcal{M}_{n}^{(j)}\) at MP iteration \(p\in\{1,\ldots,P\}\) are defined as
\[\alpha_{l}^{[p]}(\underline{\mathbf{y}}_{k,n}^{(j)})=\alpha(\underline{\mathbf{y}}_{k,n}^{(j)})\prod_{\begin{subarray}{c}\ell=1\\ \ell\neq l\end{subarray}}^{M_{\ell}^{(j)}}\gamma_{\ell}^{[p-1]}(\underline{\bm {y}}_{k,n}^{(j)})\,. \tag{47}\]
For new PVAs, a similar expression can be obtained for the messages from variable node \(\overline{\boldsymbol{y}}_{m,n}^{(j)}\) to the factor nodes \(q(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{y}}_{m,n}^{(j)},\overline{a}_{ml,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\) and the factor node \(v(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{y}}_{m,n}^{(j)},\overline{a}_{mm,n}^{(j)};\boldsymbol{z}_{m,n}^{(j)})\), i.e.,
\[\alpha_{l}^{[p]}(\overline{\mathbf{y}}_{m,n}^{(j)})=\alpha(\overline{\mathbf{y}}_{m, n}^{(j)})\prod_{\begin{subarray}{c}\ell=1\\ \ell\neq l\end{subarray}}^{m}\gamma_{\ell}^{[p-1]}(\overline{\mathbf{y}}_{m,n}^{(j) })\,. \tag{48}\]
### _Measurement update for augmented agent state_
Due to the proposed scheduling, the augmented agent state is only updated by messages of legacy PVAs and only at the end of the iterative message passing. This results in
\[\tilde{\beta}_{kl}^{[p]}(\tilde{\boldsymbol{x}}_{n})=\alpha(\tilde{\boldsymbol{x}}_{n}) \tag{49}\] \[\beta_{kl}^{[P](j)}(\tilde{\boldsymbol{x}}_{n})=\sum_{\underline{a}_{kl,n}^{(j)}\in\{0,1\}}\sum_{\underline{r}_{k,n}^{(j)}\in\{0,1\}}\int\alpha_{l}^{[P]}(\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)})\,q(\tilde{\boldsymbol{x}}_{n},\underline{\boldsymbol{x}}_{k,n}^{(j)},\underline{r}_{k,n}^{(j)},\underline{a}_{kl,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\,\nu_{kl}^{[P]}(\underline{a}_{kl,n}^{(j)})\,\mathrm{d}\underline{\boldsymbol{x}}_{k,n}^{(j)} \tag{50}\]
which can be further simplified to
\[\beta_{kl}^{[P]}(\tilde{\mathbf{x}}_{n}) =\int\alpha_{l}^{[P]}(\mathbf{x}_{k,n}^{(j)},1)\Big{(}q(\tilde{\mathbf{x }}_{n},\underline{\mathbf{x}}_{k,n}^{(j)},1,1,\mathbf{z}_{l,n}^{(j)})\nu_{kl}^{[P]}(1)\] \[\quad+\nu_{kl}^{[P]}(0)\Big{)}\mathrm{d}\underline{\mathbf{x}}_{k,n} ^{(j)}+\underline{\mathbf{\alpha}}_{k}^{\text{n},(j)}\underline{\nu}_{kl}^{[p]}(0). \tag{51}\]
### _Belief calculation_
Once all messages are available, the beliefs approximating the desired marginal posterior PDFs are obtained. The belief for the agent state is given, up to a normalization factor, by
\[b(\tilde{\mathbf{x}}_{n})\propto\alpha(\tilde{\mathbf{x}}_{n})\prod_{j=1}^{J}\prod_{k =1}^{K_{n-1}^{(j)}}\prod_{m=1}^{M_{m}^{(j)}}\beta_{km}^{[P](j)}(\tilde{\mathbf{x} }_{n}) \tag{52}\]
where we only use messages from legacy objects. This belief (after normalization) provides an approximation of the marginal posterior PDF \(f(\tilde{\mathbf{x}}_{n}|\mathbf{z}_{1:n})\), and it is used instead of \(f(\tilde{\mathbf{x}}_{n}|\mathbf{z}_{1:n})\) in (26).
Furthermore, the beliefs of the legacy VAs \(b(\underline{\mathbf{y}}_{k}^{(j)})\) and new VAs \(b(\overline{\mathbf{y}}_{k}^{(j)})\) are given as
\[b(\underline{\mathbf{y}}_{k,n}^{(j)}) \propto\alpha(\underline{\mathbf{y}}_{k,n}^{(j)})\prod_{l=1}^{M_{m}^ {(j)}}\gamma_{l}^{[P]}(\underline{\mathbf{y}}_{k,n}^{(j)}) \tag{53}\] \[b(\overline{\mathbf{y}}_{m,n}^{(j)}) \propto\alpha(\overline{\mathbf{y}}_{m,n}^{(j)})\prod_{l=1}^{m}\gamma _{l}^{[P]}(\overline{\mathbf{y}}_{m,n}^{(j)}) \tag{54}\]
A computationally feasible approximate calculation of the various messages and beliefs can be based on the sequential Monte Carlo (particle-based) implementation approach introduced in [23, 20, 42].
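One building block such particle-based implementations repeatedly need is the resampling of weighted particle sets; a common choice (an illustrative sketch, not prescribed by [23, 20, 42]) is systematic resampling:

```python
import numpy as np

def systematic_resampling(weights, rng):
    # Systematic resampling: returns indices of the resampled particles
    # for a set of normalized weights (sum(weights) == 1).
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)
```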
attenuated by \(1\,\mathrm{dB}\). The state transition variances are set as \(\sigma_{\text{w}}=10^{-3}\,\mathrm{m/s^{2}}\), \(q_{\text{d}}=q_{\text{u}}=10^{4}\) [21, 24], and \(\sigma_{\text{u},k}=0.05\,u_{k,n-1}^{(j)\,\text{MMSE}}\). Note that for the normalized amplitude state \(u_{k,n-1}^{(j)}\) we use a value proportional to the MMSE estimate of the previous time step \(n-1\) as a heuristic. For the sake of numerical stability, we introduced a small regularization noise to the VA state \(\boldsymbol{p}_{k,\text{va}}\) at each time \(n\), i.e., \(\underline{\boldsymbol{p}}_{k,\text{va}}\!=\!\boldsymbol{p}_{k,\text{va}}\!+\!\boldsymbol{\omega}_{k}\), where \(\boldsymbol{\omega}_{k}\) is i.i.d. across \(k\), zero-mean, and Gaussian with covariance matrix \(\sigma_{a}^{2}\,\boldsymbol{I}_{2}\) and \(\sigma_{a}=10^{-3}\,\mathrm{m}\). We performed \(100\) simulation runs using \(20000\) particles, each using the floor plan and agent trajectory shown in Fig. 3. In each simulation run, we generated noisy measurements \(\boldsymbol{z}_{m,n}^{(j)}\) according to the measurement model proposed in Section IV-B. For numerical stability, we reduce \(\beta_{\text{bw}}\) for VAs by a factor of 4. The CEDA detection threshold is \(\gamma=2\).
The particles for the initial agent state are drawn from a 4-D uniform distribution with center \(\mathbf{x}_{0}=[\boldsymbol{p}_{0}^{\mathsf{T}}\ 0\ 0]^{\mathsf{T}}\), where \(\boldsymbol{p}_{0}\) is the starting position of the actual agent trajectory; the support of each position component about the respective center is \([-0.1\,\mathrm{m},0.1\,\mathrm{m}]\) and of each velocity component \([-0.01\,\mathrm{m/s},0.01\,\mathrm{m/s}]\). At time \(n=0\), the number of VAs is \(0\), i.e., no prior map information is available. The prior distribution for new PVA states \(f_{\text{n}}\big(\overline{\boldsymbol{x}}_{m,n}^{(j)}|\tilde{\boldsymbol{x}}_{n}\big)\) is uniform on the square region \([-15\,\mathrm{m},15\,\mathrm{m}]\times[-15\,\mathrm{m},15\,\mathrm{m}]\) around the center of the floor plan shown in Fig. 3, and the mean number of new PVAs at time \(n\) is \(\mu_{\text{n}}=0.01\). The probability of survival is \(p_{\text{s}}=0.999\), the detection threshold is \(p_{\text{de}}=0.5\), and the pruning threshold is \(p_{\text{pr}}=10^{-3}\).
The performance of the different methods discussed is measured in terms of the root mean squared error (RMSE) of the agent position and the dispersion parameters as well as the optimal subpattern assignment (OSPA) error [45] of all VAs with cutoff parameter \(c=$5\,\mathrm{m}$\) and order \(p=2\). The
Fig. 5: Estimated components and dispersion parameters for PA \(1\) for a single simulation run, represented by dot markers and boxes, respectively. The true components and respective dispersion parameters are indicated in red, the estimated components and respective dispersion parameters in black, and all measurements in gray.
Fig. 6: Results for converged simulation runs. (a) shows the RMSE of the agent position over the whole trajectory. (b) and (c) present the RMSE of the dispersion parameters. (d) and (g) present the map error in terms of the MOSPA for PA1 and PA2, respectively. (e) and (h) show the RMSE of the estimated VA positions for PA1 and PA2, respectively. (f) and (i) show the cardinality error of the estimated VAs for PA1 and PA2, respectively.
mean OSPA (MOSPA) errors and RMSEs of each unknown variable are obtained by averaging over all simulation runs. The dispersion parameters are set to fixed values over time \(n\), i.e., \(\psi_{\mathrm{d},n}=\psi_{\mathrm{d}}\) and \(\psi_{\mathrm{u},n}=\psi_{\mathrm{u}}\). We investigate the proposed algorithm with four different dispersion parameter settings, given as \(\psi_{\mathrm{d}}\), which takes values of 0 m, 0.03 m, 0.15 m and 0.3 m, and \(\psi_{\mathrm{u}}\), which is either set to 0 or 0.2. As an example, Fig. 5 depicts the evolution of the estimated components and respective dispersion parameters of PA \(1\) over time \(n\) for one simulation run. The results are shown in Fig. 6. In the case \(\psi_{\mathrm{d}}=$0\,\mathrm{m}$\) only main-component measurements are generated, which is equivalent to the system model in [11].
Fig. 6a shows the RMSE of the agent positions, Fig. 6b and 6c show the RMSE of the dispersion parameters, Fig. 6d and 6g show the MOSPA error and its VA position error and mean cardinality error contributions for PA \(1\) and PA \(2\) for the converged runs, all versus time (and for all investigated dispersion parameter settings). We declare a simulation run to be converged if \(\{\forall n:\|\mathbf{p}_{n}-\mathbf{p}_{n}^{\text{MMSE}}\|<0.2\ \text{m}\}\). Table I summarizes the number of converged runs (in percentage) as well as the number of detected VAs for all investigated dispersion parameter settings.
Fig. 6a shows that the RMSE of the agent position of the proposed algorithm is similar for all dispersion parameter settings. While the proposed algorithm significantly outperforms the algorithm in [11] in terms of converged runs for dispersion parameter settings \(\psi_{\mathrm{d}}>0\), it shows slightly reduced performance for \(\psi_{\mathrm{d}}=0\). Additionally, Fig. 7 shows the cumulative frequencies of the individual agent errors, i.e., \(\|\mathbf{p}_{n}-\mathbf{p}_{n}^{\text{MMSE}}\|\) for all simulation runs and time instances. It can be observed that the MMSE positions of the agent of the proposed algorithm show almost no large deviations, while the estimates of the algorithm in [11] exhibit large errors in many simulation runs.
For dispersion parameter settings \(\psi_{\mathrm{d}}>0\), measurements of the sub-components are available. Thus, as Fig. 6b and 6c show, the dispersion parameters are well estimated indicated by the small RMSEs. For the setting \(\psi_{\mathrm{d}}=0\), estimation of the dispersion parameters is not possible because there are no sub-component measurements, i.e., there is only one measurement generated by each VA. However, as Fig. 6a shows, this does not affect the accuracy of the agent's position estimation.
The MOSPA errors (and their VA positions and the mean cardinality error contributions) of the proposed algorithm shown in Fig. 6d and 6g are very similar for all dispersion parameter settings. They only slightly increase with increased dispersion parameter \(\psi_{\mathrm{d}}\). Only for the setting \(\psi_{\mathrm{d}}=$0.3\,\mathrm{m}$\), the proposed algorithm shows a larger cardinality error. This can be explained by looking at the distances from PA \(1\) and its corresponding VAs as shown in Fig. 4. At the end of the agent trajectory, many VAs show similar distances to the agents position making it difficult to resolve the individual components. For larger dispersion parameter \(\psi_{\mathrm{d}}\), this becomes even more challenging leading to increased MOSPA errors. For PA \(2\) and the corresponding VAs, Fig. 4 shows that all components are well separated by their distances at the end of the agent trajectory, which makes it easier for the proposed algorithm to correctly estimate the number and positions of VAs. Unlike the proposed algorithm, the algorithm in [11] completely fails to estimate the correct number of VAs for larger \(\psi_{\mathrm{d}}\) (and \(\psi_{\mathrm{u}}\)), resulting in a large cardinality error. This can be explained by the fact that the algorithm in [11] does not consider additional sub-components in the measurement and system model. We suspect that this estimation of additional spurious VAs is the reason for the large number of divergent simulation runs.
## VII Conclusions
We have proposed a new multipath-based SLAM method that can cope with multiple measurements being generated by a single environment feature, i.e., a single VA. It is based on a novel statistical measurement model that is derived from the radio signal, introducing dispersion parameters for the MPCs. The resulting likelihood function model allows capturing the measurement spread originating from non-ideal effects such as rough reflective surfaces or non-calibrated antennas. The performance results show that the proposed method is able to cope with multiple measurements being produced per VA and outperforms classical multipath-based SLAM in terms of the agent positioning error and the map MOSPA error. We show that multiple measurements are correctly associated to their corresponding VA, resulting in a correctly estimated number of VAs. Furthermore, the results indicate that the proposed algorithm generalizes to classical multipath-based SLAM for a single measurement per VA. Possible directions of future research include the extension to individual dispersion parameters for each feature as well as incorporating multiple-measurement-to-feature data association into the MVA-based SLAM method [16].
## Appendix A Radio Signal Model
In this section we derive the radio signal model described in Section III. Usually, specular reflections of radio signals at flat surfaces are modeled by VAs that are mirror images of the PAs [1, 2, 3, 4]. We start by defining the channel impulse response, given for time \(n\) and anchor \(j\) as

\[h_{\mathrm{c},n}^{(j)}(\tau)=\sum_{l=1}^{L_{n}^{(j)}}\alpha_{n,l}^{(j)}\,\delta\big{(}\tau\!-\!\tau_{n,l}^{(j)}\big{)}. \tag{55}\]
The sum comprises the LOS component (\(l=1\)) and the \(L_{n}^{(j)}-1\) specular MPCs with their corresponding
Fig. 7: Cumulative frequency of the deviation of the MMSE estimate of the agent position from the true agent position for all simulation runs and time instances. The legend is given in Fig. 6.
complex amplitudes \(\alpha_{n,l}^{(j)}\) and delays \(\tau_{n,l}^{(j)}\), respectively. The delays are related to respective distances via \(\tau_{n,l}^{(j)}=d_{n,l}^{(j)}/c\) with \(c\) being the speed of light.
In non-ideal radio channels we observe rays to arrive as clusters [6, 7, 46, 47]. The reason for this observation is manifold. Typical examples are non-calibrated antennas, the scattering from a user-body as well as non-ideal reflective surfaces. Fig. 1 visualizes these effects, introducing generic transfer functions \(h_{\text{am},n}^{(j)}(\tau)\) and \(h_{\text{surf},n}^{(j)}(\tau)\). We propose to model the overall transfer function encompassing all considered dispersion effects as
\[h_{d,n}^{(j)}(\tau)=\delta(\tau)+\sum_{i=1}^{S_{i}^{(j)}}\beta_{l,i,n}^{(j)} \delta(\tau-\nu_{l,i}^{(j)}) \tag{56}\]
where \(\beta_{l,i,n}^{(j)}\in\mathbb{R}\) is a relative dampening variable and \(\nu_{l,i,n}^{(j)}\) is the excess delay. The presented model constitutes a marked Poisson point process [47]. Its statistical properties, i.e., the distributions of \(\nu_{l,i,n}^{(j)}\), \(\beta_{l,i,n}^{(j)}\), and \(S_{l}^{(j)}\), are discussed in Sections III and IV in detail. We obtain the complex baseband signal received at the \(j\)th anchor, given by the convolution of \(h_{\mathrm{d},n}^{(j)}(\tau)\) and \(h_{\mathrm{c},n}^{(j)}(\tau)\) with the transmitted signal \(s(t)\), as
\[r_{n}^{(j)}(t)=\sum_{l=1}^{L_{n}^{(j)}}\alpha_{n,l}^{(j)}\Big{(}s(t-\tau_{n,l}^{(j)})+\sum_{i=1}^{S_{l}^{(j)}}\beta_{l,i,n}^{(j)}s(t-\tau_{n,l}^{(j)}-\nu_{l,i,n}^{(j)})\Big{)}+\mathsf{n}_{n}^{(j)}(t)\,. \tag{57}\]
The last term \(\mathsf{n}_{n}^{(j)}(t)\) represents an additive white Gaussian noise process with double-sided power spectral density \(N_{0}^{(j)}/2\). The received complex baseband signal at the \(j\)th PA is sampled \(N_{\text{s}}\) times with sampling frequency \(f_{\text{s}}=1/T_{\text{s}}\) yielding an observation period of \(T=N_{\text{s}}\,T_{\text{s}}\). By stacking the samples, we obtain the discrete-time received signal vector given in (5).
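A sketch of a baseband sampler implementing (5)/(57) is given below; the pulse shape, the number of samples, and the parameter lists are placeholders supplied by the caller.

```python
import numpy as np

def received_signal(pulse, taus, alphas, nus, betas, N0, Ts, rng, Ns=161):
    """Sampled sketch of (5)/(57): sum of delayed pulses plus AWGN.

    pulse(t): transmit pulse s(t) as a callable; taus, alphas: MPC delays
    and complex amplitudes; nus[l], betas[l]: excess delays and relative
    dampings of the sub-components of MPC l; Ns is assumed here.
    """
    t = (np.arange(Ns) - (Ns - 1) / 2) * Ts   # symmetric sampling grid
    r = np.zeros(Ns, dtype=complex)
    for tau, a, nu_l, b_l in zip(taus, alphas, nus, betas):
        r += a * pulse(t - tau)               # main MPC
        for nu, b in zip(nu_l, b_l):          # additional sub-components
            r += a * b * pulse(t - tau - nu)
    sigma = np.sqrt(N0 / Ts)                  # noise variance sigma^2 = N0/Ts
    noise = sigma / np.sqrt(2) * (rng.standard_normal(Ns)
                                  + 1j * rng.standard_normal(Ns))
    return r + noise
```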
## Appendix B Data Association
This section contains the detailed derivation of the data association-related messages \(\varphi_{kl}^{[p]}(b_{l,n}^{(j)})\) and \(\nu_{kl}^{[p]}(a_{kl,n}^{(j)})\). Using the measurement evaluation messages in (32) and (33), the messages \(\varphi_{kl}^{[p]}(b_{l,n}^{(j)})\) and \(\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)})\) are calculated by
\[\underline{\varphi}_{kl}^{[p]}(b_{l,n}^{(j)})=\sum_{\underline{a}_{kl,n}^{(j)}\in\{0,1\}}\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)})\,\underline{\Psi}(\underline{a}_{kl,n}^{(j)},b_{l,n}^{(j)}) \tag{58}\] \[\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)})=\sum_{\overline{a}_{ml,n}^{(j)}\in\{0,1\}}\varepsilon^{[p]}(\overline{a}_{ml,n}^{(j)})\,\overline{\Psi}(\overline{a}_{ml,n}^{(j)},b_{l,n}^{(j)}) \tag{59}\]
for \(k\in\{1,\ldots,\underline{K}\}\) with \(\underline{K}\triangleq K_{n-1}^{(j)}\) and \(m,l\in\{1,\ldots,M_{n}^{(j)}\}\) and are sent from factor node \(\underline{\Psi}(\underline{a}_{kl,n}^{(j)},b_{l,n}^{(j)})\) and \(\overline{\Psi}(\overline{a}_{ml,n}^{(j)},b_{l,n}^{(j)})\) to variable node \(b_{l,n}^{(j)}\), respectively. By making use of the indicator functions given in (24) and (25), respectively, (58) and (59) are also given as
\[\underline{\varphi}_{kl}^{[p]}(b_{l,n}^{(j)}=k) =\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}=1) \tag{60}\] \[\underline{\varphi}_{kl}^{[p]}(b_{l,n}^{(j)}\neq k) =\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}=0)\] (61) \[\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}=\underline{K}+m) =\varepsilon^{[p]}(\overline{a}_{ml,n}^{(j)}=1)\] (62) \[\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}\neq\underline{K}+m) =\varepsilon^{[p]}(\overline{a}_{ml,n}^{(j)}=0) \tag{63}\]
The messages in (60) - (63) can be rewritten in the form of
\[\underline{\varphi}_{kl}^{[p]}(b_{l,n}^{(j)})= \begin{cases}\frac{\varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}=1)}{ \varepsilon^{[p]}(\underline{a}_{kl,n}^{(j)}=0)},&b_{l,n}^{(j)}=k\\ 1,&b_{l,n}^{(j)}\neq k\end{cases} \tag{64}\] \[\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)})= \begin{cases}\frac{\varepsilon^{[p]}(\overline{a}_{ml,n}^{(j)}=1)}{ \varepsilon^{[p]}(\overline{a}_{ml,n}^{(j)}=0)},&b_{l,n}^{(j)}=\underline{K}+m \\ 1,&b_{l,n}^{(j)}\neq\underline{K}+m.\end{cases} \tag{65}\]
The messages \(\underline{\nu}_{kl}^{[p]}(\underline{a}_{kl,n}^{(j)})\) and \(\overline{\nu}_{ml}^{[p]}(\overline{a}_{ml,n}^{(j)})\) represent the messages from variable node \(\underline{a}_{kl,n}^{(j)}\) to factor node \(q(\tilde{\boldsymbol{x}}_{n},\underline{\boldsymbol{y}}_{k,n}^{(j)},\underline{a}_{kl,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\) and from variable node \(\overline{a}_{ml,n}^{(j)}\) to factor node \(q(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{y}}_{m,n}^{(j)},\overline{a}_{ml,n}^{(j)};\boldsymbol{z}_{l,n}^{(j)})\), respectively. \(\overline{\nu}_{mm}^{[p]}(\overline{a}_{mm,n}^{(j)})\) represents the message from variable node \(\overline{a}_{mm,n}^{(j)}\) to factor node \(v(\tilde{\boldsymbol{x}}_{n},\overline{\boldsymbol{y}}_{m,n}^{(j)},\overline{a}_{mm,n}^{(j)};\boldsymbol{z}_{m,n}^{(j)})\). They are defined as
\[\underline{\nu}_{kl}^{[p+1]}(\underline{a}_{kl,n}^{(j)})=\sum_{b_{l,n}^{(j)}=0}^{\underline{K}+M_{n}^{(j)}}\underline{\Psi}(\underline{a}_{kl,n}^{(j)},b_{l,n}^{(j)})\prod_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)})\prod_{m=1}^{M_{n}^{(j)}}\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}) \tag{66}\]

\[\overline{\nu}_{ml}^{[p+1]}(\overline{a}_{ml,n}^{(j)})=\sum_{b_{l,n}^{(j)}=0}^{\underline{K}+M_{n}^{(j)}}\overline{\Psi}(\overline{a}_{ml,n}^{(j)},b_{l,n}^{(j)})\prod_{i=1}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)})\prod_{\begin{subarray}{c}h=1\\ h\neq m\end{subarray}}^{M_{n}^{(j)}}\overline{\varphi}_{hl}^{[p]}(b_{l,n}^{(j)}) \tag{67}\]
Using the results from (64) and (65), (66) and (67) are, respectively, rewritten as
\[\underline{\nu}_{kl}^{[p+1]}(\underline{a}_{kl,n}^{(j)}=1)=\prod_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)}=k)\prod_{m=1}^{M_{n}^{(j)}}\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}=k) \tag{68}\]

\[\underline{\nu}_{kl}^{[p+1]}(\underline{a}_{kl,n}^{(j)}=0)=\sum_{\begin{subarray}{c}b_{l,n}^{(j)}=0\\ b_{l,n}^{(j)}\neq k\end{subarray}}^{\underline{K}+M_{n}^{(j)}}\prod_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)})\prod_{m=1}^{M_{n}^{(j)}}\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}) \tag{69}\]

\[\overline{\nu}_{ml}^{[p+1]}(\overline{a}_{ml,n}^{(j)}=1)=\prod_{i=1}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)}=\underline{K}+m)\prod_{\begin{subarray}{c}h=1\\ h\neq m\end{subarray}}^{M_{n}^{(j)}}\overline{\varphi}_{hl}^{[p]}(b_{l,n}^{(j)}=\underline{K}+m) \tag{70}\]

\[\overline{\nu}_{ml}^{[p+1]}(\overline{a}_{ml,n}^{(j)}=0)=\sum_{\begin{subarray}{c}b_{l,n}^{(j)}=0\\ b_{l,n}^{(j)}\neq\underline{K}+m\end{subarray}}^{\underline{K}+M_{n}^{(j)}}\prod_{i=1}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)})\prod_{\begin{subarray}{c}h=1\\ h\neq m\end{subarray}}^{M_{n}^{(j)}}\overline{\varphi}_{hl}^{[p]}(b_{l,n}^{(j)}) \tag{71}\]

Since the messages are relevant only up to a common normalization, dividing by the value at \(\underline{a}_{kl,n}^{(j)}=0\) and \(\overline{a}_{ml,n}^{(j)}=0\), respectively, yields

\[\underline{\nu}_{kl}^{[p+1]}(\underline{a}_{kl,n}^{(j)})=\begin{cases}\dfrac{\prod_{i=1,i\neq k}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)}=k)\prod_{m=1}^{M_{n}^{(j)}}\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}=k)}{\sum_{b_{l,n}^{(j)}=0,\,b_{l,n}^{(j)}\neq k}^{\underline{K}+M_{n}^{(j)}}\prod_{i=1,i\neq k}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)})\prod_{m=1}^{M_{n}^{(j)}}\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)})},&\underline{a}_{kl,n}^{(j)}=1\\ 1,&\underline{a}_{kl,n}^{(j)}=0\end{cases} \tag{72}\]

\[\overline{\nu}_{ml}^{[p+1]}(\overline{a}_{ml,n}^{(j)})=\begin{cases}\dfrac{\prod_{i=1}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)}=\underline{K}+m)\prod_{h=1,h\neq m}^{M_{n}^{(j)}}\overline{\varphi}_{hl}^{[p]}(b_{l,n}^{(j)}=\underline{K}+m)}{\sum_{b_{l,n}^{(j)}=0,\,b_{l,n}^{(j)}\neq\underline{K}+m}^{\underline{K}+M_{n}^{(j)}}\prod_{i=1}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)})\prod_{h=1,h\neq m}^{M_{n}^{(j)}}\overline{\varphi}_{hl}^{[p]}(b_{l,n}^{(j)})},&\overline{a}_{ml,n}^{(j)}=1\\ 1,&\overline{a}_{ml,n}^{(j)}=0\end{cases} \tag{73}\]
Finally, by calculating the explicit summations and multiplications in (72) and (73), it results in
\[\underline{\nu}_{kl}^{[p+1]}(\underline{a}_{kl,n}^{(j)})=\begin{cases}\dfrac{1}{1+\sum\limits_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)}=i)+\sum\limits_{m=1}^{M_{n}^{(j)}}\overline{\varphi}_{ml}^{[p]}(b_{l,n}^{(j)}=\underline{K}+m)},&\underline{a}_{kl,n}^{(j)}=1\\ 1,&\underline{a}_{kl,n}^{(j)}=0\end{cases} \tag{74}\]

\[\overline{\nu}_{ml}^{[p+1]}(\overline{a}_{ml,n}^{(j)})=\begin{cases}\dfrac{1}{1+\sum\limits_{i=1}^{\underline{K}}\underline{\varphi}_{il}^{[p]}(b_{l,n}^{(j)}=i)+\sum\limits_{\begin{subarray}{c}h=1\\ h\neq m\end{subarray}}^{M_{n}^{(j)}}\overline{\varphi}_{hl}^{[p]}(b_{l,n}^{(j)}=\underline{K}+h)},&\overline{a}_{ml,n}^{(j)}=1\\ 1,&\overline{a}_{ml,n}^{(j)}=0.\end{cases} \tag{75}\]
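As a sanity check of the final expressions, the sketch below performs one update of (74) and (75) in terms of the ratios \(\varepsilon^{[p]}(a=1)/\varepsilon^{[p]}(a=0)\). The array layout, the function name, and the random toy inputs are our own assumptions for illustration.

```python
import numpy as np

def association_messages(eps_legacy, eps_new):
    """One update of (74)-(75), expressed through the ratios
    eps(a=1)/eps(a=0). eps_legacy has shape (K, M): legacy targets k and
    measurements l; eps_new has shape (M, M): new targets m and measurements l.
    Returns the message ratios nu(a=1)/nu(a=0) with matching shapes."""
    K, M = eps_legacy.shape
    nu_legacy = np.empty((K, M))
    nu_new = np.empty((M, M))
    for l in range(M):
        col_sum = eps_legacy[:, l].sum() + eps_new[:, l].sum()
        for k in range(K):
            # exclude the receiving edge (k, l) from the competing-association sum
            nu_legacy[k, l] = 1.0 / (1.0 + col_sum - eps_legacy[k, l])
        for m in range(M):
            nu_new[m, l] = 1.0 / (1.0 + col_sum - eps_new[m, l])
    return nu_legacy, nu_new

# Toy example with 2 legacy targets and 3 measurements.
rng = np.random.default_rng(1)
nu_l, nu_n = association_messages(rng.uniform(0, 1, (2, 3)), rng.uniform(0, 1, (3, 3)))
print(nu_l)
print(nu_n)
```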
|
2304.14955 | A Systematization of Cybersecurity Regulations, Standards and Guidelines
for the Healthcare Sector | The growing adoption of IT solutions in the healthcare sector is leading to a
steady increase in the number of cybersecurity incidents. As a result,
organizations worldwide have introduced regulations, standards, and best
practices to address cybersecurity and data protection issues in this sector.
However, the application of this large corpus of documents presents operational
difficulties, and operators continue to lag behind in resilience to cyber
attacks. This paper contributes a systematization of the significant
cybersecurity documents relevant to the healthcare sector. We collected the 49
most significant documents and used the NIST cybersecurity framework to
categorize key information and support the implementation of cybersecurity
measures. | Maria Patrizia Carello, Alberto Marchetti Spaccamela, Leonardo Querzoni, Marco Angelini | 2023-04-28T16:19:21Z | http://arxiv.org/abs/2304.14955v1 | # A Systematization of Cybersecurity Regulations,
###### Abstract
The growing adoption of IT solutions in the healthcare sector is leading to a steady increase in the number of cybersecurity incidents. As a result, organizations worldwide have introduced regulations, standards, and best practices to address cybersecurity and data protection issues in this sector. However, the application of this large corpus of documents presents operational difficulties, and operators continue to lag behind in resilience to cyber attacks. This paper contributes a systematization of the significant cybersecurity documents relevant to the healthcare sector. We collected the 49 most significant documents and used the NIST cybersecurity framework to categorize key information and support the implementation of cybersecurity measures.
Sapienza University of Rome
{carello, alberto, querzoni, angelini}@diag.uniroma1.it
## 1 Introduction
Worldwide, the _digital transformation of health services_ is seen as an important and influential process, increasing the integration of technology in healthcare organizations, ranging from the use of computers and electronic health records to home monitoring of patients, electronic medical devices, and decision support systems [71].
Digital transformation affects many aspects of healthcare systems and allows for the improvement of service quality. For example, it is known that the adoption of telemedicine decreases hospital mortality rates without a significant increase in cost [19, 32].
However, the extensive integration of technologies into existing organizations has caused cybersecurity incidents to become an increasing challenge. Therefore, preventing, mitigating, responding to, managing emergencies, and recovering from cyber-attacks are critical responsibilities in the health domain nowadays.
To answer the above needs, several regulations, standards, and best practices on healthcare security have been proposed worldwide to help and guide health organizations in improving their cybersecurity preparedness. However, the correct application of regulations, standards, and best practices poses several issues. Firstly, these guidelines have been designed by different actors for various purposes, and their fragmented nature makes integration and application challenging (_issue 1_). Furthermore, they often provide a high-level overview of security measures in a discursive manner without specifying the technical security policies that need to be implemented (_issue 2_). Moreover, there is significant overlap among documents published by different sources, and different terminology is used to refer to the same concepts (_issue 3_). Finally, the extensive use of legal jargon and cross-references to other regulations makes it difficult to parse and extract security-focused elements (_issue 4_).
This paper proposes a systematization of the corpus of documents mentioned above to overcome these issues. We extract succinct and informative excerpts related to security and data protection from non-technical sources, and then provide a consistent view of the stated security measures by analyzing the degree of overlap and filling the gaps in coverage of security-related
aspects. To accomplish this, we began by analyzing a corpus of 68 documents to identify relevant ones. From these, we extracted excerpts of interest and mapped them to the NIST Cybersecurity Framework [65]. Based on each mapped excerpt, we defined a set of cybersecurity controls that can be effectively used to build cybersecurity plans.
We also present the methodology used to conduct our study and exemplify its application in the healthcare sector, discussing findings that highlight possible areas for improvement.
The paper is organized as follows: Section 2 provides background information on cybersecurity regulations, standards, and best practices issued worldwide in the healthcare sector and illustrates related proposals; Section 3 introduces our novel methodology for systematizing such corpus of documents, and describes its results; Section 4 discusses important findings identified through this systematization; Section 5 concludes the paper.
## 2 Background and Related Work
The corpus of documents that govern cybersecurity and data protection for healthcare organizations can be grouped into three categories: _Regulations_, _Standards_, and _Best Practices_. Each category is briefly described in the following.
**Regulations** are issued by an executive authority or regulatory agency and have the force of law. They can be national or international (for the national ones, in this paper we refer to the Italian regulations). One of the first security regulations for the healthcare sector is the U.S. _Health Insurance Portability and Accountability Act (HIPAA)_[38], 1996. The main goal of HIPAA was to protect Personally Identifiable Information (PII) and to preserve privacy while allowing individuals to access their medical records. HIPAA was updated in 2003 and 2013, adding requirements for managing Electronic Protected Health Information and implementing penalties for privacy violations. The EU _General Data Protection Regulation (GDPR)_[68], 2018, regulates the processing and circulation of personal data; GDPR recognizes health data as special data that requires greater protection and specific security measures. The European Union issued the _Regulation on Medical Devices (MDR)_, 2017, that presents cybersecurity requirements of medical devices [69].
**Standards** are documents set up by authority or general consent as a model or example to be compliant with. In the last few years, several Standards have been released to promote the development of security requirements for the healthcare sector, for example, the _ISO 27799 Health informatics_[47], 2016, provides an implementation guide for the controls described in ISO/IEC 27002 and supplements them where necessary. More recently, the _ISO/TR 21332 - Health informatics_[55], 2021, provides an overview of the security and privacy of Electronic Health Records (EHR) in a cloud computing service and the _IEC 80001-1_[42], 2021, specifies security requirements for connecting medical devices.
**Best Practices** are guidelines to be used in a particular business or industry (such as healthcare) to meet cybersecurity objectives and to be compliant with regulations. For example, the _NIST Security Rule -SP 800-66_[64], 2008, summarizes HIPAA security standards to support healthcare organizations to be compliant with HIPAA regulations. In Europe, ENISA published several documents; we mention the _Procurement guidelines for cybersecurity in hospitals_, 2020, [26] and the _European Commission's (EC) Medical Devices Coordination Group (MDCG)_ published in 2020 a guide on how to fulfill all the essential cybersecurity requirements issued by the MDR and IVDR (In Vitro Diagnostic Medical Devices Regulation) regulations [61].
Related work. In the last ten years there has been a significant increase in the pace of publication about cybersecurity and healthcare [18]. The interest is motivated by the key role of cybersecurity in the healthcare sector: any disruption in health services can be a disaster for patients' health, not only for organizations. In the following, we focus on studies that aim to provide a systematization for the large number of cybersecurity documents in the healthcare domain.
Jalali et al. [58] conducted a broad work on scientific literature: they surveyed 472 scientific contributions extracted from Pubmed and Web of Science at the intersection of cybersecurity and healthcare. Their findings show that most contributions focus on technological aspects, while 32% focus on managerial and policy-making topics. Differently from our work, they do not consider regulations, standards, and best practices, making their work complementary to our approach. Mohammed [62] discusses the compliance issues and challenges for healthcare organizations in the U.S., focusing on HITECH and HIPAA. The author lists among major challenges the vagueness and ambiguity of many of the prescriptions of those documents, similar to what we identified before.
Furthermore, it has been observed how cybersecurity standards and regulations are still uncertain, overlapping, and do not entirely address healthcare-specific concerns; as a consequence complying with cybersecurity rules is a challenging activity that involves time and expense for healthcare organizations, hindering their ability to develop adequate cybersecurity programs [23, 14, 20, 60, 21]. In [59] and [73] regulations and standards for medical device software are considered focusing on the device manufacturer as the intended target; our goal is to inform the management (i.e., CISOs and DPOs) inside healthcare organizations.
As a consequence of the aforementioned considerations, it is necessary to support healthcare organizations in navigating and making sense of these documents to support the extraction and modeling of cybersecurity measures.
## 3 Methodology
This section outlines a novel four-step methodology for the systematization of cybersecurity regulations, standards, and best practices that have been published over time for the healthcare sector. In the first step, we thoroughly searched public repositories to find documents of interest. In the second step, the documents are analyzed to identify excerpts that refer to technical security and governance measures. In the third step, cybersecurity excerpts are mapped on the Subcategories of the NIST Framework [65], and in the last step, a control definition procedure is carried out for each subcategory.
All the results and additional materials are available at this link:
[https://github.com/carelloSapienza/Systematization-healthcare](https://github.com/carelloSapienza/Systematization-healthcare).
Figure 1: Methodology Steps
### Documents Collection
We explored the information available on the main official sources of European, International, and National regulators (e.g., ENISA, NIST, Salute.gov) using main searched keywords such as _cybersecurity, privacy, electronic health record (EHR), medical device, telemedicine, cloud for healthcare_.
A second round of research was conducted using the main indexing platforms (e.g., Elsevier Scopus, Google scholar, IEEE Xplore), and the primary searched keywords were: _cybersecurity in healthcare, healthcare cybersecurity legislation, telemedicine security and privacy, cybersecurity of medical devices; security framework for healthcare_. We performed a forward and backward analysis for each document or paper collected. Afterward, to refine the research, we constrained each collected document to two key requirements: _(i)_ the regulation must be in effect, _(ii)_ the document must address data security, privacy issues, or cybersecurity measures for healthcare organizations or public administrations. Therefore, we did not include works that address only manufacturers of medical devices, external service providers, or government agencies.
Examples of documents excluded, as not deemed of interest, are _ISO/TR 17522:2015 Health informatics -- Provisions for health applications on mobile/smart devices_[45], focused only on interoperability, and _ISO 14971:2019 Medical devices -- Application of risk management to medical devices_[50] that is specifically addressed to manufacturers of medical devices.
**Results.** This step first allowed us to gather 68 potential documents of interest, which were then narrowed down to 49 documents by applying the key requirements. The final corpus, therefore, is composed of _11 regulations, 21 best practices_, and _17 standards_ gathered from European (_9_), international (_19_), and national (_21_) sources.
### Documents Analysis
In this step, each of the 49 documents previously collected is accurately analyzed to identify key excerpts of text that refer to technical security and governance measures. A key excerpt of text is a sentence in a document that refers to areas of cybersecurity or data protection, such as information security policies, data privacy, incident management, etc. The identification has been performed manually by at least two members of our team with expertise in security governance, cybersecurity, and data protection. Once identified, the excerpt is extracted from its original document and collected in a table as output for the next step. Another group of information security specialists has regularly examined the collected excerpts to verify their relevance.
This step mitigates _issue 4_ helping to organize the texts and to extract only the relevant contents (security and data protection).
**Results.** Figure 2 shows an example of key excerpts identification on the document _Security and Resilience in eHealth Infrastructures and Services_[24].
To identify relevant key excerpts from non-relevant ones, consider the first sentence: _"An eHealth incident reporting mechanism, potentially part of a clinical incident reporting and alerting system, may improve patient safety"_. This excerpt does not give any technical information and therefore is not a key excerpt. Conversely, the sentence highlighted in green asserting that _"Computer Emergency Response Team should be created"_ and _"could potentially collaborate with the national CERT"_ has been considered a relevant excerpt since it provides clear cybersecurity indications.
Based on the analysis of the 49 documents collected, we extracted approximately _2,800_ excerpts distributed as depicted in Figure 3.
### Documents Mapping
The excerpts identified in the previous step are listed in a table in their original form. To systematize them, we choose the **NIST Cybersecurity Framework v1.1** that provides a common ground and standard terminology for cybersecurity functionalities. However, since several key excerpts refer to data security and privacy, it was necessary to extend it. We leveraged the Italian Cybersecurity Framework [15, 16], retro-compatible with the NIST framework, that includes categories and subcategories dedicated to data protection. Each excerpt has been accurately assessed for its semantic content and linked to one or more subcategories of the framework.
For each excerpt, the _Function_ it belongs to is first determined, followed by the relevant _Category_ and then the appropriate _Subcategory_. An example of mapping is shown in Table 1.
This step mitigates _issue 3_ by identifying, quantifying, and resolving overlap.
Figure 3: Documents Analysis: excerpts distribution
Figure 2: Example of Key Excerpts Extraction from [24]
**Results.** The 2,800 excerpts have been mapped mostly onto the _Protect_ and _Identify_ functions. Very few excerpts address the _Respond_, _Detect_, and _Recover_ functions, as visible in Figure 4.
### Controls Definition
In this step, the excerpts previously mapped are refined and modeled as cybersecurity controls. It is necessary to refine the excerpts to be syntactically uniform because they were retrieved from documents of various types, origins, and writing styles. For example, Best Practices have a purely technical nature and are made by sentences more direct and concise. In contrast, Regulations have a syntax typical of the legal world and are therefore made by sentences more discursive.
To define the controls and get a consistent and similar structure, _three key constraints_ were enforced during their definition:
1. **Self-contained**: the control contains every element that is essential for its semantic completeness;
2. **Homogeneous**: the control faithfully complies with the semantics of the excerpt;
3. **Verifiable**: an application of the control must be verifiable through a well-defined quantitative or qualitative approach.
A unique identifier then enumerates each control to retain its traceability. By analyzing each excerpt in Table 1 and applying the constraints, one or more controls have been defined. For example, a thorough semantic analysis of the excerpt _E2_ led to the definition of three controls: _ID.GV-2-01_ and _ID.GV-2-02_ directly derived from the original text while _ID.AM- 6_ has been added as an implicit requirement deriving from the former controls.
This step mitigates _issues 1 and 2_ by uniforming the contents and supporting the implementation of technical security measures.
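For illustration, each control can be represented as a small machine-readable record that preserves its traceability to the mapped subcategory and the originating document. The `Control` type and field names below are our own choice, not part of the methodology.

```python
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str   # unique identifier retaining traceability, e.g. "ID.GV-2-01"
    subcategory: str  # NIST CSF subcategory the source excerpt was mapped to
    text: str         # self-contained, homogeneous, verifiable statement
    source: str       # originating document

controls = [
    Control("ID.GV-2-01", "ID.GV-2",
            "A Computer Emergency Response Team (CERT) has been created "
            "focused on eHealth.", "ENISA [24], Recommendation 4"),
    Control("RS.AN-5-01", "RS.AN-5",
            "Systems for reporting and analyzing incidents both locally and "
            "nationally have been implemented.", "ENISA [24], Recommendation 4"),
]
print(controls[0].control_id, "->", controls[0].subcategory)
```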
**Results.** At the end of this step, the approximately 2800 sentences extracted from the previous phase led to the definition of approximately **3,320 controls**.
The control definition's 15% increase over the sentences extracted confirms the heterogeneity and fragmentation of the excerpt's content. The distribution of controls is uniform among the framework's categories (see Table 1).
Figure 4: Mapping (Functions)

Figure 5: Mapping (Categories)
## 4 Findings
**Cybersecurity Controls Coverage.** The first finding, depicted in Fig. 3, is the large gap between the number of relevant excerpts extracted from Regulations and Standards compared to Best Practices.
The gap is mostly due to the nature of the documents themselves: _Regulations_ have the lowest percentage of extracted excerpts (10.7%) because they are mostly discursive and state general goals without addressing technological or procedural security measures; _Best Practices_, on the other hand, have the highest percentage of excerpts (69.6%), since they are intended to serve as guidance for deploying cybersecurity measures, and therefore feature more technical and in-depth cybersecurity controls (_Finding 1_).
While healthcare organizations have experienced significant security incidents in recent years, with the majority of them caused by either phishing or ransomware attacks [14], there is still a lack of focus on how to address such threats. This is evidenced by the very low percentages of controls mapped onto the _Detect_ (4.99%), _Respond_ (6.24%), and _Recover_ (1.40%) functions, as shown in Figure 5. As a result, the documents focus mainly on the identification of the cybersecurity perimeter and asset protection, with _Protect_ (54.75%) and _Identify_ (32.62%) being the most covered functions, rather than on the detection and management of cybersecurity incidents during and after their occurrence (_Finding 2_).
**Cybersecurity Topics Coverage.** The previous findings, were derived using the NIST Cybersecurity Framework. As it is a mostly operative framework, it gave us an idea of the less covered actions. To provide additional insights focused on evaluating the coverage of key cybersecurity and data protection areas, we used a second taxonomy suggested by the Report _A Proposal for a European Cybersecurity Taxonomy_[63], issued by the European Commission. We selected the most pertinent topics for the healthcare sector (eight) and used a three-stage approach to evaluate the coverage level for each topic. Firstly, the team assigned each subcate
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Excerpt Detail** & **Subcategory** & **Control Definition** \\ \hline E1: Member States should develop incident response mechanisms to efficiently bring together the healthcare organizations with the national cyber security competent centers. & PR.IP-9 & PR.IP-9-01: Healthcare organizations develop incident response mechanisms to bring together with the national cybersecurity competent centers \\ \hline E2: An eHealth-focused Computer Emergency Response Team should be created, which could potentially collaborate with the national CERT on incident handling. Feedback directly to the eHealth service users (e.g., clinicians) is extremely important for their continued engagement. & ID.GV-2; ID.AV-6 & ID.GV-2-01: A Computer Emergency Response Team (CERT) has been created focused on eHealth. \\ \hline E3: In terms of eHealth incident handling and hazard control, further steps need to be taken: Systems for reporting and analyzing incidents both locally and nationally. & RS.AN-5 & RS.AN-5-01: Systems for reporting and analyzing incidents both locally and nationally have been implemented \\ \hline \end{tabular}
\end{table}
Table 1: Excerpts ext. & Controls Definition Document [24]_Recommendation 4_
gory of the NIST Framework to one of the taxonomy's security topics. Secondly, we counted the number of controls that fell into a specific subcategory for each document based on the mapping performed in the Control Definition Step. Thirdly, we developed three levels of coverage based on the number of occurrences: _Low_ if there were 1 to 3 controls that address the topic, _Medium_ (4 to 6 controls), and _High_ (more than 6 controls). Thresholds were derived from a statistical analysis of the distribution of extracted controls per document per subcategory. Figure 6 shows the topics addressed and the level of coverage for each document gathered, ranging from dark green for high coverage to white for no coverage. Using this approach, we provide an overview of documents coverage of cybersecurity topics and derive several findings.
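A minimal sketch of the thresholding described above is given below; the function name and the toy counts are hypothetical.

```python
def coverage_level(n_controls: int) -> str:
    """Map the number of controls a document contributes to a topic onto the
    three coverage levels used in the analysis."""
    if n_controls == 0:
        return "None"
    if n_controls <= 3:
        return "Low"
    if n_controls <= 6:
        return "Medium"
    return "High"

# Hypothetical counts of controls per (document, topic) cell.
counts = {("ISO 17090", "T2"): 9, ("GDPR", "T8"): 0, ("NIST SP 800-66", "T1"): 5}
for cell, n in counts.items():
    print(cell, coverage_level(n))
```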
Figure 6: Mapping Taxonomy
_T1: Security Management and Governance_ (Figure 6, column A) is the most addressed topic (covered by around 85% of the documents), demonstrating the high regard in which it is held by the collected corpus. Moreover, the related security measures are addressed by many publications issued by different sources, implying that the topic's contents are heavily overlapping (_Finding 3_). In addition, many documents go deeper in their analysis (deep green color); as a result, it could prove challenging to homogenize the extracted security measures. Unlike T1, _T8: Assurance, Audit, Certif._ (Figure 6, column B) is the least addressed topic, with only scattered, though focused, contributions from standards and best practices. Surprisingly, regulations do not provide cybersecurity controls in this area. Similar considerations can be made for topics _T6: Incident Handling & Digital Forens._ and _T7: Education & Training_. T6 presents shallower coverage than T7, raising the possibility that the resulting security measures could be incomplete (_Finding 4_).
Analyzing Figure 6 from a document-based perspective, the coverage area is determined by the document typology: Regulations and Standards are more focused on specific topics, leaving others completely or partially uncovered, while Best Practices are broad and cross-topic. For instance, the standard _ISO 17090_ [48] (row C) focuses on a single topic (T2), analyzing it in depth and providing specific security measures. On the other hand, the _National Best Practice for Electronic Health Record_ [37] (row D) covers a wider range of topics, most of which are given only a shallow level of analysis, implying that the contents are overly generic (_Finding 5_). An interesting analysis is to compare international and national coverage mappings (see Figure 7). Even within the national corpus of documents, the most popular topics are T1, _T2: Data Security & Privacy_, and _T3: Identity Management_. This indicates that the national context tends to mirror the trend of international publications on these topics. On the contrary, the remaining topics are less covered, both in terms of the number of controls and depth of analysis. Critical topics are T7 and T8, with the former addressed in less than 20% of the documents and the latter addressed by only one best practice (_Finding 6_). Notice that there is a lack of standards in the list of national documents, since all standards gathered during the collection step are issued by international entities.
**Temporal trends.** The temporal analysis by date of publication confirms that cybersecurity is emerging as a top priority in the healthcare sector, with a steady increase in the pace of cybersecurity regulations, standards, and best practices publication since 2008.
As shown in Figure 8, there is a peak of publications in 2017, which may be related to the 2016 Hollywood Presbyterian ransomware attack (the first highly publicized cyberattack incident against a hospital) [17], and a second peak in 2021, when, among others, regulations on medical devices, along with related guidelines and standards, have been published (_Finding 7_).

Figure 7: National Mapping Taxonomy
Furthermore, Figure 9 illustrates that national publications tend to follow a similar trend, indicating that national authorities are attempting to keep up and align national regulations with international ones (_Finding 8_).
**Actors.** A healthcare system is an organization of people, institutions, and resources that delivers services for the population. Referring to the literature, we modeled the healthcare sector as composed of five main providers, sorted by descending size:
* _Hospital_: an institution that provides diagnoses of disease, medical and surgical treatments, and nursing care for sick or injured people;
* _Private Structure_ (e.g., Care Homes, Diagnostic Centers, etc.): a structure that provides several health services but cannot perform hospitalizations;
* _Local Sanitary Unit_: the integrated primary health care public service covering a well-defined population;
* _Clinical laboratory_: healthcare facility providing a wide range of laboratory procedures for diagnosis and treatment;
* _Medical practitioner_: a self-employed or publicly employed health professional who works independently.
For each provider, we defined the delivered services classified as primary (compulsory to provide) mapped in green, secondary (optional to provide) indicated in yellow, and services not provided indicated in red (see Figure 10).
Afterward, for each primary service, we analyze which cybersecurity controls, defined in the Controls Definitions Step, could be fitting for securing the service.
As a result, for each provider, we obtain the number of cybersecurity controls that should be addressed to improve the provider's cybersecurity posture, distributed by function and originating source type (see Figure 10).
Due to the fewer services offered, Medical Practitioners need to address fewer than 60% of the controls required of a hospital organization. Overall, Identify and Protect remain the most addressed functions, and there is a uniform distribution of controls derived from Regulations, Standards, and Best Practices (_Finding 9_). We notice that the number of controls to address remains high regardless of the target actor. More effort should be put in place to streamline their implementation, considering their priority and the security of secondary services.
## 5 Conclusions
This paper systematized healthcare sector cybersecurity and data protection regulations, standards, and best practices, analyzing 49 documents and categorizing them using the NIST Framework. This resulted in approximately 3,320 security controls and nine findings, including that best practices present more technical controls than regulations. We found an uneven distribution of controls across cybersecurity and data protection topics, particularly in the areas of Detect, Respond, and Recover. Future plans include updating the systematization with new documents, like NIS2, and utilizing the controls for cyber-posture assessments.
|
2305.15976 | Data-driven Quantum Dynamical Embedding Method for Long-term Prediction
on Near-term Quantum Computers | The increasing focus on long-term time series prediction across various
fields has been significantly strengthened by advancements in quantum
computation. In this paper, we introduce a data-driven method designed for
long-term time series prediction with quantum dynamical embedding (QDE). This
approach enables a trainable embedding of the data space into an extended state
space, allowing for the recursive retrieval of time series information. Based
on its independence of time series length, this method achieves depth-efficient
quantum circuits that are crucial for near-term quantum computers. Numerical
simulations demonstrate the model's improved performance in prediction accuracy
and resource efficiency over existing methods, as well as its effective
denoising capabilities. We implement this model on the Origin ''Wukong''
superconducting quantum processor with a learnable error-cancellation layer
(LECL) for error mitigation, further validating the practical applicability of
our approach on near-term quantum devices. Furthermore, the theoretical
analysis of the QDE's dynamical properties and its universality enhances its
potential for time series prediction. This study establishes a significant step
towards the processing of long-term time series on near-term quantum computers,
integrating data-driven learning with discrete dynamical embedding for enhanced
forecasting capabilities. | Tai-Ping Sun, Zhao-Yun Chen, Cheng Xue, Huan-Yu Liu, Xi-Ning Zhuang, Yun-Jie Wang, Shi-Xin Ma, Hai-Feng Zhang, Yu-Chun Wu, Guo-Ping Guo | 2023-05-25T12:12:49Z | http://arxiv.org/abs/2305.15976v3 | # Quantum-Discrete-Map-Based Recurrent Neural Networks
###### Abstract
Quantum machine learning is a rapidly growing domain and its potential has been explored for time series prediction and dynamics simulation in existing works. In this study, we propose a quantum-discrete-map-based recurrent neural network (QDM-RNN) to overcome the limitations posed by the circuit depth growing with the length of the time series. From a discrete-dynamical perspective, quantum circuits are leveraged to build the discrete map and hence the discrete dynamical system. This approach involves measuring a subset of the qubits to obtain historical information (memory) that is reused in the encoding layer of the next time step, and measuring the other qubits to retrieve classical information as output. The nonlinear properties of the quantum discrete map make it appealing for embedding low-dimensional dynamics into higher dimensions, which is consistent with recurrent learning tricks. In numerical simulations, the QDM-RNN is implemented with one-feature datasets of waves and two-feature datasets of dynamics to demonstrate its capability. Our study introduces a new paradigm for quantum machine learning and highlights the potential of quantum computing in nonlinear dynamics.
## I Introduction
In recent years, there has been a surge of interest in time series prediction and dynamics simulation, which are crucial in many fields, including finance, physics, and engineering, and which can be addressed with quantum systems [1; 2; 3; 4]. A quantum system provides an exponentially large Hilbert space and properties such as quantum entanglement that lift information processing into the quantum realm, contributing to universal approximation properties [5; 6; 7; 8] and hence to the construction of inference functions in the machine learning domain. By embedding historical time series into quantum systems, the prediction is obtained by quantum measurements after a controllable unitary evolution. In this vein, it is of great significance to utilize quantum systems to implement feasible quantum algorithms on noisy intermediate-scale quantum (NISQ) devices for time series prediction and dynamics simulation.
One way to harness quantum information is to construct heuristic quantum algorithms based on their classical counterparts [2; 3; 4; 9; 10; 11]. Conventional classical algorithms designed for time series prediction are mainly based on the recurrent neural network (RNN) [12; 13; 14] and its variants [15; 16]. In addition, a special type of network called an Echo State Network (ESN) [17; 18], based on the concept of Reservoir Computing [19], has demonstrated potential for emulating and predicting dynamical systems. Inspired by the memory representation and neural network architecture of these models, the quantum counterparts, such as quantum reservoir computing (QRC) [1; 20] and quantum recurrent neural networks (QRNNs) [2; 3; 4; 11], have shown great success in capturing the temporal dependencies of sequential data [11; 21; 22; 23; 24; 25; 26; 27; 28]. Although quantum embeddings of classical dynamics [29] show that classical dynamics can be simulated on quantum computers, the dynamical mechanics of these recurrent quantum models, especially the interdependency between the memory and the complementary space of observed data, is still ambiguous. On the other hand, despite previous reports on the physical implementation of QRC [30; 31] and discussions of realistic quantum measurement protocols [32], it is still an open question how to preserve fragile quantum coherence as the time sequence grows on NISQ devices.
In this study, we propose a quantum-discrete-map-based recurrent neural network (QDM-RNN) for learning temporal data generated from observations of dynamical systems. The QDM-RNN is developed as a method for trainable embedding of the original space (called the data space) into the entire state space of the quantum discrete map (QDM), and the learning procedure is then analyzed from a dynamical perspective. In this way, the qubits are divided into a data register for the original space and a memory register for the remaining space of the entire discrete map. With quantum circuits, including data encoding and measurement, serving as the encoder, the recurrent neural network is established analogously to classical ones [12; 13; 14; 15; 16] to learn the dynamical
features. During the training phase, a gradient-based optimization technique is used to minimize the cost function, defined as the mean square error (MSE) between observed and reproduced data. As an example, the weakly nonlinear oscillator is analyzed theoretically for its embedding into high-dimensional dynamics, and the proposed method is further demonstrated numerically on one-dimensional wave prediction and two-dimensional nonlinear dynamics simulation tasks. The quantum circuit depth is expected not to grow with the length of the time series. This study combines quantum machine learning and dynamical systems analysis, which will contribute to time series prediction in various fields.
The rest of this paper is organized as follows. Quantum neural networks and recurrent learning methods are introduced in Sec. II. In Sec. III, we describe the implementation of the QDM-RNN, provide algorithm details and analyze the algorithm feasibility from a dynamical perspective. Numerical results and analysis are shown in Sec. IV. Conclusion and discussion are given in Sec. V.
## II Definitions and Preliminaries
### Quantum neural networks
Quantum neural networks (QNNs), which aim to learn the features or the distributions of data, have been successfully employed in a variety of supervised tasks [33, 34, 35, 36, 37, 5]. The basic idea of QNNs is to produce a classical output that approximates the corresponding teacher value. The quantum circuit used to process classical data, called the ansatz, is composed of a set of either fixed or parametrized quantum gates. Typically, the ansatz consists of a state preparation layer followed by several unitary blocks, which can be formulated as \(|\psi(\mathbf{\theta})\rangle=U(\mathbf{\theta})S(\mathbf{x})|0\rangle^{\otimes n}\), where \(|\psi(\mathbf{\theta})\rangle\) is the final state, \(|0\rangle^{\otimes n}\) is an \(n\)-qubit initial state, \(U(\mathbf{\theta})\) represents the parametrized quantum circuit and \(S(\mathbf{x})\) is the state preparation layer for some input \(\mathbf{x}\), shown in Fig. 1. Then the quantum circuit is measured with a selected observable, and the results are fed into an optimizer to minimize the loss function of the learning task.
At the beginning of a quantum neural network, the initial state is prepared with an encoding layer \(S(\mathbf{x})\). A variety of data encoding methods [38] are considered in the quantum machine learning domain, such as amplitude encoding and angle encoding. As a hardware-efficient encoding method, angle encoding encodes the input data \(\mathbf{x}\) into rotation angles with a single layer of rotation gates.
The unitary operation \(U(\mathbf{\theta})\) is composed of single-qubit rotations and multi-qubit entanglement such as CZ gates. Then the quantum circuit is measured with an observable, \(\mathcal{O}\), which can be decomposed into a linear combination of Pauli tensor product \(P_{\alpha}\) such that
\[\mathcal{O}=\sum_{\alpha}c_{\alpha}P_{\alpha}, \tag{1}\]
where \(P_{\alpha}=\sigma_{1}^{\alpha_{1}}\otimes\sigma_{2}^{\alpha_{2}}\otimes\cdots \otimes\sigma_{n}^{\alpha_{n}}\) with \(\sigma_{i}^{j}\in\{\mathbb{I},\sigma^{x},\sigma^{y},\sigma^{z}\}\), \(\alpha\) is an index set and \(c_{\alpha}\) is the real coefficient of \(P_{\alpha}\). The expectation of the trial state can be written as:
\[L(\mathbf{\theta})=\langle\psi(\mathbf{\theta})|\mathcal{O}|\psi(\mathbf{\theta})\rangle. \tag{2}\]
Finally, classical optimization techniques are used to update the parameters of the quantum circuit in order to minimize the objective function. The gradient of the cost function with respect to the parameters can be computed with the parameter-shift rule [39], which allows the parameters to be updated using gradient-based optimizers. Additionally, gradient-free optimization algorithms can also be used in numerical simulations.
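As an illustration of the parameter-shift rule, the following sketch computes the gradient of a scalar expectation; the function names are ours, and the closed-form test case assumes a single \(R_{Y}\) rotation measured with \(\sigma_{z}\), for which \(L(\theta)=\cos\theta\).

```python
import numpy as np

def parameter_shift_grad(expectation, theta, shift=np.pi / 2):
    """Gradient of a circuit expectation L(theta) via the parameter-shift rule:
    dL/dtheta_i = (L(theta_i + s) - L(theta_i - s)) / 2 for standard rotation gates."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = 0.5 * (expectation(plus) - expectation(minus))
    return grad

# Test case: a single R_Y rotation measured with sigma_z gives L(theta) = cos(theta),
# so the exact gradient is -sin(theta).
L = lambda th: np.cos(th[0])
print(parameter_shift_grad(L, np.array([0.3])), -np.sin(0.3))
```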
### Recurrent learning: classical and quantum
In the machine learning domain, the typical supervised learning task involves a dataset \(\mathcal{D}=\{(\mathbf{x}^{i},\mathbf{y}^{i})\}\) of \(N\) samples, with the goal of learning an inference function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) that maps a data point \(\mathbf{x}\in\mathcal{X}\) to its corresponding teacher signal \(\mathbf{y}\in\mathcal{Y}\). For our purposes, the problem is restricted to sequential tasks, specifically, learning from temporal event series \(\{\mathbf{x}_{t}\}_{t=0}^{N-1}\) with each corresponding label \(\mathbf{y}_{t}=\mathbf{x}_{t+1}\). The inference procedure can be formulated as follows:
\[\hat{\mathbf{y}}_{t}=f(\mathbf{h}_{t-1},\mathbf{x}_{t};\mathbf{\theta}), \tag{3}\]
where \(\mathbf{h}_{t-1}\) represents historic information at time \(t\), \(\mathbf{\theta}\) is the parameter vector of the model and \(\hat{\mathbf{y}}_{t}\) is the estimator of \(\mathbf{y}_{t}\).
The objective of this task is to minimize the cost function, which is defined as the mean square error on the training subset of size \(T\), represented by the following equation:
\[C(\mathbf{\theta})=\frac{1}{T}\sum_{t}\|\hat{\mathbf{y}}_{t}-\mathbf{y}_{t}\|^{2}. \tag{4}\]
Various RNN models [12, 13, 14, 15, 16] have been developed for these types of tasks. As a special framework of RC [19], the ESN [18] has shown potential for dynamics emulation and prediction. These models provide insight into memory representation and neural network architecture: classical memory is updated with each time input and transmitted to the next time step. In addition to classical neural networks, the notion of memory has also been explored in the realm of quantum computing. Quantum memory, manifested as a quantum density matrix on a partial state space, plays a crucial role in information processing.

Figure 1: **Generic architecture for quantum neural networks.** The gate \(S(\mathbf{x})\) is for state preparation with some input \(\mathbf{x}\) and \(U(\mathbf{\theta})\) is a parametrized quantum circuit.
However, information processing with quantum systems is quite different from its classical counterpart. As shown in Sec. II.1, classical data is encoded into quantum states by rotation gates, followed by the evolution of the quantum system at each time step. The unobserved part of the system, which is denoted as the quantum memory, interacts with the injected states throughout the whole time series in QRC and QRNN models [1; 2]. Then the predictions of the time series are obtained by quantum measurements on the remaining system. By employing permissible linear combinations of quantum measurements, these models achieve maximal predictive capacity within their scope. Nevertheless, the classical-quantum-classical data processing mode lacks a dynamical perspective that explains its underlying mechanisms. This gap motivates us to investigate the link between the hidden space of dynamics and the memory representation. On the other hand, the information processing protocol necessitates a circuit depth equivalent to the length of the time series, resulting in non-negligible challenges for the quantum system, particularly its fragile quantum coherence. Consequently, existing quantum recurrent learning techniques are severely restricted by current hardware capabilities in the NISQ era.
In the following section, the QDM-RNN is presented to learn the dynamics evolution of time series by leveraging quantum circuits as discrete maps of data and memory for each time step. This method establishes the internal connection between the discrete map and dynamical systems, and overcomes the limitations posed by circuit depth.
## III Methods
To begin with, the QDM-RNN is introduced and analyzed theoretically. Then, the nonlinear dynamics exhibited by the QDM-RNN is examined and the expressiveness of the QDM circuit is demonstrated through an illustrative example.
### Quantum-discrete-map-based recurrent neural networks
The dynamical system and time series are highly interconnected: a time series can be viewed as the classical measurement results of a complex dynamics. As a low-dimensional discrete-time map can be associated with a high-dimensional ordinary differential equation, it is convenient to reduce the study of continuous-time flows to discrete maps and, inversely, to embed a discrete map into a high-dimensional system. Unfortunately, there are no common methods to reconstruct the original state space from an observed data sequence. To simplify the analysis, the dynamical system is constrained to a discrete autonomous system formulated as follows:
\[X_{t+1}=F(X_{t}). \tag{5}\]
This is consistent with Eq. (3) when the data is autonomous, i.e., \(\mathbf{y}_{t}=\mathbf{x}_{t+1}\), and the state vector \(X_{t}=(\mathbf{m}_{t},\mathbf{x}_{t})\) consists of a memory part \(\mathbf{m}_{t}\) and a data part \(\mathbf{x}_{t}\). (For consistency, \(\mathbf{m}_{t}\) is used to represent the hidden space and called memory in time series rather than \(\mathbf{h}_{t}\) in recurrent neural network formulation). The analysis in Sec. III.2 reveals that the function \(F(\cdot)\), which is produced by a quantum circuit and does not necessarily need to be linear, plays an important role in dynamics simulation and time series prediction.
The scheme of the discrete map is shown in Fig. 2. The proposed QDM for the recurrent neural network consists of two quantum registers, called the memory register (upper part) and the data register (lower part), with \(n_{m}\) and \(n_{x}\) qubits respectively. The quantum circuit consists of three parts: encoding, parametrized evolution, and quantum measurement. The quantum gates \(U_{in}(\mathbf{m}_{t})\) and \(U_{in}(\mathbf{x}_{t})\) are applied to the two registers, which are initialized to \(|0\rangle^{\otimes n_{m}}\otimes|0\rangle^{\otimes n_{x}}\) at each time step, respectively. Then, a parametrized gate \(U(\mathbf{\theta})\) is applied to the state, similar to QNNs. Measurements in a quantum system are generally realized by a set of projective operators. Here, denote \(\mathcal{O}_{m}=(\mathcal{O}_{m}^{1},\ldots,\mathcal{O}_{m}^{n_{m}})\) and \(\mathcal{O}_{x}=(\mathcal{O}_{x}^{1},\ldots,\mathcal{O}_{x}^{n_{x}})\); each of their elements is a combination of selected projective operators taking the form of Eq. (1) with \(n=n_{m}+n_{x}\). The classical information is retrieved by measurements formulated as \(\hat{X}_{t+1,j}=\mathrm{Tr}[\mathcal{O}^{j}\rho]\), where \(\hat{X}_{t+1}=(\mathbf{m}_{t+1},\hat{\mathbf{x}}_{t+1})\) and \(\hat{\mathbf{x}}_{t+1}\) is the estimator of \(\mathbf{x}_{t+1}\). In practice, the subset of Pauli tensor products \(\{\sigma_{z,i}\}_{i=1}^{n}\), with \(\sigma_{z,i}\) the \(\sigma_{z}\) measurement on the \(i^{\mathrm{th}}\) qubit, is widely used as the set of observables and is fruitful for specific supervised tasks.
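A minimal statevector sketch of one discrete-map step is given below for a simplified \((1,1)\)-QDM as in Fig. 2(b), assuming \(R_{Y}(\arccos(\cdot))\) encoding, a CZ entanglement gate, trainable \(R_{Y}\) rotations, and \(\sigma_{z}\) readout on each register; its output coincides with the closed-form map derived in Sec. III.2. All function names are ours.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def qdm_step(m, x, theta, mu=1.0):
    """One iteration of a (1,1)-QDM: encode (m, x) with R_Y(arccos(.)),
    entangle with CZ, rotate with trainable R_Y(theta_i), and read out
    <sigma_z> on the memory and data registers."""
    psi = np.kron(ry(np.arccos(m)) @ [1.0, 0.0], ry(np.arccos(x)) @ [1.0, 0.0])
    psi = CZ @ psi
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    m_next = mu * (psi @ np.kron(Z, I2) @ psi)   # memory register expectation
    x_next = mu * (psi @ np.kron(I2, Z) @ psi)   # data register expectation
    return m_next, x_next

# One step from X_0 = (0, 0.5) with theta_2 = -theta_1 = 0.04*pi.
print(qdm_step(0.0, 0.5, theta=[-0.04 * np.pi, 0.04 * np.pi]))
```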
Hence, the QDM can be seen as a time iterator, taking the state vector at time step \(t\) as input and producing the state vector at time step \(t+1\) as output. The training and test phases are depicted in Fig. 3. The cost function is computed after the whole training sequence of \(T\) steps, as shown in Eq. (4). The prediction error, which is also measured as MSE, is calculated on the subset of the data sequence from \(T\) to \(N\). The parameters are then optimized with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm [40; 41; 42; 43], implemented in SCIPY [44], on a classical computer in numerical simulations.
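The training loop can be sketched as follows, with a closed-form \((1,1)\)-QDM standing in for the quantum circuit; in an actual experiment the `step` function would instead evaluate the circuit. The toy cosine series and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def step(m, x, th):
    # Closed-form stand-in for the quantum discrete map of a (1,1)-QDM.
    m2 = m * np.cos(th[0]) - x * np.sqrt(max(0.0, 1 - m * m)) * np.sin(th[0])
    x2 = x * np.cos(th[1]) - m * np.sqrt(max(0.0, 1 - x * x)) * np.sin(th[1])
    return m2, x2

def rollout_loss(theta, series):
    """MSE of one-step predictions over the training prefix (Eq. (4)),
    threading the memory state through time as in Fig. 3(a)."""
    m, loss = 0.0, 0.0
    for t in range(len(series) - 1):
        m, x_hat = step(m, series[t], theta)   # data input is teacher-forced
        loss += (x_hat - series[t + 1]) ** 2
    return loss / (len(series) - 1)

series = 0.5 * np.cos(0.08 * np.pi * np.arange(60))   # toy training wave
res = minimize(rollout_loss, np.array([0.1, -0.1]), args=(series,), method="BFGS")
print(res.x, res.fun)
```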
### Quantum discrete map: a dynamical perspective
Many existing studies have considered quantum circuits as a type of feature map [5; 6; 7; 8]. The QDM-RNN can be viewed as a more complex feature map. Although it has been demonstrated that parametrized quantum circuits can be powerful function approximators with the use of effective data encoding and post-processing techniques, their ability for function learning with hidden states and autoregressive constraints is still not clear. The data component of the state vector \(X_{t}\), \(\mathbf{x}_{t}\), can be non-uniquely mapped, which implies that the desired model should depend not only on the input \(\mathbf{x}_{t}\) but also on the hidden \(\mathbf{m}_{t}\). Nevertheless, if the state vector \(X_{t}\) as a whole is considered under an unknown function \(F(\cdot)\), the stability over the whole time sequence needs to be taken into account, which makes the learning task much more challenging than in previous studies [5; 6; 7]. Motivated by this, the quantum discrete map is analyzed using methods from discrete dynamical systems.
An \((n_{m},n_{x})\)-QDM is defined following the architecture shown in Fig. 2(a), with \(n_{m}\) qubits in the memory register and \(n_{x}\) qubits in the data register. To simplify the circuit of quantum discrete map, the encoding layer \(U_{in}(\cdot)\), which takes both the data vector \(\mathbf{x}_{t}\) and memory vector \(\mathbf{m}_{t}\) as input, is replaced with a rotation \(R_{Y}^{i}(\arccos(\cdot))\) for the \(i^{\text{th}}\) qubit. Here, \(R_{Y}(\cdot)\) is defined as \(R_{Y}(\theta)=e^{-i\frac{\theta}{2}\sigma_{y}}\).
Figure 3: **The training and test phase illustrations of QDM-RNN.** The quantum discrete map is treated as an iterator with input \(X_{t}=(\mathbf{m}_{t},\mathbf{x}_{t})\) and out \(X_{t+1}=(\mathbf{m}_{t+1},\mathbf{x}_{t+1})\) at time step \(t\). (a) In training phase, each input data \(\mathbf{x}_{t}\) is taken from the training ensemble in time order, whereas the output data \(\hat{\mathbf{x}}_{t}\) is collected and fed into cost function. After \(T-1\) steps of updates with external control, the system is autonomously evolved as illustrated in (b). The memory state is evolved simultaneously with data state in both cases but its is not included in the cost function.
Figure 2: **Quantum circuit for discrete map. (a) The upper part, named the memory register with \(n_{m}\) qubits, is used for inputting memory (hidden variables) \(\mathbf{m}_{t}\) at time \(t\). Similarly, the lower part, named data register with \(n_{x}\) qubits, is used for inputting data \(\mathbf{x}_{t}\) at time \(t\). The circuit is evolved under a parametrized unitary operator \(U(\mathbf{\theta})\) after encoding layer. At the end of the circuit, we measure it with different observable ensembles \(\mathcal{O}_{m}\) and \(\mathcal{O}_{x}\) to obtain classical information. (b) The simplified version of (a) with two qubits uses \(R_{Y}(\cdot)\) as the encoding gates and parametrized single-rotations, and CZ as the entanglement gate.**
This is based on the intuition that the input itself can be retrieved from the \(\sigma_{z}\) expectation without applying any evolution. Specifically, for a selected dimension \(x\) of the rescaled state vector \(X\in[-1,1]^{n}\), the retrieval can be formulated as follows:
\[x=\langle 0|R_{Y}^{\dagger}(\arccos(x))\sigma_{z}R_{Y}(\arccos(x))|0\rangle. \tag{6}\]
The unitary operator \(U(\mathbf{\theta})\) is simplified to multi-layer single rotations and controlled-\(Z\) gates. By utilizing a \((1,1)\)-QDM architecture, as illustrated in Fig. 2(b), and the settings described above, the theoretical mapping equations are in the following form:
\[\left\{\begin{array}{ll}m_{t+1}&=m_{t}\cos\theta_{1}-x_{t}\sqrt{1-m_{t}^{2}} \sin\theta_{1},\\ x_{t+1}&=x_{t}\cos\theta_{2}-m_{t}\sqrt{1-x_{t}^{2}}\sin\theta_{2}.\end{array}\right. \tag{7}\]
In our analysis, the original data \(x_{t+1}\) and its estimator \(\hat{x}_{t+1}\) are equivalent because, as a generative model, the teacher signals are not considered here. Although quantum computation is linear and quantum operators must be unitary, it is possible to obtain a nonlinear discrete map of the state vector in the QDM-RNN. The nonlinearity, which appears as a square root in Eq. (7), arises from the encoding and measurement protocols. However, analyzing nonlinear dynamical systems involves seeking fixed points and linearizing the system in a small neighborhood of those fixed points. This approach is not efficient when the initial values are far away from the fixed points, as it cannot be extrapolated to the entire phase plane. Although we can easily calculate the fixed point as \((0,0)\), the linear properties near it cannot be generalized to the entire phase space.
For a special case where \(\theta_{2}=-\theta_{1}=\theta>0\) and \(x,m\in[-0.5,0.5]\), the square-root terms in Eq. (7) vary from \(0.866\) to \(1.0\). To linearize the equations, the square root \(\sqrt{\cdot}\) is substituted with a constant \(\gamma\in(0.866,1.0)\), and then Eq. (7) is rewritten in matrix form as \(X_{t+1}=RX_{t}\). Here \(R\) is the Jacobi matrix:
\[R=b\begin{bmatrix}\cos a&\gamma\sin a\\ -\gamma\sin a&\cos a\end{bmatrix}, \tag{8}\]
where \(a\) is an angle correction of \(\theta\) and \(b\) is an amplitude correction compared to \(1\). The eigenvalues of \(R\) are \(\lambda_{\pm}=b(\cos a\pm i\gamma\sin a)\). By optimizing \((a,b,\gamma)\) for a given initial point \(X_{0}=(0,0.5)\) and fixed value of \(\theta=0.04\pi\) (the angle corresponds to the angular frequency of cosine wave in Sec. IV, which will be further discussed later), the eigenvalues satisfy \(|\lambda_{\pm}|<1\) based on the optimal values \((a^{*},b^{*},\gamma^{*})=(0.12754,0.9996,0.9973)\). This means that the fixed point \((0,0)\) is a stable focus and that the dynamics will converge to it over a long evolution.
A more concise way to express the discrete map is to rescale this matrix with \(R=\beta\tilde{R}\), where \(\beta=b\sqrt{\cos^{2}a+\gamma^{2}\sin^{2}a}<1\) and \(\tilde{R}\) is an orthogonal matrix defined as:
\[\tilde{R}=\begin{bmatrix}\cos\alpha&\sin\alpha\\ -\sin\alpha&\cos\alpha\end{bmatrix}, \tag{9}\]
with \(\alpha=\arctan(\gamma\tan(a))\). For a given initial point \(X_{0}\), the result after \(t\) iterations is \(X_{t}=\beta^{t}\tilde{R}^{t}X_{0}\). Therefore the norm of \(X_{t}\) is \(\|X_{t}\|=\|\beta^{t}\tilde{R}^{t}X_{0}\|=\beta^{t}\|X_{0}\|\), which reveals that the trajectory on the phase plane is a logarithmic spiral, as shown in Fig. 4. The trajectory generated by the quantum discrete map is bounded by the surface generated by the linear dynamics of Eq. (8), which means that the linear approximation is reasonable when the QDM is weakly nonlinear.
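These statements are easy to verify numerically; the short sketch below checks that \(|\lambda_{\pm}|=\beta<1\) for the optimal values \((a^{*},b^{*},\gamma^{*})\) and that the norm of the iterates decays as \(\beta^{t}\|X_{0}\|\).

```python
import numpy as np

a, b, gamma = 0.12754, 0.9996, 0.9973        # optimal values (a*, b*, gamma*)
R = b * np.array([[np.cos(a), gamma * np.sin(a)],
                  [-gamma * np.sin(a), np.cos(a)]])
beta = b * np.sqrt(np.cos(a) ** 2 + gamma ** 2 * np.sin(a) ** 2)
print(np.abs(np.linalg.eigvals(R)), beta)    # |lambda_pm| = beta < 1: stable focus

X0 = np.array([0.0, 0.5])
X100 = np.linalg.matrix_power(R, 100) @ X0
print(np.linalg.norm(X100), beta ** 100 * np.linalg.norm(X0))   # beta^t * ||X0||
```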
However, the observables of quantum circuits can be adjusted to realize similar dynamic simulations. By adding a scalar \(\mu\) on the right-hand side of Eq. (7), the Jacobi matrix becomes \(\mu R=\mu\beta\tilde{R}\), and the eigenvalues are \(\lambda_{\pm}=\mu\beta(\cos\alpha\pm i\gamma\sin\alpha)\). By adjusting \(\mu\), the norm of eigenvalues can be equal to or larger than \(1\), which means the fixed point is a center or an unstable focus, respectively.
Another feature of the QDM is that its dynamical expressivity increases with the number of qubits in the memory register. For example, the \((n_{m},1)\)-QDM is provably more powerful than the \((1,1)\)-QDM when focusing on dynamics simulation on the data space; in other words, the \((1,1)\)-QDM is just a special case of the \((n_{m},1)\)-QDM. Taking \(n_{m}=2\) as an example, the QDM equations with simplified quantum gates are as follows:
Figure 4: **Comparison of QDM trajectory and its linear approximation.** The trajectory is generated by QDM of Eq. (7) with extension on the timeline and the surface is plotted with dynamics of Eq. (8), where the parameters are set to \((a^{*},b^{*},\gamma^{*})\) in the main text.
\[\left\{\begin{array}{ll}m_{1,t+1}&=m_{1,t}\cos\theta_{1}+\sqrt{1-m_{1,t}^{2}}(-1 -m_{2,t}-x_{t}+m_{2,t}x_{t})\sin\theta_{1},\\ m_{2,t+1}&=m_{2,t}\cos\theta_{2}+\sqrt{1-m_{2,t}^{2}}(-1-m_{1,t}-x_{t}+m_{1,t}x _{t})\sin\theta_{2},\\ x_{t+1}&=x_{t}\cos\theta_{3}+\sqrt{1-x_{t}^{2}}(-1-m_{1,t}-m_{2,t}+m_{1,t}m_{2,t })\sin\theta_{3},\end{array}\right. \tag{10}\]
where \((\theta_{1},\theta_{2},\theta_{3})\) are parameters of the quantum circuit, and \(X_{t}=(m_{1,t},m_{2,t},x_{t})\) is the state vector with a two-dimensional memory, whose value constraints are set as in the \((1,1)\)-QDM. With \(m_{1,0}=-1\) and \(\theta_{1}=0\), the three-dimensional QDM reduces to a two-dimensional map similar to Eq. (7). As shown in Fig. 5(a), when the initial value of \(m_{1,0}\) moves away from \(-1\) in the domain \([-1,1]\), the trajectory departs from the logarithmic spiral and quickly converges to the fixed point. As indicated in Fig. 5(b), perturbations of \(\theta_{1}\) in different directions lead to contrasting results, which shows that the dynamics can be very different across the parameter space. The case with more memory qubits is similar, and it can be shown that a QDM with more memory qubits is more powerful than one with fewer qubits.
The architecture of the ansatz circuit is crucial to the QDM. However, the nonlinearity of the ansatz circuit, which is valuable for generating nonlinear dynamics, poses significant challenges for analyzing the fixed points and flows theoretically. Despite these obstacles, the scalability and nonlinearity of the QDM make the QDM-RNN a promising model for time series prediction: it learns the features of a given sequence and embeds them into high-dimensional dynamics.
## IV Demonstrations of the QDM-RNN for time series prediction
This section presents several demonstrations to showcase the versatility of QDM-RNNs. We start by outlining the general settings for the proposed QDM-RNN to establish the scope of our discussion. The demonstrations include simulating three waveforms, studying the dynamics under the Rayleigh equation, and modeling a coupled two-dimensional oscillator system. These demonstrations bridge the domains of machine learning, quantum computation, and dynamical systems. Subsequently, a detailed analysis of the performance is provided for each task.
### Model setup
The circuit architecture of the QDM is tailored to the complexity of the task. In this work, two quantum circuit architectures, denoted \(A_{1}\) and \(A_{2}\), are proposed for numerical simulation. For simplicity, the encoding gates applied on the data register of QDM-RNNs are designed to be \(R_{Y}(\arccos(\cdot))\). However, it is worth noting that the encoding gate formulation can vary depending on the task. Furthermore, the qubit numbers of the data and memory registers also vary with the size of each task. Therefore, each task is demonstrated with its own independent settings.
Architecture \(A_{1}\): This is a simplified circuit of the QDM, as shown in Fig. 2(b). A CZ entanglement layer and single-qubit rotations \(R_{Y}(\cdot)\) are applied to the prepared state. When the memory register has more than one qubit, the CZ gate becomes a Z gate on the data register controlled by multiple memory qubits. The combination of \(R_{Y}(\cdot)\) rotations and the CZ gate is repeated for \(D_{ansatz}\) layers to enhance the expressivity of the QDM. The QDM circuit is then measured with the expectation \(\langle\sigma_{z}\rangle\) to obtain the state vector of the next time step.
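A minimal Qiskit sketch of one recurrent step with an \(A_{1}\)-style ansatz for the \((1,1)\)-QDM is given below. The qubit ordering, gate placement, and the re-encoding of both registers at each step are our own illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

def a1_step(m, x, theta_m, theta_x, depth=1):
    """One recurrent step of a (1,1)-QDM with an A1-style ansatz:
    R_Y(arccos(.)) encoding, then `depth` layers of R_Y rotations and a CZ gate."""
    qc = QuantumCircuit(2)            # qubit 0: memory, qubit 1: data
    qc.ry(np.arccos(m), 0)            # encode memory readout
    qc.ry(np.arccos(x), 1)            # encode data value
    for _ in range(depth):
        qc.ry(theta_m, 0)
        qc.ry(theta_x, 1)
        qc.cz(0, 1)                   # entangling layer
    sv = Statevector.from_instruction(qc)
    m_next = sv.expectation_value(SparsePauliOp("IZ")).real  # <Z> on memory
    x_next = sv.expectation_value(SparsePauliOp("ZI")).real  # <Z> on data
    return m_next, x_next

print(a1_step(0.0, 0.5, 0.04 * np.pi, 0.04 * np.pi))
```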
Architecture \(A_{2}\): This is designed to enhance the ansatz expressivity in each layer when simulating more complex dynamics. The unitary operator \(U(\mathbf{\theta})\) is composed of single-qubit rotations and a Hamiltonian evolution on the entire circuit. Specifically, the single-qubit rotations are:
\[U_{1}(\alpha,\beta,\gamma)=R_{X}(\alpha)R_{Z}(\beta)R_{X}(\gamma), \tag{11}\]
where \(\alpha\), \(\beta\) and \(\gamma\) are real numbers belonging to the parameter ensemble \(\mathbf{\theta}\), and \(R_{X}\) and \(R_{Z}\) are single-qubit rotations defined as \(R_{X}(\theta)=e^{-\frac{i}{2}\theta\sigma_{x}}\) and \(R_{Z}(\theta)=e^{-\frac{i}{2}\theta\sigma_{z}}\) for an angle \(\theta\), respectively. After the rotations, the Hamiltonian evolution \(e^{-iH\tau}\) is applied, where \(\tau\) is the evolution time and \(H\) is defined as [2; 45]:
\[H=\sum_{i=1}^{n}h_{i}\sigma_{x,i}+\sum_{i=2}^{n}\sum_{j=1}^{i-1}J_{ij}\sigma_{z,i}\sigma_{z,j}. \tag{12}\]
The coefficients \(h_{i},J_{ij}\) are generated randomly from a uniform distribution on \([-1,1]\) and are held fixed as hyper-parameters during the training phase. The measurement protocol is the same as in \(A_{1}\).
It is important to note that a linear transformation of the measurement results is permitted and usually takes the form of a scalar \(r\) applied to the data-register expectations, where \(r\) is optimized together with the parameters \(\mathbf{\theta}\). In the training phase, the objective function, the MSE loss of Eq. (4), is minimized with the BFGS algorithm. The MSE on the entire test dataset is also calculated and reported together with the circuit settings in Tab. 1.
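The training loop can be sketched as follows. This is our own condensed illustration: `qdm_forward` is an assumed helper with signature `(m, x, thetas) -> (m_next, x_next)` (for instance, the circuit step sketched above), and teacher forcing during training is our simplifying assumption.

```python
import numpy as np
from scipy.optimize import minimize

def mse_loss(params, teacher, qdm_forward, m_init=0.0):
    """MSE loss of Eq. (4): feed the true x_t at each step (teacher forcing)
    and compare the scaled prediction r * x_pred against x_{t+1}."""
    *thetas, r = params        # circuit angles plus the data-register scalar r
    m, loss = m_init, 0.0
    for t in range(len(teacher) - 1):
        m, x_pred = qdm_forward(m, teacher[t], thetas)
        loss += (r * x_pred - teacher[t + 1]) ** 2
    return loss / (len(teacher) - 1)

teacher = 0.5 * np.cos(np.pi * 0.04 * np.arange(100))  # cosine-wave training data
x0 = np.array([0.1, 0.1, 1.0])                         # two angles and r
# result = minimize(mse_loss, x0, args=(teacher, qdm_forward), method="BFGS")
# trained_params = result.x
```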
### Wave prediction
Cosine wave: The cosine wave can be expressed as \(x_{t}=0.5\cos(\pi\Delta t)\), where \(\Delta\) is the time-step size, set to \(\Delta=0.04\) in this and the subsequent examples unless otherwise specified. The time sequence is \(0\leq t<200,t\in\mathbb{Z}\), and the training and test phases each contain 100 data points. As shown in Fig. 6(a) and Tab. 1, the QDM-RNN, using a two-qubit QDM and architecture \(A_{1}\), predicts the cosine wave with an extremely low mean square error in autonomous data regeneration. Sec. III.2 discussed the relationship
Figure 5: **Trajectories of the \((2,1)\)-QDM with different parameter settings.** (a) With different initial values of \(m_{1}\), the first readout of the quantum memory register, the dynamics of the quantum discrete map, shown above to be a logarithmic spiral, changes markedly. It is also shown that the fixed point gradually moves away from the origin on the projective plane supported by the axes \(m_{2}\) and \(x\). (b) The performance of the quantum discrete map is strongly influenced by slight perturbations of \(\theta_{1}\), the first parameter of the quantum circuit.
| Dataset | Qubit number \((n_{m},n_{x})\) | Architecture | \(D_{ansatz}\) | \(m_{init}\) | Scalar on memory | Scalar on data | MSE |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cosine wave | \((1,1)\) | \(A_{1}\) | 1 | 0 | 1 | 1 | \(1.10\times 10^{-5}\) |
| Damped oscillator | \((1,1)\) | \(A_{1}\) | 1 | 0 | 1 | \(r\) | \(4.03\times 10^{-4}\) |
| Triangular wave | \((1,1)\) | \(A_{2}\) | 3 | 0 | 1 | \(r\) | \(1.02\times 10^{-4}\) |
| Rayleigh equation | \((1,2)\) | \(A_{2}\) | 3 | 0 | 1 | \(r\) | \(2.81\times 10^{-2}\) |
| Coupled oscillators | \((2,2)\) | \(A_{2}\) | 3 | (0,0) | 1 | \(r\) | \(3.32\times 10^{-4}\) |

Table 1: Main architecture settings and results of time series predictions.
between the QDM and the logarithmic spiral on a plane supported by \((m,x)\) in the two-qubit setting. This implies that the QDM-RNN has the potential to learn the parameters of a cosine wave, which can be viewed as the visible part of the state vector \(X_{t}\) under weak linearization. Although the desired map of \(X_{t}\), taken as \(X_{t+1}=f(X_{t})\), is linear as discussed before, the iteration of \(x_{t}\) alone is not a well-defined map, because the same value of \(x_{t}\) at different time indices can evolve in opposite directions.
Damped oscillator: As an exactly solvable linear system, the damped oscillator is defined by the following equation:
\[\ddot{x}+\varepsilon\dot{x}+\omega^{2}x=0, \tag{13}\]
where \(\varepsilon\) is the damping coefficient and \(\omega\) is the angular frequency. In the simulation, the initial value of \((x,\dot{x})\) is set to \((0.5,-0.125)\), and the hyper-parameters \(\varepsilon\) and \(\omega\) are set to \(0.5\) and \(\pi\), respectively. All QDM-RNN settings for this task are the same as those for the cosine wave simulation, except that the scalar on the data register is changed to \(r\) and optimized together with the circuit parameters. From Fig. 6(b) and Tab. 1, it can be observed that the predicted wave closely matches the teacher signals, with only a slight shift at the end of the run due to the modification of the expectation's scalar.
Triangular wave: The formulation of the triangular wave is as follows:
\[x_{t}=\left\{\begin{array}{ll}|t\Delta-1|-\frac{1}{2},&(0\leq t<50),\\ |t\Delta-3|-\frac{1}{2},&(50\leq t<100),\\ |t\Delta-5|-\frac{1}{2},&(100\leq t<150),\\ |t\Delta-7|-\frac{1}{2},&(150\leq t<200).\end{array}\right. \tag{14}\]
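For reference, the teacher signal of Eq. (14) can be generated compactly, since the four branches collapse to a single period-2 expression (a small numpy sketch):

```python
import numpy as np

delta = 0.04
t = np.arange(200)
# Eq. (14): all four branches reduce to one period-2 formula
x = np.abs((t * delta) % 2.0 - 1.0) - 0.5

# Spot-check against the first branch, |t*delta - 1| - 1/2 for 0 <= t < 50
assert np.allclose(x[:50], np.abs(t[:50] * delta - 1.0) - 0.5)
```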
Predicting a triangular wave is a more complicated task than the previous two because the curve is not smooth. The previous analysis in Sec. III is based on a weakly nonlinear approximation, and it is unclear whether the QDM-RNN can learn strong nonlinearities. Nevertheless, this example demonstrates the potential for such an ability. To investigate the learnability
Figure 6: **Performances for 1-dimensional wave predicting tasks.** The qubit number, ansatz architecture, and MSE information, etc., are listed in Tab. 1. (a) The cosine wave task. (b) The damped oscillator task. (c) The triangular wave task. In all three tasks, the training data \(\{x_{t}\}_{t=0}^{T-1}\) and test data \(\{x_{t}\}_{t=T}^{N-1}\) are represented by blue solid lines and orange solid lines, respectively. Similarly, the training output \(\{\hat{x}_{t}\}_{t=0}^{T-1}\) and test output \(\{\hat{x}_{t}\}_{t=T}^{N-1}\) are scattered as blue points and orange points, respectively.
of the QDM-RNN, we use architecture \(A_{2}\) and increase the number of ansatz layers to 3. Additionally, a scalar \(r\) on the data register is applied after the measurement. The predicted sequence is shown in Fig. 6(c), and the MSE is on the order of \(10^{-4}\).
### Dynamics simulation
The second part of the numerical experiments involves dynamics simulation, covering the dynamics under the Rayleigh equation as well as coupled oscillators.
Rayleigh equation: The Rayleigh equation, first derived and studied theoretically by Rayleigh, and its generalizations have been widely studied and used in mathematics and engineering [46, 47, 48]. As a typical nonlinear dynamical system, the Rayleigh equation admits no general analytical solution, and only approximations can be constructed; simulating the nonlinear system governed by the Rayleigh equation is therefore important.
For convenience, we adopt the form of the Rayleigh equation developed by van der Pol [49]:
\[\ddot{x}-\varepsilon\dot{x}\left(1-\delta x^{2}\right)+\omega^{2}x=0, \tag{15}\]
where \(x\) is the position coordinate, a function of time \(t\); \(\varepsilon\) and \(\delta\) are real parameters indicating the nonlinearity and the strength of the damping; and \(\omega\) is the angular frequency of the system.
The velocity can be defined as \(v=\dot{x}\), and Eq. (15) can be rewritten as:
\[\left\{\begin{array}{ll}\dot{x}=&v,\\ \dot{v}=&\varepsilon v(1-\delta x^{2})-\omega^{2}x.\end{array}\right. \tag{16}\]
In the following simulation of the Rayleigh equation, the parameters are set to \(\varepsilon=\pi\), \(\delta=3\), and \(\omega=\pi\). The input data \(\{\mathbf{x}_{t}=(x_{t},v_{t})\}\) is a two-dimensional dataset, which is the visible part of the state vector \(X_{t}=(m_{t},\mathbf{x}_{t})\). The initial input at time \(t=0\) is set to \(\mathbf{x}_{0}=(0,0.01)\). Figure 7(a) depicts the performance of a three-qubit QDM on the Rayleigh oscillator system. After the training phase over 100 time steps, the prediction remains relatively stable on the ideal trajectory but deviates when the number of prediction steps becomes large.
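The teacher trajectory for this task can be obtained by integrating Eq. (16) numerically; a minimal SciPy sketch with the parameter values stated above (and the default step \(\Delta=0.04\), our assumption since no step is given for this task) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, delta, omega = np.pi, 3.0, np.pi

def rayleigh(t, y):
    x, v = y                       # Eq. (16): y = (x, v)
    return [v, eps * v * (1 - delta * x ** 2) - omega ** 2 * x]

dt, n_steps = 0.04, 200            # time slice and sequence length
t_eval = dt * np.arange(n_steps)
sol = solve_ivp(rayleigh, (0.0, t_eval[-1]), [0.0, 0.01],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
teacher = sol.y.T                  # shape (n_steps, 2): the sequence {(x_t, v_t)}
```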
Coupled oscillators: Exploring complex networks, such as coupled oscillators, is an inherently challenging task. From a nonlinear dynamics perspective, the crucial question is how a system of dynamically interacting components behaves collectively, given the coupling strength and the individual dynamics. Since theoretical solutions for this complex system are not accessible [50], predicting the dynamics in discrete time becomes all the more important. Here, we consider the simplest case, where two oscillators with the same characteristic frequency are coupled to each other. Each oscillator is formulated as follows:
\[\left\{\begin{array}{ll}\dot{x}_{i}=&v_{i},\\ \dot{v}_{i}=&-\omega^{2}x_{i}+u_{i},\end{array}\right. \tag{17}\]
Figure 7: **Performances for 2-dimensional dynamics simulation tasks.** (a) Rayleigh equation. (b) Coupled oscillators. The dashed lines indicate the resonance positions in the long run. The remaining training and test settings are the same as in Fig. 6, and the QDM settings are listed in Tab. 1.
where \(i=1,2\) and \(u_{i}(t)\) is the coupling term of each oscillator. For simplicity, supposing \(u_{1}(t)=-u_{2}(t)=-0.3(v_{1}-v_{2})\) and eliminating the velocity terms, the equations of the two coupled oscillators are obtained:
\[\left\{\begin{aligned} \ddot{x}_{1}+\omega^{2}x_{1}+0.3(\dot{x}_{1}- \dot{x}_{2})&=~{}0,\\ \ddot{x}_{2}+\omega^{2}x_{2}+0.3(\dot{x}_{2}-\dot{x}_{1})& =~{}0.\end{aligned}\right. \tag{18}\]
In this simulation, the input data \(\mathbf{x}_{t}\) consists of the two position variables \(x_{1}\) and \(x_{2}\). With two qubits as the memory register, the state vector is \(X_{t}=(\mathbf{m}_{t},\mathbf{x}_{t})\). The initial values of the coupled oscillator pair are set to \(\omega=\pi,x_{1,0}=0.5,x_{2,0}=0,\dot{x}_{1,0}=0,\dot{x}_{2,0}=0.2\pi\), and the time step length is \(\Delta=0.05\). Figure 7(b) plots the corresponding results of the four-qubit system with the initialized parameters, indicating that the QDM-RNN can emulate the dynamics of a higher-dimensional system with a suitably chosen hidden space. As the time step increases, the two oscillators tend toward resonance over the long run, which is consistent with the original dynamics of the system.
## V Conclusion and Discussion
In this paper, we proposed a quantum-discrete-map-based recurrent neural network to explore the features of dynamical systems and predict time series. The QDM-RNN combines quantum machine learning with RNNs and reframes recurrent learning methods in a dynamical context.
Compared to existing studies on time series prediction with quantum methods, this approach offers a heuristic solution that uses a classical-quantum recurrent loop to address the key challenge of quantum decoherence as the circuit depth increases. Because the circuit depth of the QDM-RNN is expected not to grow with the length of the time series, the model is easier to implement on NISQ devices. Numerical simulations demonstrate that the QDM-RNN is capable of regenerating classical dynamics when trained with prior series data and extrapolating into the prediction phase.
Furthermore, the QDM-RNN is potentially capable of simulating higher-dimensional systems, which is not discussed here. The nonlinearity of the input data mapping and the correlation between different features are obtained through the data encoding and measurement protocols. The QDM-RNN provides a guideline for future studies on learning the nonlinearity of classical-quantum systems and the embedding of dynamics. This study presents an approach that integrates quantum machine learning with dynamical system analysis, thereby providing a new perspective for understanding the physical characteristics inherent to time series prediction.
**ACKNOWLEDGMENTS**
This work was supported by the National Natural Science Foundation of China (Grant No. 12034018), and Innovation Program for Quantum Science and Technology No. 2021ZD0302300.
|
2308.04887 | Targeted and Troublesome: Tracking and Advertising on Children's
Websites | On the modern web, trackers and advertisers frequently construct and monetize
users' detailed behavioral profiles without consent. Despite various studies on
web tracking mechanisms and advertisements, there has been no rigorous study
focusing on websites targeted at children. To address this gap, we present a
measurement of tracking and (targeted) advertising on websites directed at
children. Motivated by the lack of a comprehensive list of child-directed (i.e.,
targeted at children) websites, we first build a multilingual classifier based
on web page titles and descriptions. Applying this classifier to over two
million pages, we compile a list of two thousand child-directed websites.
Crawling these sites from five vantage points, we measure the prevalence of
trackers, fingerprinting scripts, and advertisements. Our crawler detects ads
displayed on child-directed websites and determines if ad targeting is enabled
by scraping ad disclosure pages whenever available. Our results show that
around 90% of child-directed websites embed one or more trackers, and about 27%
contain targeted advertisements--a practice that should require verifiable
parental consent. Next, we identify improper ads on child-directed websites by
developing an ML pipeline that processes both images and text extracted from
ads. The pipeline allows us to run semantic similarity queries for arbitrary
search terms, revealing ads that promote services related to dating, weight
loss, and mental health; as well as ads for sex toys and flirting chat
services. Some of these ads feature repulsive and sexually explicit imagery. In
summary, our findings indicate a trend of non-compliance with privacy
regulations and troubling ad safety practices among many advertisers and
child-directed websites. To protect children and create a safer online
environment, regulators and stakeholders must adopt and enforce more stringent
measures. | Zahra Moti, Asuman Senol, Hamid Bostani, Frederik Zuiderveen Borgesius, Veelasha Moonsamy, Arunesh Mathur, Gunes Acar | 2023-08-09T11:37:39Z | http://arxiv.org/abs/2308.04887v2 | # Targeted and Troublesome: Tracking and Advertising on Children's Websites
###### Abstract
On the modern web, trackers and advertisers frequently construct and monetize users' detailed behavioral profiles without consent. Despite various studies on web tracking mechanisms and advertisements, there has been no rigorous study focusing on websites targeted at children. To address this gap, we present a measurement of tracking and (targeted) advertising on websites directed at children. Motivated by the lack of a comprehensive list of child-directed (i.e., targeted at children) websites, we first build a multilingual classifier based on web page titles and descriptions. Applying this classifier to over two million pages from the Common Crawl dataset, we compile a list of two thousand child-directed websites. Crawling these sites from five vantage points, we measure the prevalence of trackers, fingerprinting scripts, and advertisements. Our crawler detects ads displayed on child-directed websites and determines if ad targeting is enabled by scraping ad disclosure pages whenever available. Our results show that around 90% of child-directed websites embed one or more trackers, and about 27% contain targeted advertisements--a practice that should require verifiable parental consent. Next, we identify improper ads on child-directed websites by developing an ML pipeline that processes both images and text extracted from ads. The pipeline allows us to run semantic similarity queries for arbitrary search terms, revealing ads that promote services related to dating, weight loss, and mental health; as well as ads for sex toys and flirting chat services. Some of these ads feature repulsive, sexually explicit and highly inappropriate imagery. In summary, our findings indicate a trend of non-compliance with privacy regulations and troubling ad safety practices among many advertisers and child-directed websites. To ensure the protection of children and create a safer online environment, regulators and stakeholders must adopt and enforce more stringent measures. Keywords: online tracking, children, privacy
## 1 Introduction
The proliferation of online tracking for analytics, behavioral advertising, and marketing has resulted in over a decade's worth of research into this (now mature) ecosystem. Prior research has shown that not only is online tracking rampant on the web [1] but that trackers use increasingly-invasive tracking mechanisms--e.g., third-party cookies, tracking pixels, evercookies, and browser fingerprinting [2, 3, 4, 1]--to relentlessly build detailed profiles of users across the web without any consent for targeted advertising.
Such privacy concerns aside, online advertising has been shown to be problematic in other ways. Ads and ad networks are a vector for distributing ransomware, malicious programs, and cryptojackers--posing a serious security threat to users [5, 6, 7, 8, 9, 10, 11, 12, 13].
Ad networks also suffer from click fraud, which is estimated to reach $100 billion in 2023 [14, 15]. Finally, online ads often contain clickbait, untrustworthy, or distasteful content that peddles software downloads, listicles, and health supplements--all of which users find problematic to their online experience [16].
While online tracking and targeted advertising pose a threat to users of all ages, children especially bear an acute cost. Children may not fully understand the consequences of online tracking and revealing their personal data online [17, 18], yet they wield immense "pester power" to influence their parents' purchase decisions [19]. Thus, children are an attractive target audience for advertisers and marketers alike [19, 20], they are more vulnerable to persuasive advertising [21, 22, 23], and they are more susceptible to harmful content [24, 25].
Despite the aforementioned evidence that suggests a differential impact on children, there is little empirical research on online tracking and advertising practices on children's websites. The lack of a comprehensive and updated list of websites directed at children poses a major challenge for studying children's websites. Previous large-scale internet measurement studies have relied on popular website lists such as Tranco and Alexa (before it shut down in 2021) [26, 27, 28], but these lists may not specify website categories, and even when they do, the website categories may not be reliable and comprehensive [29, 30]. As a result, prior work [31, 23] has only examined online tracking on at most a hundred children's websites and has been restricted in scope and methods--lacking a comprehensive investigation of both online tracking and advertising. To overcome this limitation, we built our own repository of child-directed
websites. We trained a text-based classifier that detects children's websites using HTML metadata fields such as <title> and <description>. The classifier is based on a pre-trained multilingual model that we fine-tuned for our binary classification task. Applying the classifier to the Common Crawl dataset [32], we compiled a list of 2K manually verified child-directed websites.
To study online tracking, ad targeting, and problematic ad practices, we crawl our list of 2K child-directed websites--varying the location (five vantage points) and form factors (desktop & mobile). Starting with ad targeting, we study the extent to which ads that appear on children's websites are targeted--a practice that has come under increasing scrutiny both in the EU and the US [33, 34, 35]. We then present an exploratory analysis of ads from categories deemed problematic for children, such as dating, weight loss, mental health, and ads that contain racy content. Next, we turn to online tracking, which is a necessary ingredient of behavioral advertising. We study the ecosystem of trackers and specifically quantify the prevalence of trackers, cookies, and the use of browser fingerprinting techniques such as Canvas, Canvas Font, AudioContext Fingerprinting, and WebRTC local IP discovery [1]. Our work is especially pertinent in light of impending regulatory changes. In the US, there have been calls [35] to update the Children's Online Privacy Protection Act (COPPA) [36] in order to prohibit "internet companies from collecting personal information from users who are 13 to 16 years old without their consent" and to "ban targeted advertising to children and teens." US President Biden has called for a ban on collecting data on and serving targeted ads to children [34]; whereas in the EU, the upcoming Digital Services Act (DSA) will specifically prohibit ads targeted at children [33].
Our research seeks to offer empirical evidence on advertising and tracking practices employed on children's websites by making the following specific contributions:
* Using a lightweight classifier based on web page metadata fields, we build a repository of child-directed websites and crawl them to measure tracking and advertising practices using multiple vantage points and form factors (desktop & mobile).
* We measure targeted ads using two ad vendors' (Google and Criteo) ad disclosure statements, and find that targeting is enabled for 73% of the ads we could measure.
* Using text and images extracted from the ads, we detect _racy_ ads, and ads about _weight loss_, _dating_, and _mental health_ using semantic similarity search based on a lightweight, multilingual language model. While this content analysis is exploratory, our method enables human-in-the-loop investigations with arbitrary queries, and it paves the way for the automatic content analysis of ads.
* We also find ads linking to malicious content, improper ads for sex toys and dating services, and ads containing sexually suggestive images (Figure 1).
All the data and software from our study will be made available to researchers.1
Footnote 1: We share the list of identified child-directed websites and a sample of advertisement disclosures on _[https://anonymous.4open.science/t/tracking-and-ads-on-child-directed-sites-BEF8_](https://anonymous.4open.science/t/tracking-and-ads-on-child-directed-sites-BEF8_).
## 2 Related Work
### Web tracking measurements
Over the past decade, several web privacy measurements have shown the scale and complexity of online tracking [37, 1, 38, 39, 40]. Research on _stateful_ tracking has examined how unique tracking identifiers are stored on the client side [41] using cookies [42, 43], localStorage [2], cache (ETags) [2], or other client-side storage mechanisms.
On the other hand, research on _stateless_ tracking has examined the use of fingerprinting, a mechanism that exploits differences in browsers and devices to obtain a likely unique identifier [44]. Past research has shown that there are various fingerprinting vectors, including fonts, clock skew, GPUs, audio hardware, and installed writing scripts and browser extensions, among others [45, 46, 47, 1, 48, 49, 50].
Research on defense against fingerprinting has contributed methods to detect fingerprinting, tracking and advertising [51, 4, 1, 38, 52, 53].
Our study borrows heuristics from prior work [1, 38] to detect fingerprinting scripts, and we use existing filter lists to identify trackers and advertisers.
### Tracking & ads on children's media
Motivated by the challenges posed by ads to children, Cai and Zhao [23] manually labeled ads displayed on 117 children's websites. They found that 68% of the websites featured ads, and less than half complied with COPPA. The authors also argued that children are unlikely to distinguish many ads from the website's original content. Vlajic et al. [31] investigated online tracking on twenty websites from Alexa's "Kids and Teens" category [27] from two vantage points (EU & US). The authors manually analyzed the HTTP headers and quantified hidden images (i.e., likely tracking pixels) loaded from ads and analytics domains. Compared to this past work, we study orders of magnitude more websites, follow more rigorous tracking measurement methods, and compare results across different vantage points. Additionally, we automatically detect targeted advertisements using ad disclosure pages as well as present an exploratory analysis of the content of ads that appear on children's websites.
Focusing on mobile platforms, Reyes et al. [54] dynamically analyzed around 6,000 free children's Android apps and found that most apps violate COPPA due to their use of third-party SDKs.
### _Improper and malicious ads_
A recent line of research has investigated the content of online ads. Zeng et al. [16] conducted a survey with 1,000 participants to determine the type of advertising content (e.g., chumboxes, clickbait, political, and low-quality content) that makes people dislike ads. In [55], the same authors also studied problematic ads on news and misinformation websites, where they found problematic ads served by native ad platforms. Finally, Zeng et al. [56] also investigated online political advertising leading up to the 2020 US elections. They found that ads for misleading political polls that aim to collect email addresses are widely used in online political advertising. Subramani et al. [7] studied the role of web push notifications in ad delivery, especially malicious ads. Through a large-scale study of malicious ads, Zarras et al. [5] showed that some ad exchanges are more prone to serving malicious ads due to inadequate detection. Akgul et al. [57] examined influencer VPN ads on YouTube and found advertisements disseminating misleading claims about online safety. Ali et al. [58] measured how the distribution of potentially harmful ads on Facebook varies across users.
### _Ad transparency_
In response to concerns about targeted advertising, ad networks and platforms have offered ad transparency interfaces that allow users to ascertain when and how they are being targeted. Andreou et al. [60] investigated Facebook Ad explanations and found that they are often incomplete or misleading. Researchers have also argued that ad networks should provide users with interpretable explanations and increase the visibility of disclosure mechanisms [61].
Bin Musa and Nithyanand [62] developed ATOM, a technique for determining data sharing between online trackers and advertisers. They used simulated personas to crawl websites, collect ad images, and conduct statistical analyses to identify correlations between tracker presence and advertiser behavior. Liu et al. [63] developed a framework called _AdReveal_ to investigate different mechanisms used in online targeted advertising. Vallina et al. used statements found in Google's ad disclosure pages in their crowdsourced measurement of online behavioral advertisements [64]. In order to detect stealthy ads that aim to bypass adblockers, Storey et al. [65] developed an extension that detects the AdChoices icon using perceptual hashing. While we considered applying Storey et al.'s method, we found URL-based detection of ad disclosure links (§4.6) to be more reliable and efficient.
### _Website categorization_
The majority of studies on web categorization have focused on text-based classifiers because most web content and metadata are text-based [66], [67], [68], [69], [70], [71], [72], [73], [74]. Various studies used machine learning models such as BERT and recurrent neural networks to learn contextual representations and features of web pages using meta tags and body content [67], [66], [73], [69].
Other researchers proposed image-based web page classification techniques using pre-trained convolutional neural networks and using Google image search results [75], [76]. In our work, we built a lightweight classifier by fine-tuning an existing distilled language model and using text-based website metadata to detect child-directed websites.
Fig. 1: A sample of improper ads found on child-directed websites in our crawls.
## 3 Building a list of child-directed websites
It is estimated that there are more than one billion websites on the Internet [77], but only a small fraction are targeted at children. A central challenge, therefore, is identifying the websites that contain content directed to children. We initially searched and found three curated lists of children's websites: kidSAFE Seal Program [78], CommonSense (filtered for children below the age of 13) [79], and a list compiled by Kaspersky [80]. Unfortunately, these lists contained only a total of 355 websites, some of which were no longer online.
To expand our limited list, we experimented with web categorization services such as McAfee, WebShrinker, and SimilarWeb, but decided to use VirusTotal's (academic) API because other services were either not freely available or did not let us query in bulk. VirusTotal aggregates category labels from third-party scanners and categorization services, including BitDefender and TrendMicro [81]. We used the VirusTotal API to retrieve web category data for the top one million websites from the Chrome User Experience Report (CrUX) list from May 2022 [82]. We observed VirusTotal's rate limits (20K requests/day per academic license) during the process, which took roughly four weeks. By searching for the substrings "kid" and "child" in the returned category labels and removing false positives (such as "Child abuse"), we obtained 1,264 websites categorized as related to children. However, our manual verification of these websites following the criteria presented in Appendix A revealed that 68.6% of them were false positives, yielding only 396 child-directed websites.
Note that the low accuracy and inconsistency of domain classification/categorization services align with findings from prior work [30]. Combining our initial 355 websites with our verified list of 396 websites and removing all inaccessible (5) and duplicate (164) websites, we obtained a total of 582 child-directed websites.
Motivated by the lack of accurate, up-to-date, and comprehensive sources of child-directed websites, we built a classifier to detect child-directed websites using the list of 582 websites as labeled data. Figure 2 illustrates the training and fine-tuning process. We define "child-directed websites" as websites that are primarily intended for use by children and contain content, activities, or other features that are likely to be appealing to children. Additional details about our criteria for identifying children's websites can be found in Appendix A. Note that our criteria for labeling websites as child-directed do not fully overlap with COPPA's definition [36], and as such, we do not claim to measure compliance with COPPA or other relevant laws.
### Labeled data for ML classifier
Many web page classification methods use the entire text of the page [67], which can be resource-intensive and time-consuming. Alternatively, researchers have explored web page classification on metadata fields such as <title>, <description>, and <keywords>, which tend to be shorter and shown to have strong correlations with the topic of web pages [69]. Our preliminary analysis of over 500K web pages from the most popular one million websites in the Common Crawl dataset [32] showed that more than 97% of the websites have a title, 63% of the websites include a description, and 24% contain a keywords meta tag. Based on these availability statistics, we used the titles and descriptions for classification, leaving out the keywords. In order to extract the titles and descriptions, we use the following HTML tags: title, description, og:[title|description], and twitter:[title|description].
Applying this method to the WAT metadata files from the June-July 2022 Common Crawl snapshot [32], we extracted the titles and descriptions, limiting ourselves to the top million websites in the Tranco [26] or the CrUX [82] list. We further refined our data by keeping a single page with the shortest URL from each hostname, which is more likely to be the home page. This resulted in metadata from 2.28 million pages. We also extracted the same title and metadata information from the 582 known child-directed websites using a simple script based on Playwright [83]. In both instances, when the page had more than one description or title available, we picked the longest one.
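In outline, the metadata extraction looks like the sketch below. This is our illustrative equivalent using BeautifulSoup; the actual pipeline parses Common Crawl WAT files and uses a Playwright-based script.

```python
from bs4 import BeautifulSoup

TITLE_PROPS = {"og:title", "twitter:title"}
DESC_PROPS = {"description", "og:description", "twitter:description"}

def extract_title_description(html: str) -> tuple[str, str]:
    """Return the longest available title and description from a page."""
    soup = BeautifulSoup(html, "html.parser")
    titles = [soup.title.get_text(strip=True)] if soup.title else []
    descriptions = []
    for tag in soup.find_all("meta"):
        key = tag.get("property") or tag.get("name") or ""
        content = (tag.get("content") or "").strip()
        if not content:
            continue
        if key in TITLE_PROPS:
            titles.append(content)
        elif key in DESC_PROPS:
            descriptions.append(content)
    # When several candidates exist, keep the longest one
    longest = lambda items: max(items, key=len, default="")
    return longest(titles), longest(descriptions)
```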
After completing the data collection process, we constructed a training set for our classifier. For negative samples, we randomly selected 2,500 of the 2.28 million pages and manually checked to remove children's websites. Our positive samples consisted of 576 title-description pairs after filtering out websites with titles shorter than ten characters.
### Building the ML classifier
Our training data contained a limited number of labeled samples, and our input consisted of text-based meta fields, potentially in multiple languages. This made naive approaches such as bag-of-words and TF-IDF classifiers less suitable for our task. Instead, we employed a pre-trained multilingual language model. Pre-trained models have proven adequate for general text classification tasks, but they need to be fine-tuned for the specific task [67]. In particular, we used the _Paraphrase-Multilingual-MPNet-base-v2_ (PM-MPNet-v2) model from the _SentenceTransformers_[84, 85] library, which is a pre-trained, multilingual, and distilled model based on the MPNet method [86]. The
Figure 2: Pipeline for building a list of child-directed websites.
distillation process [84, 87] involves training a smaller model (student) to mimic the behavior of a larger model (teacher). In particular, PM-MPNet-v2 is fine-tuned with a large number of paraphrased sentences in more than 50 languages [84].
PM-MPNet-v2 cannot be directly used for text classification since it only produces embeddings, which are useful for semantic similarity-related tasks. Thus, we used _HuggingFace's Trainer API_[88] and the _AutoModelForSequenceClassification_[89] class to fine-tune the model and add a binary classification layer on top of PM-MPNet-v2's embedding outputs. As input to the classifier, we used the concatenation of title and description, since this combination gave the best accuracy compared to using the title or description alone. In particular, we fine-tuned the model to detect child-directed websites using the training set explained in §3.1. We used Hugging Face Transformers [90] and Ray Tune's Population Based Training (PBT) algorithm [91, 92] to find the best-performing hyperparameters (batch size=12, epochs=2, and learning rate=4.2e-05). The fine-tuning process took roughly five minutes on a consumer-grade GPU (GeForce RTX 3080 Ti). Ultimately, our classifier achieved a precision of 86% and a recall of 70% using 10-fold cross-validation, as detailed in Table 8 in Appendix A.1.
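Condensed, the fine-tuning step looks as follows. This is a sketch with the hyperparameters reported above; dataset construction, tokenization details, and the PBT search are simplified, and `train_ds` is an assumed HuggingFace Dataset with title, description, and label columns.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# Adds a randomly initialized binary classification head on top of the encoder
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def tokenize(batch):
    # The classifier input is the concatenation of title and description
    texts = [t + " " + d for t, d in zip(batch["title"], batch["description"])]
    return tokenizer(texts, truncation=True, padding="max_length", max_length=128)

args = TrainingArguments(output_dir="child-site-clf", num_train_epochs=2,
                         per_device_train_batch_size=12, learning_rate=4.2e-5)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds.map(tokenize, batched=True))
# trainer.train()
```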
### The list of 2K children's websites
Using the fine-tuned classifier, we calculated the label and probability score for 2.28M web pages from Common Crawl, excluding websites used in the training process. This process took roughly 5 hours. Our classifier identified 53,092 web pages as children's websites. We then manually verified the top 2,500 websites sorted by classifier probability, that is, starting with websites that are most likely to be child-directed. An evaluation of our classifier and the details of our manual verification process can be found in Appendices A.1 and A.2. Our final list contained 2,004 websites in 48 distinct languages after eliminating false positives and deduplicating websites by their registrable domain (TLD+1).
English was the most prevalent, accounting for 63% of all websites. The other prominent languages, including Russian, Spanish, French, German, and Portuguese, each accounted for a smaller proportion, with prevalence rates ranging between 3% and 6%. The list included 582 websites from the training data and 1,422 websites identified by the classifier.
**Website ranks:** 1,422 of the 2,004 websites were ranked in the top 1 million Tranco list (median rank 304K). While over a quarter of the websites are in the top 200K ranks, websites from all popularity levels are captured in our list. 404 of the 582 websites that are not ranked by Tranco were ranked in the top one million by the CrUX list. Only 163 (8%) websites were not ranked either by Crux or Tranco in the top one million.
**DNS0 Kids filter check:** DNS0 Kids [93] is a domain name resolver that detects and filters out content that is not suitable for children, such as adult, dating, and copyright-infringing content. To determine the status of the websites in our list, we compared DNS0 Kids with CloudFlare's DNS resolver: if a website resolves via CloudFlare but not via DNS0 Kids, we treated it as blocked. We found that only ten (0.5%) of the 2,004 websites in our list were blocked by DNS0 Kids. Reviewing these ten websites, we found six of them to contain pirated videos, including cartoons. The remaining four websites contained activities for children, and it was not clear to us why they were blocked.
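The resolver comparison can be scripted along these lines (a sketch using dnspython; the DNS0 Kids resolver address below is a placeholder to be replaced with the service's published IP):

```python
import dns.resolver
import dns.exception

def resolves(domain: str, nameserver: str) -> bool:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        resolver.resolve(domain, "A")
        return True
    except dns.exception.DNSException:
        return False

CLOUDFLARE = "1.1.1.1"
DNS0_KIDS = "0.0.0.0"  # placeholder: substitute the published DNS0 Kids resolver IP

def blocked_by_dns0(domain: str) -> bool:
    # Blocked if CloudFlare resolves the domain but DNS0 Kids does not
    return resolves(domain, CLOUDFLARE) and not resolves(domain, DNS0_KIDS)
```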
## 4 Web Tracking and Advertising Measurements
To assess the prevalence of trackers, fingerprinting scripts, and (targeted) advertisements on child-directed websites, we extended Tracker Radar Collector (TRC) [94]. TRC is a Puppeteer-based [95] web crawler, which consists of modules called _collectors_ that record different types of data during a crawl, such as HTTP requests/responses, cookies, screenshots, and JavaScript API calls. New collector modules can be easily added to collect data necessary to perform different web measurements such as ours. Specifically, we added the following collectors to TRC:
* FingerprintCollector (§4.1): detects fingerprinting-related function calls and property accesses
* LinkCollector (§4.3): extracts inner page links
* VideoCollector (§4.5): captures a video of the crawl
* AdCollector (§4.6): detects ads and scrapes ad disclosures
We also used the existing TRC collectors, including RequestCollector to capture request/response details and detect tracking-related requests (§4.2), TargetCollector to detect newly opened tabs (§4.6), CookieCollector to analyze cookies, and finally CMPCollector (§4.4) to interact with the consent dialogs and consent management platforms (CMP). We used TRC's anti-bot measures [94], which thwart bot detection to a certain extent by overwriting artifacts typically probed by anti-bot scripts (e.g., navigator.plugins, Notification.permission) [96].
### Identifying fingerprinting attempts
Identifying fingerprinting scripts can be challenging due to obfuscation and potential false positives. For example, scripts may use the Canvas API either to draw images or to fingerprint the user's browser [47]. We draw on well-established methods to distinguish between fingerprinting and benign use of fingerprinting vectors [38, 1]. Specifically, we focused on Canvas, WebRTC, Canvas Font, and AudioContext fingerprinting and detected them using the heuristics presented by Iqbal et al. [38]. To detect fingerprinting attempts, we modified the getter and setter methods of several Web APIs, such as CanvasRenderingContext2D.fillText and HTMLCanvasElement.toDataURL, to intercept potentially fingerprinting-related function calls and property
accesses. Although TRC has the capability to intercept JavaScript API calls, we implemented a separate collector (FingerprintCollector) to avoid a known issue that prevented TRC from intercepting early function calls [97]. FingerprintCollector simply injects the instrumentation script into each page and its descendant frames as soon as they are created. We verified that our collector captures the calls missed by TRC on fingerprinting test pages we developed and external fingerprinting demo pages such as BrowserLeaks [98].
### _Identifying tracking-related requests_
To determine whether a request is tracking-related, we used the uBlock Origin Core [99] npm package, which reproduces the blocking behavior of uBlock Origin, a popular tracking protection extension [100]. We used the default filter lists of uBlock Origin, which include EasyList and EasyPrivacy, among others [101]. To correctly determine the blocked status of a request, we passed to uBlock Origin Core the resource type of the request (such as image or script) along with the page and request URLs. We extracted the resource type and other details from the HTTP request/response details saved by the crawler.
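For readers who want to reproduce this check outside Node.js, an analogous (though not identical) filter-list match is possible in Python with the adblockparser package; this is our illustrative substitute, not the uBlock Origin Core logic used in the study, and the file paths are assumptions:

```python
from adblockparser import AdblockRules

# Load locally downloaded filter lists (paths are illustrative)
raw_rules = []
for path in ["easylist.txt", "easyprivacy.txt"]:
    with open(path) as f:
        raw_rules += f.read().splitlines()
rules = AdblockRules(raw_rules)

# The resource type and first-party context matter for a correct verdict
options = {"script": True, "domain": "example-kids-site.com", "third-party": True}
print(rules.should_block("https://tracker.example/analytics.js", options))
```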
We mapped the tracker domains to their owner entities (i.e., organizations/companies) using DuckDuckGo's entity map [102]. Using entities to quantify tracker prevalence reduces overcounting as multiple domains can be owned by the same business (e.g., googleanalytics.com and doubleclick.net are both owned by Google).
### _Discovering inner pages_
We refrained from focusing only on homepages, as prior work found that websites' inner pages tend to contain more trackers and cookies [103, 104]. Thus, we also gathered five inner links from each of the 2,004 websites by conducting four separate link-collection crawls (desktop and mobile crawls from Frankfurt and NYC). We limited link collection to two vantage points, one in the EU and one in the US, to minimize the time and effort required for the process. We excluded links to external domains and documents such as PDFs or images. We also prioritized picking links closer to the center of the viewport to avoid collecting unrelated links from footers or other less visible parts of the page. Once we acquired the inner links, we combined them with the homepage URLs, resulting in the final URL set used for our study.
### _Interacting with consent dialogs_
Since the GDPR came into effect, websites typically show consent dialogs when viewed from the EU and to some extent even from the US [105]. Ignoring these dialogs may lead to _undermeasurement_ of the tracking and advertising practices. We decided to provide affirmative consent to all data processing request options (accept all) in our crawls to measure the full extent of advertisements and tracking a child could experience. To handle consent dialogs in an accurate and automated manner, we used DuckDuckGo's autoconsent library [106], which comes bundled with TRC [107]. Autoconsent incorporates rules from Consent-O-Matic [108, 109], and allows for programmatic interactions with the detected consent management provider (CMP).
### _Video screen captures_
To detect ads and scrape their disclosures, our crawler performed a series of interactions with the page, including dismissing popup dialogs, interacting with CMPs, and clicking on visible ad disclosure links (§4.6). To monitor these interactions, we added video capture functionality to the crawler (VideoCollector). We used videos of the crawler's interactions to troubleshoot potential issues with the crawl process as well as to manually label animated ads and other crawl artifacts.
### _Identifying ads and ad targeting criteria_
The AdCollector performed three main functions: 1) detecting ads, 2) scraping ads, including their screenshots, links, iframes, scripts, videos, and background images, and 3) detecting and scraping ad disclosure pages to determine whether an ad is targeted or not.
**Detecting ads:** To detect ads, we built on Zeng et al.'s [16] approach of using EasyList's rules [110]. EasyList rules are commonly employed by popular ad blockers to block or hide ads. For each detected ad element, the crawler recorded a set of attributes, including its position on the page, its dimensions, class, ID, and name, in addition to the complete HTML source and a screenshot. If the ad element contained any child elements, which was mostly true, the crawler recursively recorded their details, including all links, images, video, script, and iframe elements. Small elements (\(<30px\) in either dimension) and elements lacking any link, image, background image, or video were excluded.
In addition to taking a screenshot of each ad, the crawler separately downloaded image and video elements that were descendants of the ad element. These media are then used in the ML-based ad content analysis pipeline, in addition to the ad screenshots (§4.6). The crawler sent a single HTTP request during the page visit with the appropriate HTTP headers--such as the HTTP Referer [sic.] set to the current address-bar URL--when downloading these files. Finally, the crawler also saved data-URL images found in the ad's subtree.
In their study on inferring tracker-advertiser relationships, Bin Musa and Nithyanand [62] also employed EasyList's rules for ad identification, but their implementation differs from ours. While they focus on detecting image-containing HTTP responses using the EasyList filter set, we query the DOM to detect ad elements such as div elements, and their relevant descendant elements, such as images, iframes, links (a) and videos. Operating at the DOM level also allows us to detect and scrape ad disclosure pages to
detect targeted ads. To verify how accurately our crawler detects ads, we performed a sanity check by randomly sampling 15 ads from each of the seven crawls. The crawler correctly detected ads in 85% of cases, misidentified non-ads in 7.5%, and captured blank or empty ads in 7.5%. Some ad screenshots also included multiple (2.8%) or only part (4.5%) of the ads. However, the overall accuracy and quality of our ad data appear to be higher than in prior work by Zeng et al. [55], which reported 34% unrendered (blank/unreadable) ads. We attribute this difference in data quality to two potential reasons: first, our use of a more realistic crawler equipped with anti-bot measures; and second, unlike Zeng et al., we opted not to click the ads, which may trigger more stringent anti-bot and anti-fraud protections that prevent the delivery or rendering of the ads. We also verified the accuracy of the ad images separately downloaded by the crawler, finding them all to be present in the ads shown on the page.
**Determining targeting criteria:** In order to measure the prevalence of targeted advertisements at scale, we automated the process of scraping ad disclosure (e.g., "Why this ad") pages. While the content of ad disclosure pages may vary by ad platform, they generally explain in broad terms why a specific advertisement was shown to a user. The reasons may include, for instance, _Google's estimation of your interests_ or _Websites you've visited_. The disclosure pages may also contain information about the website and the advertiser, and whether ad targeting is turned off for the website or a specific ad. Two example disclosure pages for a targeted and non-targeted ad are shown in Figure 3.
Ad disclosure pages are reachable by clicking the AdChoices icon and the "Why this ad" button for Google ads [111] and other ad providers. Initially, we attempted to detect the ad disclosure links using fuzzy image matching based on the AdChoices icon. However, we found that the icon's shape and visibility vary substantially across different ad vendors, and the icon can sometimes be hidden, making it unclickable. As a result, we decided to detect the ad disclosure links using their URLs and to limit ourselves to a fixed set of providers that we can reliably and deterministically detect. Based on our analysis of ad disclosure pages encountered in the pilot crawls, we compiled a list of hostnames (i.e., adssettings.google.com, privacy.us.criteo.com and privacy.eu.criteo.com) that appear in the ad disclosure links and provide an explanation about whether an ad is targeted or not. We limited our investigation to ad disclosure pages from these two providers because other providers we encountered in our pilot crawls did not offer any useful information about the targeting criteria of the ads.
Once the crawler detects and clicks on the AdChoices link, the ad disclosure page opens in a new tab. We intercepted this new tab and stored its URL, screenshot, and text contents (via document.innerText) for analysis. The scraped text contents are then used to detect whether ad targeting is enabled or not. Specifically, we searched the ad disclosure texts for specific disclosure statements indicating whether and how an ad was targeted. The disclosure statements include, for instance, _Google's estimation of your interests_ (targeted), _Websites you've visited_ (targeted) and _Ad personalization is turned off_ (non-targeted). If one or more statements indicating targeted ads occur in an ad disclosure text, we label the ad as targeted. Otherwise, we label the ad as non-targeted. Note that behavioral and retargeted ads also count toward the targeted category. We compiled the list of 18 statements (Appendix A.3) incrementally, using over 40K ad disclosure texts extracted during the crawls, and verified that every ad disclosure contains at least one of these statements, ensuring our analysis is exhaustive.
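The labeling step itself reduces to substring matching over the scraped disclosure text; a condensed sketch with only a few of the 18 statements (the full list is in Appendix A.3) is:

```python
# A subset of the disclosure statements; the full list is in Appendix A.3
TARGETED_STATEMENTS = [
    "Google's estimation of your interests",
    "Websites you've visited",
]
NON_TARGETED_STATEMENTS = [
    "Ad personalization is turned off",
]

def label_ad(disclosure_text: str) -> str:
    if any(s in disclosure_text for s in TARGETED_STATEMENTS):
        return "targeted"      # includes behavioral and retargeted ads
    if any(s in disclosure_text for s in NON_TARGETED_STATEMENTS):
        return "non-targeted"
    return "unknown"           # should not occur: the statement list is exhaustive
```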
**Interacting with the page and ads:** Upon loading a page, our crawler waited for 2.5 seconds and dismissed any popup dialogs using heuristics from prior work [29]. We dismissed these dialogs to prevent them from blocking our crawler's interactions with the webpage. The crawler then waited for another second before scrolling through the page in 10 steps, taking strides of about 500-600 pixels each interlaced with a random delay of 500-600 milliseconds. Finally, after waiting for another second, it scrolled up to the beginning of the page using the same scrolling behavior. We engineered this up-and-down scrolling behavior to allow the webpage to load any ad slots that are lazily loaded as the user scrolls the page below the landing fold.
The crawler then identified all the advertisements on the page. It set the border color of each ad to red to visually mark the advertisements for manual review. The crawler then took a screenshot of the entire page and then scraped each ad in a top-down fashion. To ensure that an advertisement is fully seen, it scrolled down to each ad
Figure 3: Google’s ad disclosure pages indicating whether an ad is targeted or not. The top figure belongs to a targeted ad (indicated by _Google’s estimation of your interests_), while the bottom one is for a non-targeted ad (indicated by _Ad personalization is turned off_)
before taking its screenshot. Finally, the crawler detected ad disclosure links and clicked each one individually to capture all ad disclosure texts and screenshots. We limited the number of scraped ads per page visit to ten, which limits over-representation by a few websites with many ads.
**Analyzing advertisement content:** We identified and measured four kinds of advertisements in our corpus: weight loss ads, mental health ads, dating services ads, and ads that contain clickbait racy content. While our dataset of ads can be used to perform fuller content analysis, we focused on these four categories since prior work [112, 113] and regulatory reports [114] have argued that these can be especially harmful to children. In fact, many ad networks' moderation policies [115, 116] explicitly restrict these categories of ads from appearing on children's websites. An overview of the ad content analysis pipeline is shown in Figure 4. To identify ads containing clickbait racy content, we employed Google Cloud Vision API's SafeSearch Detection [117], which is a service that uses deep learning to analyze images and identify potentially unsafe content. It evaluates images against categories such as adult, violent, racy, and medical content and returns likelihood scores for each category, ranging from 'VERY_UNLIKELY' to 'VERY_LIKELY.' Upon manually evaluating the output generated by the algorithm, we focused on the 'racy' category with a likelihood of 'VERY_LIKELY'. We also tested Microsoft's Adult Content Detection [118], part of Azure Cognitive Services, to identify racy images. However, due to more false positives compared to the Google Cloud Vision API, we chose the latter for our study.
We used the Google Cloud Vision API to extract text from ad images following a similar approach to Bin Musa and Nithyanand [62].
The text in each image was extracted using the Optical Character Recognition (OCR) feature of the API, specifically by employing the fullTextAnnotation attribute of the API response. This allowed us to extract text data at different levels, such as page, paragraph and word. We opted to use the paragraph level since it gives the best separation in ads promoting multiple unrelated products. Despite their name, paragraphs returned by the API were relatively short and akin to sentences (21 characters, on average).
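Both Vision API calls can be sketched as follows (our illustration of the client usage; authentication setup is omitted and the paragraph assembly mirrors the fullTextAnnotation hierarchy):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def analyze_ad_image(path: str):
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # SafeSearch: flag ads whose 'racy' likelihood is VERY_LIKELY
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    is_racy = annotation.racy == vision.Likelihood.VERY_LIKELY

    # OCR: collect paragraph-level text from fullTextAnnotation
    doc = client.document_text_detection(image=image).full_text_annotation
    paragraphs = []
    for page in doc.pages:
        for block in page.blocks:
            for para in block.paragraphs:
                text = "".join(sym.text for word in para.words
                               for sym in word.symbols)
                paragraphs.append(text)
    return is_racy, paragraphs
```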
We then employed semantic similarity to identify the ad texts (paragraphs) most similar to a given search query, which in our case were "weight loss", "mental health", and "dating". This approach is versatile and can be used to retrieve ads related to any arbitrary words or phrases. To compute the embeddings of the queries and ad paragraphs, we used the "paraphrase-multilingual-mpnet-base-v2" model, the distilled multilingual model we used to classify web pages (§3.2). To find the most similar results, we calculated the cosine similarity between the embeddings of the search query and each ad paragraph and sorted them accordingly. Next, we manually reviewed the 100 most similar distinct paragraphs and their associated images, including ad screenshots or background ad images, to identify those that pertained to the three categories of interest. We also experimented with BERTopic [119] to create topic models and searched for clusters similar to our chosen categories. While this resulted in well-grouped texts, it required manual verification of numerous (many thousands of) clusters. Sorting based on semantic similarity proved to be faster, more flexible, and easier to implement and evaluate, making it the preferred approach for manual review.
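The similarity search itself is only a few lines with SentenceTransformers (a sketch; `ocr_paragraphs` stands for the paragraph texts extracted above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

def rank_paragraphs(query: str, paragraphs: list[str], top_k: int = 100):
    """Return the top-k ad paragraphs most similar to the search query."""
    para_emb = model.encode(paragraphs, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, para_emb)[0]
    best = scores.argsort(descending=True)[:top_k].tolist()
    return [(paragraphs[i], float(scores[i])) for i in best]

# e.g., rank_paragraphs("weight loss", ocr_paragraphs)
```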
### List of crawls
The main dataset used in our study consists of seven crawls, all of which were run in April 2023 using cloud-based servers on Digital Ocean. Crawls were run in parallel using separate servers with moderate resources (8 vCPU cores, 16GB RAM); each crawl took between 13 and 32 hours to complete. We ran these crawls from Frankfurt, Amsterdam, London, San Francisco, and New York City, using desktop browsers in accept-all consent mode, meaning we consented to any cookie dialogs that appeared. During each crawl, we visited both landing pages and inner pages, following the process described in §4.3. The three cities beyond NYC and Frankfurt were included to capture differences in ads across vantage points. We ran two mobile browser crawls from different vantage points (Frankfurt and New York City), again using the accept-all consent mode. We limited the mobile browser crawls to two vantage points because we do not focus on mobile-desktop comparison, which we leave to future work.
## 5 Measurement Results
Table I summarizes the overall statistics for the measurement crawls. A total of 71,567 pages were loaded successfully across all crawls. The success rate of our crawler was over 93%, according to the successful visit criteria we developed and applied (Appendix A.4).
For simplicity, certain comparative results presented below are based on desktop crawls from NYC and Frankfurt, representing one location each in the US and the EU.
### Ad targeting and content analysis
Our crawler scraped 70,303 ads from 804 of the 2,004 distinct websites across seven crawls. An average of 36% of the pages contained one or more ads, and we detected targeted ads on 27% of the pages we crawled. The crawler scraped 10,839 and 9,447 ads on average in the crawls from the US and Europe, respectively.

Figure 4: Overview of the advertisement content analysis pipeline.
#### 5.1.1 Over 70% of ads are targeted in nature
Our crawler captured a total of 40,281 ad disclosure pages, which we used to determine the advertiser's identity and whether ad targeting was enabled. There are fewer disclosure pages than ads due to ads without disclosure links and failures in detecting or opening those links. In fact, we only consider ad disclosures from two ad providers, Google (97.8%) and Criteo (2.2%), since ad disclosure pages of other providers did not reveal the targeted status of the ad or the advertiser's identity. Limiting our analysis to these 40,281 ad disclosure pages, we found that targeting was enabled for 73% of the ads. Comparing across different privacy jurisdictions, we find that 68% of the ads on average were targeted in the EU crawls, compared to 76% in the UK and the US crawls. Comparing the crawls from the two US cities (SF & NYC), we find that 67% of the ads were targeted in the SF desktop crawl, compared to 79% and 82% in the NYC-based desktop and mobile crawls, respectively. Although these variations might be attributed to stricter privacy regulations like CCPA and GDPR, our available data and methods do not permit us to make this attribution. Comparing the Tranco ranks of the 689 websites that contain at least one targeted ad to the 59 websites that only contain non-targeted ads, we find a tendency for popular websites to disable ad targeting (Figure 5). Sites with targeted ads had a median rank of \(\sim 340K\), while those that show only non-targeted ads had a median rank of \(\sim 128K\). Note that this analysis only includes the 40,281 ads for which we could determine the targeted status.
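The rank comparison can be reproduced with a few lines of pandas; the DataFrame layout below (one row per site, with a Tranco `rank` column and a boolean `has_targeted_ad` column derived from the disclosures) is illustrative.

```python
# Sketch of the popularity-vs-targeting comparison, assuming a per-site
# DataFrame with columns 'rank' (Tranco) and 'has_targeted_ad' (bool).
import pandas as pd

def median_rank_by_targeting(sites: pd.DataFrame) -> pd.Series:
    """Median Tranco rank for sites with vs. without targeted ads."""
    return sites.groupby("has_targeted_ad")["rank"].median()

# In our data: True -> ~340K (689 sites), False -> ~128K (59 sites).
```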
#### 5.1.2 Ads can be targeted from anywhere
The "About the advertiser" section in Google's ad disclosures shows the name and location (country) of the advertisers. This information is only available in 70% of the ad disclosures in our dataset. Extracting these fields from the ad disclosure texts, we identified 1,685 distinct advertisers from 81 different countries. Advertisers with the most ads in our data are displayed in Table III. We note that due to the transient, targeted and localized nature of ad campaigns, the list in Table III may not represent the most common advertisers on child-directed websites in general. Further, in certain cases (e.g., Gloworld LLC and Marketism), an advertising or marketing agency is listed on the ad disclosure page instead of the company offering the advertised products or services.
The top ten advertisers are located in seven different countries and three continents. We observed that many of those advertisers are located far from our crawl vantage points, thus indicating that children visiting websites in our list can be targeted with ads from anywhere in the world. By reviewing a sample of 100 ads from each advertiser, we characterize the type of ads they run in the rightmost column. Five of the ten advertisers display ads for search results about various products on lesser-known search engines such as IngoSearch [120]. Ads from BetterMe [121], a "behavioral healthcare app" with more than 100M installations, featured plans for weight loss, muscle gain, and intermittent fasting (e.g., Figure 1). Brain Metrics Initiative displays ads for IQ tests, an example of which is given in Figure 1. Alibaba Hong Kong, on the other hand, displays ads featuring racy and disturbing images of products sold on alibaba.com. For instance, the ad on the top left of Figure 1 features recurring images in Alibaba ads: a naked baby model (leftmost), rabbit meat (rightmost), and a semi-transparent underwear ad in the middle. We investigate similar racy clickbait ads and other improper ads in the following subsection.
| Form factor | Vantage point | Successfully loaded pages | Successful crawling rate |
| --- | --- | --- | --- |
| Desk. | NYC | 10,310 | 95% |
| Desk. | SF | 10,301 | 95% |
| Desk. | LON | 10,270 | 95% |
| Desk. | FRA | 10,221 | 95% |
| Desk. | AMS | 10,014 | 93% |
| Mobile | NYC | 10,168 | 94% |
| Mobile | FRA | 10,283 | 96% |
| Sum/Avg. | | 71,567 | 95% |

TABLE I: Crawl statistics based on different vantage points.
| Form factor | Vantage point | # ads | % sites with ads | % sites with targeted ads | % targeted ads* |
| --- | --- | --- | --- | --- | --- |
| Desk. | NYC | 11,288 | 38% | 30% | 79% |
| Desk. | SF | 10,950 | 38% | 28% | 67% |
| Desk. | LON | 9,702 | 36% | 27% | 76% |
| Desk. | FRA | 9,700 | 36% | 26% | 68% |
| Desk. | AMS | 9,250 | 35% | 26% | 67% |
| Mobile | NYC | 10,278 | 36% | 29% | 82% |
| Mobile | FRA | 9,135 | 33% | 26% | 70% |
| Sum/Avg. | | 70,303 | 36% | 27% | 73% |

TABLE II: Number of visits and scraped ads, along with percentages of ads/targeted ads per crawl. *: Percentage of targeted ads is only based on ads with disclosures. In the rightmost two columns, we include a site if we scraped at least one ad/targeted ad from one of its pages.
Fig. 5: Tranco rank (x-axis) distribution of sites that use targeted vs. non-targeted ads. Popular websites (below) appear to be more prone to disabling ad targeting.
#### 5.1.3 Improper ads on child-directed sites
In total, our crawler collected 199,935 screenshots and images from the 70,303 scraped ads. After deduplicating the images, we queried the Cloud Vision API to obtain the category and OCR texts of the resulting 98,264 distinct images. We manually reviewed 741 images classified as 'VERY_LIKELY' racy by the API. Separately, we reviewed 1,136 ad images whose OCR text was semantically most similar to our search terms (mental health, dating, and weight loss). Due to study limitations, we only examined the ads related to the top 100 distinct texts for each term. Since each distinct text may appear in multiple ads in different ways, we labeled the images separately, and used videos captured by the crawler when an ad was animated or its screenshot was obscured. Table IV shows the number of improper ads identified in each crawl, amounting to a total of 1,003 across 311 distinct websites. A notable finding is the higher prevalence of such ads on mobile devices compared to desktops.
**Racy images.** We found 177 racy ads and 163 somewhat racy ads, the latter considered edge cases due to their potential inappropriateness for child-directed websites. These ads were identified across 80 distinct websites, mostly ranked within the top one million of the Tranco list, with a median rank of 426K. Figure 1 shows examples of some of these ads. Notably, over half of these racy ads were encountered on mobile devices in the NYC region. Of the 177 racy ads, 38 had ad disclosure pages that allowed us to determine whether they were targeted. Our analysis indicated that the majority (35 of these 38 ads) were indeed targeted, with only 3 classified as non-targeted.
**Mental health.** By manually labeling 236 ad images, we identified 81 ads related to mental health on 48 distinct websites. Examples of ads in this category contained "take a depression test" (Figure 1), "online psychiatrists," "how to get over depression," and a "mental health chatbot which helps people with depression."
**Dating.** Manually labeling 231 ad images, we identified 70 dating-related ads on 48 distinct websites, most of which targeted mobile users. The ads promoted dating services such as "dating.com" and "Live Me," a live streaming app with ads featuring suggestive imagery (Figure 1). Another ad, for DateMyAge.com, featured a call to "[m]eet your mature singles."
**Weight loss.** We identified 512 ads related to weight loss on 170 distinct websites by labeling 669 ad images. Notably, there was a higher number of weight loss ads on mobile devices, indicating campaigns targeting mobile users. Examples of text featured in these ads included "intermittent fasting for weight loss," "keto weight loss plan," and "eating plan to lose weight" (Figure 1).
In Figure 1, we provide additional examples of advertisements that are likely not suitable for children. These included an ad for a test called "Am I Gay Test", an ad for a sex toy, and one for a sex toy shop² featuring an image of ice cream that could be appealing to children, along with other ads featuring clickbait and sexually suggestive images. The ads were found on websites related to K-12 e-learning, kid games, coloring books, and worksheets, among others.
Footnote 2: Reportedly Germany’s largest online adult retailer [123]
**Malicious ad links.** Finally, we present an exploratory analysis of whether ads on child-directed websites link to malicious pages. We submitted a sample of links extracted from the ad elements to the VirusTotal API in August 2023. Specifically, we removed links with duplicate hostnames, and for Google ads, we extracted a direct link to the ad landing page using the 'adurl' parameter [124]. While the overwhelming majority of the links were classified as benign, 149 of the nearly 3,940 scanned links were flagged as malicious or phishing by at least one scan engine. Notably, the word "taboola" was mentioned in 78 of the 149 detected links as a URL parameter that seems to indicate the ad network (network=taboola).
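Both steps (pulling the landing page out of the 'adurl' parameter and querying VirusTotal v3) are sketched below; the API key handling and function names are illustrative.

```python
# Sketch of the malicious-link check, assuming a VirusTotal v3 API key.
import base64
from urllib.parse import urlparse, parse_qs

import requests

def extract_landing_url(google_ad_url: str):
    """Pull the ad landing page from the 'adurl' query parameter, if any."""
    return parse_qs(urlparse(google_ad_url).query).get("adurl", [None])[0]

def vt_last_analysis(url: str, api_key: str) -> dict:
    """Fetch VirusTotal's last analysis stats for a previously scanned URL."""
    url_id = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": api_key},
    )
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]["last_analysis_stats"]
```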
### Tracking and fingerprinting analysis
Table V shows the prevalence of third-party trackers detected across different crawls. We find that around 90% of the websites have at least one tracker domain, and over 93% embed at least one third-party domain.
**Third-party trackers.** The average number of tracker domains per site differs significantly, e.g., 15.6 and 23.4 in the Frankfurt and NYC crawls, respectively, while the medians are 15 and 16. The difference in averages is likely due to outliers (i.e., websites with a very high number of trackers) in the NYC crawl.
| Form factor | Vantage point | Dating | Mental health | Weight loss | Racy | Somewhat racy | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Desk. | NYC | 4 | 21 | 16 | 21 | 26 | 88 |
| Desk. | SF | 7 | 9 | 15 | 6 | 25 | 62 |
| Desk. | LON | 10 | 17 | 48 | 12 | 31 | 118 |
| Desk. | FRA | 1 | 0 | 48 | 19 | 25 | 93 |
| Desk. | AMS | 8 | 4 | 82 | 10 | 33 | 137 |
| Mobile | NYC | 22 | 25 | 113 | 98 | 17 | 275 |
| Mobile | FRA | 18 | 5 | 190 | 11 | 6 | 230 |
| Total | | 70 | 81 | 512 | 177 | 163 | 1,003 |

TABLE IV: Number of improper ads identified for each crawl.
| Advertiser | Location | # ads | % targeted | Type of ads |
| --- | --- | --- | --- | --- |
| Vinden.nl B.V. | Netherlands | 4,707 | 86% | Search results |
| EXPLORADS | Cyprus | 3,265 | 73% | Search results |
| All Response | UK | 2,453 | 68% | Search results |
| Gloworld LLC | USA | 2,365 | 55% | Online learning |
| Amomama M. | Cyprus | 921 | 72% | Workout/muscle |
| Media Quest | UAE | 910 | 79% | Search results |
| Brain Metrics I. | Cyprus | 814 | 50% | IQ tests |
| BetterMe | Cyprus | 731 | 85% | Weight loss |
| Marketism | Israel | 645 | 49% | Search results |
| Alibaba.com HK | Hong Kong | 541 | 86% | Products sold |

TABLE III: Top ten advertisers by the number of ads across all crawls.
This explanation is in line with the results displayed in Table VII, which shows the top five websites with the most trackers in the Frankfurt and NYC crawls. Most of these websites are among the top one million and receive substantial traffic. Notably, all of these sites displayed targeted ads. The numbers shown in the table (number of trackers, requests, and cookies) reflect averages across the web pages. In the NYC crawl, visiting mathfunworksheets.com triggered a total of 1,547 requests involving 161 unique third-party tracker entities (i.e., organizations/companies). Another website, woojr.com, was found to contain 148 distinct third-party tracker entities when visited from NYC. This website includes resources for children's activities and educational materials, including printable worksheets and fun activity pages. When visited from Frankfurt, www.wowescape.com, a website offering various games for children and teenagers, triggered requests to 95 distinct third-party tracker entities.
**Most prevalent trackers.** Table VI shows the tracker entities with the highest prevalence in the Frankfurt and NYC desktop crawls. We found a tracking-related request to Google domains, including its analytics, advertising, and tag management scripts, on \(\sim\)84% of the 2,004 child-directed websites in both crawls. Facebook is the second most prevalent entity in the Frankfurt crawl, mostly due to Facebook Pixel (on 427 websites), which facilitates ad retargeting and conversion measurement, among others [125]. Largely thanks to the LinkedIn Insight Tag (px.ads.linkedin.com, 466 websites), Microsoft is the second most prevalent entity in the NYC crawl. The LinkedIn Insight Tag serves multiple purposes, including retargeting, conversion measurement, and providing demographic insights about website visitors [126].
**Regional differences.** To explore the differences in tracker entities across vantage points, we compared the tracker entities from the Frankfurt and NYC desktop crawls. Despite a considerable overlap among the detected tracker entities (Jaccard index = \(0.85\)), we also identified variations. Specifically, we found 47 tracker entities exclusive to the Frankfurt crawl and 118 tracker entities that were only found in the NYC crawl. For instance, tracking-related requests to _advanced STORE_ [127] (ad4m.at & ad4mat.net, 236 websites) exclusively appear in the crawl from Frankfurt, whereas _Throtle_, a company that provides an identity graph to marketers and advertisers, only appears on 171 websites in the NYC crawl [128].
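The overlap measure is the standard Jaccard index over per-crawl sets of entity names; a minimal sketch:

```python
# Sketch of the vantage-point comparison over sets of tracker entities.
def compare_crawls(fra: set[str], nyc: set[str]):
    jaccard = len(fra & nyc) / len(fra | nyc)   # 0.85 in our data
    only_fra = fra - nyc                        # 47 entities in our data
    only_nyc = nyc - fra                        # 118 entities in our data
    return jaccard, only_fra, only_nyc
```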
Furthermore, we find that the majority of the websites in both the Frankfurt and NYC crawls (70% and 72%, respectively) contain third-party trackers that set at least one cookie with the SameSite=None attribute and a lifespan of over three months. Primarily through the doubleclick.net domain, Google set such cookies on over 51% of the websites.
While identifying the individual purposes of these cookies is out of scope, this combination of cookie attributes (esp. setting SameSite=None) makes it possible to track users across websites.
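A sketch of this cookie screening, assuming cookie records captured by the crawler (the field names are illustrative):

```python
# Flag SameSite=None third-party cookies with a lifespan over three months.
from datetime import timedelta

THREE_MONTHS = timedelta(days=90)

def is_long_lived_tracking_cookie(cookie: dict) -> bool:
    return (
        cookie["is_third_party"]
        and str(cookie.get("same_site", "")).lower() == "none"
        and cookie["expires"] - cookie["set_at"] > THREE_MONTHS
    )
```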
**Sites with and without ads.** As part of our investigation, we conducted an additional analysis comparing the number of third parties and trackers on websites with and without ads. Figure 6 shows that websites with ads tend to contain two to four times more third-party and tracker domains.
**Browser Fingerprinting.** We now discuss our findings on fingerprinting scripts on child-directed websites.
Table V shows that we detect fingerprinting scripts on 176 (9%) and 218 (10%) websites in the Frankfurt and NYC crawls, respectively.
The overall prevalence of fingerprinting aligns with the recent research by Iqbal et al., which finds fingerprinting on 10.18% of the top-100K websites [38]. One of the most
| FRA: Entity | # Sites | NYC: Entity | # Sites |
| --- | --- | --- | --- |
| Google | 1,702 | Google | 1,718 |
| Facebook | 458 | Microsoft | 549 |
| Index Exchange | 424 | Adobe | 543 |
| Xandr | 416 | Xandr | 516 |
| Adform | 412 | The Trade Desk | 501 |
| The Trade Desk | 390 | Index Exchange | 495 |
| OpenX | 378 | IPONWEB | 467 |
| Adobe | 366 | Facebook | 456 |
| Quantcast | 361 | Magnite | 446 |
| PubMatic | 359 | OpenX | 426 |

TABLE VI: Prevalence of tracker entities in terms of number of distinct websites in Frankfurt and NYC desktop crawls.
| Form factor | Vantage point | 3rd-party domains (avg) | Tracker domains (avg) | Tracker entities (avg) | % sites with 3rd parties | % sites with trackers | % sites with FP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Desk. | NYC | 31.6 | 23.4 | 20.0 | 95% | 90% | 9% |
| Desk. | SF | 29.3 | 21.3 | 17.8 | 95% | 91% | 9% |
| Desk. | LON | 21.3 | 14.3 | 10.6 | 96% | 91% | 7% |
| Desk. | FRA | 23.2 | 15.6 | 11.7 | 95% | 90% | 10% |
| Desk. | AMS | 21.4 | 14.3 | 10.6 | 93% | 89% | 7% |
| Mobile | NYC | 29.8 | 21.8 | 18.4 | 95% | 91% | 9% |
| Mobile | FRA | 22.6 | 15.2 | 11.5 | 95% | 90% | 11% |

TABLE V: Average number of third-party and tracker domains, and the prevalence of tracking and fingerprinting on child-directed websites based on crawls from five vantage points.
Figure 6: Comparative analysis of the average number of third-party and tracker domains/entities on websites, with and without ads.
prevalent fingerprinters in both crawls is an online payment company (Stripe; 66 and 67 sites in the Frankfurt and NYC crawls, respectively). According to their help pages [129], Stripe primarily employs fingerprinting for fraud prevention purposes. Webgains (82 sites in the Frankfurt crawl), an affiliate marketing company, also mentions fingerprinting in their Data Processing Agreement with Merchants [130], but without specifying its purpose.
The most commonly used fingerprinting method is Canvas fingerprinting, present on about 208 sites in the Frankfurt crawl and about 172 sites in the NYC crawl.
We found one or more trackers to be present on more than 90% of mobile websites (Table V), similar to our finding for desktop websites. The NYC and Frankfurt crawls differ slightly in the number of ads: we scraped 10,278 ads in the NYC crawl and only 9,135 in the Frankfurt crawl, the lowest of any crawl. Slightly more websites (29% vs. 26%) in the NYC mobile crawl have targeted ads, and the NYC mobile crawl has the highest proportion of targeted ads (82%) across all crawls. We also discovered that improper ads, particularly racy and weight loss ads, were more prevalent on mobile devices compared to desktops.
## 6 Discussion
Our research paints a troubling picture of tracking and inappropriate advertising practices on child-directed websites. Advertisements featuring sexually suggestive imagery and ads about weight loss, dating, and mental health may pose potential risks to children's emotional and psychological welfare. We discuss the legal implications, ethical considerations and limitations of our study below.
### _Legal implications_
In this section, we discuss what the law says about tracking and advertising practices uncovered in our research. We focus on the EU General Data Protection Regulation (GDPR) and the US Children's Online Privacy Protection Act (COPPA).4
Footnote 4: We do not analyze whether specific companies breach the law. For such an analysis, each case would have to be examined separately, considering all the circumstances of that specific case. Rather, we discuss legal requirements in general terms.
**The GDPR and the ePrivacy Directive.** Under the GDPR, companies are only allowed to process personal data if they have a legal basis for such processing. The GDPR provides six possible legal bases (Article 6 GDPR). However, generally, the data subject's consent is the only possible legal basis for online tracking and behavioral (targeted) advertising [131]. Moreover, the ePrivacy Directive [132] requires, in short, that companies ask the internet user for consent before they use tracking cookies or similar tracking technologies (Article 5(3)).
The GDPR's requirements for valid consent are strict. Consent is only valid if it is truly voluntary (freely given), specific, and informed.
The data controllers (the website owner and the companies involved in tracking and targeted advertising) must be able to demonstrate that the data subject has consented to the processing of his or her personal data (Article 7(1) GDPR). The GDPR's requirements for valid consent also apply to consent (for cookies, etc.) as prescribed by the ePrivacy Directive. The GDPR has specific rules for consent by children. Roughly summarized, children cannot give valid consent themselves; the parent should give consent instead (Article 8 GDPR). EU member states have set different minimum consent ages, ranging from 13 to 16 years [133]. Hence, only parental consent can legitimize tracking on a children's website. Note that simply clicking a consent dialog (as our crawler did) does not constitute parental consent under the GDPR. Even in low-risk cases, verification of parental responsibility via email may be necessary [134].
**The EU Digital Services Act.** The rules for tracking and targeting children will become stricter in the EU. The EU Digital Services Act [135] applies from 17 February 2024. Article 28 says, roughly summarized, that online platforms must not use behavioral advertising 'when they are aware with reasonable certainty that the recipient of the service is a minor' [135]. This prohibition cannot be overridden with the consent of the child or the parent. The DSA also requires "very large online platforms" [136] (with more than 45 million users in the EU) to publish the advertisements they presented to users in an online repository, together with information about, for instance, the targeting criteria (Articles 33, 39 DSA). The methods that we used in this paper could be used to check the completeness and accuracy of data published in those repositories.
**COPPA.** COPPA regulates companies offering a website or online service directed to children under the age of 13. Specifically, COPPA applies to companies using children's 'personal information,' which includes 'persistent identifiers such as cookies and device fingerprints' (COPPA 312.2) [137]. The website owner is responsible for data collection by third parties through its site. Such third parties must also comply with COPPA. Companies based outside the US must also comply with COPPA if their services are directed to children in the US [137].
Our results showed that 27% of the child-directed websites use targeted advertising. Under COPPA, data collection for targeted advertising on these websites is only allowed after obtaining parents' Verifiable Parental Consent (VPC).
| Loc | Website | # Trackers | # Requests | # Cookies | Rank |
| --- | --- | --- | --- | --- | --- |
| NYC | mathfunworksheets.com | 161 | 1,547 | 395 | 669K |
| NYC | woojr.com | 148 | 2,181 | 391 | 83K |
| NYC | imeraldblind.com | 139 | 1,235 | 336 | 308K |
| NYC | kidfed.com | 138 | 1,050 | 272 | 279K |
| … | … | … | … | … | … |

TABLE VII: Top five websites with the most trackers in the NYC and Frankfurt crawls.
VPC entails utilizing stringent verification methods, including credit card verification, face recognition, and government ID checks [138]. This makes VPC much more complex than simply clicking an accept button on a dialog. We note that our crawler simply lacks the ability to give VPC.
### _Research Ethics_
Our crawler visited over 166K pages, and it triggered many ad impressions that would otherwise be viewed by real visitors (likely children). Given the huge scale of the digital ad market (projected to reach US$700bn in 2023 [139]), we believe these ad impressions are a negligible cost for raising transparency around tracking and ads targeted at children. Furthermore, we took several measures to limit our footprint on the crawled websites. For instance, we only crawled five inner pages from each site in a crawl, and we randomly shuffled the target URLs to avoid concurrently visiting the inner pages of a website. We also took appropriate measures to ensure that no harm was done to collaborators involved in the project, especially when dealing with explicitly graphic images.
**Disclosures and outreach.** In May 2023, we shared highlights of our findings with the Children's Advertising Review Unit (CARU), a self-regulatory COPPA safe harbor program in the US [140]. We still await their response as of August 2023. In July 2023, we reached out to five companies that we found to serve racy ads. One of the companies thanked us for our report and stated that they immediately commenced an internal investigation. Another company said they transferred our request to the relevant department. Moreover, we disclosed 34 racy ads to Google by manually visiting the ad disclosure URLs of each racy (Google) ad and using the _Report this ad_ button. In order to identify the ad vendors involved in serving each ad, we used a combination of ad images and the src/href attributes of the ad's descendant iframe, image, and link elements (§4.6).
We also shared our preliminary results with a European data protection agency (DPA) and a consumer protection agency. Both showed interest; the DPA asked whether any websites from their country contained improper ads. The consumer protection agency stated that they would discuss our paper in a private enforcement agencies meeting and asked for permission to share it with their country's DPA. We plan to further share our study's results with regulators and other relevant stakeholders.
While using the VirusTotal API, we found and reported three porn websites miscategorized as kids-related to the respective third-party categorization service. While we did not hear back, we found that the website categories were later rectified.
### _Limitations_
While our classifier detected child-directed sites in 48 different languages, it may be biased towards English websites due to the over-representation of English pages in the training data. Moreover, our classifier may favor websites with good search engine optimization (SEO) practices due to more descriptive website titles and descriptions. Our list-building pipeline and crawler may suffer from other biases as well, depending on the age, design or accessibility of a website.
While we found fewer targeted ads in the EU than in the US, we cannot directly attribute this to differences in privacy regulation or another specific factor. Failure to detect and interact with consent dialogs may be a confounding factor, among others. When detecting targeted ads, we only used ad disclosure pages from two providers (Google and Criteo) due to the unavailability of useful ad disclosures from other vendors.
When manually verifying the classified websites, we conservatively labeled websites as child-directed. However, a small percentage (2.2%) of websites in our list are _mixed audience websites_: they have content directed to both adults and children. While such mixed audience websites are explicitly covered under COPPA [36], when extracting inner links from those websites, we might have collected pages that are not directed at children. We believe the relative infrequency of such sites ensures that this does not have a significant impact on our results. We conducted four sets of inner link collection crawls: two from NYC and two from Frankfurt, encompassing both desktop and mobile crawls. The SF crawl utilized links extracted from the NYC crawl, while the London and Amsterdam crawls utilized links from the Frankfurt crawl. This constraint does not appear to impact the success rate of visits across these vantage points; nonetheless, future research could explore identifying inner pages during the crawling process.
We used cloud-based servers to run the crawls. Websites may treat cloud-based IP addresses or automated browsers differently [141, 142, 39]. To curb such effects, we used the anti-bot detection features of TRC [94]. Reviewing the screenshots captured during the visits, we observed very few blocked visits.
Since we use a fresh profile for each visit, we may not capture re-targeted or other personalized ads that are only shown to users with a behavioral profile. Future work could extend our method to incorporate personas and warm-up crawls to study such ads. Overall, we do not claim that our findings are representative of tracking and advertising practices on child-directed websites. Our focus in this study is not on how ads are targeted, but simply on _whether targeting is enabled or not_.
## 7 Conclusion
We presented an empirical study of online tracking and advertisements on over 2,000 child-directed websites. Building a lightweight and versatile ML pipeline to analyze ad content, we identified hundreds of improper ads, including weight loss and mental health ads, and ads featuring dating services and racy, sexually suggestive imagery. Our study reveals several notable trends: websites featuring advertisements tend to contain two to four times more trackers, mobile websites exhibit a greater prevalence of inappropriate ads, and popular websites are less likely to deploy targeted advertisements. Our findings provide concrete evidence of troublesome practices that are likely illegal, unethical, or simply careless. We call for more research, regulation, and enforcement to limit the ongoing violation of children's privacy and well-being.
## 8 Acknowledgments
Asuman Senol was funded by the Cyber-Defence (CYD) Campus of armasuisse Science and Technology. Veelasha Moonsamy was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2092 CASA - 390781972.
|
2310.10460 | Experimental Validation of Memristor-Aided Logic Using 1T1R TaOx RRAM
Crossbar Array | Memristor-aided logic (MAGIC) design style holds a high promise for realizing
digital logic-in-memory functionality. The ability to implement a specific gate
in a MAGIC design style hinges on the SET-to-RESET threshold ratio. The TaOx
memristive devices exhibit distinct SET-to-RESET ratios, enabling the
implementation of OR and NOT operations. As the adoption of the MAGIC design
style gains momentum, it becomes crucial to understand the breakdown of energy
consumption in the various phases of its operation. This paper presents
experimental demonstrations of the OR and NOT gates on a 1T1R crossbar array.
Additionally, it provides insights into the energy distribution for performing
these operations at different stages. Through our experiments across different
gates, we found that the energy consumption is dominated by initialization in
the MAGIC design style. The energy split-up is 14.8%, 85%, and 0.2% for
execution, initialization, and read operations respectively. | Ankit Bende, Simranjeet Singh, Chandan Kumar Jha, Tim Kempen, Felix Cüppers, Christopher Bengel, Andre Zambanini, Dennis Nielinger, Sachin Patkar, Rolf Drechsler, Rainer Waser, Farhad Merchant, Vikas Rana | 2023-10-16T14:41:59Z | http://arxiv.org/abs/2310.10460v1 | Experimental Validation of Memristor-Aided Logic Using 1T1R \(\mathbf{TaO_{x}}\) RRAM Crossbar Array
###### Abstract
Memristor-aided logic (MAGIC) design style holds a high promise for realizing digital logic-in-memory functionality. The ability to implement a specific gate in a MAGIC design style hinges on the SET-to-RESET threshold ratio. The \(\mathbf{TaO_{x}}\) memristive devices exhibit distinct SET-to-RESET ratios, enabling the implementation of OR and NOT operations. As the adoption of the MAGIC design style gains momentum, it becomes crucial to understand the breakdown of energy consumption in the various phases of its operation. This paper presents experimental demonstrations of the OR and NOT gates on a 1T1R crossbar array. Additionally, it provides insights into the energy distribution for performing these operations at different stages. Through our experiments across different gates, we found that the energy consumption is dominated by initialization in the MAGIC design style. The energy split-up is 14.8%, 85%, and 0.2% for execution, initialization, and read operations respectively.
MAGIC, RRAM, logic-in-memory, fabrication
## I Introduction
Memristive devices, such as resistive random access memory (RRAM), offer a solution to the von Neumann bottleneck by implementing operations within memory itself [1, 2]. One approach to implementing operations in memory is designing digital logic gates exploiting two distinct states - the high resistive state (HRS) and low resistive state (LRS) of an RRAM. These states are correspondingly mapped to logic "0" and logic "1" respectively. Depending on the input combination stored as a resistive state, the output memristor state can switch from one state to another, representing a logical output value. Several methods for achieving digital logic-in-memory (LiM) have been suggested in the literature. Various stateful and non-stateful logic techniques have been presented in the literature such as IMPLY [3], FELIX [4], majority logic [5], and memristor-aided logic (MAGIC) [6]. Amongst all the techniques, MAGIC stands out as a popular choice because it stores the output in the form of the memristor's state itself, representing stateful logic.
Experimental validation of MAGIC gates has recently been achieved using fabricated valence change memory (VCM) devices [7, 8]. However, this study specifically focuses on passive crossbar architectures, which suffer from sneak path currents and scalability challenges [9]. The passive crossbars also encounter difficulties during device formation, requiring significant initial current. To address these issues and enhance forming capabilities, a solution involves incorporating a transistor in series with the memristive device. This configuration creates a 1T1R cell, effectively mitigating sneak path problems. The 1T1R cell enables precise current control at the individual device level, offering enhanced control for the MAGIC operations. Despite the increase in physical footprint, the 1T1R configuration renders the overall system more scalable and allows for better control [9].
The MAGIC design style offers the potential for a variety of logic gates. However, the availability of specific logic gates is contingent upon the SET and RESET switching thresholds, which are directly influenced by the material stack used in the fabrication process [10]. The RRAM device based on Pt/TaO\({}_{x}\)/W/Pt stack offers implementation of OR, NIMP, and 2-cycled XOR gates and has been demonstrated using 1R passive devices [7]. Additionally, in this specific stack configuration, the output device is initialized to the HRS (logic "0") state, and the input combination, along with execution voltage, determines its transition to the LRS ("1") from the HRS state.
To the best of our knowledge, for the first time, this paper demonstrates the implementation of MAGIC gates on a _1T1R
Fig. 1: (a) 1T1R cell schematic (b) schematic cross-section of CMOS integrated memristive device, (c) SEM image of fabricated memristive devices, and (d) I-V switching characteristics of a 1T1R cell after forming.
_TaO\({}_{\text{x}}\) RRAM crossbar array_ and illustrates the realization of both OR and NOT gates, which can be effectively combined to implement any Boolean operation as they are functionally complete. Additionally, the paper offers a detailed breakdown of energy consumption during each operation phase within the MAGIC design style. Given the potential application of the MAGIC design style in creating general-purpose processing units, it becomes imperative to examine its design from an energy consumption perspective [11]. Such an analysis is crucial for gaining insights into its viability in future technologies. The following are the contributions of this paper:
* Fabrication of TaO\({}_{\text{x}}\) RRAM devices and their integration with CMOS to make the 1T1R crossbar array.
* Experimental validation of OR and NOT gates based on MAGIC design style on fabricated 1T1R crossbar array.
* Comprehensive energy assessment and breakdown of energy consumption during the MAGIC operations.
The rest of the paper is organized as follows. We provide a background of the used technique in Section II. Section III discusses the experimental methods used for logic implementation and energy estimations. The results obtained from the study are explained in Section IV. Finally, we conclude the paper in Section V.
## II Background and Related Work
### _Memristive Devices_
Memristive devices have emerged as a significant advancement in non-volatile memory technology. The memristor was initially proposed as a concept by Professor Leon Chua in 1971 [12], and RRAM has since gained prominence due to its unique ability to store data by modulating resistance states [13]. The resistance modulation is achieved by applying a voltage across the terminals of the RRAM. In response, the resistance of the device changes based on the magnitude and direction of the current flow. Remarkably, the memristor preserves its resistance value even when devoid of power, safeguarding its data until a new voltage is applied, firmly establishing its status as a non-volatile memory element [1].
These memristive devices can be interconnected to form a crossbar structure. However, when individual memristive devices are connected in a passive crossbar configuration, issues related to forming and sneak-path currents can arise. To mitigate these concerns, memristive devices are fabricated with a CMOS transistor in series, resulting in what is known as a 1T1R cell. In Fig. 2 (a), the SEM image of the fabricated 1T1R cell is shown, with multiple cells interconnected in an 8x4 (rows \(\times\) columns) crossbar structure (fabrication detailed is discussed in Section III). Fig. 2 (b) provides a schematic layout of the 1T1R crossbar array. The word lines (WLs) from 1 to 4 are linked to the gates of the devices connected in columns, while the source lines (SLs) from 1 to 8 are connected in a row-wise fashion, shorted to all the source pins of the transistors within the same row. The bit lines (BLs) from 1 to 4 are connected to the top electrode (TE) pin of each memristor within a column, as illustrated in Fig. 2 (b).
### _MAGIC Design Style_
MAGIC represents a stateful logic methodology that employs crossbar-connected memristive devices to execute logic operations. Each memristor is programmable to two distinct states, HRS and LRS, which are subsequently mapped to logic "0" and "1" respectively. An initialization step is necessitated to facilitate the MAGIC operations, configuring the output memristor to its initial state. The range of achievable gates within these devices is contingent upon the SET-to-RESET switching threshold ratio. The SET-to-RESET voltage ratio for the device stack used in our study made logic OR, NIMP, and NOT gates attainable, diverging from the original NOR and NOT gates proposed in [6]. The output memristor (\(y_{out}\)) is always initialized to the HRS state rather than the LRS state.
In the OR operation, an execution voltage (\(V_{exe}\)) is applied to the input memristors (\(x_{1}\) and \(x_{2}\)). Simultaneously, the output memristor, initially set to the HRS state, is grounded, as depicted in Fig. 2 (c). Furthermore, the NOT operation necessitates three memristors but with different voltage values compared to the OR operation. In this context, one of the inputs is consistently initialized to LRS (designated as \(x_{1}\) for clarity) in conjunction with the output memristor as shown in Fig. 2 (d). Additionally, distinct execution voltages (\(V_{exe}\) and \(V_{exe}\)/3) are employed for the \(x_{1}\) and \(x_{in}\) memristors, respectively.
### _Related Work_
The experimental validation of the MAGIC design style on TaO\({}_{\text{x}}\) RRAM devices using _passive crossbars_ has been demonstrated in existing literature [7]. Nonetheless, passive crossbars are plagued by sneak-path currents, which can disrupt the accurate reading of the final state. Additionally, forming processes in passive crossbars pose challenges. Non-stateful logic (e.g., scouting and majority logic) has been demonstrated using 1T1R cells [14]. This paper represents the pioneering demonstration of _stateful logic on a 1T1R TaO\({}_{\text{x}}\) RRAM crossbar_.
Fig. 2: Logic gates mapping on fabricated 8x4 1T1R array. (a) SEM image of the fabricated 8x4 array (array orientation rotated by 90\({}^{\circ}\)), (b) Schematic of 8x4 1T1R array, (c) schematic of two input OR gate implemented in the array, (d) schematic of NOT gate implemented in the array.
Prior studies have indicated that energy consumption in the MAGIC design style is predominantly influenced by initialization energy [15]. However, it's essential to note that these findings are primarily derived from simulation studies. Furthermore, the presented energy figures are based on simulation models and pertain specifically to NOT and NOR gates based on the MAGIC design style. This paper takes strides towards calculating the energy consumption of OR and NOT gates in fabricated TaO\({}_{\text{x}}\) RRAM devices.
## III Experimental Methods
### _Device Fabrication_
For the experimental validation of MAGIC gates, 1T1R-based active memristive arrays were fabricated and integrated with CMOS 180 nm technology provided by X-FAB. The dimensions of the memristive devices in the arrays are 100 \(nm\) x 100 \(nm\) and consist of Pt/TaO\({}_{\text{x}}\)/W/Pt stack. Fig. 1 (b) shows the schematic vertical cross-section of the fabricated device. Firstly, the W plugs from the processed wafers were exposed, and the 25 nm thick Pt layer was deposited as the bottom electrode (BE) with DC sputtering. The BE layer was then patterned using electron beam lithography and back etching using reactive ion etching (RIE). A 7 nm thick TaO\({}_{\text{x}}\) layer was then deposited by RF sputtering in Ar (77%) and O\({}_{2}\) (23%) gas mixture at 236W RF power followed by deposition of 13 nm thick W electrode using the DC sputtering. Subsequently, a 25 nm thick Pt layer was deposited as TE using the DC sputtering. Finally, the deposited switching oxide and TE stack were patterned using electron beam lithography and RIE-based back etching. Fig. 1 (c) shows the SEM image of the fabricated 1T1R TaO\({}_{\text{x}}\) RRAM device.
### _Electrical Characterization Setup_
The electrical characterization of the 1T1R memristive devices was carried out using a Keithley 4200 SCS. Fig. 1 (a) shows the schematic of the fabricated 1T1R memristive device with its different terminals. The memristive device in its pristine state needs a one-time forming step, which involves the creation of a conductive filament in the switching oxide by applying a positive voltage across the memristor. The current through the memristor during the electroforming process is controlled by applying an appropriate DC gate voltage to avoid permanent breakdown of the oxide. To realize digital logic-in-memory (LiM) using these devices, four distinct operations are necessary for any logic operation; they are discussed below, followed by an illustrative parameter summary.
* **SET Operation:** The SET operation involves changing the device's state from HRS to LRS. This is achieved by applying a positive ramp voltage from 0 to 1.8V at the TE electrode while maintaining a constant DC voltage of 1.6V on the gate terminal to limit the current flowing through the memristive to 500\(\mu A\). The drain and bulk pins of the transistor are connected to the ground.
* **RESET Operation:** The RESET operation switches the device from LRS to HRS by applying a positive ramp voltage from 0 to 2V at the source terminal, while keeping TE and bulk grounded. A minimum current requirement is crucial to dissolve the filament, and if not met, the device remains in LRS. Achieving this current requirement entails applying the maximum allowed voltage of 5V at the NMOS gate, opening the channel for current conduction, and affecting the minimum transistor size usable with memristors for SET and RESET processes.
* **Execution Operation:** During the execution operation, specific voltage configurations are applied to the memristors to execute the OR and NOT operations. Once the input memristors are loaded with the inputs, the execution voltage applied across them determines the logic operation being executed. During this operation, the output memristor is connected to the ground. The detailed voltage values are discussed in Sections IV-B and IV-C.
* **Read Operation:** The read operation serves to determine the current state of the memristor. A read voltage of 0.5V is applied at the TE, and the source and bulk terminals are pinned to ground. A gate voltage of 3.3V is applied at the gate to open the NMOS channel during the read operation.
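The four operations reduce to a small set of terminal/voltage configurations. The illustrative summary below restates the values given above as plain Python data; the instrument-control layer (the Keithley programming itself) is deliberately omitted.

```python
# Illustrative parameter sets for the four operations on a 1T1R cell.
# Values restate the text above; execution-phase voltages are gate-specific
# (e.g., a 0-3.3 V TE sweep for OR; see Sections IV-B and IV-C).
OPERATIONS = {
    "SET":   {"drive": "TE", "ramp_v": (0.0, 1.8), "v_gate": 1.6,  # ~500 uA limit
              "grounded": ("SL", "bulk")},
    "RESET": {"drive": "SL", "ramp_v": (0.0, 2.0), "v_gate": 5.0,
              "grounded": ("TE", "bulk")},
    "READ":  {"drive": "TE", "ramp_v": (0.5, 0.5), "v_gate": 3.3,  # constant 0.5 V
              "grounded": ("SL", "bulk")},
}
```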
Through skillful control of these operations, it becomes possible to design any arbitrary logic configuration on the crossbar. In this specific instance, these voltage sequences are combined to generate OR and NOT gates on the crossbar. However, these voltage patterns can also be harnessed to execute complex circuits sequentially or implement architectures resembling single instruction multiple data. Subsequently, we look into the outcomes achieved by sequentially applying these voltages to achieve the desired operations.
## IV Results and Discussion
Within this section, we unveil the outcomes derived from our experimental examinations and analyses, offering valuable insights into the energy consumption associated with the implementation of MAGIC OR and NOT gates.
Fig. 3: Electrical characterization of TaO\({}_{\text{x}}\) RRAM devices. (a) Forming and median I-V curve for 100 switching cycles. (b) Cycle-to-cycle variability for 100 switching cycles. (c) Device-to-device variability for 17 devices with 100 switching cycles each.
### _1T1R \(\text{TaO}_{\text{x}}\) RRAM Switching Characteristics_
The initial forming and subsequent 100 switching cycles are shown in Fig. 3 (a). The memristive devices exhibit counterclockwise switching characteristics. Fig. 3 (b), shows the CDF plot of HRS and LRS states obtained by switching the 1T1R devices 100 times. The device exhibits low cycle-to-cycle (C2C) variation with a consistent HRS/LRS ratio of 10. For evaluating device-to-device (D2D) variability, around 17 devices were switched 100 times each as shown in Fig. 3 (c).
From the C2C and D2D variability plots, it is evident that the HRS state exhibits higher variability with resistances distributed over two orders of magnitude ranging from around 100K\(\Omega\) to 1M\(\Omega\). The high variability in HRS can be attributed to the stochastic or uncontrolled breaking of filament during the RESET process [16]. Although the devices exhibit variability in the HRS state, a _consistent HRS/LRS ratio of 10_ is obtained across all the tested devices and suffices for the implementation of logic gates. Next, we will discuss the execution of logic OR and NOT operations on these devices.
### _MAGIC OR Implementation_
Fig. 2 (c) shows the schematic for implementing the logic OR gate on the crossbar. Only three memristors are required to implement the OR gate, so it is mapped onto the fabricated 8x4 crossbar as shown in Fig. 2 (c). To map the OR gate on the crossbar, three memristive devices sharing a common WL and a BL in the array are used. Firstly, the inputs of the OR gate are stored as resistance states in \(x_{1}\) and \(x_{2}\). Subsequently, the output memristor (\(y_{out}\)) is initialized to the "0" state. Next, an execution voltage (\(V_{exe}\)) is swept from 0 to 3.3V at the TE terminals of \(x_{1}\) and \(x_{2}\), and the current at \(y_{out}\) is monitored. During this cycle, the BL shared by \(x_{1}\), \(x_{2}\), and \(y_{out}\) is kept floating, while the WL (\(V_{G}\)) is connected to a DC voltage of 3.3V and the SL of the output memristor is grounded. All other unused WLs, SLs, and BLs are kept floating. After the execution cycle, the state of \(y_{out}\) is obtained by performing a READ operation. The memristor output currents during execution cycles for different inputs are shown in Fig. 4.
The truth table for the OR gate, with the input and output states of the memristors, is shown in Table I. In Table I, the \(y_{init}\) column shows the initialization state of the output memristor before the execution cycle. It can be inferred from the truth table that, for a successful OR operation, \(y_{out}\) changes its state during the execution cycle for all input combinations except "00". Fig. 4 shows the execution cycles for different input combinations. For the input "00" case, both \(x_{1}\) and \(x_{2}\) are in the HRS state, represented by the current values \(I_{in1}\) and \(I_{in2}\), respectively. When an execution voltage is applied at the inputs, the current through \(y_{out}\) is limited by the parallel combination of \(x_{1}\) and \(x_{2}\). This current is not sufficient to drive \(y_{out}\) to the SET state, evident from the fact that no sharp change in output current (\(I_{out}\)) w.r.t. the applied \(V_{exe}\) is observed at \(y_{out}\), as shown in Fig. 4 (a).
Fig. 4: Execution operation in OR gate for (a) input “00”, (b) input “01”, (c) input “10” and (d) input “11”. For input “00” there is no sharp change in \(\text{I}_{out}\) implying no change of \(\text{Y}_{out}\) state. For the rest of the input combinations, \(\text{Y}_{out}\) undergoes the SET process by drawing the majority of the current through the input memristors that are in the LRS state.
Fig. 5: Logic OR operation. Output read currents for the different combinations of input read currents.
Fig. 6: Execution operation in NOT gate for (a) input “1”, (b) input ”0”.
On the other hand, for the input combination of "01", "10", and "11", either one or both of the input memristors are in LRS states. This allows a sharp increase in \(I_{out}\) during the execution cycle, contributing to the change of state of \(y_{out}\) as shown in Fig. 4 (b), Fig. 4 (c) and Fig. 4 (d), respectively. In Fig. 5, the read currents of output memristors for different combinations of input memristors are displayed, showing a successful logic OR operation.
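This input-dependent switching can be illustrated with an idealized static voltage-divider model. The resistances below match the measured HRS/LRS ratio of 10, but the single SET threshold (here assumed to be roughly 2.5 V) is our simplification, not a measured device parameter.

```python
# Idealized divider model of the MAGIC OR execution step: V_EXE drives the
# two input memristors in parallel, in series with the grounded output.
HRS, LRS, V_EXE = 1e6, 1e5, 3.3  # ohms, ohms, volts (HRS/LRS ratio of 10)
V_SET = 2.5                      # assumed static SET threshold (volts)

def v_across_output(r_in1, r_in2, r_out=HRS):
    """Voltage dropped across y_out (initialized to HRS)."""
    r_par = 1.0 / (1.0 / r_in1 + 1.0 / r_in2)
    return V_EXE * r_out / (r_out + r_par)

for bits, (r1, r2) in {"00": (HRS, HRS), "01": (HRS, LRS),
                       "10": (LRS, HRS), "11": (LRS, LRS)}.items():
    v = v_across_output(r1, r2)
    print(bits, f"{v:.2f} V", "-> SET" if v > V_SET else "-> stays HRS")
# Only "00" (2.20 V) stays below the threshold; "01"/"10" (3.03 V) and
# "11" (3.14 V) exceed it, reproducing the OR truth table.
```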
### _MAGIC NOT Implementation_
The schematic of the MAGIC NOT gate is shown in Fig. 2 (d). To perform the NOT operation, memristors \(x_{1}\) and \(y_{out}\) are initialized to the LRS and HRS states, respectively. The input memristor \(x_{in}\) is initialized in accordance with the input. An execution ramp voltage \(V_{exe}\) from 0 to 1.5V is applied at the source terminal of \(x_{1}\), whereas a 1/3 \(V_{exe}\) voltage is ramped at \(x_{in}\). Table I shows the truth table for the NOT gate with the different input combinations. For successful operation of the NOT gate, the \(y_{out}\) memristor should change its state when input "0" is applied and should remain in the "0" state for input "1". The currents in the output memristor during the execution cycle for different inputs are shown in Fig. 6.
The input voltage applied at \(x_{in}\) is one-third of that applied at \(x_{1}\). Consequently, for input "0", the potential difference across the \(y_{out}\) memristor (in the HRS state) is higher than that across \(x_{in}\). This higher potential allows \(y_{out}\) to transition from the RESET (HRS) to the SET (LRS) state through \(x_{1}\) (in the LRS state), visible as a sharp rise in output current in Fig. 6 (b). For input "1", on the other hand, the \(y_{out}\) path has a higher resistance than the \(x_{in}\) path and does not receive enough current to drive it into the SET state, as shown in Fig. 6 (a). Thus, \(y_{out}\) changes its state only when \(x_{in}\) is in the HRS state. Fig. 7 summarizes the output read currents for the different input read currents and successfully demonstrates the NOT gate.
### _Energy Calculations_
The energy consumption of logic operations is heavily dependent on the initialization as well as execution energies. In earlier works, researchers have typically calculated the energy consumption of a logic-in-memory system through a coarse-grained approach by multiplying the average energy of operations by the number of operations [17]. However, this method has been found to underestimate the actual energy consumption, as it ignores the initialization energy involved during the operations [15, 18]. Those results, however, were obtained in simulation and with different memristor models. Therefore, in the current study, a similar approach is applied to real devices, calculating the energy consumption of logic operations by taking both the execution and initialization energy into account.
During the OR operation, the different initialization steps involve numerous SET and RESET operations, as mentioned in Table III. In the study, triangular voltage sweeps are used to perform the SET, RESET, READ, and execution operations. The energy is obtained by multiplying the voltage waveform with the sensed current and integrating the product over the measurement time, i.e., \(E=\int_{0}^{t}v(t)\,i(t)\,dt\), where \(t\) is the pulse time considered for the energy calculation. The I,V-t curves corresponding to median SET and RESET operations are shown in Fig. 8. Two approaches have been used for calculating the energy: a) using the full voltage ramp cycle time and b) using optimum times. In the first technique, the energy is calculated over the whole triangular voltage sweep. In the second, the SET, RESET, and execution times are derived from the experimental data, and the energy is calculated over those windows, as shown in Fig. 8.
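Numerically, this is a trapezoidal integration of the sampled v(t)·i(t) product; the sketch below covers both the full-cycle and optimum-window variants (function and argument names are illustrative).

```python
# Energy of one operation from sampled waveforms: E = integral of v(t)*i(t) dt.
import numpy as np

def pulse_energy(t, v, i, t_start=0.0, t_end=None):
    """Energy in joules over [t_start, t_end] (defaults to the full sweep).

    Passing the full ramp reproduces the 'full cycle' figures; restricting
    the window to the switching event gives the 'optimum' figures.
    """
    t, p = np.asarray(t), np.asarray(v) * np.asarray(i)
    if t_end is None:
        t_end = t[-1]
    m = (t >= t_start) & (t <= t_end)
    tm, pm = t[m], p[m]
    return float(np.sum(0.5 * (pm[1:] + pm[:-1]) * np.diff(tm)))  # trapezoid rule
```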
Fig. 7: Logic NOT operation. Output read currents for the different combinations of input read currents.

The former technique gives actual energy consumption numbers but is an overestimate. Therefore, for more realistic energy values, the latter technique is used. Table IV summarizes the energy consumption for the OR and NOT gate operations under both techniques. For both the optimal and full voltage ramp cycle-based calculations, depending on the inputs of the OR operation, around 35-85% of the energy consumption is constituted by the initialization energy, the execution energy constitutes around 14.8-64.8%, and the read energy constitutes only around 0.2%. Evidently, as the accuracy of the energy calculation increases, the gap between total energy and execution energy also increases. Therefore, for an accurate calculation of the energy consumption of logic operations using the MAGIC technique, the initialization energy must be taken into consideration.
## V Conclusion
In this paper, TaO\({}_{\text{x}}\) RRAM devices were fabricated and integrated on a 180 nm CMOS substrate to create a 1T1R crossbar array. The fabricated devices consistently exhibited an HRS/LRS ratio of approximately 10, making them eligible for the implementation of MAGIC gates. Subsequently, we demonstrated the implementation of logic OR and NOT gates along with their energy consumption values. The energy consumption of the logic OR and NOT operations was calculated by evaluating both the initialization and execution energies. It was found that the initialization energy contributes significantly to the overall energy consumption during logic implementation, similar to the trends observed in the simulation study.
## Acknowledgments
This work was supported in part by the Federal Ministry of Education and Research (BMBF, Germany) in the project NEUROTEC II under Project 16ME0398K, Project 16ME0399, German Research Foundation (DFG) within the Project PLiM (DR 287/35-1, DR 287/35-2) and through Dr. Suhas Pai Donation Fund at IIT Bombay. Special thanks go out to Dr. Michael Schiek for his management work in the NEUROTECH II project.
|
2303.10558 | H.E.S.S. follow-up observations of GRB221009A | GRB221009A is the brightest gamma-ray burst ever detected. To probe the
very-high-energy (VHE, $>$\!100 GeV) emission, the High Energy Stereoscopic
System (H.E.S.S.) began observations 53 hours after the triggering event, when
the brightness of the moonlight no longer precluded observations. We derive
differential and integral upper limits using H.E.S.S. data from the third,
fourth, and ninth nights after the initial GRB detection, after applying
atmospheric corrections. The combined observations yield an integral energy
flux upper limit of $\Phi_\mathrm{UL}^{95\%} = 9.7 \times
10^{-12}~\mathrm{erg\,cm^{-2}\,s^{-1}}$ above $E_\mathrm{thr} = 650$ GeV. The
constraints derived from the H.E.S.S. observations complement the available
multiwavelength data. The radio to X-ray data are consistent with synchrotron
emission from a single electron population, with the peak in the SED occurring
above the X-ray band. Compared to the VHE-bright GRB190829A, the upper limits
for GRB221009A imply a smaller gamma-ray to X-ray flux ratio in the afterglow.
Even in the absence of a detection, the H.E.S.S. upper limits thus contribute
to the multiwavelength picture of GRB221009A, effectively ruling out an IC
dominated scenario. | H. E. S. S. Collaboration, :, F. Aharonian, F. Ait Benkhali, J. Aschersleben, H. Ashkar, M. Backes, A. Baktash, V. Barbosa Martins, R. Batzofin, Y. Becherini, D. Berge, K. Bernlöhr, B. Bi, M. Böttcher, C. Boisson, J. Bolmont, M. de Bony de Lavergne, J. Borowska, M. Bouyahiaoui, F. Bradascio, M. Breuhaus, R. Brose, F. Brun, B. Bruno, T. Bulik, C. Burger-Scheidlin, S. Caroff, S. Casanova, J. Celic, M. Cerruti, T. Chand, S. Chandra, A. Chen, J. Chibueze, O. Chibueze, G. Cotter, S. Dai, J. Damascene Mbarubucyeye, J. Devin, A. Djannati-Ataï, A. Dmytriiev, V. Doroshenko, K. Egberts, S. Einecke, J. -P. Ernenwein, S. Fegan, G. Fichet de Clairfontaine, M. Filipovic, G. Fontaine, M. Füßling, S. Funk, S. Gabici, S. Ghafourizadeh, G. Giavitto, D. Glawion, J. F. Glicenstein, P. Goswami, G. Grolleron, M. -H. Grondin J. A. Hinton, T. L. Holch, M. Holler, D. Horns, Zhiqiu Huang, M. Jamrozy, F. Jankowsky, V. Joshi, I. Jung-Richardt, E. Kasai, K. Katarzyński, R. Khatoon, B. Khélifi, W. Kluźniak, Nu. Komin, R. Konno, K. Kosack, D. Kostunin, R. G. Lang, S. Le Stum, F. Leitl, A. Lemière, M. Lemoine-Goumard, J. P. Lenain, F. Leuschner, T. Lohse, I. Lypova, J. Mackey, D. Malyshev, D. Malyshev, V. Marandon, P. Marchegiani, A. Marcowith, G. Martí-Devesa, R. Marx, M. Meyer, A. Mitchell, L. Mohrmann, A. Montanari, E. Moulin, T. Murach, K. Nakashima, M. de Naurois, J. Niemiec, A. Priyana Noel, P. O'Brien, S. Ohm, L. Olivera-Nieto, E. de Ona Wilhelmi, M. Ostrowski, S. Panny, M. Panter, R. D. Parsons, G. Peron, D. A. Prokhorov, H. Prokoph, G. Pühlhofer, M. Punch, A. Quirrenbach, P. Reichherzer, A. Reimer, O. Reimer, H. Ren, M. Renaud, B. Reville, F. Rieger, G. Rowell, B. Rudak, E. Ruiz-Velasco, V. Sahakian, H. Salzmann, A. Santangelo, M. Sasaki, J. Schäfer, F. Schüssler, H. M. Schutte, U. Schwanke, J. N. S. Shapopi, A. Specovius, S. Spencer, Ł. Stawarz, R. Steenkamp, S. Steinmassl, C. Steppa, I. Sushch, H. Suzuki, T. Takahashi, T. Tanaka, R. Terrier, N. Tsuji, Y. Uchiyama, M. Vecchi, C. Venter, J. Vink, S. J. Wagner, R. White, A. Wierzcholska, Yu Wun Wong, M. Zacharias, D. Zargaryan, A. A. Zdziarski, A. Zech, S. J. Zhu, N. Żywucka | 2023-03-19T03:59:44Z | http://arxiv.org/abs/2303.10558v1 | # H.E.S.S. follow-up observations of GRB 221009A
###### Abstract
GRB 221009A is the brightest gamma-ray burst ever detected. To probe the very-high-energy (VHE, \(>\)100 GeV) emission, the High Energy Stereoscopic System (H.E.S.S.) began observations 53 hours after the triggering event, when the brightness of the moonlight no longer precluded observations. We derive differential and integral upper limits using H.E.S.S. data from the third, fourth, and ninth nights after the initial GRB detection, after applying atmospheric corrections. The combined observations yield an integral energy flux upper limit of \(\Phi_{\rm UL}^{95\%}=9.7\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) above \(E_{\rm thr}=650\) GeV. The constraints derived from the H.E.S.S. observations complement the available multiwavelength data. The radio to X-ray data are consistent with synchrotron emission from a single electron population, with the peak in the SED occurring above the X-ray band. Compared to the VHE-bright GRB 190829A, the upper limits for GRB 221009A imply a smaller gamma-ray to X-ray flux ratio in the afterglow. Even in the absence of a detection, the H.E.S.S. upper limits thus contribute to the multiwavelength picture of GRB 221009A, effectively ruling out an IC dominated scenario.
Gamma-rays: general, Gamma-rays: bursts, emission mechanism: non-thermal
H.E.S.S. Collaboration
## 1 Introduction
In the last few years, several gamma-ray bursts (GRBs) have been detected in Very-High-Energy (VHE, \(>\) 100 GeV) gamma rays (Abdalla et al. (H.E.S.S. Collaboration), 2019; Abdalla et al. (H.E.S.S. Collaboration), 2021). These explosive phenomena originate from the deaths of massive stars or the mergers of compact objects (e.g. Meszaros, 2002). GRBs are observed as bright flashes of gamma rays, referred to as the prompt emission, followed by long-lived and slowly evolving multiwavelength afterglow emission (see Noda and Parsons (2022) for a recent review of VHE observations of GRBs). The prompt emission is thought to come from interactions within the ultrarelativistic jet produced by the catastrophic progenitor event, though the development of accurate theoretical models of the physical mechanisms underlying the emission is challenging (Iyyani, 2022). In contrast, the afterglow is produced by the jet's subsequent interactions with the surrounding environment, and during this time the jet is well described as a conical section of a decelerating spherical blast wave (see Dai et al., 2017, for a recent review of GRB theory). The GRB afterglow therefore provides a well-defined laboratory for studying particle acceleration under extreme conditions.
The primary emission mechanism of photons with energies \(\ll\) GeV in the afterglow is well established as synchrotron emission by a population of accelerated charged particles. Assuming a homogeneous magnetic field, synchrotron photons can in principle extend up to a maximum energy \(\approx 100\,\Gamma\) MeV, where \(\Gamma\) is the bulk Lorentz factor of the emission zone (de Jager et al., 1996; Abdalla et al. (H.E.S.S. Collaboration), 2021). A second spectral component associated with inverse Compton scattering of either ambient or the synchrotron photons is expected at higher energies, though tension between observations and single-zone synchrotron self-Compton (SSC) models has been found in the VHE domain (Abdalla et al. (H.E.S.S. Collaboration), 2021). The properties of any detected VHE emission therefore have important ramifications for GRB studies; the unambiguous detection of an SSC component would set constraints on the physical properties of the emission zone, while strong deviations from the expected SSC spectrum could indicate the need for a more complex set of assumptions. The two VHE-bright GRBs with the most complete data sets so far have not provided a firm conclusion on this issue (MAGIC Collaboration et al., 2019; Abdalla et al., 2021).
The recent GRB 221009A is the GRB with the brightest detected prompt emission, and its redshift of \(z=0.151\), corresponding to a luminosity distance of around 750 Mpc, implies an isotropic equivalent energy release in the prompt emission \(E_{\rm iso}\) of the order of \(10^{54}\) erg (de Ugarte Postigo et al., 2022), marking it as an extremely energetic GRB. The large \(E_{\rm iso}\) and close proximity resulted in the first potential detection of a GRB at energies above 10 TeV, and likely the very first detection of VHE photons during the prompt phase, by the Large High Altitude Air Shower Observatory (LHAASO) (Huang et al., 2022). To further extend this picture and characterize the VHE emission in the afterglow, the High Energy Stereoscopic System (H.E.S.S.) observed GRB 221009A on the third night, as soon as it became possible following a period of bright moonlight. H.E.S.S. is sensitive to photons at energies above tens of GeV, and has so far detected VHE emission from two GRBs (Abdalla et al. (H.E.S.S. Collaboration), 2019, 2021), including a detection more than 50 hours after the initial detection for GRB 190829A.
In this paper, we present the H.E.S.S. observations of GRB 221009A starting on the third night after the GRB detection. We discuss the observations themselves in Section 2, and the analysis of both H.E.S.S. data and multiwavelength data in Section 3. We find no significant emission from a source at the GRB position, and derive upper limits assuming an intrinsic \(E^{-2}\) spectrum. In order to place these into context, we discuss the multiwavelength modeling in Section 4, and conclude in Section 5. Throughout the paper we assume a flat \(\Lambda\)CDM cosmology with H\({}_{0}\) = 67.4 km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{\rm m}=0.315\)(Planck Collaboration et al., 2020).
## 2 Observations
### Initial detection
The detection of the prompt emission of GRB 221009A was first reported by the _Fermi_ Gamma-Ray Burst Monitor (GBM), which triggered on the GRB on 2022-10-09 at 13:16:59 UTC (Veres et al., 2022); we refer to the GBM trigger time as T\({}_{0}\). GeV emission was also reported by the _Fermi_ Large Area Telescope (LAT) (Bissaldi et al., 2022). The GRB also triggered the Neil Gehrels _Swift_ Observatory once it became visible to the _Swift_ Burst Alert Telescope an hour later. This caused the satellite to automatically slew to the source, allowing for follow-up observations by the other instruments on _Swift_ such as the X-Ray Telescope (XRT), which reported a localization of right ascension = 19h 13m 03s, declination = +19\({}^{\circ}\) 48' 09" (J2000) with a positional uncertainty of 5.6" (Dichiara et al., 2022).
### H.E.S.S. observations
H.E.S.S. is a system of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia (23\({}^{\circ}\)16'18'' S, 16\({}^{\circ}\)30'00'' E) at 1800 m above sea level. Four 12-m telescopes (CT1-4) (Aharonian et al. (H.E.S.S. collaboration), 2006), each with a mirror area of 108 m\({}^{2}\), are placed in a square formation with a side length of 120 m. A fifth, 28-m telescope (CT5) with a mirror area of 612 m\({}^{2}\) is placed in the center of the array (Holler et al., 2015).
On October 9 and 10, H.E.S.S. could not observe the GRB as the night-sky background was too high due to the full Moon. On October 11, H.E.S.S. started observations with its 12-m telescopes as soon as observing conditions allowed. During that night, an extended 32-minute observation run was taken in nominal conditions during dark time (when the Moon was still below the horizon) followed by a second run using settings optimised for observations under high levels of optical background light such as moonlight (Tomankova et al., 2022). H.E.S.S. continued observing GRB 221009A in the following nights. The observations were conducted under poor atmospheric conditions due to clouds and a higher aerosol content in the atmosphere due to the regular biomass-burning season (Formenti et al., 2019). The quality of the atmospheric conditions is quantified by the atmospheric transparency coefficient (Hahn et al., 2014) with lower values corresponding to lower transmission of Cherenkov light through the atmosphere (Table 1). Nominally accepted values of the atmospheric transparency coefficient are above 0.8. As the transparency coefficients were lower than this during the H.E.S.S. observations of this GRB, a correction procedure has been applied (discussed in the next section). Additional datasets, including ones taken on other nights, are excluded from the analysis due to further degradation of the atmospheric conditions by the presence of clouds. Unfortunately, CT5 data are not available for this study. The data taken with CT1-4 are used. Table 1 summarizes the H.E.S.S. observations used in this analysis.
## 3 Analysis and Results
### H.E.S.S. analysis
The H.E.S.S. data acquired during the follow-up period are analyzed using the ImPACT reconstruction procedure (Parsons & Hinton, 2014) which uses an image
template-based maximum likelihood fit. The hadronic background events produced by cosmic rays are rejected with a multivariate analysis scheme (Ohm et al., 2009). The results are independently cross-checked with a separate analysis chain based on the Model Analysis (de Naurois and Rolland, 2009) which performs a log-likelihood comparison between the recorded shower images and semianalytically generated templates. Moreover, in order to correct for atmospheric disturbances, we apply a scheme developed to assess the impact of the enhanced aerosol levels in the atmosphere on the instrument response functions derived from Monte-Carlo simulations. The scheme calculates a correction factor to the expected Cherenkov light by comparing the actual transmission profile with the ideal one used in the simulations. The correction is then applied by modifying _a posteriori_ the instrument response functions and reconstructed event energies (Holch et al., 2022). These corrections are cross-checked by an analysis that uses dedicated _runwise_ simulations taking into consideration the actual observation conditions and telescope configuration during the GRB 221009A H.E.S.S. observations following the method outlined in Holler et al. (2020). We use _loose cuts_(Aharonian et al. (H.E.S.S. collaboration), 2006) for the selection of gamma-ray showers. In the high-level analysis, we converted our data into GADF format1(Deil et al., 2022), and use the open source analysis package GAMMAPY (Deil et al., 2017; Acero et al., 2022) (v1.0).
Footnote 1: [https://gamma-astro-data-formats.readthedocs.io/en/latest/index.html](https://gamma-astro-data-formats.readthedocs.io/en/latest/index.html)
In order to search for a possible signal, and avoid accidentally including emission from other sources, we generate maps of excess gamma-ray counts and significances within a range of \(+\)/- 2.0 degrees from the expected emission position. These maps are generated using the ring background technique (Berge et al., 2007) with circular ON regions of \(0.122\deg\) centered at each point on the map, and corresponding annular OFF-source regions centered on the same positions with inner and outer radii of 0.5 and \(0.8\deg\) to measure pure background. We exclude a circular region of \(0.3\deg\) around the expected emission region from the OFF-source regions. When computing the exposure ratio between the ON and OFF-source region at each test position, a radially symmetric model for the background acceptance within the field of view of each observation was integrated spatially over the regions. For all three nights combined, at the position of the source we obtain \(\rm N_{ON}=39\) events and \(\rm N_{OFF}=686\) events, with a ratio of on-source exposure to off-source exposure of 0.0638. Using the statistical formulation described in Li and Ma (1983) we calculate the excess counts to be \(-4.8\) and find the \(\rm N_{ON}\) events to be in agreement with the expected background at the \(-0.7\,\sigma\) level. The excess and significance maps are derived and shown in Figure 1.
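As an illustration, the following minimal Python sketch reproduces the quoted excess and significance from Eq. 17 of Li and Ma (1983), using the ON/OFF counts and exposure ratio given above.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983); the sign follows the excess."""
    n_tot = n_on + n_off
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * np.log((1 + alpha) * n_off / n_tot)
    s = np.sqrt(2.0 * (term_on + term_off))
    return np.sign(n_on - alpha * n_off) * s

# Numbers quoted in the text for the combined three-night dataset
n_on, n_off, alpha = 39, 686, 0.0638
print(f"excess: {n_on - alpha * n_off:.1f}")                              # -> -4.8
print(f"significance: {li_ma_significance(n_on, n_off, alpha):.1f} sigma")  # -> -0.7
```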
Following this analysis, we detect no significant emission of VHE gamma rays at the GRB location, neither in the combined dataset nor for each night separately. We thus compute upper limits to constrain the VHE gamma-ray emission from the GRB at the time of the H.E.S.S. observations, using the Reflected Background method described in Berge et al. (2007) with same-size circular ON and OFF regions. The energy threshold \(E_{\rm thr}\) sets the lower limit of the spectral analysis and is defined as the lowest energy at which the bias between reconstructed and simulated energies is below 10%. We find a value of \(E_{\rm thr}=650\) GeV for the dataset combining all observations. We assume a generic intrinsic \(E^{-2}\) dN/dE spectrum, use the redshift of the source \(z=0.151\) and the model of the extragalactic background light described in Dominguez et al. (2011), and compute 95% confidence level (C.L.) flux upper limits using the Poisson likelihood method described in Rolke et al. (2005). The differential upper limits are shown in Figure 2. Detailed results are available on a dedicated webpage2.

Table 1: H.E.S.S. observations of GRB 221009A. Column 2 denotes the number of nights after \(\rm T_{0}\). Columns 3 and 4 give the run start and end times since \(\rm T_{0}\), in seconds. Column 5 shows the average zenith angle under which the observations were conducted and column 6 the Atmospheric Transparency Coefficient (ATC).

| Calendar date | Interval | \(\rm T_{Start}-T_{0}\) [s] | \(\rm T_{End}-T_{0}\) [s] | Average zenith angle [deg] | ATC |
| --- | --- | --- | --- | --- | --- |
| October 11 2022 | Night 3 | \(1.901\times 10^{5}\) | \(1.920\times 10^{5}\) | 49.3 | 0.46 |
| October 11 2022\({}^{\rm a}\) | Night 3 | \(1.922\times 10^{5}\) | \(1.929\times 10^{5}\) | 52.7 | 0.44 |
| October 12 2022 | Night 4 | \(2.765\times 10^{5}\) | \(2.782\times 10^{5}\) | 49.6 | 0.49 |
| October 12 2022 | Night 4 | \(2.783\times 10^{5}\) | \(2.800\times 10^{5}\) | 52.6 | 0.45 |
| October 12 2022 | Night 4 | \(2.800\times 10^{5}\) | \(2.818\times 10^{5}\) | 57.0 | 0.41 |
| October 17 2022 | Night 9 | \(7.087\times 10^{5}\) | \(7.104\times 10^{5}\) | 51.7 | 0.47 |
| October 17 2022 | Night 9 | \(7.105\times 10^{5}\) | \(7.122\times 10^{5}\) | 56.9 | 0.65 |

\({}^{\rm a}\) taken under moderate moonlight
Footnote 2: [https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/2023.GRB_221009A](https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/2023.GRB_221009A)
We compute integral flux upper limits between \(E_{\rm thr}\) and 10 TeV for each night, where the upper bound is chosen to be the energy above which \(\rm N_{\rm OFF}<10\). The upper limits of the integral energy flux are shown in Figure 3. The combined dataset yields an integral energy flux upper limit of \(\Phi_{\rm UL}^{95\%}=9.7\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\), and per-night integral energy flux upper limits are given in Table 2. Systematic effects include uncertainties of the atmospheric corrections, the assumed intrinsic energy spectrum, differences in EBL absorption models and general uncertainties in the flux and energy scale. The systematic uncertainties are conservatively estimated to be about a factor of 2, with expected worsening of systematics with energy, for both the differential and integral upper limits on the gamma-ray flux.
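For reference, a limit quoted as an integral energy flux can be converted into the normalization of the assumed \(E^{-2}\) spectrum, since for \(dN/dE=kE^{-2}\) the energy flux between \(E_{1}\) and \(E_{2}\) is \(k\ln(E_{2}/E_{1})\). The sketch below applies this to the combined H.E.S.S. upper limit; it illustrates the unit conversion only, not the Rolke et al. (2005) likelihood construction itself.

```python
import numpy as np

ERG_PER_TEV = 1.602  # 1 TeV expressed in erg

def e2_norm_from_energy_flux(phi_erg, e1_tev, e2_tev):
    """Normalization k [ph cm^-2 s^-1 TeV^-1] of dN/dE = k (E/TeV)^-2 whose
    integral energy flux between e1 and e2 equals phi_erg [erg cm^-2 s^-1]."""
    return phi_erg / (np.log(e2_tev / e1_tev) * ERG_PER_TEV)

phi_ul = 9.7e-12                                  # combined H.E.S.S. UL, 0.65-10 TeV
k = e2_norm_from_energy_flux(phi_ul, 0.65, 10.0)
photon_flux = k * (1.0 / 0.65 - 1.0 / 10.0)       # integral photon flux [cm^-2 s^-1]
print(f"k = {k:.2e} ph cm^-2 s^-1 TeV^-1, photon flux = {photon_flux:.2e} cm^-2 s^-1")
```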
### Swift-XRT analysis
_Swift_-XRT is an X-ray imaging spectrometer with an energy range of 0.3 to 10 keV (Burrows et al., 2005). The XRT data are obtained using the Time-Sliced Spectra tool3(Evans et al., 2009). The time intervals are chosen to overlap with the H.E.S.S. observations. There are no simultaneous XRT observations on two of the three H.E.S.S. nights, so we instead define the time ranges in such a way that they encompass one set of contiguous XRT observations immediately before and after the H.E.S.S. observations. However, using this rule for the first night of H.E.S.S. observations resulted in a too-low XRT exposure time. Hence, for this night we extend the time range to include _two_ sets of contiguous XRT observations on either side. (Note that, as we are using larger time bins for the XRT observations, we are underestimating what the true uncertainties would be for strictly contemporaneous observations.) In our analyses, we only include Photon Counting (PC) data. By using the data products from the Time-Sliced Spectra tool and selecting only the PC mode data, we avoid contamination from the dust rings (Tiengo et al., 2022; Evans, 2022).
Footnote 3: [https://www.swift.ac.uk/xrt_spectra/addspec.php?targ=01126853&origin=GRB](https://www.swift.ac.uk/xrt_spectra/addspec.php?targ=01126853&origin=GRB)
The XRT data are fit using XSPEC v12.13.0c with a power law of the form \(dN/dE=k(E/E_{0})^{-\alpha}\) (\(E_{0}=1\) keV) and two absorption components (Evans et al., 2009). More specifically, we fit the data with the model TBabs * zTBabs * powerlaw with the Galactic column density \(N_{\rm H,gal}=5.38\times 10^{21}\) cm\({}^{-2}\) (Willingale et al., 2013). We simultaneously fit the three nights with the column density at the source, \(N_{\rm H,int}\), tied across all spectra but free to vary, under the assumption that the intrinsic absorption does not vary on these timescales. We keep \(k\) and \(\alpha\) free in each time interval. We define the fitting statistic to be the C-statistic4, suitable for XRT data. Assuming a constant \(N_{\rm H,int}\), we obtain \(N_{\rm H,int}=(1.32\pm 0.18)\times 10^{22}\) cm\({}^{-2}\). The results are presented in Table 2 and plotted in Figure 3, and are compatible with those presented in Williams et al. (2023), which similarly reports a softening in the X-ray spectrum on these timescales.

Figure 1: Left: Excess count map computed from the H.E.S.S. observational data taken on GRB 221009A presented in Table 1 with a 0.1\({}^{\circ}\) oversampling radius (yellow circle). Middle: Significance map computed from the H.E.S.S. excess count map of GRB 221009A. Right: Significance distribution of the H.E.S.S. significance map entries in black and a Gaussian distribution fit in red.

Figure 2: 95% C.L. differential flux upper limits on an intrinsic (EBL-corrected) \(E^{-2}\) GRB spectrum, derived from the H.E.S.S. observational data taken on GRB 221009A in all nights combined (left) and Night 3 only (right).
Footnote 4: [https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html](https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html)
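A minimal PyXspec sketch of the absorbed power-law fit described above might look as follows. The spectrum file name is hypothetical, and a full reproduction would load all three nights as separate data groups with zTBabs.nH tied across them.

```python
from xspec import AllData, Model, Fit

AllData("night3_pc.pha")            # grouped XRT PC-mode spectrum (hypothetical file)
m = Model("TBabs*zTBabs*powerlaw")  # Galactic + host absorption times a power law
m.TBabs.nH = 0.538                  # N_H,gal in units of 1e22 cm^-2
m.TBabs.nH.frozen = True            # Galactic column held fixed (Willingale et al. 2013)
m.zTBabs.Redshift = 0.151           # host absorber placed at the GRB redshift
Fit.statMethod = "cstat"            # C-statistic, as in the text
Fit.perform()

print(m.zTBabs.nH.values[0], m.powerlaw.PhoIndex.values[0])
```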
The assumption of a constant column density on these timescales has been challenged in other GRBs, recently by Campana et al. (2021) for GRB 190114C. For GRB 221009A, indications of a higher degree of absorption at earlier times have indeed been noted in the optical data (Fulton et al., 2023). Because there is some degeneracy between the effects of \(N_{\rm H,int}\) and \(\alpha\) (e.g., a larger value of \(N_{\rm H,int}\) can be somewhat compensated by a softer value of \(\alpha\)), this also has an effect on the returned best-fit photon spectrum. If \(N_{\rm H,int}\) is indeed higher around Night 3 (\(\approx 2\) days after T\({}_{0}\)) than on the later nights, then the true value of \(\alpha\) for Night 3 should be softer than the returned 1.7 and therefore more similar to the value of 1.9 that we find for the other two H.E.S.S. nights (although we note that the indices are consistent within \(2\sigma\)). A thorough study of this effect is beyond the scope of this paper, so for the purposes of the discussion in Section 4, we do not require that our modeling explain the XRT data on Night 3 very strictly.
### Fermi-LAT analysis
The _Fermi_-LAT is a pair conversion telescope that detects gamma rays between tens of MeV and hundreds of GeV (Atwood et al., 2009). We perform an unbinned likelihood analysis of _Fermi_-LAT data over time ranges spanning each set of H.E.S.S. observations (Table 1) using gtBurst v. 03-00-00p5 (Vianello, 2016). We use the P8R3_SOURCE event class, recommended for analyses on these timescales, and the corresponding instrument response functions5. We select events with energies between 100 MeV and 10 GeV within \(12^{\circ}\) of the burst position and apply a zenith angle cut of \(100^{\circ}\). We use the PowerLaw2 model6 for the GRB spectrum, and the latest Galactic (fixed normalization) and isotropic templates for the background7, and include all catalog sources (Abdollahi et al., 2020) within \(20^{\circ}\) of the GRB position. No significant emission from the GRB (Test Statistic \(<1\)) is observed in the _Fermi_-LAT data during the H.E.S.S. observations, so 95% C.L. upper limits are computed assuming an \(E^{-2}\) spectrum. We find differential energy flux upper limits (between 100 MeV and 10 GeV) of \(7.1\times 10^{-10}\) and \(2.6\times 10^{-10}\)\(\rm erg\,cm^{-2}\,s^{-1}\) during Night 3 and Night 4, respectively; the upper limit for Night 3 is shown in Figure 4.

Table 2: Results of analyses of XRT and H.E.S.S. data. The entries in the first column correspond to the second column of Table 1. The second through fourth columns show the results of fitting XRT data in time intervals bracketing the nights during which H.E.S.S. observed the GRB, with \(1\sigma\) uncertainties (statistical only). The last column lists the H.E.S.S. energy flux upper limits for the time interval defined by the third and fourth columns of Table 1. The XRT energy flux is calculated in the 0.3-10 keV range and the H.E.S.S. energy flux in the 0.65-10 TeV range.

| Interval | Time since T\({}_{0}\) [s] | \(\alpha\) | \(k\times 10^{-2}\) [ph keV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\)] | XRT en. flux \(\times 10^{-11}\) [erg cm\({}^{-2}\) s\({}^{-1}\)] | H.E.S.S. en. flux UL \(\times 10^{-11}\) [erg cm\({}^{-2}\) s\({}^{-1}\)] |
| --- | --- | --- | --- | --- | --- |
| Night 3 | \((1.68-2.22)\times 10^{5}\) | \(1.69\pm 0.10\) | \(2.14\pm 0.30\) | \(14.9\pm 2.3\) | 4.06 |
| Night 4 | \((2.61-2.90)\times 10^{5}\) | \(1.90\pm 0.12\) | \(1.31\pm 0.20\) | \(7.80\pm 1.33\) | 1.77 |
| Night 9 | \((6.85-7.25)\times 10^{5}\) | \(1.85\pm 0.25\) | \(0.23\pm 0.09\) | \(1.42\pm 0.62\) | 2.85 |

Figure 3: The H.E.S.S. integral energy flux upper limits (red circles; 95% C.L.) are derived assuming an intrinsic \(E^{-2}\) spectrum. The automated XRT data (gray) are obtained from the Burst Analyser (Evans et al., 2010); multiple XRT observations around the H.E.S.S. observations are then combined and refit (blue, \(1\sigma\) uncertainty). Note that the Burst Analyser assumes a larger value of intrinsic absorption than we find in our analysis and therefore returns a larger unabsorbed energy flux. The extension of the H.E.S.S. error bars in the \(x\) direction, depicting the duration of the H.E.S.S. observations, is smaller than the size of the markers.
Footnote 7: [https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html)
## 4 Discussion
The H.E.S.S. upper limits, when combined with multiwavelength observations, can be used to constrain possible emission scenarios of GRB 221009A several days after the prompt event. Figure 4 shows the spectral energy distribution (SED) including the results of the H.E.S.S., XRT, and LAT analyses. Also included are the optical \(i\)-band flux during the time of the H.E.S.S. observations (extracted from Figure 2 of Fulton et al., 2023) and a publicly available radio observation (Rhodes et al., 2022) that most closely matches the time window of the first night of H.E.S.S. observations.
The SEDs on all nights during which H.E.S.S. measurements took place are consistent with synchrotron emission from a single electron population. In such a model, the synchrotron spectrum peaks above the energy range covered by the XRT, implying Klein-Nishina (KN) suppression of any inverse Compton (IC) component. For IC-dominated cooling, KN suppression could account for the hard cooled spectrum of electrons emitting in the XRT range (e.g. Agaronyan and Ambartsumyan, 1985; Nakar et al., 2009; Breuhaus et al., 2021). However, this is difficult to reconcile with the H.E.S.S. upper limits in the VHE range, as electrons producing X-ray photons via synchrotron emission should produce a comparable or greater gamma-ray flux via their IC emission. The energy density of optical photons was insufficient for internal absorption of TeV photons, while the LHAASO detection suggests that absorption on a local external field was unlikely. Ruling out IC-dominated cooling, we consider instead models in which synchrotron-dominated cooling can account for the multiwavelength observations. We adopt a single-zone thin-shell model (see Huang et al., 2022), assuming self-similar expansion of a relativistic shock following an impulsive point-like explosion (Blandford and McKee, 1976).
Within this model, we assume the radio to X-ray emission is produced by a single population of continuously injected electrons. The photon index of the XRT emission (see Table 2) was consistent with \(\alpha\approx 1.8\) on all nights (see also Williams et al., 2023). Data from the Nuclear Spectroscopic Telescope Array (NuSTAR) indicate this spectrum continued unbroken above the XRT range (Laskar et al., 2023). These measurements imply either an uncooled electron population \(dN_{\rm e}/d\bar{\gamma}_{\rm e}\propto\bar{\gamma}_{\rm e}^{-2.6}\), or a cooled spectrum assuming continuous injection: \(dN_{\rm inj}/d\bar{\gamma}_{\rm e}\propto\bar{\gamma}_{\rm e}^{-1.6}\). Here \(\bar{\gamma}_{\rm e}\) is the electron Lorentz factor in the fluid frame. Keeping the cooling break above the NuSTAR range (3-79 keV) requires specific conditions (see for example Huang et al., 2022, eq. 14). This scenario was considered by Laskar et al. (2023), who modelled the synchrotron component assuming a wind profile. To match the measured flux, more than half of the downstream internal energy needs to be converted into non-thermal electrons and magnetic field energy, with these two components being in near equipartition. The SSC flux is negligible in such a scenario. Maintaining the cooling break above the X-ray range requires a low mass loss rate, and though low mass loss rates are expected from the polar regions of low-metallicity stars (Muijres et al., 2012), it is unclear if such a wind profile can be sustained over a large distance from the progenitor.
We consider the alternative possibility of a cooled hard injection spectrum, comparing against the first night of H.E.S.S. observations. To match the flux levels, we introduce a parameter \(\eta_{\rm inj}\leq 1\), the ratio of the non-thermal particle density flux to the total particle density flux downstream. The injection energy is left as a free parameter, and since the total integrated energy for an injection spectrum with index \(<2\) is determined by the maximum electron energy, this is fixed by the energy efficiency. The model parameters provided in Table 3 were chosen to match the selected measurements, while just reaching the H.E.S.S. upper limits. The cooling break in the synchrotron component occurs between the radio and optical data points, at a flux level comparable to the H.E.S.S. upper limit. As synchrotron cooling dominates for the chosen parameters, the corresponding IC flux remains below the upper limits.
## 5 Summary and Conclusion
H.E.S.S. began observing GRB 221009A approximately 53 hours after the initial _Fermi_-GBM detection. The observations were taken under less-than-optimal atmospheric conditions caused by clouds and aerosols. No significant VHE signal is detected on the third, fourth, and ninth nights after the detection, nor in the combined dataset of all three nights. When
combining the data from all three nights, we find a 95% upper limit on the 0.65-10 TeV energy flux of \(\Phi_{\rm UL}^{95\%}=9.7\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\).
The H.E.S.S. upper limits help constrain possible emission scenarios when compared with the multiwavelength observations. The X-ray spectra on the three H.E.S.S. nights were found to remain hard, with photon indices ranging from 1.7 to 1.9. Taken together with the approximately contemporaneous optical and radio data, these measurements suggest synchrotron emission from a single electron population, with continuous injection of either an uncooled soft spectrum, or a cooled hard spectrum. The photon energy spectrum peaks above the XRT range (and also above that of NuSTAR; Laskar et al., 2023), and KN suppression of any inverse Compton emission is unavoidable. An IC-dominated loss scenario appears to be ruled out by the H.E.S.S. upper limits. In contrast, the multiwavelength SED of the nearby low-luminosity GRB 190829A, which was detected at TeV energies three nights after the prompt emission, was better described by a single component from X-rays to VHE gamma rays. Those data were consistent with photon indices \(\approx 2\) for both the XRT and H.E.S.S. energy ranges on all nights (Abdalla et al. (H.E.S.S. Collaboration), 2021). The results from GRB 221009A, potentially the brightest ever detected GRB, highlight the distinct character of these two bursts, both in terms of their non-thermal particle acceleration and emission properties. As discussed in Abdalla et al. (H.E.S.S. Collaboration) (2021) (see also Huang et al., 2022; Salafia et al., 2022), an accurate reproduction of the MWL observations of GRB 190829A is challenging within a single-zone SSC framework. Other theoretical models put forward to account for the multiwavelength measurements of GRB 190829A, including external Compton (Zhang et al., 2021) or two-zone (Khangulyan et al., 2023) models, serve to highlight the necessity for high quality spectral and temporal data of GRBs and their afterglows at all available wavelengths to understand the underlying physical mechanisms at play.
Given only upper limits in the VHE band, we consider a single-zone model for the first night of H.E.S.S. observations, assuming an electron population whose cooled spectrum is consistent with the inferred XRT spectrum. An alternative uncooled electron scenario was considered in Laskar et al. (2023) (see also Sato et al., 2022). The hard injection scenario requires a spectral index deviating substantially from that predicted by shock acceleration theory (e.g. Achterberg et al., 2001). Harder spectra have been predicted from other acceleration schemes such as relativistic shear acceleration (Rieger and Duffy, 2005) or converter mechanisms (Derishev et al., 2003; Stern, 2003; Derishev and Piran, 2019). A detailed consideration of the underlying acceleration process is, however, beyond the scope of the current paper, though the emerging multiwavelength dataset for GRB 221009A will provide a valuable resource for future theoretical studies. Our results highlight the role that imaging atmospheric Cherenkov telescopes play in improving our understanding of these powerful transient events.
We thank Lauren Rhodes for discussions on the radio and optical data, and Phil Evans for assistance in analyzing XRT data. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
Figure 4: The H.E.S.S. 95% upper limits on Night 3 (red) are plotted along with the XRT (blue, 1\(\sigma\)) best-fit spectrum and LAT (purple, 95% C.L.) upper limit, as well as publicly available radio data from the Submillimeter Array (black open circle; Rhodes et al. (2022)) and an optical flux (green square; extracted from Figure 2 of Fulton et al. (2023)) in a multiwavelength SED. An example set of synchrotron and SSC emission components, arising from a single, partially cooled electron population described in Table 3, is also shown to illustrate a possible explanation of the multiwavelength observations.
Table 3: Parameters used for the single-zone model fit, adopting the constant external-density solution of Blandford and McKee (1976). We choose \(\bar{\gamma}_{\rm min}=0.66\,m_{\rm p}/m_{\rm e}\), where \(m_{\rm p}/m_{\rm e}\) is the proton-to-electron mass ratio.

| Parameter | Value |
| --- | --- |
| Explosion energy \(E\) | \(10^{54}\) erg |
| External density \(n_{\rm ext}\) | \(1.7\) cm\({}^{-3}\) |
| Injection fraction \(\eta_{\rm inj}\) | \(0.1\) % |
| Electron equipartition fraction \(\epsilon_{e}\) | \(9\times 10^{-4}\) |
| Magnetic equipartition fraction \(\epsilon_{B}\) | \(8\times 10^{-4}\) (0.07 G) |
The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the German Research Foundation (DFG), the Helmholtz Association, the Alexander von Humboldt Foundation, the French Ministry of Higher Education, Research and Innovation, the Centre National de la Recherche Scientifique (CNRS/IN2P3 and CNRS/INSU), the Commissariat a l'energie atomique et aux energies alternatives (CEA), the U.K. Science and Technology Facilities Council (STFC), the Irish Research Council (IRC) and the Science Foundation Ireland (SFI), the Knut and Alice Wallenberg Foundation, the Polish Ministry of Education and Science, agreement no. 2021/WK/06, the South African Department of Science and Technology and National Research Foundation, the University of Namibia, the National Commission on Research, Science & Technology of Namibia (NCRST), the Austrian Federal Ministry of Education, Science and Research and the Austrian Science Fund (FWF), the Australian Research Council (ARC), the Japan Society for the Promotion of Science, the University of Amsterdam and the Science Committee of Armenia grant 21AG-1C085. We appreciate the excellent work of the technical support staff in Berlin, Zeuthen, Heidelberg, Palaiseau, Paris, Saclay, Tubingen and in Namibia in the construction and operation of the equipment. This work benefited from services provided by the H.E.S.S. Virtual Organisation, supported by the national resource providers of the EGI Federation. Astropy (Astropy Collaboration et al., 2022), matplotlib (Hunter, 2007), numpy (Harris et al., 2020), gammay (Deil et al., 2017; Acero et al., 2022), XSPEC (Arnaud, 1996), gtburst (Vianello, 2016)
|
2301.09670 | Systematic uncertainties in the characterisation of helium-dominated
metal-polluted white dwarf atmospheres | White dwarf photospheric parameters are usually obtained by means of
spectroscopic or photometric analysis. These results are not always consistent
with each other, with the published values often including just the statistical
uncertainties. The differences are more dramatic for white dwarfs with
helium-dominated photospheres, so to obtain realistic uncertainties we have
analysed a sample of 13 of these white dwarfs, applying both techniques to up
to three different spectroscopic and photometric data sets for each star. We
found mean standard deviations of < $\sigma T_{\mathrm{eff}}$ > = 524 K, <
$\sigma \log g$ > = 0.27 dex and < $\sigma \log(\mathrm{H/He})$ > = 0.31 dex
for the effective temperature, surface gravity and relative hydrogen abundance,
respectively, when modelling diverse spectroscopic data. The photometric fits
provided mean standard deviations up to < $\sigma T_{\mathrm{eff}}$ > = 1210 K
and < $\sigma \log g$ > = 0.13 dex. We suggest these values to be adopted as
realistic lower limits to the published uncertainties in parameters derived
from spectroscopic and photometric fits for white dwarfs with similar
characteristics. In addition, we investigate the effect of fitting the
observational data adopting three different photospheric chemical compositions.
In general, pure helium model spectra result in larger $T_{\mathrm{eff}}$
compared to those derived from models with traces of hydrogen. The $\log g$
shows opposite trends: smaller spectroscopic values and larger photometric ones
when compared to models with hydrogen. The addition of metals to the models
also affects the derived atmospheric parameters, but a clear trend is not
found. | Paula Izquierdo, Boris T. Gänsicke, Pablo Rodríguez-Gil, Detlev Koester, Odette Toloza, Nicola P. Gentile Fusillo, Anna F. Pala, Pier-Emmanuel Tremblay | 2023-01-23T19:11:42Z | http://arxiv.org/abs/2301.09670v1 | Systematic uncertainties in the characterisation of helium-dominated metal-polluted white dwarf atmospheres
###### Abstract
White dwarf photospheric parameters are usually obtained by means of spectroscopic or photometric analysis. These results are not always consistent with each other, with the published values often including just the statistical uncertainties. The differences are more dramatic for white dwarfs with helium-dominated photospheres, so to obtain realistic uncertainties we have analysed a sample of 13 of these white dwarfs, applying both techniques to up to three different spectroscopic and photometric data sets for each star. We found mean standard deviations of \(\langle\sigma T_{\rm eff}\rangle=524\) K, \(\langle\sigma\log g\rangle=0.27\) dex and \(\langle\sigma\rm log(H/He)\rangle=0.31\) dex for the effective temperature, surface gravity and relative hydrogen abundance, respectively, when modelling diverse spectroscopic data. The photometric fits provided mean standard deviations up to \(\langle\sigma T_{\rm eff}\rangle=1210\) K and \(\langle\sigma\log g\rangle=0.13\) dex. We suggest these values to be adopted as realistic lower limits to the published uncertainties in parameters derived from spectroscopic and photometric fits for white dwarfs with similar characteristics. In addition, we investigate the effect of fitting the observational data adopting three different photospheric chemical compositions. In general, pure helium model spectra result in larger \(T_{\rm eff}\) compared to those derived from models with traces of hydrogen. The \(\log g\) shows opposite trends: smaller spectroscopic values and larger photometric ones when compared to models with hydrogen. The addition of metals to the models also affects the derived atmospheric parameters, but a clear trend is not found.
keywords: stars: white dwarfs - chemically peculiar - fundamental parameters - techniques: spectroscopic - photometric
## 1 Introduction
About 20 per cent of all white dwarfs in the Galaxy are known to have helium-dominated atmospheres (Bergeron et al., 2011). These are thought to form either after a late shell flash, if the white dwarf progenitor burns all its residual hydrogen in the envelope (Herwig et al., 1999; Althaus et al., 2005; Werner & Herwig, 2006), or via convective dilution or mixing scenarios, where a thin hydrogen layer is diluted by the deeper convective helium one (Fontaine & Wesemael, 1987; Cunningham et al., 2020). The helium-dominated white dwarfs with effective temperatures, \(T_{\rm eff}\), between 10 000 and 40 000 K1 are called DBs and are characterised by He i absorption lines dominating their optical spectra.
Footnote 1: The He i optical transitions originate from states with principal quantum number \(n=2\). For \(T_{\rm eff}\leq 10000\) K, helium is mostly in its ground state, and hence, the optical spectra of cool white dwarfs with helium atmospheres are featureless and classified DC. For \(T_{\rm eff}\geq 40\,000\) K, helium is mostly ionised, and the spectra of these hot white dwarfs show He ii transitions and are classified DO.
The first fully characterised DB white dwarf (GD 40; Shipman et al., 1977) paved the way for numerous studies in the following 25 years (see e.g. Wickramasinghe & Reid, 1983; Koester et al., 1985; Liebert et al., 1986; Wolff et al., 2002), establishing the techniques currently used to derive the photospheric parameters of these degenerates. Their \(T_{\rm eff}\), surface gravity, \(\log g\), and chemical abundances are obtained by means of (1) grids of synthetic spectra to fit the helium (plus hydrogen, if present) absorption lines identified in their observed spectra (see e.g. Koester & Kepler, 2015), (2) reproducing their photometric spectral energy distribution (SED; Bergeron et al., 1997), or (3) a hybrid approach that simultaneously fits the spectroscopy and photometry to deliver a more consistent set of parameters (see e.g. Izquierdo et al., 2020). Even though no major issues have been reported, these techniques do not always lead to consistent parameters (e.g. Bergeron et al., 2011; Koester & Kepler, 2015; Tremblay et al., 2019; Cukanovaite et al., 2021).
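As a schematic of the photometric technique (2), the sketch below fits \(T_{\rm eff}\) and \(R_{\rm WD}\) by scaling model band fluxes with the solid angle \(\pi(R_{\rm WD}/D)^{2}\). The model_flux function and the photometric points are toy placeholders standing in for a real synthetic-spectrum grid and dereddened observations.

```python
import numpy as np
from scipy.optimize import least_squares

PC_CM = 3.0857e18  # 1 pc in cm

def model_flux(teff):
    """Toy stand-in for an interpolator over a synthetic-spectrum grid:
    band fluxes with a colour term, so T_eff and R_WD decouple in the fit."""
    scale = teff / 1.5e4
    return np.array([1.0, 0.8 * scale**0.5, 0.6 * scale]) * scale**3

def sed_residuals(params, obs_flux, obs_err, d_pc):
    """Scale the model by the solid angle pi*(R_WD/D)^2 and compare to data."""
    teff, r_cm = params
    omega = np.pi * (r_cm / (d_pc * PC_CM)) ** 2
    return (omega * model_flux(teff) - obs_flux) / obs_err

# Fake "observed" photometry generated from a known truth
# (T_eff = 14000 K, R_WD = 9e8 cm, D = 50 pc) with 5% error bars.
d_pc = 50.0
omega_true = np.pi * (9e8 / (d_pc * PC_CM)) ** 2
obs_flux = omega_true * model_flux(14e3)
obs_err = 0.05 * obs_flux

fit = least_squares(sed_residuals, x0=[1.6e4, 6e8], x_scale=[1e4, 1e8],
                    args=(obs_flux, obs_err, d_pc))
print(fit.x)  # recovers approximately (14000 K, 9e8 cm)
```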
The discrepancies are likely a consequence of the several hurdles that the determination of the atmospheric parameters of DBs has to face. It is hard to obtain accurate \(T_{\rm eff}\) values in the \(\simeq 21\,000-31\,000\) K range2, where a plateau in the strength of the He i absorption lines gives rise to similar \(\chi^{2}\) values on each side of this temperature range (usually referred to as the "hot" and "cool" solutions). Likewise, there appears to be a problem related to the implementation of the van der Waals and resonance broadening mechanisms for neutral helium, the two dominant interactions in white dwarfs with \(T_{\rm eff}\leq 15\,000\) K (Koester and Kepler, 2015). On top of that, as white dwarfs cool, they develop superficial convection zones that grow bigger and deeper with decreasing \(T_{\rm eff}\) (Tassoul et al., 1990). The treatment of convective energy transport is neither fully understood nor straightforward to implement, even though Cukanovaite et al. (2021) presented a complete implementation for DBs with no free parameters, in contrast to the canonical and simplistic mixing-length (ML) theory3. Nevertheless, the actual DB convective efficiency is still under debate, which likely gives rise to uncertainties in the model spectra.
Footnote 2: This range coincides with the instability strip of DBs, where most white dwarfs (Nitta et al., 2009) undergo non-radial oscillations which complicate their characterisation (e.g. Winget et al., 1982; Vanderbosch et al., 2022).
Footnote 3: Convection in white dwarfs is thought to be highly turbulent, and currently the most common treatment relies on the ML approximation (Prandt, 1925; Bohm-Vitense, 1958). For white dwarf model atmospheres, this approximation has four free parameters to describe the convective energy flux, among which we find the ratio of the mixing length, \(l\), to the pressure scale height, \(H_{\rm P}\), known as the convective efficiency, \(\alpha=l/H_{\rm P}\). These four free parameters change from version ML1 to ML2 (see Koester, 2010, for further details).
There are other possible sources of systematic uncertainties in the characterisation of helium-dominated white dwarfs. The same analysis of an individual star using independent data sets, even if obtained with the same telescope/instrument, can yield to significantly discrepant results (see e.g. Voss et al., 2007; Izquierdo et al., 2020, for spectroscopic and photometric comparisons, respectively). This may be partially due to the different instrument setups, which ultimately differ in their spectral ranges and resolutions, the accuracy of the flux calibrations, the atmospheric conditions, and/or the signal-to-noise ratio (SNR) of the data.
An appropriate choice of the grids of synthetic spectra is essential too, since the structure of the photosphere depends on its chemical composition. This is a difficult task when analysing large samples of white dwarfs by means of parallaxes and archival photometry (see e.g. Gentile Fusillo et al., 2019, 2021), where the use of canonical model spectra (pure H or He photospheres) may neglect possible traces of hydrogen, helium or metals. In fact, about 75 per cent of DB white dwarfs do show traces of hydrogen (thus becoming DBAs since the A accounts for the presence on hydrogen; Koester and Kepler, 2015), whose origin is attributed to the convective dilution and convective mixing mechanisms (Strittmatter and Wickramasinghe, 1971; Cunningham et al., 2020), or to accretion from external sources (MacDonald and Vennes, 1991; Gentile Fusillo et al., 2017). Even a relatively small hydrogen abundance, that may go unnoticed depending on the spectral resolution, the SNR and the wavelength range of the observed spectra, may have an effect on the measurements, leading to an incorrect determination of the white dwarf photospheric parameters.
Besides some amount of hydrogen, about 10 per cent of DB white dwarfs also contain traces of metals (Koester and Kepler, 2015), which furthers the complexity of their atmospheric structure. An iconic example is the metal-polluted GD 362, which was initially classified as a DAZ white dwarf (the Z denotes the presence of metals; Gianninas et al., 2004; Kawka and Vennes, 2005), and only later was it found to have a helium-dominated atmosphere (Zuckerman et al., 2007). Correspondingly, the atmospheric parameters derived using the different chemical compositions diverge dramatically (Fig. 1).
Whereas GD 362 is certainly an extreme example, the presence of metals in the photospheres of white dwarfs has often been neglected, possibly due to the low spectral resolution and/or SNR of the observed data, which make the identification of metal lines, and thus the estimate of their abundances, harder. Metals change the atmospheric structure: they contribute to both the opacity and the ionisation balance, as the ionisation of metals occurs at relatively low temperatures, which injects free electrons into the atmosphere. Metal blanketing has a considerable effect on the slope of the continuum due to the numerous strong metal lines in the ultraviolet (UV), which block the outgoing flux in that spectral range. This results in an energy redistribution towards more transparent regions that causes a back-warming effect. As a consequence, the structure of the photosphere is altered, and so is the emitted SED. Hence, to obtain reliable estimates of the \(T_{\rm eff}\) and \(\log g\) of a metal-polluted white dwarf, a realistic treatment of the full chemical composition of its photosphere is needed.
Given the challenges that characterising helium-dominated white dwarfs poses, and the discrepancies encountered in the literature for the same objects (see e.g. Tremblay et al., 2019), it is clear that the systematic uncertainties intrinsic to each modelling approach must be explored and assessed. In this paper, we present spectroscopic and photometric modelings of a sample of 13 helium-dominated white dwarfs with traces of hydrogen and metals, which allow us to estimate the systematic uncertainties inherent to each technique.

Figure 1: Atmospheric parameters of the helium-dominated white dwarf GD 362 as derived from spectroscopic (filled markers) and photometric (void markers) modelings by different authors, employing models with the chemical compositions displayed in the legend. Gianninas et al. (2004) and Kawka and Vennes (2006) fit spectroscopic data with H+Z model spectra (no He), while Zuckerman et al. (2007) and Giammichele et al. (2012) used a He+H+Z model grid. Leggett et al. (2018) performed a photometric modelling using He+H+Z models, while Gentile Fusillo et al. (2021) fit the _Gaia_ DR3 photometry with H, He and H+He models. This is an extreme example of the very first studies misinterpreting the strong Balmer absorption lines in GD 362 as characteristic of a hydrogen-dominated atmosphere. As such, it illustrates the strong dependence of the atmospheric parameters determined from either spectroscopy or photometry on the detailed assumptions about the atmospheric chemical composition.
In what follows, we provide an overview of the most important analyses of DB and DBA white dwarfs to date in which attempts to measure the systematic uncertainties were reported. We present the details of the model atmospheres (such as the line broadening mechanisms used, the convective efficiency, and the blanketing sources included), the fitting procedures, and the discrepancies between different studies.
## 2 Past studies of DB and DBA white dwarfs
The first analysis of a large sample of DB white dwarfs was reported in Beauchamp et al. (1996), who reviewed previous studies of about 80 DBs and DBAs, and secured high quality spectra of the objects. They compared the \(T_{\rm eff}\) derived from UV and optical spectra for 25 of them and found an average standard deviation around the 1:1 correspondence of 1600 K (random scatter). They adopted the ML2 version, which has also been employed in all the remaining studies cited in the present paper, but they did not supply any further details of the model atmospheres.
The work by Voss et al. (2007) was a milestone in the understanding of the nature and evolution of DBs and DBAs. They used the spectra of 71 white dwarfs with helium-dominated photospheres, observed by the ESO Supernova Ia Progenitor Survey (SPY; Napiwotzki et al. 2003), to estimate their \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H}/{\rm He})\) by fitting the absorption-line profiles with helium-dominated model atmospheres with different amounts of hydrogen. These authors adopted the ML2 with a convective efficiency of \(\alpha=0.6\), included blanketing effects due to the presence of hydrogen and helium when appropriate, and implemented the treatment of the van der Waals line broadening mechanism (see Finley et al. 1997; Koester et al. 2005, for further detail). A comparison of their derived atmospheric parameters with those reported in Beauchamp et al. (1999), Friedrich et al. (2000) and Castanheira et al. (2006) revealed \(\simeq\pm 10\) per cent differences in \(T_{\rm eff}\) and an average of \(\pm 0.15\) dex in \(\log g\). Voss et al. attributed these discrepancies to the different atmospheric models used, the fitting procedures and the SNR of the spectra. In addition, they did the same analysis with independent sets of 22 SPY spectra and found \(\left\langle\frac{\Delta T_{\rm eff}}{T_{\rm eff}}\right\rangle=0.0203\), \(\left\langle\Delta\log g\right\rangle=0.06\) dex and \(\left\langle\Delta\log({\rm H}/{\rm He})\right\rangle=0.02\) dex4. These revealed that the statistical uncertainties quoted for the derived atmospheric parameters of white dwarfs were unrealistically small (the formal uncertainties from the \(\chi^{2}\) routine they used amounted to a few times 10 K), and that the true uncertainties are likely dominated by systematic effects.
Footnote 4: Throughout this paper, the angle brackets denote the mean.
A statistical analysis of 108 spectra of helium-atmosphere white dwarfs, of which 44 per cent are DBAs, was published by Bergeron et al. (2011). They computed the model atmospheres with the code described in Tremblay & Bergeron (2009) and tested various convective efficiencies, accounting for the different element opacities and including the van der Waals line-broadening treatment. Bergeron et al. (2011) derived \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H}/{\rm He})\) by fitting the absorption-line profiles and demonstrated that the smoothest and most uniform distribution of their sample in terms of \(T_{\rm eff}\) and \(\log g\) (as predicted by the white dwarf luminosity function) is obtained for a convective efficiency of \(\alpha=1.25\), a value that has been adopted as the canonical choice in many published DB analyses. They assessed the systematic uncertainties due to flux calibration by comparing the atmospheric parameters of 28 DBs with multiple spectra, finding \(\left\langle\frac{\Delta T_{\rm eff}}{T_{\rm eff}}\right\rangle=0.023\) and \(\left\langle\Delta\log g\right\rangle=0.052\) dex. A comparison of their atmospheric parameters with those of Voss et al. (2007) revealed that Bergeron et al.'s \(\log g\) values are larger by 0.15 dex and that a random scatter of \(\simeq 3900\) K in the \(T_{\rm eff}\) between the two data sets exists for \(T_{\rm eff}\leq 19\,000\) K (see fig. 19 in Bergeron et al. 2011).
Using Sloan Digital Sky Survey (SDSS) spectroscopy and photometry of 1107 DBs, Koester & Kepler (2015) increased the number of characterised DBs by a factor of 10. They found a DBA fraction of 32 per cent, which increases to 75 per cent when restricting the analysis to spectra with SNR \(>40\). The synthetic spectra used in this study were computed with the code of Koester (2010) and to determine the \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H}/{\rm He})\) they applied an iterative technique: the photometric data are initially used to estimate the \(T_{\rm eff}\) with \(\log g\) fixed at 8.0 dex (note that no prior information about the distances was available), which serves to distinguish between the spectroscopic \(T_{\rm eff}\) hot and cool solutions. Then, the absorption-line profiles are fitted with pure helium model spectra to derive the \(T_{\rm eff}\) and \(\log g\), which are subsequently fixed to measure the \(\log({\rm H}/{\rm He})\). This procedure is repeated until convergence is obtained. In their study, Koester & Kepler carried out an assessment of their parameter uncertainties using 149 stars with multiple spectra, which resulted in random average differences of 3.1 per cent, 0.12 dex and 0.18 dex for \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H}/{\rm He})\), respectively. A comparison of the stars in common with the ones in Bergeron et al. (2011) yields average systematic differences of +1.3 per cent and +0.095 dex in \(T_{\rm eff}\) and \(\log g\), respectively (both parameters being larger in average for the Koester & Kepler's sample), with mean dispersions of 4.6 per cent and 0.073 dex.
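The iterative scheme of Koester & Kepler can be paraphrased as the schematic loop below. The three fit_* functions are toy stand-ins for the actual photometric and spectroscopic \(\chi^{2}\) minimizations against a synthetic-spectrum grid, returning placeholder values purely so the sketch runs.

```python
def fit_photometric_teff(photometry, logg):
    """Toy stand-in: photometric T_eff at fixed log g = 8, used only to
    pick the hot or cool branch of the spectroscopic solution."""
    return 16000.0

def fit_line_profiles(spectrum, h_he, teff_guess):
    """Toy stand-in for the line-profile chi^2 fit; returns (T_eff, log g)."""
    return 0.99 * teff_guess + 150.0, 8.05

def fit_hydrogen_abundance(spectrum, teff, logg):
    """Toy stand-in for the hydrogen abundance fit; returns log(H/He)."""
    return -5.1

def characterise_db(photometry, spectrum, tol=1e-3, max_iter=20):
    teff = fit_photometric_teff(photometry, logg=8.0)  # break hot/cool degeneracy
    h_he = -10.0                                       # start close to pure He
    for _ in range(max_iter):
        teff_new, logg = fit_line_profiles(spectrum, h_he, teff)
        h_he_new = fit_hydrogen_abundance(spectrum, teff_new, logg)
        converged = (abs(teff_new - teff) / teff < tol
                     and abs(h_he_new - h_he) < tol)
        teff, h_he = teff_new, h_he_new
        if converged:
            break
    return teff, logg, h_he

print(characterise_db(photometry=None, spectrum=None))
```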
Tremblay et al. (2019) modelled the _Gaia_ DR2 photometric data of 521 DBs that had already been spectroscopically characterised (Koester & Kepler 2015; Rolland et al. 2018), and compared the resulting atmospheric parameters with the published spectroscopic results. Tremblay et al. used an updated version of the code described in Tremblay & Bergeron (2009) to compute 1D pure helium model atmospheres. They fit the photometric points, previously unreddened using the 2D dust reddening maps of Schlafly & Finkbeiner (2011), with \(T_{\rm eff}\) and the white dwarf radius, \(R_{\rm WD}\), as free parameters. To compare the results produced by both fitting techniques, they first derived the _spectroscopic_ parallaxes from the atmospheric parameters provided by the spectroscopic technique, the _Gaia_\(G\)-band apparent magnitude and the theoretical mass-radius relation of Fontaine et al. (2001). They observed reasonable agreement (within 2-\(\sigma\)) with the _Gaia_ parallaxes for \(T_{\rm eff}\geq 14\,000\) K in the Rolland et al. (2018) and Koester & Kepler (2015) DB sample. However, for cooler white dwarfs larger differences became apparent, again likely caused by problems with the neutral helium line broadening. They also compared the spectroscopic and photometric \(T_{\rm eff}\) and \(\log g\) and found that the fits to the _Gaia_ photometry systematically provide lower \(T_{\rm eff}\) and randomly scattered differences in the \(\log g\). This points once more to an inadequate treatment of the van der Waals broadening. They concluded that the photometric technique, and in particular the use of _Gaia_ photometry and parallaxes, can give solid atmospheric parameters and is, in particular, more reliable in constraining the \(\log g\) for the cooler DBs (\(T_{\rm eff}\leq 14000\) K) as compared to the spectroscopic method.
A similar study was presented by Genest-Beaulieu & Bergeron (2019), who also used the _Gaia_ DR2 parallaxes and compared the photometric and spectroscopic \(T_{\rm eff}\), \(\log g\), \(\log({\rm H}/{\rm He})\), \(\log\) (Ca/He), the white dwarf mass, \(M_{\rm WD}\), and \(R_{\rm WD}\) of more than 1600 DBs from the SDSS. They adopted the grid of synthetic models of
Bergeron et al. (2011), but used an improved version of the van der Waals broadening. The photometric and spectroscopic techniques were carried out as follows: (1) the \(T_{\rm eff}\) and the solid angle, \(\pi(R_{\rm WD}/D)^{2}\), were obtained from fitting the observed SDSS photometry points (unreddened with the parametrisation described in Harris et al., 2006) and the distance \(D\) derived from _Gaia_ DR2; (2) the \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H}/{\rm He})\) were derived by fitting the continuum-normalised absorption lines with synthetic profiles. The results show statistical errors of 10 per cent in the photometric \(T_{\rm eff}\) and \(\langle\sigma M_{\rm WD}\rangle=0.341\,{\rm M}_{\odot}\), while the uncertainties in the spectroscopic parameters are 4.4 per cent for \(T_{\rm eff}\), \(\langle\sigma\log g\rangle=0.263\,\)dex, \(\langle\sigma\log({\rm H}/{\rm He})\rangle=0.486\,\)dex and \(\langle\sigma M_{\rm WD}\rangle=0.156\,{\rm M}_{\odot}\). The authors also estimated the uncertainties in the spectroscopic parameters by repeating the same procedure for 49 stars with multiple spectra, resulting in \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle=0.024\), \(\langle\Delta\log g\rangle=0.152\,\)dex, \(\langle\Delta M_{\rm WD}\rangle=0.086\,{\rm M}_{\odot}\) and \(\langle\Delta\log({\rm H}/{\rm He})\rangle=0.2\,\)dex. Genest-Beaulieu & Bergeron (2019) then concluded that both techniques yield the \(T_{\rm eff}\) with similar accuracy, but stated that the photometric method is better suited for white dwarf mass determinations.
The last effort to assess the systematic effects in the characterisation of DB atmospheres was carried out by Cukanovaite et al. (2021), who presented a thorough study on the input microphysics, such as van der Waals line broadening or non-ideal effects, and convection models used in the computation of synthetic spectra. They demonstrated the need for 3D spectroscopic corrections5 by using the cross-matched DB and DBA sample of Genest-Beaulieu & Bergeron with the _Gaia_ DR2 white dwarf catalogue (Gentile Fusillo et al., 2019), removing all spectra with SNR \(<20\), which resulted in 126 DB and 402 DBA white dwarfs. In particular, they presented significant corrections for the spectroscopically derived \(\log g\) in the \(T_{\rm eff}\) range where the high-\(\log g\) problem is found (DBs with \(T_{\rm eff}\leq 15\,000\,\)K). Although these corrections represent a starting point towards solving the issues with the synthetic DB models due to their superior input physics, they have not yet accounted for the dramatic differences in the photospheric parameters of DBs derived from photometry and spectroscopy (see e.g. figs. 9, 10, 14 and 15 in Cukanovaite et al. 2021).
Footnote 5: The simplistic ML theory employed in the treatment of convective energy transport was related to the DA high-\(\log g\) problem (Tremblay et al., 2013). This issue was overcome by the use of 3D radiation-hydrodynamical models, which treat convection from first principles and, unlike the ML approximation, do not depend on any free parameters.
## 3 The white dwarf sample
Gentile Fusillo et al. (2015) presented the spectral classification of 8701 white dwarfs brighter than \(g=19\) with at least one SDSS DR10 spectrum. We visually inspected all the spectra flagged by Gentile Fusillo et al. as metal-contaminated and selected 13 stars that (1) had moderately strong Ca ii H and K absorption lines, and (2) were either confirmed, via the detection of helium absorption lines, or suspected helium-atmosphere white dwarfs (because of shallow and asymmetric Balmer line profiles). The selected white dwarfs are presented in Table 1.
Additionally, we obtained X-shooter spectra for each target and collected the available SDSS and Pan-STARRS1 (PS1) photometry, and _Gaia_ eDR3 astrometry plus photometry for all of them (Fig. 2 and Table 2).
### 3.1 SDSS spectroscopy
As mentioned above, our target selection is based on SDSS DR10. However, SDSS sometimes reobserves the same object, so we inspected the DR16 database (Ahumada et al., 2020) and retrieved all available spectra of our 13 targets. Several white dwarfs were observed with both the original SDSS spectrograph (3800\(-\)9200 A wavelength range and \(R\approx 1850-2200\) spectral resolution) and the BOSS spectrograph (3600\(-\)10 400 A, \(R\approx 1560-2650\); Smee et al. 2013; see Table 1).
### 3.2 VLT/X-shooter spectroscopy
We obtained intermediate resolution spectroscopy of the 13 white dwarfs using the X-shooter spectrograph (Vernet et al., 2011) mounted on the UT2 Kueyen telescope of the 8.2-m Very Large Telescope at Cerro Paranal, Chile, in January and July 2018 (ESO programmes 0100.C\(-\)0500 and 0101.C\(-\)0646). X-shooter is a three arm echelle spectrograph that simultaneously covers the ultraviolet-blue (UVB, 3000 \(-\) 5600 A), visible (VIS, 5500 \(-\) 10 200 A) and near-infrared (NIR, 10 200 \(-\) 24 800 A) wavelength ranges. We used slit widths of 1.0 (UVB), 0.9 (VIS) and 0.9 arcsec (NIR) to achieve spectral resolutions \(R\approx 5400\), 8900 and 5600, respectively. However, the NIR spectra were of insufficient SNR for a quantitative analysis and were discarded. Depending on the target brightness and the observing conditions, we obtained between two and six exposures per star. Details on the observations are given in Table 1, and a comparison between the X-shooter and SDSS/BOSS spectra for three white dwarfs of our sample is shown in Fig. 3.
We reduced the data within the ESO Reflex environment (Freudling et al., 2013). In brief, we removed the bias level and dark current, flat-fielded the images, identified and traced the echelle orders and established a dispersion solution. Then, we corrected for the instrument response and atmospheric extinction using observations of a spectrophotometric standard star observed with the same instrumental setup, merged the individual orders and applied a barycentric velocity correction to the wavelength scale. Telluric absorptions were corrected for using molecfit(Kausch et al., 2015; Smette et al., 2015). Finally, we computed the UVB and VIS averages from the individual spectra of each white dwarf using the inverse of their variance as weights.
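As an illustration of the final co-addition step, the sketch below shows an inverse-variance weighted mean of individual exposures. It is a minimal sketch, assuming the spectra have already been resampled onto a common wavelength grid; the function name and array layout are ours, not part of the Reflex pipeline.

```python
# Minimal sketch (not the Reflex pipeline itself): inverse-variance weighted
# average of n individual spectra sampled on a common wavelength grid.
import numpy as np

def weighted_average(fluxes, errors):
    """fluxes, errors: arrays of shape (n_spectra, n_pixels)."""
    weights = 1.0 / errors**2                 # inverse-variance weights
    avg_flux = np.sum(weights * fluxes, axis=0) / np.sum(weights, axis=0)
    avg_error = 1.0 / np.sqrt(np.sum(weights, axis=0))  # propagated uncertainty
    return avg_flux, avg_error
```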
The X-shooter spectra of the 13 white dwarfs (Fig. 2) display at least the Ca ii H and K lines, H\(\alpha\), and different helium absorption lines. Particular cases are 0827+1731, where the low \(T_{\rm eff}\approx 10\,500\,\)K of the white dwarf only allows a very shallow helium line (He i \(\lambda\)5876) to be identified in addition to H\(\alpha\) and H\(\beta\) and a few shallow Ti ii absorption lines (in the \(3300-3400\,\)A range), and 0958+0550, whose spectra display helium and shallow metallic lines of Mg, Ca, Ti, Cr, Mn or Fe, but only a hint of H\(\alpha\) due to the small hydrogen abundance.
## 4 Methodology
In order to explore the underlying systematic uncertainties in the determination of the atmospheric parameters of helium-dominated white dwarfs with traces of hydrogen and metals, we tested the spectroscopic and photometric techniques using the different data sets available for each star and synthetic spectra computed for several chemical compositions.
The spectroscopic analyses were performed using at least two different spectra per star: SDSS/BOSS and X-shooter (a few targets
have both SDSS and BOSS spectra, in which case we also tested the level of agreement between those two data sets). For the photometric approach we used three catalogues: SDSS, PS1 and _Gaia_ eDR3.
For both techniques we used model spectra with three different chemical compositions: (1) pure He, (2) He with variable H content and (3) He with variable H and Z content. We first employed (1) pure He atmosphere models, and hence the spectroscopic method only considered helium absorption lines. This approach was historically applied to white dwarfs for which only a limited amount of spectroscopic information is available, e.g. when H\(\alpha\) is not covered at all or only at poor SNR. We then fitted the spectroscopic data with (2) mixed H/He atmosphere models (He+H henceforth) that were hydrogen-blanketed, now including \(\log({\rm H}/{\rm He})\) as the third free parameter after \(T_{\rm eff}\) and \(\log g\), and also using the Balmer lines present in the observed spectra.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline & & & \(D\) & \multicolumn{2}{c}{Spectral} & \multicolumn{2}{c}{X-shooter observations} & \multicolumn{4}{c}{SNR} \\ Star & Short name & _Gaia G_ & (pc) & \multicolumn{2}{c}{classification} & Date & Exposure time (s) & UVB & VIS & BOSS & SDSS \\ \hline WD J003003.23+152629.34 & 0030+1526 & 17.6 & \(175\pm 3\) & _DABZ_ & DBAZ & 2018-07-11 & 2x(1250/1220/1300) & 54.9 & 40.0 & - & 29.1 \\ WD J025934.98\(-\)072134.29 & 0259\(-\)0721 & 18.2 & \(222\pm 7\) & _DBZ_ & DBAZ & 2018-01-12 & 4x(1221/1255/1298) & 48.0 & 40.9 & - & 19.5 \\ WD J082708.67+173120.52 & 0827+1731 & 17.8 & \(127\pm 2\) & _DAZ_ & DBAZ & 2018-01-12 & 4x(1221/1255/1298) & 47.9 & 48.4 & 38.4 & 22.8 \\ WD J085934.18+112302.94 & 0859+1123 & 19.1 & \(340\pm 28\) & _DABZ_ & DBAZ & 2018-01-10 & 5x(1221/1255/1298) & 45.2 & 30.3 & 20.1 & - \\ WD J093001.00+061852.93 & 0930+0618 & 17.9 & \(227\pm 7\) & _DABZ_ & DBAZ & 2018-01-12 & 4x(1221/1255/1298) & 36.6 & 30.8 & - & 36.0 \\ WD J094431.28\(-\)003933.75 & 0944\(-\)0039 & 17.8 & \(160\pm 3\) & _DBZ_ & DBAZ & 2018-01-11 & 4x(1221/1255/1298) & 54.5 & 49.7 & 44.0 & 26.1 \\ WD J095854.96+055021.50 & 0958+0550 & 17.8 & \(182\pm 6\) & _DBZ_ & DBAZ & 2018-01-12 & 4x(1221/1255/1298) & 48.4 & 44.7 & 27.0 & - \\ WD J101347.13+025913.28 & 1013+0259 & 18.2 & \(202\pm 9\) & _DABZ_ & DBAZ & 2018-01-10 & 4x(1221/1255/1298) & 48.3 & 42.6 & 25.2 & 27.1 \\ WD J110957.82+131828.07 & 1109+1318 & 18.7 & \(29\pm 8\) & _DABZ_ & DBAZ & 2018-01-11 & 4x(1221/1255/1298) & 37.0 & 27.8 & 20.2 & 13.7 \\ WD J135933.24\(-\)021715.16 & 1359\(-\)0217 & 17.8 & \(217\pm 6\) & _DABZ_ & DBAZ & 2018-07-12 & 2x(1250/1220/1300) & 41.3 & 31.5 & 43.1 & 24.5 \\ WD J151642.97\(-\)004042.50 & 1516\(-\)0040 & 17.3 & \(143\pm 2\) & _DABZ_ & DBAZ & 2018-07-10 & 4x(1200/1200/1200) & 60.0 & 60.8 & 43.3 & - \\ WD J162703.34+172327.59 & 1627+1723 & 18.6 & \(278\pm 13\) & _DBZ_ & DBAZ & 2018-07-12 & 4x(1450/1420/1450) & 33.0 & 16.3 & 28.5 & 12.9 \\ WD J232404.70\(-\)001813.01 & 2324\(-\)0018 & 18.9 & \(329\pm 33\) & _DABZ_ & DBAZ & 2018-07-10 & 5x(1250/1220/1300) & 45.5 & 36.0 & 22.9 & - \\ \hline \end{tabular}
\end{table}
Table 1: White dwarf sample, including the WD J names from Gentile Fusillo et al. (2019), the short names used in this paper, the _Gaia G_ magnitude, the distance \(D\) of the source (derived as \(D\) (pc) \(=1000/\varpi\), with \(\varpi\) the parallax in mas; Riello et al., 2020), the spectral classification of Gentile Fusillo et al. (2015) (in italics) and the updated one based on our X-shooter spectra, the log of the X-shooter spectroscopy, and the signal-to-noise ratios of the UVB and VIS X-shooter, BOSS and SDSS spectra (the last four columns).
Figure 2: Normalised X-shooter spectra of the 13 metal-polluted white dwarfs. Hydrogen, helium and Ca ii H and K absorption lines are marked with pink, blue and yellow vertical lines, respectively. The effective temperature increases from bottom to top. The spectra are offset vertically for display purposes.
Notice that we fix the \(\log(\mathrm{H}/\mathrm{He})\) at the spectroscopic value to perform these photometric fits. The final approach was performed with (3) mixed H/He + metals atmosphere models (hydrogen- and metal-blanketed). These synthetic grids, He+H+Z henceforth, which are computed individually for each white dwarf (see Fig. 4), take into account the relative abundances of the metals estimated from the X-shooter spectra6. As in the case of the He+H analysis, the spectroscopic technique was performed first, in order to estimate the chemical composition [\(\log(\mathrm{H}/\mathrm{He})\) + \(\log(\mathrm{Z}/\mathrm{He})\)] of each star, which is then fixed in the photometric fits.
Footnote 6: Reliable metal abundances cannot be constrained from the SDSS/BOSS spectra due to their low SNR and resolution, which is insufficient to properly sample the narrow metallic lines. These have an average equivalent width of about 0.6 Å, significantly smaller than the \(\simeq\) 4-Å resolution of the BOSS/SDSS spectra.
### 4.1 Model atmospheres and fitting procedure
We used the latest version of the Koester (2010) code to generate all the synthetic model spectra. The substantial convection zones of helium-dominated white dwarfs were accounted for using a 1D ML prescription. In particular, we adopted the ML2 parametrisation and fixed the convective efficiency, \(\alpha\). A more realistic line fitting would need 3D spectral synthesis, with a range of \(\alpha\) values that describe the different spectral lines of the white dwarf (Cukanovaite et al., 2019). These 3D models are still too computationally expensive and, for the scope of this paper, we are using 1D models and have fixed the convective efficiency at \(\alpha=1.25\), which is the canonical and most extensively used value in the characterisation of DB white dwarfs (Bergeron et al., 2011).
Our pure He and He+H grids spanned \(T_{\mathrm{eff}}=5\,000-20\,000\,\mathrm{K}\) in steps of 250 K and \(\log g=7.0-9.5\) dex in steps of 0.25 dex. For the He+H grid we explored the \(\log(\mathrm{H}/\mathrm{He})\) range from \(-7.0\) to \(-3.0\) dex in steps of 0.25 dex. Notice that these two grids were computed with no metals, thus neglecting any metal line blanketing.
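For concreteness, the grid dimensions quoted above translate into the following node counts (a minimal sketch that simply transcribes the numbers in the text; the variable names are ours):

```python
import numpy as np

teff_grid = np.arange(5000.0, 20000.0 + 1.0, 250.0)   # 5000-20000 K in 250 K steps
logg_grid = np.arange(7.0, 9.5 + 0.01, 0.25)          # 7.0-9.5 dex in 0.25 dex steps
logh_grid = np.arange(-7.0, -3.0 + 0.01, 0.25)        # log(H/He), He+H grid only

print(teff_grid.size * logg_grid.size)                    # 61*11 = 671 pure He models
print(teff_grid.size * logg_grid.size * logh_grid.size)   # 671*17 = 11407 He+H models
```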
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Star & \(u\) & \(g\) & \(r\) & \(i\) & \(z\) & & SDSS \\ & & \(g\) & \(r\) & \(i\) & \(z\) & \(y\) & PS1 \\ & & \(G_{\mathrm{BP}}\) & & \(G\) & & \(G_{\mathrm{RP}}\) & & _Gaia_ \\ \hline
0030+1526 & \(17.317\pm 0.016\) & \(17.431\pm 0.022\) & \(17.742\pm 0.014\) & \(17.952\pm 0.017\) & \(18.241\pm 0.025\) & \\ & & \(17.481\pm 0.017\) & \(17.746\pm 0.017\) & \(17.981\pm 0.014\) & \(18.193\pm 0.027\) & \(18.317\pm 0.047\) & \\ & & \(17.529\pm 0.006\) & & \(17.5752\pm 0.0029\) & & \(17.6731\pm 0.0143\) & \\ \hline
0259–0721 & \(18.031\pm 0.018\) & \(18.062\pm 0.014\) & \(18.326\pm 0.015\) & \(18.552\pm 0.018\) & \(18.823\pm 0.054\) & \\ & & \(18.093\pm 0.022\) & \(18.328\pm 0.019\) & \(18.565\pm 0.048\) & \(18.784\pm 0.041\) & \(18.921\pm 0.070\) & \\ & & \(18.1484\pm 0.0139\) & & \(18.1763\pm 0.0035\) & & \(18.2509\pm 0.0491\) & \\ \hline
0827+1731 & \(17.848\pm 0.019\) & \(17.800\pm 0.018\) & \(17.964\pm 0.015\) & \(18.143\pm 0.016\) & \(18.324\pm 0.028\) & \\ & & \(17.820\pm 0.020\) & \(17.959\pm 0.023\) & \(18.153\pm 0.022\) & \(18.337\pm 0.072\) & \(18.438\pm 0.054\) & \\ & & \(17.8475\pm 0.0102\) & & \(17.840\pm 0.0030\) & & \(17.8321\pm 0.0159\) & \\ \hline
0859+1123 & \(18.878\pm 0.042\) & \(18.979\pm 0.017\) & \(19.213\pm 0.020\) & \(19.555\pm 0.036\) & \(19.775\pm 0.073\) & \\ & & \(18.994\pm 0.033\) & \(19.255\pm 0.066\) & \(19.523\pm 0.047\) & \(19.722\pm 0.088\) & \(19.790\pm 0.226\) & \\ & & \(19.0889\pm 0.0224\) & & \(19.0886\pm 0.0035\) & & \(19.1602\pm 0.0460\) & \\ \hline
0930+0618 & \(17.775\pm 0.017\) & \(17.838\pm 0.019\) & \(18.135\pm 0.016\) & \(18.380\pm 0.022\) & \(18.765\pm 0.041\) & \\ & & \(17.910\pm 0.019\) & \(18.181\pm 0.018\) & \(18.414\pm 0.034\) & \(18.658\pm 0.041\) & \(18.800\pm 0.085\) & \\ & & \(18.002\pm 0.0030\) & & \(17.9364\pm 0.0115\) & & \(18.1420\pm 0.0201\) & \\ \hline
0944–0039 & \(17.717\pm 0.014\) & \(17.749\pm 0.015\) & \(17.973\pm 0.019\) & \(18.187\pm 0.019\) & \(18.407\pm 0.028\) & \\ & & \(17.783\pm 0.034\) & \(18.005\pm 0.024\) & \(18.212\pm 0.045\) & \(18.424\pm 0.029\) & \(18.551\pm 0.123\) & \\ & & \(17.8396\pm 0.0097\) & & \(17.8452\pm 0.0029\) & & \(17.9183\pm 0.0183\) & \\ \hline
0958+0550 & \(18.293\pm 0.022\) & \(18.215\pm 0.015\) & \(18.385\pm 0.018\) & \(18.524\pm 0.021\) & \(18.763\pm 0.033\) & \\ & & \(18.222\pm 0.025\) & \(18.391\pm 0.022\) & \(18.549\pm 0.034\) & \(18.743\pm 0.032\) & \(18.851\pm 0.143\) & \\ & & \(18.2631\pm 0.0033\) & & \(18.2750\pm 0.0281\) & & \(18.2012\pm 0.0172\) & \\ \hline
1013+0259 & \(18.064\pm 0.022\) & \(18.146\pm 0.018\) & \(18.353\pm 0.020\) & \(18.546\pm 0.018\) & \(18.748\pm 0.043\) & \\ & & \(18.144\pm 0.011\) & \(18.361\pm 0.020\) & \(18.560\pm 0.030\) & \(18.773\pm 0.041\) & \(18.982\pm 0.101\) & \\ & & \(18.1782\pm 0.0157\) & & \(18.2165\pm 0.0034\) & & \(18.1847\pm 0.0468\) & \\ \hline
1109+1318 & \(18.493\pm 0.022\) & \(18.622\pm 0.026\) & \(18.902\pm 0.021\) & \(19.145\pm 0.032\) & \(19.357\pm 0.059\) & \\ & & \(18.625\pm 0.017\) & \(18.909\pm 0.034\) & \(19.148\pm 0.064\) & \(19.388\pm 0.049\) & \(19.490\pm 0.137\) & \\ & & \(18.7296\pm 0.0037\) &
The He+H+Z grids are computed in various steps (see the flowchart in Fig. 4). First, we performed an iterative analysis starting with a photometric fit to determine \(T_{\rm eff,phot}\) and \(\log g_{\rm phot}\), with \(\log({\rm H}/{\rm He})\) fixed at \(-5.0\) dex. Then, a spectroscopic fit is performed with \(\log g\) fixed at \(\log g_{\rm phot}\), which yields \(T_{\rm eff,spec}\) and \(\log({\rm H}/{\rm He})\). This \(\log({\rm H}/{\rm He})\) is then used in the photometric fit and the procedure is iterated until convergence is achieved. As a result, we obtain the \(T_{\rm eff,phot}\), \(\log g_{\rm phot}\)7 and \(\log({\rm H}/{\rm He})\), which we fix to compute 1D grids for _each_ metal identified in the X-shooter spectra of each star. The only parameter that varies throughout these 1D grids is \(\log({\rm Z}/{\rm He})\), and the synthetic models are centred on the solar values and sampled in steps of 0.2 dex. Then, the normalised absorption lines of each metal are fitted individually to obtain the \(\log({\rm Z}/{\rm He})\) relative abundances. These are then included in the computation of the He+H+Z model grid for each star. The \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H}/{\rm He})\) steps of the He+H+Z model grids are the same as used for the He+H grid, but probe a smaller parameter space centred on the He+H best-fit values.
Footnote 7: We chose \(T_{\rm eff,phot}\) because it is not affected by the dubious implementation of the resonance and van der Waals broadening in the computation of the synthetic models, and \(\log g_{\rm phot}\) because it is well constrained by a reliable parallax estimate.
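Schematically, the iteration of Fig. 4 can be summarised as follows. This is hedged pseudocode: the fit_photometry, fit_spectroscopy and fit_metal_lines functions are placeholders for the MCMC fits of Sections 4.2 and 4.3, not real routines.

```python
def characterise_star(photometry, spectrum, parallax, tol=0.01):
    """Sketch of the iterative He+H+Z scheme (Fig. 4); all fit_* are placeholders."""
    log_h_he = -5.0                                     # starting log(H/He)
    while True:
        teff_phot, logg_phot = fit_photometry(photometry, parallax, log_h_he)
        teff_spec, new_log_h_he = fit_spectroscopy(spectrum, fixed_logg=logg_phot)
        if abs(new_log_h_he - log_h_he) < tol:          # converged
            break
        log_h_he = new_log_h_he
    # with Teff, log g and log(H/He) fixed, fit each metal's lines individually
    # on 1D log(Z/He) grids centred on the solar values (0.2 dex steps)
    abundances = fit_metal_lines(spectrum, teff_phot, logg_phot, log_h_he)
    return teff_phot, logg_phot, log_h_he, abundances
```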
We fit the synthetic model spectra to the different data subsets using the Markov Chain Monte Carlo (MCMC) emcee package within python (Foreman-Mackey et al., 2013). The parameter space was explored and the log-probability maximised using 16 different seeds and 10 000 steps per seed. We employed flat priors for all the parameters within the grid boundaries provided above, except for the _Gaia_ parallax \(\varpi\), for which we used Gaussian priors (with a Gaussian width set to the published parallax uncertainty).
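A minimal emcee sketch of such a fit is shown below, here for the photometric case with \(T_{\rm eff}\), \(\log g\) and \(\varpi\) free. The model_fluxes() function and the observational inputs are dummy placeholders standing in for the synthetic-grid interpolation and the real data; only the emcee calls themselves are the library's actual API.

```python
import numpy as np
import emcee

plx_obs, plx_err = 5.0, 0.1               # illustrative Gaia parallax prior (mas)
obs_flux = np.ones(5)                      # placeholder observed fluxes
obs_err = 0.05 * np.ones(5)

def model_fluxes(teff, logg, plx):
    # placeholder for the synthetic-grid interpolation and solid-angle scaling
    return np.ones(5) * (teff / 12000.0) ** 4 * (plx / plx_obs) ** 2

def log_posterior(theta):
    teff, logg, plx = theta
    if not (5000.0 < teff < 20000.0 and 7.0 < logg < 9.5 and plx > 0.0):
        return -np.inf                     # flat priors within the grid bounds
    lp = -0.5 * ((plx - plx_obs) / plx_err) ** 2   # Gaussian parallax prior
    resid = (obs_flux - model_fluxes(teff, logg, plx)) / obs_err
    return lp - 0.5 * np.sum(resid ** 2)

ndim, nwalkers, nsteps = 3, 16, 10000
p0 = np.array([12000.0, 8.0, plx_obs]) * (1.0 + 1e-4 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps, progress=True)
samples = sampler.get_chain(discard=1000, flat=True)   # posterior samples
```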
### 4.2 Spectroscopic fits
We first degraded the synthetic spectra to the resolution of the observed ones (see Section 3 for details). Then, we continuum-normalised each of the relevant absorption lines in both the observed and synthetic spectra (helium, Balmer or metal lines, as appropriate) by fitting low-order polynomial functions to the surrounding continuum. Metal lines that are superimposed on helium or Balmer lines were masked out in the pure He and He+H fits. For the fits obtained with the He+H+Z models, we did not mask the narrow metal lines contained in the much broader helium or Balmer lines. However, the metal abundances were fixed at the values obtained by the 1D metal fits (see Fig. 4 and Table A1).
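A minimal sketch of the per-line normalisation follows; the window widths and polynomial order are illustrative choices, not the exact values used.

```python
import numpy as np

def normalise_line(wave, flux, centre, core=15.0, window=60.0, deg=2):
    """Divide out a low-order polynomial fitted to the continuum around a line."""
    near = np.abs(wave - centre) < window               # region around the line
    cont = near & (np.abs(wave - centre) > core)        # exclude the line core
    coeffs = np.polyfit(wave[cont], flux[cont], deg)    # local continuum model
    return wave[near], flux[near] / np.polyval(coeffs, wave[near])

# toy example: a Gaussian absorption line on a sloped continuum
wave = np.linspace(4800.0, 5050.0, 2000)
flux = (1.0 + 2e-4 * (wave - 4900.0)) * \
       (1.0 - 0.4 * np.exp(-0.5 * ((wave - 4922.0) / 5.0) ** 2))
w_norm, f_norm = normalise_line(wave, flux, 4922.0)
```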
For all the spectroscopic fits we used the neutral helium lines \(\lambda\)3820, \(\lambda\)3889, \(\lambda\)4026, \(\lambda\)4120, \(\lambda\)4388, \(\lambda\)4471, \(\lambda\)4713, \(\lambda\)4922, \(\lambda\)5876, \(\lambda\)6678 and \(\lambda\)7066 (except for 0827+1731; see Appendix A2 for further details). For the He+H and He+H+Z spectroscopic fits, we modelled H\(\alpha\) for all the stars, and H\(\beta\), H\(\gamma\) and H\(\delta\) when present. To obtain the estimates of the metal abundances we considered the absorption lines listed in Table 3 that were present in the individual X-shooter spectra of each star.
For the three chemical composition grids, \(T_{\rm eff}\) and \(\log g\) were treated as free parameters, with the addition of \(\log({\rm H}/{\rm He})\) when using the He+H and He+H+Z grids, exploring the parameter space with flat priors in all the cases.
Figure 3: Comparison between the UVB+VIS X-shooter (spectral resolution \(R=5400\), 8900; black), BOSS (\(R\simeq 1850-2200\); magenta) and SDSS (\(R\simeq 1850-2200\); cyan) spectra of three white dwarfs in our sample. Hydrogen, helium and Ca ii H and K absorption lines are marked with blue, pink and yellow vertical lines, respectively. The effective temperature increases from bottom to top. The spectra are offset vertically for display purposes. We note that the spikes in the BOSS and SDSS spectra (marked with a dashed vertical grey line) are artefacts derived from the data calibrations.
### 4.3 Photometric fits
As a first step of the photometric fitting technique, the synthetic spectra were scaled by the solid angle subtended by the star, \(\pi(R_{\rm WD}/D)^{2}\), where \(D\) was derived from the _Gaia_ eDR3 parallax \(\varpi\) (in mas, Riello et al. 2020) as \(D=1000/\varpi\) (pc). We accounted for the interstellar extinction by reddening the synthetic spectra with the \(E(B-V)\) values determined from the 3D dust map produced by stilism8 using these distances. The white dwarf radii were calculated using the mass-radius relation9 derived with the latest evolutionary models of Bedard et al. (2020). This mass-radius relation is appropriate for helium-dominated white dwarfs with C/O cores and thin hydrogen layers (\(M_{\rm H}/M_{\rm WD}\sim 10^{-10}\), with \(M_{\rm H}\) the mass of the hydrogen layer).
Footnote 8: [https://stilism.obspm.fr/](https://stilism.obspm.fr/)
Footnote 9: [http://www.astro.umontreal.ca/~bergeron/CoolingModels](http://www.astro.umontreal.ca/~bergeron/CoolingModels)
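The scaling itself is compact; a minimal sketch follows. The radius would come from the mass-radius relation above (here it is simply an input), and whether a factor of \(\pi\) appears depends on the flux convention of the model output.

```python
import numpy as np

PC_TO_CM = 3.0857e18                      # one parsec in cm

def scale_to_earth(surface_flux, parallax_mas, radius_cm):
    """Dilute the model surface flux by the solid-angle factor (R/D)^2."""
    distance_cm = (1000.0 / parallax_mas) * PC_TO_CM   # D (pc) = 1000/parallax
    return surface_flux * (radius_cm / distance_cm) ** 2

# e.g. a ~0.012 Rsun white dwarf at ~180 pc (parallax ~5.55 mas)
flux_at_earth = scale_to_earth(np.ones(100), 5.55, 0.012 * 6.957e10)
```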
The comparison of the actual photometric data with the computed brightness from the scaled and reddened model spectra in each photometric passband was carried out in flux space. Hence, we converted the observed magnitudes into fluxes using the corresponding zero points and computed the integrated synthetic fluxes in all the filters using their transmission curves. The zero points and passbands of the SDSS, PS1 and _Gaia_ were obtained from the Spanish Virtual Observatory (SVO) Filter Profile Service10.
Footnote 10: [http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)
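In practice the two conversions amount to the following (a minimal sketch; the zero points and transmission curves would be those retrieved from the SVO service, here they are generic inputs):

```python
import numpy as np

def mag_to_flux(mag, zero_point):
    """Pogson relation: returns fluxes in the units of the zero point."""
    return zero_point * 10.0 ** (-0.4 * np.asarray(mag))

def synthetic_flux(wave, flux, filt_wave, filt_trans):
    """Passband-averaged flux for a photon-counting detector (lambda weighting)."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return np.trapz(flux * trans * wave, wave) / np.trapz(trans * wave, wave)
```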
In all the photometric fits we fixed the chemical composition of the grid, i.e. the log(H/He) for the He+H grid as well as the metal abundances for the He+H+Z grid, at the best-fit spectroscopic values, since photometry alone is hardly sensitive to these two parameters. Consequently, the photometric fits have \(T_{\rm eff}\), log \(g\) and \(\varpi\) as free parameters11 and we explore the parameter space with flat priors for the former two and a Gaussian prior for the latter. Note that we tested by how much the reddening changed given the parallax and its uncertainty and, for our sample, the variation in \(E(B-V)\) was negligible, which validates our fixed reddening approach.
Footnote 11: The parallax was treated as a free parameter with boundaries extending to the uncertainties published in _Gaia_ eDR3.
## 5 Results and discussion
All the available photometric and spectroscopic data for the 13 white dwarfs in our sample were analysed following the methods outlined above. We used model spectra computed for three different atmospheric compositions: pure He, He with traces of H (He+H), and He with traces of H and metals (He+H+Z). This work resulted in a very large number of solutions for the atmospheric parameters, which we will discuss in the following.
We begin by investigating the overall trends from different sets of observational data (Section 5.1), providing an assessment of the associated systematic uncertainties. As a second test, we inspect the effects of using synthetic model spectra with different chemical compositions (Section 5.2). Then, we compare our spectroscopic and photometric solutions (Section 5.3) and contrast them with previously published works (Section 5.4).
The individual results of the spectroscopic and photometric fits for the 13 helium-dominated white dwarfs using the pure He, He+H and He+H+Z grids are presented in full detail in Appendix A (Tables A2-A14), along with notes on individual stars.
The probability distributions in the \(T_{\rm eff}-\log g\) plane are shown for each star in Figs. 5-7, illustrating the results obtained with different data sets, chemical compositions and fitting techniques. The distributions are downsampled to the size of the smallest sample and then normalised to the region of maximum probability.
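A minimal sketch of this post-processing is given below; the function names are ours, and the chains stand for the MCMC samples of Section 4.1.

```python
import numpy as np

def downsample(chains, seed=0):
    """Randomly downsample every chain to the size of the smallest one."""
    rng = np.random.default_rng(seed)
    n_min = min(len(chain) for chain in chains)
    return [rng.choice(chain, size=n_min, replace=False) for chain in chains]

def peak_normalised_hist(teff, logg, bins=50):
    """2D histogram normalised so the most probable region maps to 1."""
    hist, x_edges, y_edges = np.histogram2d(teff, logg, bins=bins)
    return hist / hist.max(), x_edges, y_edges
```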
### 5.1 Systematic uncertainties: different data sets
#### 5.1.1 Spectroscopy
We estimated the systematic uncertainties arising from the use of diverse spectroscopic data sets (X-shooter, BOSS and SDSS) by means of the differences in the best-fit \(T_{\rm eff}\), log \(g\) and log(H/He) determined from the different observations. The spectroscopic results obtained
\begin{table}
\begin{tabular}{l l} \hline \hline Ion & Air wavelength (\(\lambda\)) \\ \hline O i & 7771.94, 7774.17, 7775.39 \\ Na i & 5889.95, 5895.92 \\ Mg i & 3829.36, 3832.30, 5167.32, 5172.68, 5183.60 \\ Mg ii & 3838.29, 4384.64, 4390.56, 4481.33 \\ Al i & 3944.01 \\ Al ii & 3856.56, 3587.07, 3587.45, 4663.06 \\ Si ii & 3853.66, 3856.02, 3862.60, 4128.07, 4130.89, 5055.98 \\ Ca ii & 3179.33, 3181.28, 3736.90, 3933.66, 3968.47 \\ Ti ii & 3321.70, 3322.94, 3372.79, 3380.28, 3383.76, 3387.83, 3394.57 \\ Cr ii & 3216.55, 3402.40, 3403.32, 3408.77, 3421.21, 3422.74, 3585.29, \\ & 3285.057, 3677.86, 377.84 \\ Mn i & 3441.98, 3460.31, 3474.04, 3474.13, 3487.90, 3495.83, 3496.81, \\ & 3497.53 \\ Fe i & 3190.82, 3249.50 \\ Fe ii & 3192.07, 3192.91, 3193.80, 3210.45, 3213.31, 3247.18, 3247.39, \\ & 3255.87, 3258.77, 3259.05, 4233.16, 4351.76, 4583.83 \\ Ni i & 3465.6, 3471.3, 3524.54 \\ \hline \end{tabular}
\end{table}
Table 3: Spectral lines used in the determination of the metal chemical abundances.
Figure 4: Flow chart of the procedure used to add metals to the synthetic spectra of He+H white dwarfs.
Figure 5: Probability distributions of the \(\log g\) as a function of the \(T_{\rm eff}\) for the different spectroscopic and photometric fits. The distributions are normalised to the same number of samples. The previously published results (Tables 4 and 5) are displayed in pink: Eisenstein et al. (2006) as squares, Kleinman et al. (2013) as circles, Koester & Kepler (2015) as stars, Kepler et al. (2015) as triangles, Coutu et al. (2019) as inverted triangles and Gentile Fusillo et al. (2021) as diamonds. Note that only literature results within our plotting regions are shown.
Figure 6: Same as Fig. 5
from the He+H+Z fitting of the three data sets are shown in Fig. 8 and the \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H/He})\) average differences are computed to probe for systematic trends between the three data sets (see Fig. 9). Note that the effect of using different chemical composition models is not discussed here, but will be presented in detail in Section 5.2.
On average, the X-shooter spectra provide smaller values of the atmospheric parameters than BOSS (X-shooter \(-\) BOSS) by 222 K, 0.07 dex and 0.14 dex for \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H/He})\), respectively. Even though multiple factors can play a role in these differences, the lower SNR of the BOSS spectra when compared to X-shooter (\(\Delta\)SNR \(\simeq\) 14) may be decisive: the hydrogen lines, which are key in measuring the three atmospheric parameters, may not be fully resolved in the BOSS (and SDSS) spectra. One would expect the higher SNR and spectral resolution of X-shooter to provide more reliable \(\log({\rm H/He})\) estimates, translating into larger hydrogen abundances due to its ability to detect shallower lines. However, the BOSS \(\log({\rm H/He})\) values are on average larger than those measured in the X-shooter spectra, with no clear explanation.
Comparing the X-shooter to the SDSS parameters we obtain
Figure 8: Atmospheric parameters of the 13 white dwarfs in our sample obtained by fitting the X-shooter (diamonds), BOSS (pentagons) and SDSS (stars) spectra with He+H+Z synthetic models (only six stars have three spectroscopic data sets; see Table 1). The metal abundances of the models were estimated from the metallic absorption lines identified in the X-shooter spectra. Note that the systematic differences between the parameters based on the individual spectra clearly exceed the statistical uncertainties (displayed as error bars in the figure).
Figure 7: Same as Fig. 5
Figure 9: The average differences in \(T_{\rm eff}\) (top), \(\log g\) (middle) and \(\log({\rm H/He})\) (bottom panel) between the X-shooter (XS), SDSS and BOSS spectroscopic fits for the pure He, He+H and He+H+Z synthetic grids (left to right) are used to check for general trends between the different data sets. There is no hydrogen in the pure He models, and thus no \(\log({\rm H/He})\) estimate (bottom panel). Note that the uncertainties are the standard deviations and hence show how dispersed the data are about the mean value.
average differences (X-shooter \(-\) SDSS) of \(-455\) K, \(-0.26\) dex and \(0.03\) dex, which follow the same trend as X-shooter \(-\) BOSS, with the exception of \(\log(\mathrm{H/He})\). The SNR difference between the SDSS and X-shooter UVB spectra (\(\Delta\)SNR \(=23\)), the wavelength range that contains most of the absorption lines, could again lead to less reliable results.
On average, (BOSS - SDSS) yields a \(T_{\mathrm{eff}}\) difference of \(-438\) K, \(-0.18\) dex for \(\log g\), and a larger \(\log(\mathrm{H/He})\) in the BOSS spectra by \(+0.10\) dex. The reasons behind the differences between these two data sets are unclear, although it should be noted that systematic parameter offsets between SDSS spectra and data from other instruments have already been found, and are attributed to the data reduction procedure. However, no exact cause could be determined (Tremblay et al., 2011).
Whereas the averages of the parameter differences reflect systematic offsets between the results from different data sets, the standard deviations provide an estimate of the amount of variation of those values and hence represent the typical magnitude of the true systematic uncertainties in the analysis.
We find X-shooter \(-\) BOSS mean standard deviations of \(\langle\sigma T_{\mathrm{eff}}\rangle=462\) K, \(\langle\sigma\log g\rangle=0.23\) and \(\langle\sigma\log(\mathrm{H/He})\rangle=0.24\) dex. These differences are larger for X-shooter \(-\) SDSS and are very likely related to the bigger SNR disparity between the two data sets: \(\langle\sigma T_{\mathrm{eff}}\rangle=623\) K, \(\langle\sigma\log g\rangle=0.26\) and \(\langle\sigma\log(\mathrm{H/He})\rangle=0.25\) dex. Finally, the BOSS \(-\) SDSS mean standard deviations are: \(\langle\sigma T_{\mathrm{eff}}\rangle=485\) K, \(\langle\sigma\log g\rangle=0.33\) and \(\langle\sigma\log(\mathrm{H/He})\rangle=0.43\) dex. In the last case, the statistics are obtained with just five objects (we do not take into account 1627+1723 since the SNR of its SDSS spectrum is below 13 and gives untrustworthy results; see Table A13 for more details), and these numbers are dominated by the results obtained for 1109+1318, whose SDSS spectrum has an SNR of 14.
We conclude that the analysis of separate spectroscopic data sets, in particular if obtained with different instrumental setups, can result in differences in the resulting atmospheric parameters that are significantly larger than the statistical uncertainties of the fits to the individual spectra.
We suggest that these results be taken into account when assessing the actual uncertainties inherent to spectroscopic analyses of cool helium-dominated white dwarfs, in particular when employing spectra with similar SNR and resolution. From our analysis, we derive systematic uncertainties of the spectroscopic \(T_{\rm eff}\), \(\log g\) and \(\log(\mathrm{H/He})\) of 524 K, 0.27 dex and 0.31 dex, respectively (the average of the X-shooter \(-\) BOSS, X-shooter \(-\) SDSS and BOSS \(-\) SDSS mean standard deviations).
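For reference, the offsets and scatters above reduce to simple statistics over the per-star parameter differences; a minimal sketch, assuming aligned arrays of best-fit values from two data sets:

```python
import numpy as np

def offset_and_scatter(values_a, values_b):
    """Mean difference (systematic offset) and its standard deviation (scatter)."""
    diff = np.asarray(values_a) - np.asarray(values_b)
    return diff.mean(), diff.std()

# the adopted systematic uncertainty of each parameter is then the mean of the
# three pairwise scatters, e.g. for Teff those quoted above (462, 623 and 485 K)
```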
#### 5.1.2 Photometry
Here, we explore and compare the systematic differences in \(T_{\mathrm{eff}}\) and \(\log g\) obtained from the photometric fits using the magnitudes of three independent catalogues: SDSS, PS1 and _Gaia_, adopting different chemical compositions (we refer to Section 5.2 for the discussion on the use of different chemical composition models).
In Fig. 10 we show the parameter differences for the He+H+Z model spectra, with \(\log(\mathrm{H/He})\) fixed to the X-shooter best-fit spectroscopic value12. There is a steep correlation between \(T_{\mathrm{eff}}\) and \(\log g\): the published fluxes of the three catalogues are very similar for each star (e.g. an average 0.14 per cent difference between the SDSS \(g\) and PS1 \(g\) bands) and are scaled by the same distance (provided by the _Gaia_ eDR3 parallax); hence, even a slight increase in \(T_{\mathrm{eff}}\) translates into a smaller radius to conserve the flux, which ultimately leads to a larger \(\log g\) (see Fig. 11).
Footnote 12: This is just a choice to illustrate the general trend. The He+H+Z synthetic grids assess the full chemical composition of each photosphere and the X-shooter spectra have the highest spectral resolution, wavelength coverage and SNR.
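In other words, the sign of the correlation follows directly from flux conservation: schematically, the observed flux scales as \(f_{\lambda}=(R_{\rm WD}/D)^{2}F_{\lambda}(T_{\rm eff})\) while \(g=GM_{\rm WD}/R_{\rm WD}^{2}\), so that, at fixed \(f_{\lambda}\) and \(D\), a higher \(T_{\rm eff}\) (i.e. a larger surface flux \(F_{\lambda}\)) requires a smaller \(R_{\rm WD}\) and hence, via the mass-radius relation, a larger \(\log g\).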
The \(T_{\mathrm{eff}}\) and \(\log g\) derived from all the SDSS and PS1 photometric fits are consistent with each other for the 13 white dwarfs except for 0030+1526 (see Appendix A for comments on individual stars). However, we find mean standard deviations between the results derived from these two surveys of \(\langle\sigma T_{\mathrm{eff}}\rangle=485\) K and \(\langle\sigma\log g\rangle=0.05\) dex, which could be related to the SDSS \(u\) band, a measurement with no analogue in the PS1 survey that adds important constraints to the SED. Since no systematic offset between these two catalogues has been reported, they should lead to the same set of parameters; we therefore suggest that these differences, which are considerably larger than those usually published in the literature, be taken into account when quoting uncertainties derived from either of these data sets.
The _Gaia_ atmospheric parameters are, in general, inconsistent with the SDSS and PS1 sets of solutions, leading to average standard deviations of \(\langle\sigma T_{\mathrm{eff}}\rangle=1210\) K and \(\langle\sigma\log g\rangle=0.13\) dex. This might be related to the extremely broad _Gaia_ passbands, although the smaller number of filters cannot be ruled out. We suggest these mean standard deviations to be the minimum uncertainty quoted when retrieving atmospheric parameters from _Gaia_ photometry for relatively cool helium-dominated white dwarfs.
We conclude that, as already found for the spectroscopic method, the analysis of different photometric data sets can result in atmospheric parameters that are discrepant by more than the statistical uncertainties. Underlying reasons include the use of different bandpasses, and systematic uncertainties in the zero-points (e.g. Tonry et al., 2012).
Figure 10: Atmospheric parameters obtained by fitting the SDSS (circles), PS1 (triangles) and _Gaia_ DR3 (squares) photometry with He+H+Z synthetic models (the \(\log(\mathrm{H/He})\) are fixed at the X-shooter spectroscopic values). Only the _Gaia_ uncertainties (the largest in all cases) are displayed. The best-fit solutions for each target spread along a diagonal in the \(T_{\mathrm{eff}}-\log g\) plane, illustrating the correlation between these two parameters.
### 5.2 Systematic uncertainties: atmospheric models with different chemical abundances
#### 5.2.1 Spectroscopy
In this section we assess the systematic uncertainties in \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H/He})\) when fitting spectroscopic data with atmospheric models of different chemical compositions. This situation may be encountered when the spectra have insufficient SNR to sample narrow or shallow lines, or when their wavelength coverage is limited and does not include transitions of all relevant chemical elements. In those cases, we might fit the available observed spectra with synthetic models that do not take into account the complete chemical composition of the white dwarf.
The spectroscopic \(\log g\) as a function of \(T_{\rm eff}\) obtained from the fits to the X-shooter spectra (the only set with spectra for all 13 white dwarfs) using pure He, He+H and He+H+Z synthetic models is displayed in Fig. 13. The metallic lines blended with the helium and hydrogen lines were included in the He+H+Z fit since metals are implemented in those models, but the metal abundances were fixed to the values derived from the 1D metal fits (see Table A1).
We explored the likely errors introduced when fitting helium-dominated white dwarfs with traces of hydrogen and metals with pure He models. To do so, we determined the average \(\Delta T_{\rm eff}=T_{\rm eff}^{\rm He+H}-T_{\rm eff}^{\rm pure\,He}\) and \(\Delta\log g=\log g^{\rm He+H}-\log g^{\rm pure\,He}\) differences for the X-shooter, SDSS and BOSS spectra for each star to be \(\langle\Delta T_{\rm eff}\rangle=-335\) K and \(\langle\Delta\log g\rangle=0.01\) dex for X-shooter, \(\langle\Delta T_{\rm eff}\rangle=-251\) K and \(\langle\Delta\log g\rangle=0.02\) dex for SDSS and \(\langle\Delta T_{\rm eff}\rangle=-317\) K and \(\langle\Delta\log g\rangle=0.03\) dex for BOSS. We thus see a generic trend when adding hydrogen: \(T_{\rm eff}^{\rm He+H}<T_{\rm eff}^{\rm pure\,He}\) and \(\log g^{\rm He+H}>\log g^{\rm pure\,He}\) (\(\simeq-300\) K and \(\simeq+0.02\) dex, respectively). This result is
Figure 11: Corner plot for the white dwarf 0958+0550 using He+H models with fixed \(\log({\rm H/He})=-5.7\) dex, showing the probability distribution of the parameters obtained by fitting the SDSS (red), PS1 (blue) and _Gaia_ eDR3 photometry (orange). It illustrates the compatible values between the three catalogues and the correlation between \(T_{\rm eff}\) and \(\log g\): the published fluxes of the three catalogues are similar and scaled by the same distance (provided by the _Gaia_ eDR3 parallax) and hence, even a small change in \(T_{\rm eff}\) produces a readjustment of the radius (and thus the \(\log g\)) to conserve the flux.
Figure 12: The average differences in \(T_{\rm eff}\) and \(\log g\) between the SDSS, Pan-STARRS1 (PS1) and _Gaia_ eDR3 photometric results for the pure He, He+H and He+H+Z synthetic grids (left to right). No overall trend between the three catalogues is observed. Note that the uncertainties are the standard deviations, i.e. how dispersed the data are about the mean value.
Figure 13: Spectroscopic X-shooter results using pure He (crosses), He+H (circles) and He+H+Z (arrowheads) synthetic models. Metal absorption lines superimposed on the hydrogen and helium lines have been included in the He+H+Z fits (see Section 4). The stars identified with an asterisk lack a pure He analysis since their spectra are fully dominated by Balmer lines (see Fig. 2 and Table 1). The average error bars are displayed in the top right corner. Note that in some cases the pure He and He+H+Z results are not visible due to their similarity to the He+H values. The inclusion of hydrogen in the models (pure He \(\rightarrow\) He+H) produces a drop in \(T_{\rm eff}\) of \(\simeq 300\) K and a slight increase in \(\log g\) (\(\simeq 0.02\) dex). The addition of metals to the models (He+H \(\rightarrow\) He+H+Z) suggests a small increase of 60 K in \(T_{\rm eff}\), while \(\log g\) remains, on average, unchanged.
expected from the hydrogen-line blanketing: the addition of hydrogen increases the opacity (most noticeably in the UV) and thus produces a back-warming effect in the optical, which translates into an overall lower \(T_{\rm eff}\) to match the _unblanketed_ model. However, we note this phenomenon has commonly been discussed for a fixed \(\log g\), which is different from our analysis where \(T_{\rm eff}\) and \(\log g\) are free parameters. Regarding the trend seen in \(\log g\) we highlight that, in the majority of cases, \(\log g\) decreases, and thus the average increase (\(\simeq+0.02\) dex) is dominated by the outliers.
We carried out the same analysis to assess the systematic differences in \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H/He})\) that may arise when fitting helium-dominated white dwarfs with traces of hydrogen and metals while neglecting the presence of the latter in the photosphere. We found \(\langle T_{\rm eff}^{\rm He+H+Z}-T_{\rm eff}^{\rm He+H}\rangle=60\) K, no \(\log g\) difference and \(\langle\log({\rm H/He})^{\rm He+H+Z}-\log({\rm H/He})^{\rm He+H}\rangle=-0.01\) dex for X-shooter14. The inclusion of metals in the models produces a small overall increase in \(T_{\rm eff}\) (i.e. metal-line blanketing) even though the change in the helium/hydrogen absorption lines is not noticeable (Fig. 14).
Footnote 14: The BOSS and SDSS data were also fitted with He+H+Z synthetic spectra but using the metal abundances estimated from the X-shooter spectra (see Section 4.2 for more details and Tables A2 to A14 for those fits).
#### 5.2.2 Photometry
Despite the rapid increase in the number of spectroscopically characterised white dwarfs, the largest parameter analyses still rely on candidates retrieved from photometric surveys (e.g. Gentile Fusillo et al., 2021). In these cases, but also for white dwarfs with poor-SNR spectra, the chemical compositions might be unknown or unreliable, which might translate into inaccurate photospheric parameters.
We explore this situation by investigating the differences in the best-fit photometric \(T_{\rm eff}\) and \(\log g\) for different chemical compositions of the model spectra (pure He, He+H and He+H+Z), illustrating the miscalculations/uncertainties that arise from the use of incorrect chemical composition models. These differences are presented in Fig. 15 for the best fits of the three grids to the SDSS photometric data15.
Footnote 15: Both the SDSS and PS1 photometry lead to consistent parameters and this is just a choice to show the general trend. All the individual results can be found in Appendix A.
The addition of hydrogen to the model spectra (pure He \(\rightarrow\) He+H) produces an overall drop in the best-fit \(T_{\rm eff}\) and \(\log g\) (on average, 440 K and 0.06 dex, respectively, and thus \(T_{\rm eff},\log g^{\rm He+H}<T_{\rm eff},\log g^{\rm pure\,He}\)). The addition of hydrogen introduces line-blanketing from this species (mostly from Ly\(\alpha\)), which translates into a rise of the emitted flux in the optical range to compensate for the blocked flux in the UV. Considering that we only have optical data, this might explain the drop in \(T_{\rm eff}\) and \(\log g\) (these are positively correlated). The stars with larger hydrogen abundances (0827+1731, 1013+0259, 2324\(-\)0018) clearly stand out with larger deviations between the pure He and He+H results.
However, we see the opposite trend after adding metals (He+H \(\rightarrow\) He+H+Z): both \(T_{\rm eff}\) and \(\log g\) increase (on average, 117 K and 0.01 dex, respectively, and thus \(T_{\rm eff},\log g^{\rm He+H+Z}>T_{\rm eff},\log g^{\rm He+H}\)). This trend is at odds with the one obtained for the metal-polluted helium-dominated white dwarf GD 424 (Izquierdo et al., 2020), where an equivalent He+H and He+H+Z analysis showed the opposite behaviour when adding metals. For this sample, a further analysis focused on this matter will be needed to disentangle the behaviour of \(T_{\rm eff}\) from that of \(\log g\). The blanketing effect that the metals produce, which dominates in the UV where most metallic absorption lines reside, is expected to increase the emitted radiation towards redder wavelengths and hence raise the \(T_{\rm eff}\). However, in our analysis, there is an additional free parameter, \(\log g\), which is strongly correlated with the \(T_{\rm eff}\).
We note that the differences obtained by comparing SDSS, PS1 and _Gaia_ eDR3 are significantly smaller with the addition of metals to the models, i.e. for the He+H+Z fits (see the standard deviations in Fig. 12). This highlights the more reliable estimate of the white dwarf parameters when the chemical composition of the photosphere is fully characterised.
### 5.3 Comparison between spectroscopic and photometric results
In Fig. 16, the \(T_{\rm eff}\) and \(\log g\) obtained from the best fits to the X-shooter, BOSS and SDSS16 spectra are compared to the photometric results using the SDSS+PS1 fluxes and the He+H+Z synthetic models.
Footnote 16: The inclusion of the BOSS and SDSS spectroscopic results in Fig. 16 highlights the important differences obtained between distinct methods and data sets, but note that only the numerical comparison between the X-shooter spectroscopic and SDSS+PS1 photometric parameters is calculated.
Comparing the X-shooter spectroscopic results with those retrieved by fitting the SDSS+PS1 photometry shows that \(T_{\rm eff,spec}\) is, on average, 950 K larger than \(T_{\rm eff,phot}\). The same behaviour is obtained for the surface gravity, where \(\log g_{\rm spec}\) is 0.22 dex larger than \(\log g_{\rm phot}\). Despite the large overall differences between the parameters provided by the spectroscopic and photometric fits, we note an important decrease in these deviations for white dwarfs with \(T_{\rm eff,phot}\geq 15\,000\) K: \(\langle T_{\rm eff,spec}-T_{\rm eff,phot}\rangle=480\) K and \(\langle\log g_{\rm spec}-\log g_{\rm phot}\rangle=0.13\) dex. This fact reflects the yet unsolved issues with the broadening mechanisms of the neutral helium lines, which notably affect the spectroscopic method (the \(T_{\rm eff}\) and \(\log g\) are measured from the width and depth of the absorption lines) but do not affect the photometric analysis. These significant differences between the spectroscopic and photometric results have been previously highlighted in the literature (Section 2) and a forthcoming analysis, with a different sample that only contains objects above \(15\,000\) K, is necessary to test the suitability of the spectroscopic, photometric and hybrid techniques to determine what is the most reliable method to characterise the population of helium-dominated white dwarfs with traces of hydrogen (and metals).
The goal of this paper was to assess the magnitude of systematic errors, which are often overlooked, that arise from the characterisation of white dwarfs with helium-dominated photospheres. Whereas we demonstrated the discrepancy in the atmospheric parameters derived from different photometric and spectroscopic data sets, there is currently no straightforward answer to the question "_which are the most reliable parameters_". Based on our experience, the photometric method based on SDSS and PS1 data, when using the appropriate models for the given atmospheric composition of a star, provides consistent results for \(T_{\rm eff}\) and \(\log g\). Turning to the analysis of different spectroscopic data sets, one would ideally obtain multiple observations of each star, in the hope that the differences in the resulting parameters average out.
Looking beyond the topic of systematic uncertainties, there are a range of studies of individual white dwarfs that require \(T_{\rm eff}\) and \(\log g\) as a starting point for more detailed analyses, such as measuring the photospheric metal abundances. We will present such an
analysis for the 13 stars used here in a forthcoming paper. Given the characteristics of this sample (helium-dominated white dwarfs with \(T_{\rm eff}\lesssim 15\,000\) K), the photospheric parameters are derived by means of an iterative method (similar to that employed in Izquierdo et al., 2020), where the \(T_{\rm eff}\) and \(\log g\) are obtained from the photometric fit of SDSS+PS1 photometry and the \(\log({\rm H/He})\) from the X-shooter spectroscopy. Then, we fix those parameters to measure the photospheric metal abundances and translate them into the composition of the parent-body planetesimals.
### 5.4 Previously published results
The 13 white dwarfs presented in this work have been previously characterised by Eisenstein et al. (2006), Kleinman et al. (2013), Koester & Kepler (2015), Kepler et al. (2015), Coutu et al. (2019)
Figure 16: Atmospheric parameters obtained by fitting the SDSS+PS1 photometric data sets (stars), the X-shooter spectra (diamonds) and the BOSS and SDSS spectra (filled and open hexagons, respectively) with He+H+Z synthetic models.
Figure 14: Synthetic spectra of a white dwarf with \(T_{\rm eff}=16\,000\) K and \(\log g=8.0\) dex. The \(\log({\rm H/He})\) is fixed to \(-4.5\) dex for the He+H and He+H+Z spectra and the relative metal abundances of the latter are fixed to those of 0930+0618 (see Table A1). The H\(\beta\) and He i \(\lambda 4922\) absorption lines have been zoomed-in and continuum-normalised to illustrate the slight increase in line width and depth as a result of the inclusion of hydrogen and metals. The hydrogen and helium lines are indicated by the blue and pink vertical lines, respectively.
Figure 15: Photometric fits of the SDSS photometry data using pure He (crosses), He+H (circles) and He+H+Z (arrow head) synthetic models. For each star, the \(\log({\rm H/He})\) has been fixed to the X-shooter value for the He+H and He+H+Z spectroscopic fits. The average uncertainties are shown in the top left corner. The stars identified with an asterisk are clearly dominated by Balmer absorption lines and hence the difference between pure He and He+H results is larger (see Fig. 2 and Table 1).
and/or Gentile Fusillo et al. (2021)17. Their atmospheric parameters are listed in Tables 4 and 5 along with the ones obtained in this analysis. We chose the X-shooter spectroscopic results since this is the only data set common to the 13 white dwarfs and it has the highest spectral resolution and wavelength coverage. The selection of the SDSS+PS1 photometric results was based on the consistency of the parameter values between the two catalogues, the lack of photometry issues reported in the literature and our previous experience with the white dwarf GD 424 (Izquierdo et al., 2020). As described earlier, the He+H+Z synthetic models most realistically treat the complex chemical composition of the studied white dwarfs. In what follows, we compare our spectroscopic and photometric results with the atmospheric parameters given in the literature in terms of average differences.
Footnote 17: Each star has been examined by at least four of the cited studies.
Eisenstein et al. (2006) performed spectroscopic and photometric fits to SDSS DR4 data with the latest version available at the time of publication of D. Koester's DA and DB synthetic models (ML2/\(\alpha=0.6\)). They used autogfit (Kleinman et al., 2004), an automatic fitting technique based on \(\chi^{2}\) minimisation, where the model spectra can be freely re-fluxed to incorporate flux calibration errors and unreliable or unknown reddening. To overcome the degeneracies produced by similar strengths and profiles of the absorption lines, they calculated the synthetic SDSS colours of the best-fit models yielded by the spectroscopic fits and compared them to the observed colours. They adopted the parameters that delivered the lowest \(\chi^{2}\). We found average differences between our X-shooter spectroscopic parameters and theirs of \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm spec}=0.03\) and \(\langle\Delta\log g\rangle_{\rm spec}=-0.21\) dex, while the comparison of their parameters with our photometric SDSS+PS1 ones provides \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm phot}=-0.08\) and \(\langle\Delta\log g\rangle_{\rm phot}=-0.53\) dex. The large differences found for the photometric fits are expected since Eisenstein et al.'s analysis relied mostly on the spectroscopic data, while our photometric fits largely benefit from knowledge of the distances (unknown at the time). Besides, these results are in agreement with our findings presented in Section 5.3, where spectroscopy leads to much higher \(T_{\rm eff}\) and \(\log g\) than those derived from photometric data.
Kleinman et al. (2013) carried out the same analysis as Eisenstein et al. but with SDSS DR7 spectroscopy and photometry data. Kleinman et al. used improved model atmospheres (we refer the reader to Koester 2009, 2010, for further details) and \(\alpha=1.25\). In this case, we find \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm spec}=0.01\) and \(\langle\Delta\log g\rangle_{\rm spec}=-0.31\) dex and \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm phot}=-0.04\) and \(\langle\Delta\log g\rangle_{\rm phot}=-0.48\) dex. The increase in the deviation of the spectroscopic \(\log g\) with respect to Eisenstein et al.'s sample is due to the new member additions, in particular 0827+1731, for which Kleinman et al. obtained \(\log g=9.59\pm 0.3\) dex, very far from our \(\log g=7.62\pm 0.04\) dex.
We have 11 white dwarfs in common with Koester & Kepler (2015)'s sample, but they only estimated the \(\log g\) for five of them18. The derived differences are \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm spec}=0.04\) and \(\langle\Delta\log g\rangle_{\rm spec}=-0.08\) dex, and \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm phot}=-0.03\) and \(\langle\Delta\log g\rangle_{\rm phot}=-0.32\) dex. Although the synthetic spectra are similar (we used an updated, improved version of D. Koester's models), our fitting techniques differ considerably as described in Sections 2 and 4, which may explain the deviations. The large discrepancy between Koester & Kepler's \(\log g\) and our photometric \(\log g\) is completely dominated by the object \(2324-0018\), for which they derived \(\log g=9.43\) dex.
Footnote 18: We refer the reader to Section 2 and Koester & Kepler (2015) for details on their model atmospheres and fitting techniques.
The third white dwarf catalogue based on SDSS DR10 spectra was published by Kepler et al. (2015). They used autogfit to characterise three of the 13 white dwarfs of our sample. We find \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm spec}=-0.05\) and \(\langle\Delta\log g\rangle_{\rm spec}=-0.03\) dex, and \(\langle\Delta\log g\rangle_{\rm phot}=-0.30\) dex. As previously outlined, the smaller deviations between their results and our spectroscopic parameters compared to our photometric ones are the result of similar techniques.
Coutu et al. (2019) presented an iterative analysis of spectroscopic and photometric data of 1023 DBZ/DZ(A) white dwarfs, which contains four of the 13 white dwarfs in our sample. Briefly, their atmospheric parameter determination relied on a first photometric fit to SDSS photometry, if available, and alternatively PS1 or _Gaia_ DR2 data, in that priority order, with \(T_{\rm eff}\) and the solid angle as free parameters and fixed \(\log g\), \(\log({\rm H/He})\) and \(\log({\rm Ca/He})\). From the best-fit solid angle value and the known \(D\), they computed the \(\log g\) from interpolation of the evolutionary models by Fontaine et al. (2001) and performed the photometric fit with this new \(\log g\) fixed. This photometry fitting process is repeated until convergence is achieved. Then, they fit the available spectra (mainly retrieved from SDSS DR14, but also from Bergeron et al. 1997, 2001, Subasavage et al. 2007, Limoges et al. 2013, 2015 or archival data obtained by the Montreal group) with the solid angle, \(\log({\rm H/He})\) and \(\log({\rm Ca/He})\) as free parameters and \(T_{\rm eff}\) and \(\log g\) fixed to the best photometric fit values. The resulting \(\log({\rm H/He})\), \(\log({\rm Ca/He})\) and spectroscopic \(\log g\) (as derived from the spectroscopic solid angle and \(D\) by interpolation of evolutionary models) are then fixed in a subsequent photometric fit. This whole photometric-spectroscopic sequential process is repeated until \(T_{\rm eff}\), \(\log g\), \(\log({\rm H/He})\) and \(\log({\rm Ca/He})\) arrive at steady solutions.
The comparison of Coutu et al.'s results with our best-fit parameters led to \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm spec}=0.05\) and \(\langle\Delta\log g\rangle_{\rm spec}=0.21\) dex and \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle_{\rm phot}=0.02\) and \(\langle\Delta\log g\rangle_{\rm phot}=0.01\) dex. The large difference in the spectroscopic \(\log g\) is probably related to our spectroscopic method, since, as previously mentioned, this technique fails to deliver reliable \(\log g\) values for \(T_{\rm eff}\) below \(15\,000\) K, which happens to be the case for the white dwarfs in common with Coutu et al. (2019).
Gentile Fusillo et al. (2021) compiled a catalogue of potential white dwarfs retrieved from _Gaia_ eDR3, which contains our 13 helium-dominated stars. Their white dwarf candidates were characterised by means of _Gaia_ eDR3 photometry in a similar way as described in Section 4.3: they computed the synthetic magnitudes using DA, DB and mixed hydrogen-helium models (Bergeron et al., 2011; Tremblay et al., 2011, 2014; McCleery et al., 2020) and the \(G_{\rm RP}\), \(G\) and \(G_{\rm BP}\) passbands, scaling the model spectra to the solid angle of the star using the evolutionary models of Bedard et al. (2020) and comparing with the published dereddened _Gaia_ eDR3 magnitudes19. A comparison of their photometric parameters with our spectroscopic ones leads to \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle=0.05\) and \(\langle\Delta\log g\rangle=0.21\) dex; with our SDSS+PS1 photometric ones to \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle=-0.002\) and \(\langle\Delta\log g\rangle=-0.01\) dex. Since we have also performed photometric fits to the _Gaia_ eDR3 data, we can compare our results with theirs and find \(\langle\Delta T_{\rm eff}/T_{\rm eff}\rangle=-0.01\) and \(\langle\Delta\log g\rangle=-0.01\) dex. The differences may arise from the use of different synthetic models with different chemical composition, but the use of distinct reddening values is also a possibility.
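For reference, a short Python sketch of the comparison statistic quoted throughout this section is given below. The sign convention (literature value minus ours, with the \(T_{\rm eff}\) difference normalised by our value) is our reading of the text and should be treated as an assumption; the example values are hypothetical.

```python
import numpy as np

def mean_differences(teff_lit, teff_ours, logg_lit, logg_ours):
    """Mean fractional Teff difference and mean log g difference (dex)."""
    teff_lit = np.asarray(teff_lit, dtype=float)
    teff_ours = np.asarray(teff_ours, dtype=float)
    d_teff = np.mean((teff_lit - teff_ours) / teff_ours)
    d_logg = np.mean(np.asarray(logg_lit) - np.asarray(logg_ours))
    return d_teff, d_logg

# Hypothetical parameters for three stars in common with a literature study:
print(mean_differences([16100, 15500, 12000], [15800, 15300, 11500],
                       [8.30, 8.19, 9.59], [8.18, 8.26, 7.62]))
```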
\begin{table}
\begin{tabular}{c|c c c c c c} Star & \(T_{\rm eff}\) (K) & \(\log g\) (dex) & \(\log({\rm H/He})\) (dex) & \(\log\) (Ca/He) (dex) & Synthetic spec & Refs. \\ \hline
0030+1526 & 16728 \(\pm\) 72 & \(8.30\pm 0.04\) & – & – & He & (1) \\ & 16133 \(\pm\) 77 & \(8.30\pm 0.05\) & – & – & He & (2) \\ & 16065 \(\pm\) 47 & \(8.10\pm 0.04\) & \(-4.62\pm 0.15\) & \(-7.01\pm 0.08\) & He(+H+Z) & (3) \\ & 14621 \(\pm\) 664 & \(8.00\pm 0.10\) & – & – & He & (6) \\ & 14524 \(\pm\) 649 & \(8.00\pm 0.10\) & – & – & He+H & (6) \\ & 15795 \(\pm\) 27 & \(8.18\pm 0.02\) & \(-5.01\pm 0.02\) & \(-7.60\) & He+H+Z & (7) \\ & 15285 \(\pm\) 300 & \(8.07\pm 0.04\) & –5.01 & –7.60 & He+H+Z & (8) \\
0259–0721 & 16128 \(\pm\) 124 & \(8.27\pm 0.08\) & – & – & He & (1) \\ & 15565 \(\pm\) 139 & \(8.19\pm 0.10\) & – & – & He & (2) \\ & 15433 \(\pm\) 77 & \(8.0\) & \(<-5.37\) & \(-6.77\pm 0.22\) & He(+H+Z) & (3) \\ & 13298 \(\pm\) 1263 & \(7.89\pm 0.19\) & – & – & He & (6) \\ & 13211 \(\pm\) 1293 & \(7.89\pm 0.20\) & –5.0 & – & He+H & (6) \\ & 16390 \(\pm\) 28 & \(8.26\pm 0.02\) & \(-6.04\pm 0.08\) & –6.24 & He+H+Z & (7) \\ & 14128 \(\pm\) 250 & \(8.01\pm 0.06\) & –6.14 & –6.24 & He+H+Z & (8) \\
0827+1731 & 12003 \(\pm\) 329 & \(9.59\pm 0.3\) & – & – & He & (2) \\ & 10537 \(\pm\) 382 & \(8.06\pm 0.08\) & \(-4.27\pm 0.07\) & – & He+H+Z & (5) \\ & 11544 \(\pm\) 453 & \(8.27\pm 0.08\) & – & – & He & (6) \\ & 11276 \(\pm\) 513 & \(8.23\pm 0.10\) & –5.0 & – & He+H & (6) \\ & 9397\({}^{\pm 6}_{76}\) & \(7.62\pm 0.04\) & \(-4.17\pm 0.03\) & \(-9.93\) & He+H+Z & (7) \\ & 10651 \(\pm\) 154 & \(8.09\pm 0.04\) & \(-4.17\pm 0.17\) & –9.93 & He+H+Z & (8) \\
0859+1123 & 16078 \(\pm\) 93 & \(8.20\pm 0.07\) & \(-4.39\pm 0.23\) & \(-6.35\pm 0.27\) & He(+H+Z) & (3) \\ & 16145 \(\pm\) 99 & \(8.14\pm 0.06\) & – & – & He & (4) \\ & 12964 \(\pm\) 1505 & \(7.84\pm 0.29\) & – & – & He & (6) \\ & 12861 \(\pm\) 1573 & \(7.83\pm 0.31\) & –5.0 & – & He+H & (6) \\ & 15717 \(\pm\) 63 & \(8.19\pm 0.04\) & \(-4.84\pm 0.04\) & \(-6.71\) & He(+H+Z) & (7) \\ & 15253 \(\pm\) 698 & \(8.09\pm 0.10\) & \(-4.86\) & –6.71 & He+H+Z & (8) \\
0930+0618 & 16817 \(\pm\) 73 & \(8.14\pm 0.04\) & – & – & He & (2) \\ & 16583 \(\pm\) 56 & \(8.03\pm 0.04\) & \(-4.72\pm 0.26\) & \(-6.55\pm 0.10\) & He(+H+Z) & (3) \\ & 17474 \(\pm\) 2092 & \(8.18\pm 0.21\) & – & – & He & (6) \\ & 17409 \(\pm\) 2132 & \(8.19\pm 0.21\) & –5.0 & – & He+H & (6) \\ & 15982 \(\pm\) 41 & \(8.18\pm 0.02\) & \(-4.87\pm 0.04\) & \(-7.11\) & He+H+Z & (7) \\ & 15560 \(\pm\) 380 & \(8.01\pm 0.06\) & –4.9 & –7.11 & He+H+Z & (8) \\
0944–0039 & 15522 \(\pm\) 76 & \(9.00\pm 0.01\) & – & – & He & (1) \\ & 14592 \(\pm\) 144 & \(8.82\pm 0.09\) & – & – & He & (2) \\ & 14057 \(\pm\) 62 & \(8.00\) & \(<-5.75\) & \(-7.14\pm 0.10\) & He(+H+Z) & (3) \\ & 12625 \(\pm\) 604 & \(8.13\pm 0.07\) & \(<-6.08\) & – & He+H+Z & (5) \\ & 12744 \(\pm\) 598 & \(8.11\pm 0.10\) & – & – & He & (6) \\ & 12623 \(\pm\) 634 & \(8.10\pm 0.11\) & –5.0 & – & He+H & (6) \\ & 14607 \(\pm\) 45 & \(8.76\pm 0.02\) & \(-5.87\pm 0.05\) & \(-7.58\) & He+H+Z & (7) \\ & 13113 \(\pm\) 180 & \(8.15\pm 0.04\) & –5.81 & –7.58 & He+H+Z & (8) \\
0958+0550 & 11684 \(\pm\) 83 & \(8.0\) & \(-5.62\pm 0.40\) & \(-8.75\pm 0.11\) & He(+H+Z) & (3) \\ & 12955 \(\pm\) 171 & \(8.54\pm 0.1\) & – & – & He & (4) \\ & 10960 \(\pm\) 402 & \(8.0\) & \(-5.84\pm 0.25\) & \(-8.66\pm 0.09\) & He+H+Z & (5) \\ & 10861 \(\pm\) 558 & \(7.92\pm 0.13\) & – & – & He & (6) \\ & 10540 \(\pm\) 597 & \(7.84\pm 0.15\) & \(-5.0\) & – & He+H & (6) \\ & 11428\({}^{\pm 1}_{-110}\) & \(8.22\pm 0.09\) & \(-5.82\pm 0.07\) & \(-8.89\) & He+H+Z & (7) \\ & 11201 \(\pm\) 176 & \(7.99\pm 0.06\) & \(-5.64\) & \(-8.89\) & He+H+Z & (8) \\
1013+0259 & 8512 \(\pm\) 24 & \(9.00\pm 0.01\) & – & – & He & (1) \\ & 3851 \(\pm\) 42 & \(9.09\pm 0.06\) & – & – & He & (2) \\ & 12428 \(\pm\) 1154 & \(7.97\pm 0.21\) & – & – & He & (6) \\ & 12294 \(\pm\) 1263 & \(7.96\pm 0.24\) & \(-5.0\) & – & He+H & (6) \\ & 13158 \(\pm\) 27 &
## 6 Conclusions
In this paper we have determined the atmospheric parameters of 13 white dwarfs with helium-dominated photospheres and traces of hydrogen and metals from spectroscopic and photometric data, and investigated the overall trends arising from the use of different data sets and chemical composition models.
The use of different data sets leads to contrasting results both for spectroscopic and photometric data. The differences are in all the cases greater than the uncertainties published in individual studies. These discrepancies are most likely related to calibration issues, but differences in the spectral ranges and hence the use of different absorption lines, SNR or photometric filters cannot be ruled out. In particular:
* We find mean standard deviations of 524 K, 0.27 dex and 0.31 dex for \(T_{\rm eff}\), \(\log g\) and \(\log({\rm H/He})\), respectively, when fitting model spectra to diverse spectroscopic data sets. These values are substantially larger than the purely statistical uncertainties usually reported in studies of helium-dominated white dwarfs (with or without traces of hydrogen/metals), and we consider them as a more realistic assessment of the overall uncertainties of the model atmosphere analysis of these stars. We suggest quoting them when spectroscopically characterising helium-dominated white dwarfs (with or without traces of hydrogen/metals), in particular at the cool end (\(T_{\rm eff}\leq 15\,000\) K) when only one spectroscopic data set is available.
* The photometric fits provide mean standard deviations between SDSS and PS1 data of \(\langle\sigma T_{\rm eff}\rangle=485\) K and \(\langle\sigma\log g\rangle=0.05\) dex. We encourage these values to be adopted as the minimum uncertainties when publishing atmospheric parameters from SDSS or PS1 photometry for cool helium-dominated white dwarfs (with or without traces of hydrogen/metals). The mean standard deviations become larger when _Gaia_ eDR3 data are used: \(\langle\sigma T_{\rm eff}\rangle=1210\) K and \(\langle\sigma\log g\rangle=0.13\) dex. This should be taken into account when quoting the uncertainties in the parameters derived from _Gaia_ eDR3 photometry data.
With the aim of investigating the effect of the assumed (often inaccurate) chemical composition on the best-fit atmospheric parameters, we carried out the data modelling using synthetic spectra of three different chemical compositions: (1) pure helium, (2) helium-dominated atmospheric models with traces of hydrogen (He+H) and (3) hydrogen plus metals in helium-dominated photospheres (He+H+Z). In general, pure helium model spectra result in larger \(T_{\rm eff}\) than those derived from He+H, while the \(\log g\) differences are also notable but change from spectroscopic to photometric data. The addition of metals does also affect the best-fit parameters, but the change is less dramatic than in the previous case. In particular:
\begin{table}
\begin{tabular}{c|c c c c c} \hline Star & \(T_{\rm eff}\) (K) & \(\log g\) (dex) & \(\log({\rm H/He})\) (dex) & \(\log\) (Ca/He) (dex) & Synthetic spec & Refs. \\ \hline
1109+1318 & \(16242\pm 194\) & \(8.24\pm 0.10\) & – & – & He & (2) \\ & \(16081\pm 130\) & \(8.06\pm 0.10\) & \(-3.85\pm 0.33\) & \(-6.46\pm 0.50\) & He(+H+Z) & (3) \\ & \(16722\pm 5342\) & \(8.21\pm 0.59\) & – & – & He & (6) \\ & \(16751\pm 5632\) & \(8.22\pm 0.61\) & \(-5.0\) & – & He+H & (6) \\ & \(16308\pm 62\) & \(8.25\pm 0.03\) & \(-4.01\pm 0.03\) & \(-7.51\) & He+H+Z & (7) \\ & \(15623\pm 480\) & \(8.12\pm 0.10\) & –4.05 & \(-7.51\) & He+H+Z & (8) \\
1359–0217 & \(17067\pm 104\) & \(8.12\pm 0.06\) & – & – & He & (1) \\ & \(16778\pm 123\) & \(8.18\pm 0.06\) & – & – & He & (2) \\ & \(16973\pm 60\) & \(7.83\pm 0.05\) & \(-3.33\pm 0.11\) & \(-6.49\pm 0.30\) & He(+H+Z) & (3) \\ & \(16701\pm 2238\) & \(8.07\pm 0.25\) & – & – & He & (6) \\ & \(16634\pm 2309\) & \(8.08\pm 0.25\) & \(-5.0\) & – & He+H & (6) \\ & \(16773\pm 55\) & \(8.14\pm 0.02\) & \(-3.15\pm 0.02\) & \(-7.23\) & He+H+Z & (7) \\ & \(13995\pm 285\) & \(7.78\pm 0.05\) & \(-3.16\) & \(-7.23\) & He+H+Z & (8) \\
1516–0040 & \(14961\pm 28\) & \(8.0\) & \(-4.47\pm 0.10\) & \(-7.38\pm 0.20\) & He(+H+Z) & (3) \\ & \(15264\pm 50\) & \(8.21\pm 0.01\) & – & – & He & (4) \\ & \(13006\pm 735\) & \(7.95\pm 0.10\) & \(-4.83\pm 0.08\) & \(-8.59\pm 0.10\) & He+H+Z & (5) \\ & \(13081\pm 751\) & \(7.89\pm 0.12\) & – & – & He & (6) \\ & \(12987\pm 779\) & \(7.88\pm 0.12\) & \(-5.0\) & – & He+H & (6) \\ & \(15448\pm 20\) & \(8.42\pm 0.01\) & \(-4.50\pm 0.01\) & \(-7.59\) & He+H+Z & (7) \\ & \(13193\pm 207\) & \(7.94\pm 0.03\) & \(-5.0\) & \(-7.59\) & He+H+Z & (8) \\
1627+1723 & \(15834\pm 174\) & \(7.98\pm 0.1\) & – & – & He & (2) \\ & \(15795\pm 112\) & \(8.0\) & \(<-5.02\) & \(<-6.66\) & He(+H+Z) & (3) \\ & \(16407\pm 2233\) & \(8.17\pm 0.27\) & – & – & He & (6) \\ & \(16326\pm 2299\) & \(8.17\pm 0.28\) & \(-5.0\) & – & He+H & (6) \\ & \(16134\pm 102\) & \(8.29\pm 0.05\) & \(-5.05\pm 0.07\) & \(-7.73\) & He+H+Z & (7) \\ & \(15903\pm 503\) & \(8.11\pm 0.09\) & \(-5.13\) & \(-7.73\) & He+H+Z & (8) \\
2324–0018 & \(23431\pm 697\) & \(5.01\pm 0.02\) & – & – & \(-\) & sdB (1) \\ & \(8231\pm 39\) & \(9.43\pm 0.04\) & – & – & He & (3) \\ & \(12198\pm 1303\) & \(7.66\pm 0.29\) & – & – & He & (6) \\ & \(12039\pm 1473\) & \(7.64\pm 0.33\) & \(-5.0\) & – & He+H & (6) \\ & \(14063\pm 53\) & \(8.25\pm 0.02\) & \(-3.32\pm 0.01\) & \(-8.99\) & He+H+Z & (7) \\ & \(12823\pm 325\) & \(7.66\pm 0.15\) & \(-3.33\) & \(-8.99\) & He+H+Z & (8) \\ \hline \end{tabular}
\end{table}
Table 5: Literature results from: (1) Eisenstein et al. (2006), (2) Kleinman et al. (2013), (3) Koester & Kepler (2015), (4) Kepler et al. (2015), (5) Coutu et al. (2019), (6) Gentile Fusillo et al. (2021), (7) and (8) X-shooter spectroscopic and SDSS+PS1 photometric fits presented in this paper, respectively. The sixth column states the synthetic spectra composition used in the fitting, where bracketed letters mark the estimation of those elements by independent fits (we refer to Section 2 and the main text for further details).
* The addition of hydrogen to the pure helium synthetic models (pure He \(\rightarrow\) He+H) produces a drop in the derived spectroscopic \(T_{\rm eff}\) of 300 K and a slight increase of 0.02 dex in the \(\log g\), on average. Although the addition of metals does not translate into a significant absolute change in the average spectroscopic values (\(\simeq 60\) K, no change and 0.01 dex for \(T_{\rm eff}\), \(\log g\) and \(\log(\rm H/He)\), respectively), we note it does affect the derived atmospheric parameters of each star and refer the reader to the individual results (Tables 12-14).
* As for the photometric fits, the inclusion of hydrogen (pure He \(\rightarrow\) He+H) produces a mean drop in the \(T_{\rm eff}\) and \(\log g\) of 440 K and 0.06 dex, respectively, while the addition of metals (He+H \(\rightarrow\) He+H+Z) results in an increase of \(\simeq 120\) K and 0.01 dex, on average.
The 13 white dwarfs in our sample have helium-dominated photospheres polluted with hydrogen and up to ten different metals (see Table 11). Therefore, a realistic characterisation must be based on model spectra that accurately reflect the actual chemical compositions. The above parameter differences illustrate the systematic uncertainties expected when the model grid chemical composition is not well suited for the actual data.
We also compared our spectroscopic and photometric results and find significant differences for those stars with \(T_{\rm eff}\leq 15\,000\) K. This is a well-known issue due to the poor implementation of resonance and van der Waals theories for the helium atom (see Sections 1 and 2 for more details), which affects the spectroscopic modelling but does not have an overall effect on the photometric fits, as the latter do not rely on the width and depth of the absorption lines. This can also be noticed in the literature of the white dwarfs in our sample. A future analysis, with a different sample that just contains white dwarfs above 15 000 K, will be needed to test the suitability of the different techniques in order to find the best method to characterise helium-dominated white dwarfs (with or without hydrogen/metals).
Even though there is no straightforward recipe to obtain the most realistic parameters, based on our experience, the SDSS and PS1 photometry provide consistent results for \(T_{\rm eff}\) and \(\log g\) when employing appropriate synthetic models. For the analysis of cool helium-dominated white dwarfs with spectroscopic data, we suggest ideally obtaining multiple observations to test for systematic uncertainties, in the hope that such differences in the parameters average out.
## Acknowledgements
Based on observations collected at the European Southern Observatory under ESO programmes 0100.C-0500(A) and 0101.C-0646(A). PI was supported by a Leverhulme Trust Research Project Grant. PI and BTG were supported by grant ST/T000406/1 from the Science and Technology Facilities Council (STFC). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101020057). OT was supported by a FONDECYT project 321038. This research was supported in part by the National Science Foundation under Grant No. PHY-1748958.
## Data Availability Statement
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2306.02782 | Reassembling Broken Objects using Breaking Curves | Reassembling 3D broken objects is a challenging task. A robust solution that
generalizes well must deal with diverse patterns associated with different
types of broken objects. We propose a method that tackles the pairwise assembly
of 3D point clouds, that is agnostic on the type of object, and that relies
solely on their geometrical information, without any prior information on the
shape of the reconstructed object. The method receives two point clouds as
input and segments them into regions using detected closed boundary contours,
known as breaking curves. Possible alignment combinations of the regions of
each broken object are evaluated and the best one is selected as the final
alignment. Experiments were carried out both on available 3D scanned objects
and on a recent benchmark for synthetic broken objects. Results show that our
solution performs well in reassembling different kinds of broken objects. | Ali Alagrami, Luca Palmieri, Sinem Aslan, Marcello Pelillo, Sebastiano Vascon | 2023-06-05T11:16:50Z | http://arxiv.org/abs/2306.02782v1 | # Reassembling Broken Objects using Breaking Curves
###### Abstract
Reassembling 3D broken objects is a challenging task. A robust solution that generalizes well must deal with diverse patterns associated with different types of broken objects. We propose a method that tackles the pairwise assembly of 3D point clouds, that is agnostic on the type of object, and that relies solely on their geometrical information, without any prior information on the shape of the reconstructed object. The method receives two point clouds as input and segments them into regions using detected closed boundary contours, known as breaking curves. Possible alignment combinations of the regions of each broken object are evaluated and the best one is selected as the final alignment. Experiments were carried out both on available 3D scanned objects and on a recent benchmark for synthetic broken objects. Results show that our solution performs well in reassembling different kinds of broken objects.
## 1 Introduction
Reconstructing three-dimensional broken objects is an important task in several fields such as computer graphics [8, 16], cultural heritage [14, 15], and robotics [4, 10, 21]. The growing interest in the community toward the 3D multi-part assembly task in recent years led to the development of a benchmark composed of realistically broken objects [16].
While there are numerous methods for the registration of 3D points, e.g., [2, 9, 18, 22], reassembling two parts of a broken object is a different task that usually requires registering only a partial subset of each part. Some registration methods address this issue by focusing on the low-overlap region [9]; however, accurately identifying the fractured surface region is important for performing pairwise matching over such point subsets. Indeed, the success of the reassembly depends highly on the precision of the segmentation process, and developing an algorithm that accurately identifies fractured surface regions without making assumptions about the shape of the object is challenging. To deal with this issue, prior works [1, 8, 17] adopted extraction of _breaking curves_ in an initial step, and achieved segmentation by merging vertices that are not part of the breaking curve into a single region. Other approaches adopted graph-based techniques for segmentation of point clouds, as outlined in Section 2. These have been successfully used for extracting spatial geometric attributes from 3D point cloud data [5, 6, 12].
We propose a modular and adaptable open-source1 framework that integrates geometry-based methods to effectively reassemble pairs of 3D broken objects, without making any assumptions about their type or the nature of their damage. The proposed approach offers a significant advantage in obtaining region segmentation independent of surface characteristics. This is achieved through the guidance of _breaking curves_, which are extracted using an extension of the graph-based method in [5]. We experimentally demonstrate that, if the breaking curve extraction and the subsequent segmentation steps are successfully achieved, it is possible to accomplish the registration stage with a standard registration method such as the Iterative Closest Point (ICP) [2]. We evaluated the proposed approach on a state-of-the-art synthetic benchmark as well as two real-world datasets. The results demonstrate the robustness and accuracy of the proposed method, as presented in Figure 1.
Footnote 1: The code will be released in [https://github.com/RePAIRProject/AAFR](https://github.com/RePAIRProject/AAFR).
## 2 Related Work
Figure 1: The proposed method accurately reconstructs the mug by assembling the two parts, whereas the other approaches fail drastically in this case.

**Non-learning based (geometrical) methods:** A common approach for automatic reassembly of broken 3D objects relies on fractured region matching for identifying potential pairwise matches of fragments. This involves _(i)_ segmentation of the broken objects into fractured and intact regions and _(ii)_ matching of the fractured surfaces. A conventional technique for surface segmentation is to use _region growing_, where vertices with similar attributes are combined in the same region.
The region-growing segmentation relies either on the contours or on the surface characteristics. Altantsetseg et al. [1] adopted the Fourier series to approximate the boundary contour, while Huang et al. [8] extracted the long closed cycles from a minimum spanning graph of the edge points that have persistent curvatures at multiple scales. Several works used breaking curves for aligning fragments after segmenting them [19, 23], yet they do not consider deteriorated fragments. Some other works adopted features computed on the fractured surfaces for their alignment, e.g., concave and convex regions were extracted on the fractured surfaces by Li et al. [11] and Son et al. [17], and Huang et al. [8] adopted clusters of multi-scale surface characteristics computed based on the integral invariants. Papaioannou et al. [14] conducted an exhaustive search of fractured surfaces of all fragments, rather than extracting features.
**Learning-based methods:** Another approach adopted by the recent literature involves learning-based techniques to estimate the transformation required for the reassembly of fragments. In this context, Chen et al. [3] created a synthetic dataset by breaking 3D meshes into pairs of fragments and employed a transformer-based network with a loss that is a combination of geometric shape-based and transformation matrix-based loss functions to learn pairwise alignment. The reported results highlight the high complexity of this task, given that synthetically generated fragments devoid of physical deterioration were only roughly aligned [16]. This trend is further validated by Sellan et al. [16], which introduced a physically realistic dataset of broken 3D meshes to serve as a benchmark for the reassembly task and demonstrated that baseline learning-based algorithms are insufficient for solving the multi-part assembly task.
In this work, we follow the first approach, i.e., segment the broken surfaces as in [17, 8, 1] and register each segmented broken region with an exhaustive search as in [14]. Unlike them, we use a graph-based method for detecting the breaking curves of fragments which allows segmenting regions without prior assumptions on the surface characteristics of the object, and adopt the ICP algorithm for registration.
## 3 The Proposed Approach
The proposed method has a modular workflow depicted in Figure 2, which is divided into three main parts:
1. Detecting breaking curves: the set of points which belong to a three-dimensional edge (Section 3.1),
2. Segmenting the points into a set of regions using the breaking curves (Section 3.2),
3. Registering the objects by selecting the best match among possible combinations of the segmented regions of each objects (Section 3.3).
### Breaking Curves Extraction
When dealing with the assembly of fragmented objects, it is crucial to detect borders and edges as they provide cues for the correct matching. The proposed approach starts from a 3D point cloud and detects breaking curves. A breaking curve is defined as a subset of connected points that belong to a 3D edge, as illustrated in Figure 2(b). The set of all breaking curves acts as a support for segmenting the objects into distinct regions.
Figure 2: The pipeline of the proposed approach. Two broken parts of a 3D point cloud (a bottle) are the input to the algorithm. After processing, the breaking curves are extracted and the point clouds are segmented. The registration selects the best match among the segmented parts and aligns the two input point clouds. The point cloud belongs to the DrinkBottle category of the breaking bad dataset [16].

Let \(P\) be the set of points in a point cloud. We represent \(P\) as an unweighted directed graph \(G=(V,E)\) where the set of vertices \(V\) corresponds to the set of points \(p\in P\) and the edges \(E\subseteq V\times V\) represent the neighbouring relations between the points. Since the density of the point cloud is non-uniform, we opted for a mixed approach when adding edges: we create an \(\epsilon\)-graph [13, 20] using the average distance of the \(k\) nearest neighbours computed over the entire point cloud. The \(\epsilon\) value is then computed as:
\[\epsilon=\frac{1}{|P|}\frac{1}{k}\sum_{p\in P}\sum_{q\in\mathcal{N}_{p}^{k}}|p-q|\]
Here \(P\) is the point cloud, \(p\in P\) is a 3D-point in \(\mathbb{R}^{3}\) and \(\mathcal{N}_{p}^{k}\) is the set of \(k\)-nearest neighbours of point \(p\).
After the graph is created, we compute for each node its _corner penalty_[5] defined as:
\[\omega_{co}(p)=\frac{\lambda_{2}(p)-\lambda_{0}(p)}{\lambda_{2}(p)}\]
where \(\lambda_{0}\) and \(\lambda_{2}\) are respectively the smallest and the largest of the three eigenvalues of the correlation matrix of the neighbours of \(p\). The eigenvalues of the correlation matrix provide the level of skewness of the ellipsoid enclosing the points. Intuitively, if the point \(p\) lies on a flat area (i.e. the surface), one would have \(\lambda_{2}\approx\lambda_{1}\) and \(\lambda_{0}\approx 0\), while if the point lies on a corner, the eigenvalues should approximately be the same (\(\lambda_{2}\approx\lambda_{1}\approx\lambda_{0}\)) [5]. If the corner penalty tends to \(1\), the node is likely to be on a flat area. We select all nodes whose corner penalty is less than a threshold to obtain a noisy initial version of the _breaking curves_. The final version is obtained after applying a refinement step similar to the morphological operation of opening. A pruning step is followed by a dilation to remove small isolated branches and promote the creation of closed breaking curves. Given a point cloud \(P\) we define \(\mathcal{B}^{P}\) as the set of points in \(P\) that are part of a breaking curve.
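A minimal Python sketch of these steps is given below (not the authors' released code): it computes the neighbourhood scale \(\epsilon\), the per-point corner penalty from the eigenvalues of the neighbourhood covariance, and flags the noisy initial breaking-curve candidates. The value of \(k\) and the 0.9 threshold are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def epsilon_scale(points: np.ndarray, k: int = 16) -> float:
    """Average distance to the k nearest neighbours over the whole cloud."""
    dists, _ = cKDTree(points).query(points, k=k + 1)
    return float(dists[:, 1:].mean())  # drop the zero self-distance

def corner_penalties(points: np.ndarray, k: int = 16) -> np.ndarray:
    """(lambda2 - lambda0) / lambda2 per point: ~1 on flat areas, ~0 on corners."""
    _, idx = cKDTree(points).query(points, k=k + 1)
    penalties = np.ones(len(points))
    for i, neigh in enumerate(idx):
        nbrs = points[neigh[1:]]                              # k neighbours of point i
        lam = np.linalg.eigvalsh(np.cov(nbrs, rowvar=False))  # ascending eigenvalues
        if lam[2] > 0:
            penalties[i] = (lam[2] - lam[0]) / lam[2]
    return penalties

def breaking_curve_candidates(points: np.ndarray, k: int = 16,
                              threshold: float = 0.9) -> np.ndarray:
    """Indices of the noisy initial breaking-curve points, before refinement."""
    return np.flatnonzero(corner_penalties(points, k) < threshold)
```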
### Regions Segmentation
Regions are extracted using a region-growing approach constrained by the previously extracted breaking curves. Given a point \(p\notin\mathcal{B}^{P}\) we define the \(i\)-th region \(\mathcal{R}_{i}^{P}\) and assign \(p\) to it. We consider the set of \(q\in\mathcal{N}_{p}\) and include each \(q\) in the region \(\mathcal{R}_{i}\) if \(q\notin\mathcal{B}^{P}\). This procedure is iterated until all \(p\notin\mathcal{B}^{P}\) are considered. This results in segmenting the point cloud \(P\) into several regions \(\mathcal{R}^{P}\) enclosed by the breaking curves.
The only points that remain unassigned to a region are those that belong to the breaking curves. However, the breaking curve shape can also aid in the matching phase. Thus, a \(k\)-NN voting scheme is employed to assign these points to a segmented region, i.e., if the majority of neighboring points of a breaking curve point belong to a particular region, then it is assigned to that region.
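The following sketch illustrates the constrained region growing (again an illustrative reimplementation, not the released code): a flood fill over the \(\epsilon\)-graph that never expands through breaking-curve points, which are subsequently attached by the \(k\)-NN majority vote described above.

```python
from collections import deque

def grow_regions(adjacency, boundary):
    """adjacency[i] lists the neighbours of point i; boundary holds the
    breaking-curve indices. Returns one region label per point; boundary
    points keep label -1 and are assigned afterwards by k-NN voting."""
    labels = [-1] * len(adjacency)
    region = 0
    for seed in range(len(adjacency)):
        if seed in boundary or labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            p = queue.popleft()
            for q in adjacency[p]:
                if q not in boundary and labels[q] == -1:
                    labels[q] = region
                    queue.append(q)
        region += 1
    return labels

# A 4-point chain whose point 2 lies on a breaking curve yields two regions:
print(grow_regions([[1], [0, 2], [1, 3], [2]], boundary={2}))  # [0, 0, -1, 1]
```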
### Region Matching and Registration
The final step involves aligning the fragments using the segmented regions. Given two segmented point clouds \(P\) and \(Q\), we attempt to register the regions in \(\mathcal{R}^{P}\) with the one in \(\mathcal{R}^{Q}\). To this end, we first discard regions having a number of nodes below a certain threshold. This step has two beneficial effects: reducing the computational effort and making the method more robust to noisy regions. The registration is achieved with an exhaustive search of all the remaining regions matches. Given a pair of regions \(\mathcal{R}_{i}^{P}\) and \(\mathcal{R}_{j}^{Q}\), we register them with ICP [2] and compute the Chamfer Distance (CD) as their matching score. The pair with the best score is selected and their transformation is used for the final alignment.
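A sketch of this exhaustive matching step is shown below, using Open3D (the library the experiments use for ICP; see footnote 3). The minimum region size and the ICP correspondence distance are illustrative placeholders.

```python
import copy
import numpy as np
import open3d as o3d

def chamfer(a: o3d.geometry.PointCloud, b: o3d.geometry.PointCloud) -> float:
    """Symmetric Chamfer distance between two point clouds."""
    d_ab = np.asarray(a.compute_point_cloud_distance(b))
    d_ba = np.asarray(b.compute_point_cloud_distance(a))
    return d_ab.mean() + d_ba.mean()

def best_region_match(regions_p, regions_q, min_points=500, max_corr=0.01):
    """ICP-register every admissible region pair and return the
    transformation of the pair with the lowest Chamfer distance."""
    best_score, best_T = np.inf, np.eye(4)
    for rp in regions_p:
        for rq in regions_q:
            if len(rp.points) < min_points or len(rq.points) < min_points:
                continue  # discard small, noisy regions
            reg = o3d.pipelines.registration.registration_icp(
                rp, rq, max_corr, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            moved = copy.deepcopy(rp).transform(reg.transformation)
            score = chamfer(moved, rq)
            if score < best_score:
                best_score, best_T = score, reg.transformation
    return best_T, best_score
```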
## 4 Experiments
We report the results of our method on two available datasets of both synthetic and real scanned 3D objects, and on an in-house set of scanned 3D fresco fragments from the Pompeii Archaeological Site collected under the RePAIR project2. In particular, we experimented on a subset of categories of the Breaking Bad (BBad) dataset [16] having enough variability in terms of object characteristics, and on one sample of the TU-Wien dataset [8], since this was sufficient to explore whether the proposed algorithm is capable of solving the reassembly task for different objects.
Footnote 2: For more information, please visit [https://www.repairproject.eu/](https://www.repairproject.eu/).
We compare our method against the Generative 3D Part Assembly (DGL) method proposed in [7], which was reported as the best-performing method on the BBad dataset in [16]. As a baseline, we also include ICP [2] in our evaluation3.
Footnote 3: We trained the DGL from scratch on pairs of fragments only, following the authors' implementation, and used the Open3D implementation of ICP.
Figure 3: An example of the pipeline on both synthetic (top) and real (bottom) data: after processing the original point cloud, the borders (in red) are detected and the regions are segmented accordingly (different colors).

Although other approaches for assembling 3D broken objects [1, 8, 14] exist, we do not report a comparison with them for two reasons: _i)_ these algorithms have a high dependence on particular characteristics of the broken objects, and _ii)_ they are complex to reproduce due to a large number of parameters. Moreover, they are not suitable for assembling synthetic objects, as they differentiate broken and intact regions of the objects based on the surface roughness [8] or use feature curves to complete the reassembly [14].
Although Neural Shape Mating (NSM) [3] reported promising results in the pairwise assembly task, we chose DGL as our competitor since we consider our work as a building block for the multi-part assembly task. Moreover, NSM uses an adversarial shape loss, which requires the complete object reconstruction after pairwise assembly, while our approach, as visible in Figure 4, correctly assembles incomplete broken parts with no need for the complete object reconstruction, an important step towards real-world multi-part assembly.
We followed [16] for the choice of metrics, using the root mean square error of the relative rotation and of the translation (Table 1). Note that a low registration error alone can be misleading: the broken parts may be registered so that they completely overlap, yet the solution is not satisfactory (see Figure 1.e). Additionally, we report qualitative results in Figure 4 showing that our method correctly reassembles the broken parts of real and synthetic broken objects.
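For completeness, one plausible reading of these metrics (following the conventions of [16]) is sketched below: RMSE over the relative rotation expressed as Euler angles in degrees, and RMSE over the translation components. This helper is ours, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rmse_rotation(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """3x3 rotation matrices -> RMSE of the relative Euler angles (degrees)."""
    rel = Rotation.from_matrix(R_pred.T @ R_gt)
    return float(np.sqrt(np.mean(rel.as_euler("xyz", degrees=True) ** 2)))

def rmse_translation(t_pred: np.ndarray, t_gt: np.ndarray) -> float:
    """RMSE over the components of the translation vectors."""
    return float(np.sqrt(np.mean((np.asarray(t_pred) - np.asarray(t_gt)) ** 2)))
```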
It is worth noting that our method is able to estimate the correct rotation and translation to assemble pairs of fragments across different datasets, while the other approaches fail in most of the cases.
## 5 Conclusions
We presented a robust method for the pairwise assembly of 3D broken objects which performs well across different datasets of both real and synthetic models.
The objective of this analysis is not to discuss which algorithm works better in which case, but rather to analyze the current situation. We note that: _(i)_ using an off-the-shelf approach like ICP without processing the point cloud is not a viable solution, _(ii)_ it is confirmed that the DGL method, which was the best performer for the published benchmark [16], although performing well for semantic assembly, does not work for the geometric reassembly of broken objects and _(iii)_ using a more principled geometrical approach is a safe way to assemble broken objects.
Concerning the limitations, the proposed pipeline is sensitive to the choice of the parameters. In our experiments, we used a different set of parameters for the synthetic objects and for the real ones. There is room for improvement in the robustness of different steps of the pipeline.
The proposed method is presented as a building block for reassembling objects broken into multiple parts.
Extending the reassembly task to multiple broken parts following a greedy approach is under exploration. Future works include detecting non-matching surfaces and designing more principled ways of selecting the best registration among many pairs of broken objects.
Acknowledgements: This work is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 964854.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**Relative RMSE (R)**} & \multicolumn{3}{c}{**RMSE (T)**} \\ \cline{2-7} Category & ICP [2] & DGL\({}^{\dagger}\) [7] & ours & ICP [2] & DGL\({}^{\dagger}\) [7] & ours \\ \hline BeerBottle & 57.028 & 78.933 & **1.62** & 1.104 & 0.073 & **0.02** \\ WineBottle & 54.262 & 84.699 & **1.58** & 0.743 & 0.024 & **0.02** \\ DrinkBottle & 60.253 & 70.014 & **1.89** & 1.288 & **0.008** & 0.033 \\ Bottle & 68.125 & 76.802 & **1.983** & 1.198 & 0.078 & **0.077** \\ Mug & 5.041 & 86.221 & **1.12** & 0.364 & 0.164 & **0.025** \\ Cookie & 12.594 & 85.707 & **1.96** & 0.632 & 0.159 & **0.043** \\ Mirror & 0.593 & 81.454 & **0.111** & 0.503 & 0.125 & **0.001** \\ ToyFigure & 208.333 & 87.972 & **1.98** & 4.123 & 0.159 & **0.079** \\ Statue & 105.582 & 89.605 & **0.66** & 2.159 & 0.149 & **0.003** \\ Vase & 30.756 & 82.218 & **0.592** & 1.496 & 0.109 & **0.002** \\ \hline Brick\({}^{\ddagger}\) [8] & 11.577 & 62.820 & **3.064** & 2.356 & 1.684 & **0.626** \\ Repair\({}^{\ddagger\,2}\) & 7.911 & 87.491 & **3.466** & 2.525 & **0.076** & 0.695 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Preliminary quantitative evaluation. The top rows refer to the synthetic Breaking Bad dataset [16] and the last two rows refer to real scanned objects. \({}^{\dagger}\)For DGL, we take the best value for each category. \({}^{\ddagger}\)Scanned objects, where the solution is obtained from manual alignment (Brick from the TU Wien dataset [8] and fresco fragments from the RePAIR project).
Figure 4: A qualitative overview of our results. On the left we show the reassembly of real scanned objects: (a-b) show fresco fragments from the RePAIR project\({}^{\text{2}}\) and (c-d) show the scanned brick from the TU Wien dataset [8]. On the right we show the reassembly of synthetic objects from [16]. Note that (a-d) are parts of the same object and (g-h) complete the toy figure if assembled together, acting as a starting point towards a multi-part reconstruction.
2302.10671 | Directive Explanations for Monitoring the Risk of Diabetes Onset:
Introducing Directive Data-Centric Explanations and Combinations to Support
What-If Explorations | Explainable artificial intelligence is increasingly used in machine learning
(ML) based decision-making systems in healthcare. However, little research has
compared the utility of different explanation methods in guiding healthcare
experts for patient care. Moreover, it is unclear how useful, understandable,
actionable and trustworthy these methods are for healthcare experts, as they
often require technical ML knowledge. This paper presents an explanation
dashboard that predicts the risk of diabetes onset and explains those
predictions with data-centric, feature-importance, and example-based
explanations. We designed an interactive dashboard to assist healthcare
experts, such as nurses and physicians, in monitoring the risk of diabetes
onset and recommending measures to minimize risk. We conducted a qualitative
study with 11 healthcare experts and a mixed-methods study with 45 healthcare
experts and 51 diabetic patients to compare the different explanation methods
in our dashboard in terms of understandability, usefulness, actionability, and
trust. Results indicate that our participants preferred our representation of
data-centric explanations that provide local explanations with a global
overview over other methods. Therefore, this paper highlights the importance of
visually directive data-centric explanation method for assisting healthcare
experts to gain actionable insights from patient health records. Furthermore,
we share our design implications for tailoring the visual representation of
different explanation methods for healthcare experts. | Aditya Bhattacharya, Jeroen Ooge, Gregor Stiglic, Katrien Verbert | 2023-02-21T13:40:16Z | http://arxiv.org/abs/2302.10671v1 | Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations
###### Abstract.
Explainable artificial intelligence is increasingly used in machine learning (ML) based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and explains those predictions with data-centric, feature-importance, and example-based explanations. We designed an interactive dashboard to assist healthcare experts, such as nurses and physicians, in monitoring the risk of diabetes onset and recommending measures to minimize risk. We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients to compare the different explanation methods in our dashboard in terms of understandability, usefulness, actionability, and trust. Results indicate that our participants preferred our representation of data-centric explanations that provide local explanations with a global overview over other methods. Therefore, this paper highlights the importance of visually directive data-centric explanation method for assisting healthcare experts to gain actionable insights from patient health records. Furthermore, we share our design implications for tailoring the visual representation of different explanation methods for healthcare experts.
Explainable AI, XAI, Interpretable AI, Human-centered AI, Responsible AI, Visual Analytics
Footnote 2: [https://doi.org/10.1145/3581641.3584075](https://doi.org/10.1145/3581641.3584075)
## 1. Introduction
Machine Learning (ML) based systems have been increasingly adopted in healthcare over the past few decades, in applications ranging from surgical robots to automated medical diagnostics (Safar et al., 2016). Especially for screening and monitoring of diseases such as type-2 diabetes, ML models have proven to be significant (Han et al., 2016; Sohn et al., 2017). However, most of these algorithms are "black-boxes" because the reasoning behind their predictions is unclear (Bahdan et al., 2016). Moreover, growing concerns about bias, lack of fairness, and inaccurate model predictions have limited the adoption of ML more recently (Sohn et al., 2017).
Consequently, _explainable artificial intelligence_ (XAI) has gained a lot of focus from ML practitioners, as XAI methods facilitate the interpretation and understanding of complex algorithms, thereby increasing the transparency and trust of such black-box models (Sohn et al., 2017; Sohn et al., 2017). In healthcare, XAI empowers medical experts to make data-driven decisions using ML, resulting in a higher quality of medical services (Sohn et al., 2017), and can impact trust in and reliance on such systems (Sohn et al., 2017; Sohn et al., 2017).
Existing XAI methods (Sohn et al., 2017; Sohn et al., 2017; Sohn et al., 2017) are predominantly designed for ML practitioners instead of _non-expert users_(Sohn et al., 2017), who might be specialized in a particular application domain but lack ML knowledge (Sohn et al., 2017). Yet, the effectiveness of these explanation methods has not been fully analyzed due to the lack of user studies with non-expert users (Sohn et al., 2017; Sohn et al., 2017). This gap highlights the necessity for analyzing and comparing explanation methods with healthcare professionals (HCPs) such as nurses and physicians (Han et al., 2016) as it is unclear how useful, understandable, actionable, and trustworthy these methods are for them.
Moreover, non-expert users need help to understand how to obtain a favorable outcome (Han et al., 2016; Sohn et al., 2017; Sohn et al., 2017). This emphasizes the need
to make explanations _directive_, i.e. guiding the users to take action for achieving their desired outcome (Sutton et al., 2017). Additionally, instead of _static_ explanations, non-expert users have considered _interactive_ explanations essential to support understanding and interpretation (Bhattacharya et al., 2017; Sutton et al., 2018; Sutton et al., 2019). Therefore, _visually directive explanations_ should enable non-experts not only to understand _why_ a certain outcome is predicted but also to guide them in the process of finding _how_ to obtain their desired outcome without any intervention from ML experts (Sutton et al., 2018; Sutton et al., 2019; Sutton et al., 2019; Sutton et al., 2019).
We designed a prototypical dashboard that predicts patient's risk of diabetes onset using visually directive explanations based on different explanation methods, including data-centric approaches (Bhattacharya et al., 2017), feature importance (Bhattacharya et al., 2017), and example-based methods (Bhattacharya et al., 2017). We aimed to support nurses and physicians in screening patients with undiagnosed type-2 diabetes, monitoring their conditions, and suggesting actions to control their risk of diabetes onset.
We also obtained the perspective of diabetic patients during the evaluation process as they are well aware of the risk factors of type-2 diabetes. Furthermore, some of them were recently in the pre-diabetes phase and all of them are actively in contact with their HCP. Thus, we analyzed their motivation to use such a dashboard. This paper probes into the following research questions:
* In what ways do patients and HCPs find our visually directive explanation dashboard useful for monitoring and evaluating the risk of diabetes onset?
* In what ways do HCPs and patients perceive data-centric, model-centric, and example-based visually directive explanations in terms of usefulness, understandability, and trustworthiness in the context of healthcare?
* In what ways do visually directive explanations facilitate patients and HCPs to take action for improving patient conditions?
We explored these questions through a two-phased study: first, a qualitative study on a low-fidelity click-through prototype involving 11 HCPs; and second, a mixed-methods online study for the evaluation of a high-fidelity web application prototype involving 51 patients and 45 HCPs. We analyzed the effectiveness of our different visual explanation methods and compared them in terms of understandability, usefulness, actionability, and trust (Sutton et al., 2018; Sutton et al., 2019). Our results show that our dashboard provided actionable insights to HCPs about patient health by helping them to identify important risk factors and showcase how critical the patients' conditions are. Additionally, it helped patients to self-monitor and analyze their health conditions.
This paper presents three primary research contributions. First, we present our visually directive data-centric explanation methods that are aimed to provide local explanations of the predicted risk for individual patients with a global overview of risk factors for the entire patient population. Whereas it has been shown that non-expert users prefer local explanations that justify a single decision (Bhattacharya et al., 2017), it has also been argued that these explanations rarely provide sufficient insight into the reasoning of models and the explanatory depth that non-experts require to accept and trust the decision-making of the model (Bhattacharya et al., 2017). To address this challenge, we present an approach that combines perspectives of both local and global explanation methods (Sutton et al., 2019) to provide more insight into both the model predictions and the data for non-expert users. Second, we present the design of a dashboard that combines different explanation methods based on an iterative user-centered research process. Third, based on observations of our user-centered design process and an elaborate user study, we present design implications for tailoring explanations for healthcare experts. We observed that our participants had a higher preference for our representation of data-centric explanations over other methods as they found them more informative. We also observed that participants combined multiple explanation methods, particularly for recommending actions to minimize the risk and interpreting the rationale behind the predicted risk. Based on these observations, we present design implications for tailoring directive explanations for healthcare experts.
## 2. Background and Related Work
Designing visually explainable Decision Support System (DSS) in healthcare considering different types of explanations is an active area of research in XAI (Bhattacharya et al., 2017). To contextualize our research, we first review recent research findings in the domain of visually interactive DSS in healthcare and then investigate XAI methods that provide visual explanations to end-users.
### Visually Interactive DSS in Healthcare
In healthcare, using DSSs built on the domain knowledge of medical experts has a long history (Sutton et al., 2019). Usually, such systems are rule-based logical systems developed on pre-defined rules supplied by medical experts (Bhattacharya et al., 2017). Despite the explainability offered by such systems, there are many challenges such as poor user experience and scarcity of involvement of medical experts in forming the knowledge base (Sutton et al., 2018; Sutton et al., 2019).
To overcome these challenges, modern DSSs in healthcare use ML and data-driven techniques to learn patterns from historical data and apply visualizations to facilitate prescriptive insights for medical practitioners (Sutton et al., 2018; Sutton et al., 2019; Sutton et al., 2019). ML-based DSSs are being increasingly used in healthcare for the early detection of health conditions such as undiagnosed type-2 diabetes mellitus (Sutton et al., 2019). Despite the success of such systems, the lack of transparency of advanced ML algorithms has increased the need for human-friendly explainable DSS in healthcare (Sutton et al., 2019). To mitigate these challenges, interactive interfaces have proven to improve the understanding of non-expert users (Sutton et al., 2019).
Moreover, many researchers have found additional benefits in applying XAI for clinical DSSs such as the mitigation of cognitive bias (Sutton et al., 2019). Our research work focuses on providing an explainable DSS which is interactive and personalized to meet the needs of the medical experts involved in the progressive monitoring of the risk of diabetes onset for patients.
### Exploration in Visual Explanations
Harmonizing XAI techniques with _visual explanations_ enable non-expert users to gain appropriate trust in the outcome of ML systems (Sutton et al., 2019). Recent works also suggest that exploration and contextualization of explanation methods can enhance the satisfaction and interpretability of non-expert users (Bhattacharya et al., 2017).
Model-agnostic post-hoc explanation techniques (Srivastava et al., 2017; Wang et al., 2018; Wang et al., 2019) explain black-box ML models without having any intrinsic information about the inner working of the algorithm, i.e. knowledge about inner parameters or hyper-parameters of the model. Most common model-agnostic local explanation methods like LIME (Wang et al., 2018), and SHAP (Wang et al., 2019) are feature-importance-based methods that identify the most impactful features contributing to the model's prediction (Bowe et al., 2019).
However, more recently, due to the failure of ML models trained on biased, inconsistent and poor-quality data, the ML research community is exploring data-centric approaches (Bowe et al., 2019; Wang et al., 2019). Examples of data-centric approaches are summarizing individual data instances (using common statistical methods like mean, mode, and variance), visualizing the data distribution to compare feature values of an instance to those across the remaining dataset and observing changes in model predictions through _what-if analysis_(Krause et al., 2018; Krause et al., 2018; Wang et al., 2019; Wang et al., 2019). Additionally, data-centric explanations include data-driven rule-based approaches that are adopted commonly in medical DSS for assisting health experts (Bowe et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
Additionally, researchers have used counterfactuals to provide recommendations for health-related changes (Bowe et al., 2019; Wang et al., 2019; Wang et al., 2019). Adadi and Berrada (Bowe et al., 2019) have defined counterfactual explanations as _example-based_ methods that provide the minimum conditions required to obtain an alternate decision. Although counterfactuals provide useful model-agnostic post-hoc explanations, examples generated by counterfactual algorithms can be practically infeasible, contradictory, or uncontrolled, thereby indicating a need for actionable recourse (Wang et al., 2019; Wang et al., 2019). For instance, to obtain a lower risk of diabetes, counterfactual algorithms can suggest that patients reduce their age by 30 years or alter their gender, which is practically infeasible. Yet, visually interactive counterfactuals hold great potential to produce actionable insights (Krause et al., 2018). Thus, there is an opportunity to explore a better representation of such explanation methods for achieving actionable recourse.
Moreover, as established by Bove et al. (Bove et al., 2019), exploring explainable interfaces is considered essential for the interpretation and satisfaction of end-users. The same notion is adopted in our work for considering different types of visual explanation methods.
## 3. Material and Methods
This section presents our visually directive explanation dashboard and our user-centric methodology for the design and evaluation of our prototypical dashboard. The ethical approval for our research was granted by the ethical committee of KU Leuven with the number G-2019-09-1742.
### Visually Directive Explanation Dashboard
Our explanation dashboard is designed to assist HCPs in monitoring and screening the risk of diabetes onset for patients. We enabled users to explore different kinds of visual explanations for interpreting the model prediction.
**ML Model:** We used a logistic regression algorithm from the Python scikit-learn module to train a classifier on our diabetes healthcare dataset. We achieved a training accuracy of 93% and a test accuracy of 91% (no overfitting effect). The test accuracy is considered as the overall model accuracy. The complete technical approach of data processing, model training, tuning and evaluation was conducted in Python.
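A minimal sketch of this modelling step is shown below. It is illustrative only: the file name, column names, and train/test split are assumptions (the clinical records themselves are not public in this form), and categorical fields are assumed to be numerically encoded.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

records = pd.read_csv("patient_health_records.csv")   # hypothetical file
features = ["blood_glucose", "waist_circumference", "bmi", "age",
            "gender", "physical_activity"]            # assumed columns
X_train, X_test, y_train, y_test = train_test_split(
    records[features], records["diabetes_onset"], test_size=0.2,
    stratify=records["diabetes_onset"], random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"train accuracy: {model.score(X_train, y_train):.2f}")
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")
```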
**Dataset:** Our ML model was trained on patients' electronic health records of comprehensive medical examinations conducted at 5 Slovenian primary care institutions (Wang et al., 2019). The health records include information about patients such as blood glucose, waist circumference measure, BMI, age, gender, etc., along with patient behaviors collected from the Finnish Diabetes Risk Score questionnaire (FINDRISC) to predict the risk of diabetes.
Our dashboard explained the predicted risk for an individual patient along with an overview of the health statistics of the entire patient population in the medical records. Thus, our approach provides _local explanations_ with a _global perspective_ to HCPs (Wang et al., 2019).
#### 3.1.1. **User requirements**
We conducted a preliminary one-hour focus group with four healthcare workers having active patient-care experience from the Community Healthcare Centre dr. Adolf Drolc Maribor, Slovenia, to understand their needs and challenges while monitoring the risk of diabetes in patients. We first demonstrated a dashboard based on the work of Rojo et al. (Rojo et al., 2019), AHMoSe, that explains the predicted risk of diabetes using SHAP values. Then, we did a co-design session with our participants to tailor the AHMoSe interface to meet their requirements. The session was recorded and transcribed for analyzing their feedback and responses.
From this exploratory study, we identified their struggles in monitoring patients for the onset of diabetes. We also learned that the visualizations for SHAP-based explanations in AHMoSe were too complex and detailed. Our participants wanted an easy way to find only the health factors that play a vital role in elevating the risk of diabetes for patients, instead of looking into all the features considered by the model for making predictions. We also learned from the co-design activity that the healthcare workers preferred simpler line-chart, bar-chart, and textual representations of the data that could also be used to communicate with patients.
Additionally, we asked our participants about their specific needs in this study. We summarize their responses into the following user requirements that our explanation system should meet:
1. **An interface for monitoring patients** - HCPs wanted a visual interface to quickly observe the medical records of patients as it is inconvenient and time-consuming to examine multiple medical reports to assess patient health conditions. Additionally, HCPs wanted to analyze how a specific patient is doing compared to other patients.
2. **Suggest actions to minimize the predicted risk** - HCPs wanted to use an interactive interface to suggest actions to patients to minimize their risk of diabetes onset.
3. **Increase patient awareness** - HCPs wanted the interface to show patients how critical their conditions are for creating more conscious awareness and motivating them to follow the prescribed suggestions sincerely.
Then we designed our tailored explanation dashboard which supported the following tasks for meeting the user requirements:
**T1: Monitor the risk of diabetes for patients** - Understand the current risk of diabetes onset for patients and identify if the patient's condition is more critical or not for deciding the amount of attention needed for the patient.
**T2: Propose actions to minimize the predicted risk** - Suggest actions to patients to reduce the risk of diabetes or to keep it under control.
**T3: Interpret the rationale behind the predicted risk** - Understand and explain the system's logic for the estimated risk of diabetes by identifying the health variables and their range of values that can increase or decrease the risk of diabetes.
**T4: Compare the current patient with other patients** - By comparing the health measures of the current patient with other patients, get an indication about a specific patient's situation as compared to other patients to decide the amount of attention needed.
Task **T1** aims to meet the first requirement, **T2** aims to meet the second requirement, and **T3** and **T4** aim to meet the third requirement.
#### 3.1.2. **XAI techniques and visual components**
When we analyzed our user requirements, we found that these requirements are aligned with the _explanation goals_ presented by Wang et al. (2019). Therefore, the choice of our explanation methods should facilitate learning by enabling our users to filter a small set of factors to make their observations simpler and provide them with the ability to predict and control future phenomena by generalizing these observations into a conceptual model. Wang et al. (2019) also proposed _XAI elements_ that can meet these explanation goals and recommended visualizations that can be used to present these XAI elements. Moreover, we considered _model-agnostic local_ explanation methods for explaining the predicted risk for individual patients irrespective of the ML algorithm used. We further designed visual components as illustrated in Figure 1 for the following three types of explanation methods that meet our explanation goals:
**Feature Importance explanation** - As the dashboard aimed to direct HCPs towards suggesting actions to patients for minimizing the risk, _feature-importance explanations_ enabled them to identify the most influential risk factors according to the prediction model. However, Szymanski et al. (2019) have shown that the representation of feature-importance explanations can impact the understandability and usefulness of this method. Additionally, from our preliminary focus group session, we observed that our participants did not understand simplified SHAP-based feature-importance explanations in the AHMoSe dashboard (Szymanski et al., 2019).
Our representation of directive feature-importance explanations presented in _Factors Contributing to Risk_ (**VC3**) included only the _actionable health variables_ with a percentage measure that indicated how specific features influenced the prediction. We define
Figure 1. Dashboard design of our click-through prototype. Visual explanations are provided using: Patient information with the risk prediction chart (VC1), Patient Summary (VC2), Factors Contributing to Risk (VC3), Recommendations to reduce risk (VC4), Risk recovery (VC5).
_actionable health variables_ as variables that can be controlled by the patient, such as BMI, waist circumference, and physical activity level. We considered other factors that are infeasible for the patient to alter, such as age, gender, and geographical region, as _non-actionable health variables_.
The feature-importance scores are calculated using the SHAP Python module. Factors that can increase risk are displayed in red, while those which can decrease risk are displayed in green. Subtle explanations are provided beside each health variable by comparing the feature value with the recommended range to indicate why it can increase or decrease risk.
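The sketch below shows one way such percentage measures and color assignments could be derived with the SHAP Python module. The model handle, feature names, and the actionable-variable list are hypothetical placeholders, and a tree-based classifier is assumed; the paper does not specify these details.

```python
import shap  # the SHAP Python module used for the feature-importance scores

# Hypothetical placeholders: `model` is the trained risk classifier and
# `patient_row` is a one-row pandas DataFrame with the model's input features.
ACTIONABLE = ["bmi", "waist_circumference", "blood_glucose", "physical_activity"]

explainer = shap.TreeExplainer(model)              # assumes a tree-based classifier
# Contributions towards the "at risk" class; indexing may differ across SHAP versions.
contributions = explainer.shap_values(patient_row)[1][0]

total = sum(abs(v) for v in contributions)
for feature, value in zip(patient_row.columns, contributions):
    if feature not in ACTIONABLE:                  # VC3 lists actionable variables only
        continue
    share = 100 * abs(value) / total               # percentage influence on the prediction
    color = "red" if value > 0 else "green"        # increases vs. decreases the risk
    print(f"{feature}: {share:.0f}% ({color})")
```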
**Data-Centric explanations** - Data-centric explanations included in our dashboard aimed to explain why a certain health variable is increasing or decreasing the risk of diabetes without focusing on what the model considered important. _Patient information with risk prediction chart_ (**VC1**), _Patient Summary_ (**VC2**), and _Risk Recovery_ (**VC5**) provided data-centric explanations in our dashboard.
**VC1** gave a textual overview of the health information of a particular patient with a doughnut chart showing the predicted risk of diabetes onset. To enable HCPs to observe the variance in the predicted risk between different patients, we categorized the probability score (_prob_) generated by our ML classifier into three levels: _High (prob > 0.75)_, _Moderate (0.5 \(\leq\) prob \(\leq\) 0.75)_ and _Low (prob < 0.5)_. The abstracted level is displayed in the center of the doughnut chart, and the risk percentage (_prob * 100%_) as a tooltip. Moreover, we used consistent color coding across the different visual components (_red_ for _high_, _orange_ for _moderate_, and _green_ for _low_ risk). We also provided subtle indicators like colored arrows for numerical feature variables and colored text for categorical feature variables, along with necessary tooltips to indicate whether the specific health variable is within the recommended range. However, these visual indicators are only provided for _actionable health variables_.
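A minimal sketch of this categorization and the shared color convention, with the thresholds taken from the text (the function name is ours):

```python
def risk_level(prob: float) -> tuple[str, str]:
    """Map the classifier's probability score to the abstracted risk level and
    to the color coding used consistently across all visual components."""
    if prob > 0.75:
        return "High", "red"
    if prob >= 0.5:
        return "Moderate", "orange"
    return "Low", "green"

level, color = risk_level(0.82)       # -> ("High", "red")
tooltip = f"{0.82 * 100:.0f}%"        # risk percentage shown as a tooltip
```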
**VC2** showed the value of each actionable health variable of a patient along with data-distribution charts, considering the entire patient population in the records. This component enabled HCPs to have a quick overview of a specific patient as compared to other patients and observe common patterns across all patients using the data-distribution charts. The health variables used in the data-distribution charts are further segregated into patient _measures_, which are visually represented with area charts as these are continuous variables, and _behaviors_, which are visually represented with bar charts as these are categorical variables. **VC2** also showed the recommended ranges for the patient measures as configured by the health experts. It can be used to observe the health status of a patient in comparison to other patients. The data-distribution zone where the current health measure lies in the distribution chart is also color-coded with our consistent color-coding convention.
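As an illustration, a data-distribution chart of this kind for one continuous measure could be sketched as follows; the population values, patient value, and recommended range are made-up examples rather than values from our dataset.

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up example data: BMI across the patient population and for one patient.
population_bmi = np.random.default_rng(1).normal(27, 4, 500)
patient_bmi, recommended = 31.5, (18.5, 25.0)

counts, edges = np.histogram(population_bmi, bins=30, density=True)
centers = (edges[:-1] + edges[1:]) / 2
plt.fill_between(centers, counts, alpha=0.4)                 # area chart: continuous measure
plt.axvspan(*recommended, color="green", alpha=0.15,
            label="recommended range")                       # range set by health experts
plt.axvline(patient_bmi, color="red", label="this patient")  # red: outside the range
plt.xlabel("BMI")
plt.legend()
plt.show()
```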
**VC5** enabled progressive monitoring of the predicted risk of diabetes considering the historical patient records. It was designed to allow users to identify if the patient's condition is improving or not over a period of time.
**Counterfactual explanations** - _Recommendations to reduce risk_ (**VC4**) suggested actions using counterfactual explanations generated by the DiCE framework (Zhou et al., 2017) that patients can take to reduce the predicted risk of diabetes. To mitigate the drawbacks of the counterfactual algorithm implemented in the DiCE framework (Zhou et al., 2017), we considered generating counterfactuals for only actionable health variables instead of non-actionable variables.
We also added data-driven boundary conditions so that counterfactuals with absurd alterations are avoided. Furthermore, the recommendations are presented as textual statements instead of discrete numbers for easier interpretation. We considered having these textual recommendations pre-configured for the patient behaviors that are categorical features of the model. For example, instead of suggesting the patient to increase their physical activity level from low to moderate, the visual recommends they exercise daily for 30 minutes.
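A sketch of how such constrained counterfactuals can be requested from DiCE, using its `features_to_vary` and `permitted_range` arguments; the dataframe, model, feature names, and boundary values below are illustrative assumptions rather than the paper's exact configuration.

```python
import dice_ml

# Hypothetical placeholders: `df` holds the training records with an `at_risk`
# outcome column, `model` is the trained scikit-learn classifier, and
# `patient_row` is the patient being explained.
data = dice_ml.Data(dataframe=df,
                    continuous_features=["bmi", "waist_circumference", "blood_glucose"],
                    outcome_name="at_risk")
ml_model = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, ml_model)

# Vary only actionable variables, with data-driven bounds (values illustrative)
# so that counterfactuals with absurd alterations are avoided.
counterfactuals = explainer.generate_counterfactuals(
    patient_row,
    total_CFs=3,
    desired_class="opposite",
    features_to_vary=["bmi", "waist_circumference", "blood_glucose",
                      "physical_activity"],
    permitted_range={"bmi": [18.5, 35.0], "blood_glucose": [3.9, 7.0]},
)
counterfactuals.visualize_as_dataframe()
```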
**VC4** also included an indication of feasibility (_easy_ or _difficult_) and an estimated measure of risk reduction using _sensitivity analysis_(Krause et al., 2017; Krause et al., 2017; Krause et al., 2017) to compare between different recommended tasks. For continuous numerical features of the model, feasibility is measured by calculating the percentage change between recommended measure value and the current health measure. If the percentage change is within \(\pm 10\%\), feasibility is labeled as _easy_. Otherwise, it is considered as _difficult_. For categorical model features, ordinal integer encoding is done based on how the specific value can increase or decrease the risk of diabetes, and feasibility is considered based on the encoded ordinal values. For instance, _physical activity level_ is a categorical variable having three possible values: _low_, _moderate_, and _high_. _Low_ physical activity can increase the risk, and hence the corresponding ordinal value of 1 is assigned to it. If the value is _moderate_, an ordinal value of 2 is assigned, and if it is _high_ an ordinal value of 3 is assigned. Any change to immediate ordinal value is considered _easy_. For instance, a change from _low_ to _moderate_ is considered _easy_. But otherwise, it is considered as _difficult_. With this approach, we aimed to make counterfactual explanations more useful and actionable in a controlled way.
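The feasibility labeling described above can be summarized in a few lines; the ordinal encoding shown is the paper's physical-activity example, while the function names are ours.

```python
def feasibility_numeric(current: float, recommended: float) -> str:
    """Feasibility for a continuous health measure, based on the percentage
    change between the current and the recommended value."""
    change = abs(recommended - current) / abs(current) * 100
    return "easy" if change <= 10 else "difficult"

# Ordinal encoding ordered by how each value affects the risk
# (the paper's physical-activity example).
ACTIVITY_ORDINAL = {"low": 1, "moderate": 2, "high": 3}

def feasibility_categorical(current: str, recommended: str) -> str:
    """A change to the immediate ordinal value is easy; larger jumps are difficult."""
    step = abs(ACTIVITY_ORDINAL[recommended] - ACTIVITY_ORDINAL[current])
    return "easy" if step <= 1 else "difficult"

print(feasibility_numeric(29.0, 27.5))          # ~5.2% change -> "easy"
print(feasibility_categorical("low", "high"))   # two steps    -> "difficult"
```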
### Evaluation Process
We evaluated our prototype in a two-phased user study. First, a qualitative study for evaluating a low-fidelity click-through prototype was conducted through individual interviews involving 11 HCPs. The goal of this study was to get early feedback on how our primary users (HCPs like nurses and physicians) perceive the dashboard and whether the user requirements are being met or not.
Second, we conducted a mixed-methods study involving 45 HCPs and 51 diabetic patients through online survey questionnaires. The goal of this study was to evaluate the effectiveness of the dashboard in meeting the needs of HCPs and patients. We also compared the different explanation methods represented by our five visual components in terms of understandability, usefulness, actionability, and trustworthiness. The patient's perspective was also collected from this study as our dashboard would directly or indirectly impact them.
The main similarity between these two user studies is that our participants were given similar task-based questions about the four supported tasks (**T1**, **T2**, **T3**, **T4**) by our prototype in both studies. Regarding the differences, our first study involved only HCPs, and we recorded the measures of our qualitative evaluation of the visual components through 1:1 interviews. In our second study, we involved a larger pool of participants (including patients) to
evaluate our high-fidelity prototype. We recorded our measures of the slightly modified visual components (Figure 3) using a combination of participant-reported qualitative responses and self-reported Likert scale responses.
## 4. Evaluation and analysis of low-fidelity prototype
For our first study, we designed a click-through prototype in Figma (14) in multiple iterations. Figure 1 illustrates the final design of our low-fidelity click-through prototype.
### Participants
We conducted a qualitative study involving 11 HCPs (male: 3, female: 8) from the University of Maribor to evaluate our low-fidelity prototype. We recruited participants who had backgrounds in nursing and patient care. Participants recruited for this study were from the same country as the participants of our focus group discussion but belonged to two different institutions. Table 1 presents the demographic information of our participants. Our participants reported having experience in diverse specialization areas of healthcare like surgical nursing, interventional radiology, plastic and reconstructive surgery, pediatrics, orthopedics, preventive care, and others. Only one participant had explicit experience in the preventive care of diabetes patients. Ten participants had at least one year of experience in looking after patients and having frequent direct interactions with patients. One of the participants was only involved in the training of nurses and did not have any active interaction with patients.
### Procedure
The study was conducted through semi-structured individual interviews that lasted between 45 and 70 minutes and were recorded and transcribed. During each interview, we introduced our click-through prototype to our participants with brief contextual information.
Our participants were first asked to explore the prototype, and then asked questions based on the four tasks (**T1, T2, T3, T4**) supported by the prototype. Each question was followed by necessary follow-up questions about each visual component to understand how effective each component was for performing the given tasks.
We performed a thematic analysis of the qualitative data following the six-phase method of Braun and Clarke (Braun and Clarke, 2017). We first reviewed the transcripts of the recorded interviews. Then, we created a list of initial codes from the data. In multiple iterations, we grouped the identified codes into potential themes. After reviewing the preliminary set of themes, we formed a definitive set of themes and accordingly grouped them to analyze how each participant succeeded or failed to answer the task-based questions.
To preserve the anonymity of the participants while presenting the results of this study, we refer to the participant as P(N), where N is a particular participant from 1 to 11. We only made necessary grammatical corrections to the participants' quotes when presenting the results.
### Observation and Results
As shown in Table 2, 6 participants failed to answer **T3(Q2)** and 5 failed to answer **T4(Q1)**, thereby indicating that performing tasks **T3** and **T4** was difficult for our participants using this prototype. Also, for **T1(Q2)**, 3 participants failed to answer correctly, and 1 participant could not answer **T2(Q1)**. However, all our participants could successfully answer **T1(Q1)** and **T3(Q1)**.
Figure 2 shows the preferred visuals for each task and compares all the visual components in terms of understandability, usefulness, actionability, and trust, according to the participants' responses. Although each visual was designed to support a specific set of tasks, it was observed that for all the tasks there was a high preference for the _Patient Summary_ (**VC2**), as the majority found it more informative because of the graphical representation of the data with clear color-coding. More than half of the participants considered all the visual components useful. But the textual patient information with the risk chart presented in **VC1** was considered the most useful, followed by the graphical representation of patient data in **VC2** and the recommendations in **VC4**. In terms of actionability, **VC2** and **VC4** were considered the most actionable. **VC1** was also considered the most trustworthy, while only 5 participants considered the risk recovery information in **VC5** trustworthy, as the others either found it difficult to interpret or did not consider it important.
We relate our observations with the following themes generated using our thematic analysis process.
_Visualizations facilitate quick understanding:_ All the participants were positive about the entire dashboard but the visual representation of the patient records enabled them to understand the patient's condition quickly. Most of them connected the usefulness and understandability of the visual components with graphical and color-coded representations. For example, P10 stated, _"It's clearly indicated that the risk is high. I think it's very colorfully indicated and that's good because it looks like it's marked in colors from green, orange, and red"_. This justifies why the _Patient Summary_ (**VC2**) was highly preferred by our participants.
_Including textual annotations can improve understandability:_ Although the _Patient Summary_**VC2** was highly preferred, some participants expressed difficulty in interpreting the data-distribution charts in **VC2**: _"I'm not sure about the distribution [in patient summary] and what does that tell... what does the peak mean?"_(P9). This justifies why many participants failed to answer **T4(Q1)** correctly. They preferred more textual representation provided in **VC1**
\begin{table}
\begin{tabular}{l l} \hline \hline & **Participant distribution** \\ \hline Gender & 3 : Male \\ & 8 : Female \\ \hline Age group & 8 : (21 - 30) years \\ & 1 : (31 - 40) years \\ & 1 : (41 - 50) years \\ & 1 : (51 - 60) years \\ \hline Highest education level & 11 : Master’s degree \\ \hline Experience in patient care & 10 : \(\geq\) 1-year experience \\ & 1 : no active patient interaction \\ \hline \hline \end{tabular}
\end{table}
Table 1. Participants’ information for the qualitative evaluation of the low-fidelity prototype.
and **VC3**. However, it is important to keep the balance between the amount of textual and graphical representations, as too much textual information can affect the usefulness of the visual: "_I think it is very important [to have graphical representations]... not just numbers, as it should tell you what will happen with the numbers. It makes it likable and useful and easy to understand_" (P6). A lack of concise textual description can also impact the interpretation of a visual: in the case of the _Risk Recovery_ (**VC5**), almost half of our participants found it difficult to interpret: "_I don't know what it [risk recovery] means. I need more text and information on this_" (P2).
_Interactive visuals increase the interpretability of explanation methods_: Most of our participants liked how they could interact with the dashboard to observe changes in the predicted risk: "_Using it [interactions] you can show them [patients] if they have lower blood sugar, then what will happen. So, you are not just telling them, they can see how it improves their health_" (P6), "_I think that when you see that way if you reduce the blood sugar, how the graph is changing, it would be a motivation for them [patients]_" (P5). Interactions allowed them to explore and easily understand visual explanations. This indicates that exploration through interactive visuals can increase the interpretability of the method. On the contrary, a less interactive visual like the _Risk Recovery_ (**VC5**) was difficult for our participants to understand: "_I don't understand the risk recovery too well. That can be improved_" (P3). As the interpretation of **VC5** was not very clear to many, they failed to answer **T1(Q2)** correctly. However, we observed that the discoverability of interactions was very difficult using the click-through prototype. This justifies why many participants failed to answer **T3(Q2)**.
_Combination of visual components_: It was observed that most participants combined two or more visuals to perform the given tasks. Particularly, we observed them combining multiple visuals when suggesting actions (**T2**). For interpreting the logic behind the predictions and drawing comparisons with other patients they mostly used the patient summary (**VC2**). Some of the participants mentioned all the visuals were useful, and it was hard for them to comment if a visual was more useful than others. Thus, they considered the entire dashboard very useful, actionable, and trustworthy. P10 stated: "_It's hard to say [which one is better] because I think all of them are very good. This [VC1] provides the basic information you need to know first. This [VC2] is like a really good summary because it shows you in detail what's going on with the patient... Well, here's the risk recovery, and you can see in the graph that risk is elevating._
\begin{table}
[Per-participant results: ✓ = answered correctly, × = failed to answer, with the visual component(s) used for correct answers. In summary, all 11 participants answered T1(Q1) and T3(Q1) correctly, while 3 failed T1(Q2), 1 failed T2(Q1), 6 failed T3(Q2), and 5 failed T4(Q1).]
\end{table}
Table 2. Observations from our first user study. The task-based questions are: T1(Q1): _What is the overall risk for the patient?_ T1(Q2): _Is the condition improving?_ T2(Q1): _What actions can you suggest to the patient to reduce the risk?_ T3(Q1): _Can you explain why the system is showing a high risk of diabetes?_ T3(Q2): _Does the system allow you to see what happens if the blood sugar is less than six instead of the current value (7.5)?_ T4(Q1): _What can you tell about the health variables of the patient as compared to other patients?_ ✓ denotes that the participant successfully answered the question, and × denotes that they failed to answer. The visual component(s) used to successfully respond to the task-based questions are mentioned in brackets.
Figure 2. Results from our qualitative evaluation of our low-fidelity prototype. (left) Chart showing the count of participants using the visual components for the task-based questions. (right) Comparison of each visual component in terms of understandability, usefulness, actionability, and trustworthiness as reported by the participants.
From risk factors, I can figure out what increased risk. This [VC4] recommends how to reduce risks. So, I would say all of them combined have their own role".
_Association of trust with visualization and underlying data_: None of the participants mentioned a lack of trust due to "complex algorithms". They could trust the system as they could see the reference patient data: "_Yeah, I trust this not because it's generated by the computer, but the overall score is evidence-based [based on patient data]_" (P9). When asked why the system predicted the risk of diabetes as high, all of them mentioned values of health variables that were higher than the recommended ranges and the red color-coding used to denote health factors that are not good for the patient. "_[Risk is high] because you see the high-risk circle and a lot of things are in color red, and we also see that the blood sugar is 7.5, much higher than the recommended range. This is the biggest indicator of diabetes_" (P11). So, their sense of trust is linked with the visual representation of the patient data. Moreover, a lack of interpretation of the visuals (as for **VC5**) affects the overall trust and hence the usefulness of the visual. Like P2 and P3, even P11 did not consider the risk recovery visual trustworthy as they did not understand it: "_I really don't know. It's something to do with months, so with time and the risk. What I don't know is what it should mean_" (P11).
_Action recommendation based on data, risk predictions, and a priori knowledge_: For suggesting actions to reduce the risk of diabetes, most participants relied on the reference data and their interpretation of how the underlying patient data is related to the overall risk. Their ability to perform interactions with the patient data to observe changes in the predicted risk helped them to suggest actions: "_if the blood sugar is lower say 5.8, the overall risk is in orange and I see that risk is lower and so moderate_" (P6). However, most of them used their a priori knowledge of the domain to suggest actions to minimize risk: "_The highest risk is [due to] the blood sugar as we can see in the chart because the red level [from VC2] is the biggest and I would recommend a diet for the blood sugar_" (P5). Even feature-importance-based **VC3** and counterfactual-based recommendations provided in **VC4** were less preferred for suggesting actions as they expressed the need to have additional information about the patient's diet, current medications, blood pressure, and other information not used by the ML model: "_Something about the patient's diet is missing in this dashboard... nutrition is not included and whether the patient is taking a specific medication for diabetes or other reasons_" (P4). However, we observed a higher preference for data-centric explanations provided through **VC2** for action recommendation compared to counterfactual explanations provided through **VC4** as they considered the recommendations to be very generic and not personalized: "_It [VC4] is useful. But it's like a general suggestion for everyone. I think they [patients] trust much more when we don't generalize them with another. This kind of recommendation would be too general for one patient_" (P5).
_Patients as potential users_: All our participants mentioned that our dashboard can be a useful tool for monitoring patient health conditions: "_I think it will be a great tool for the healthcare providers, because you can't remember everything. And if you have all the data combined in one platform, it will be very good and you can see the progress, the risk factors all at once. So, you don't have to put them together from the different lab results_" (P8). Additionally, most of them considered the dashboard to be a good source of motivation for patients to create a better awareness of their health conditions: "_Patients also need to see some graphs to realize that their health is not so good_" (P3). However, they expressed some concern about older patients directly using this dashboard: "_The younger patients would use this but for the older patients I don't know_" (P8). Even though our prototype was designed around the needs of HCPs, an interesting point about analyzing the patient's perspective was raised from their feedback. Thus, we included patients as participants along with HCPs during the evaluation of our high-fidelity prototype.
## 5. Evaluation and Analysis of High-fidelity Prototype
After analyzing the qualitative data collected from the previous study, we implemented a high-fidelity web application prototype using Meteor, React.js, HTML, and CSS. The various charts used in our interactive dashboard are developed using Chart.js, and the application is deployed using Docker. Fig. 3 presents a snapshot of our high-fidelity prototype. The prototype was evaluated through an online user study using Prolific (Vaswani et al., 2017) as our recruitment platform and Qualtrics (Vaswani et al., 2017) as our survey platform.
### Design Rationale
From our first study, we observed that some of our participants found it difficult to discover the interactive components from our click-through prototype. Thus, we added icons as explicit indicators for the visuals which supported _what-if interactions_ in our high-fidelity prototype. Additionally, we added tooltips with short descriptions for each visual component. Furthermore, the mouse cursor style was modified when the user hovered over the interactive components for encouraging them to click and observe the changes in the interactive plots.
As many participants found the _Risk Recovery_ (**VC5**) difficult to interpret, we added interactive effects on hover that highlight the risk zones of the patient. We aimed to increase user exploration through interactions to improve the understandability of this visual.
We also observed that our participants in our first study found the _Factors contributing to risk_ (**VC3**) to be less actionable than the _patient summary_ (**VC2**) and the _Recommendations to reduce risk_ visual (**VC4**). Some of our participants suggested adding what-if interactions to **VC3** to make it more actionable. Thus, we considered this feedback and modified **VC3** to support what-if interactions. We also received feedback to simplify the title of **VC3**, so we renamed it _Important Risk Factors_. Additionally, we swapped the positions of **VC3** and **VC4** in the high-fidelity prototype to improve the discoverability of the recommendations, as suggested by our participants.
Hence, we considered the feedback received through our observation and participant responses during the evaluation of our click-through prototype to improve the design of our high-fidelity prototype to effectively meet our user requirements.
### Participants
This prototype was evaluated using 45 HCPs like nurses, physicians, and medical workers and 51 diabetic patients. We recruited patients who had been diagnosed with diabetes at least 6 months prior to
our experiment, and HCPs who were actively involved in patient care. Through Prolific, the recruited participants were compensated at an hourly rate of $10 for their time.
Table 3a presents the demographic information of our HCP participants. Collectively, they had experience in dealing with patients of all types and age groups, with both acute and chronic disorders (not just diabetes) of non-critical to critical nature. The demographic information of our patient participants is presented in Table 3b. All of our patient participants were actively in contact with their HCPs and were aware of the risk factors of type-2 diabetes.
### Procedure
We first gave an overview of the prototype to our participants and suggested them to explore it on their own. Then, they were given similar task-based questions as the previous study based on the four supported tasks **(T1, T2, T3, T4)** through an online questionnaire. Based on the information shown in our prototypical dashboard, our participants were asked to identify patients with the most critical condition, their current risk level, and whether their condition is improving or not for task **T1**. For **T2**, they were asked to suggest actions to reduce the high risk for a specific patient shown on our dashboard. For **T3**, they were asked to justify why the system indicated that a specific patient had a high risk while another had a low risk. Finally, for **T4**, they were asked to compare a specific patient's health factors with the recommended range of values for the health factor and with those of the other patients.
Additionally, our participants had to justify their responses to the task-based questions. We also asked them to report their perception of the visual components in terms of understandability, usefulness, and trustworthiness through 4-point Likert scale questions. We included additional open-ended questions to gather more qualitative data about the actionability of our dashboard and their motivation for using it.
During the evaluation process, we categorized all the responses to the task-based questions into four categories: _correct with sufficient justification, correct with insufficient justification, guess / unintelligible_ and _incorrect_, similar to Lim et al. (Lim et al., 2018) as shown in Table 4. We recorded their overall response time and mouse-movements to track their interactions with our dashboard while answering the given questions. Furthermore, we analyzed the qualitative responses about the actionability of our dashboard to understand why the participants found it actionable and which component is considered most actionable. Additionally, we categorized the qualitative responses about motivation to use our dashboard as _positive_, _negative_ or _neutral_ to analyze their sentiments and understand the rationale behind their responses.
Figure 3. Dashboard design of the high-fidelity web application prototype. Visual explanations are provided using: Patient information with the risk prediction chart (VC1), Patient Summary (VC2), Important Risk Factors (VC3), Recommendations to reduce risk (VC4), Risk Recovery (VC5). This is the modified version of the dashboard used for final evaluation.
We performed hypothesis testing with a one-proportion z-test at the 5% significance level (Sutton et al., 2017) to measure the statistical significance of the correct responses to the task-based questions. We aimed for a success rate of 80% for the tasks supported by our dashboard. Thus, our null hypothesis (\(H_{0}\)) was that 80% of the participants give correct responses, while our alternative hypothesis (\(H_{A}\)) was that more than 80% give correct responses. We further noted the proportion of participants giving correct responses with sufficient and insufficient justifications. We used descriptive statistics for the evaluation of the remaining questions considering their format instead of hypothesis testing.
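For reference, the reported statistics are consistent with a standard one-proportion z-test as implemented in `statsmodels` (which uses the sample proportion for the variance); for example, for **T1** in the HCP group:

```python
from statsmodels.stats.proportion import proportions_ztest

# T1 for the HCP group: 41 of 45 correct responses; H0: p = 0.8 vs HA: p > 0.8.
z, p = proportions_ztest(count=41, nobs=45, value=0.80, alternative="larger")
print(round(z, 3), round(p, 4))  # 2.619, 0.0044 -- matching the reported values
```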
### Observation and Results
_HCP participants_: We observed that 41 HCPs (91.1%) gave correct responses for **T1** questions (_z=2.619, p=0.0044_). It was observed that 84.4% of them provided sufficient justifications, while only 6.67% of HCPs failed to provide sufficient justifications for their correct responses. They mostly used the risk recovery visual to perform this task. For questions asked for task **T2**, all 45 HCPs gave correct responses (_z=\(\infty\), p=0.00_). However, 31.1% of them failed to provide sufficient justifications. For task **T3**, 43 HCPs (95.56%) gave correct responses (_z=5.064, p=0.00_). But only 4.4% of the correct responses did not include sufficient justifications. Although most of them reported using the _patient summary_ (**VC2**) to perform tasks **T3** and **T4**, we observed them combining multiple visuals like **VC2**, **VC3** and **VC4**. This indicates that the participants preferred combining different explanation methods, i.e., data-centric, feature-importance, and counterfactual explanations, when suggesting actions and understanding the rationale behind the predicted risk. For **T4**, 33 HCPs (73.3%) gave correct responses (_z=-1.011, p=0.8441_), out of which 11.1% did not include sufficient justifications. They mostly used **VC2** to perform this task.
We only _failed to reject \(H_{0}\)_ for task **T4**, even though we observed a high proportion (73.3%) of the HCP participants giving correct responses. For the other tasks, we _rejected \(H_{0}\)_, suggesting that more than 80% of HCPs can perform these tasks correctly. On investigating why 9 HCPs (20%) gave incorrect answers to **T4** questions, we observed that, unlike the majority of the HCPs who used the _patient summary_ (**VC2**), they used the patient information provided in **VC1** to manually compare different patient records by using the patient id filter. This involved many interactions, and it was not convenient to perform the comparisons manually. Therefore, many of them failed to give the correct responses. Fig. 4 illustrates visual
\begin{table}
\begin{tabular}{l l} \hline \hline
**Correct with sufficient justification** & Correct response with all correct rules and no extra, unnecessary rules \\ \hline
**Correct with insufficient justification** & Correct response but with only some rules or extra, unnecessary rules \\ \hline
**Guess/unintelligible** & Correct response but with no reason, or with wrong interpretation of the visual(s) \\ \hline
**Incorrect** & Failed to give correct responses \\ \hline \hline \end{tabular}
\end{table}
Table 4. Grading rubric considered during evaluation of the high-fidelity prototype.
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{**Participant distribution**} \\ \hline Country & United Kingdom (12/45) \\ & South Africa (8/45) \\ & Mexico (5/45) \\ & Poland (4/45) \\ & United States, Portugal (3/45 each) \\ & Chile (2/45) \\ & Finland, Sweden, Hungary, Israel, Canada, Spain, Italy, Germany (1/45 each) \\ \hline Gender & 17 : male \\ & 27 : female \\ & 1 : non-binary \\ \hline Age group & 26 : (21 - 30) years \\ & 11 : (31 - 40) years \\ & 2 : (41 - 50) years \\ & 6 : (51 - 60) years \\ \hline Highest education level & 1 : Ph.D. \\ & 9 : Master’s degree \\ & 35 : Bachelor’s degree \\ \hline Experience in patient care & 12 : \(<\) 1 year \\ & 16 : (1 - 3) years \\ & 3 : (3 - 5) years \\ & 1 : (5 - 10) years \\ & 13 : \(>\) 10 years \\ \hline \hline \end{tabular}
\end{table}
Table 3. Information of the HCP participants recruited for the evaluation of the high-fidelity prototype.
components used by HCPs for all the task-based questions and gives an overview of their responses to all the tasks.
Fig. 5 illustrates the self-reported responses to the Likert-scale questions about understandability, usefulness, and trustworthiness of the visual components of the HCP group. This group found the _Patient Summary_ (**VC2**) the easiest to understand and the most useful. For trustworthiness, the HCPs found **VC1** to be most trustworthy. Table 5 presents the proportion of participants for each self-reported Likert scale question for both groups.
The majority (64.4%) of the HCPs considered **VC2** as the most actionable. They mentioned that the color-coded and graphical representation of the health variables helped them to easily identify risk factors that need immediate attention for suggesting actions. However, most of them mentioned using two or more visuals together for suggesting actions and understanding the logic behind the predicted risk. For instance, one of the participants mentioned: _"Given the patient summary it would make it easier to address the most pressing risk factors for a patient and what they need to work on at most"_.
Regarding the motivation for using our dashboard, 44 HCPs (97.7%) responded positively to using it as a screening tool during their consultations with patients. They mentioned using our dashboard to show the risk levels and create more awareness among patients.
_Patient participants_ : We observed that 39 patients (76.5%) gave correct responses to **T1** questions (_z=-0.594, p=0.724_). Despite 62.7% of them giving sufficient justification for their correct responses, we _failed to reject \(H_{0}\)_. But as 45 of them (88.2%) could answer **T2** questions correctly (_z=1.825, p=0.034_), we could _reject \(H_{0}\)_. However, we observed 43.14% of them failed to give sufficient justification for their correct responses. On investigating further, we observed that 58% of the older patients (>50 years) failed to give sufficient explanations for **T2** questions, indicating their understanding of the visual components used to recommend actions was not fully correct. Like **T2** questions, 45 of them (88.2%) could answer **T3** questions correctly (_z=1.825, p=0.034_). So, we could _reject \(H_{0}\)_ for **T3** questions. But we observed 66.67% of them giving sufficient justifications, while 21.57% gave insufficient justification for their correct responses. For **T4**, we observed only 33 (64.7%) could give correct responses (_z=-2.286, p=0.98_). Despite 54.9% of them giving sufficient justification for their correct responses, we _failed to reject \(H_{0}\)_. Additionally, we observed that most older patients (>50 years) struggled with **T4** as 50% of the older patients failed to give
Figure 4. Results obtained from the responses of the HCP participants. (left) Responses are categorized according to the grading rubric in Table 4 for the tasks supported. (right) Chart showing the count of the participants using the visual components to answer the task-based questions.
Figure 5. Diverging bar-charts summarizing the self-reported Likert-scale responses of the HCP group
sufficient explanations for **T4** questions. Fig. 6 illustrates visual components used by the patients for all the task-based questions and gives an overview of their responses to all the tasks.
Figure 7 illustrates the responses of the patient group to the Likert-scale questions about understandability, usefulness and trustworthiness of the visual components. This group found both **VC2** and **VC3** as the easiest to understand as compared to other visuals. However, they found **VC1** to be the most useful and **VC2** to be most trustworthy.
We observed the majority (68.6%) of the patients also considered **VC2** as the most actionable as they could easily identify the high-risk factors similar to the HCP group. Like the HCPs, even this group had mentioned combining multiple visuals together for all the given tasks. For instance, one of the patient participants mentioned: _"The Recommendations to reduce risk gives precise ways to reduce my risk
\begin{table}
\begin{tabular}{l l c c c c c c c c c c} \hline \hline
 & & \multicolumn{2}{c}{**VC1**} & \multicolumn{2}{c}{**VC2**} & \multicolumn{2}{c}{**VC3**} & \multicolumn{2}{c}{**VC4**} & \multicolumn{2}{c}{**VC5**} \\
 & & HCPs & Patients & HCPs & Patients & HCPs & Patients & HCPs & Patients & HCPs & Patients \\ \hline
**Understandability** & Very Easy to Understand & 55.5\% & 62.7\% & 64.4\% & 64.7\% & 60\% & 58.8\% & 62.2\% & 64.7\% & 46.7\% & 54.9\% \\
 & Somewhat Easy to Understand & 42.2\% & 31.3\% & 33.3\% & 29.4\% & 33.3\% & 31.3\% & 33.3\% & 29.4\% & 40\% & 33.3\% \\
 & Somewhat Difficult to Understand & 0\% & 1.9\% & 2.2\% & 1.9\% & 6.6\% & 5.8\% & 2.2\% & 1.9\% & 8.8\% & 9.8\% \\
 & Very Difficult to Understand & 2.2\% & 3.9\% & 0\% & 3.9\% & 0\% & 3.9\% & 2.2\% & 3.9\% & 4.4\% & 5.8\% \\ \hline
**Usefulness** & Always Useful & 60\% & 78.4\% & 77.7\% & 74.3\% & 66.6\% & 70.5\% & 60\% & 66.6\% & 48.8\% & 58.8\% \\
 & Sometimes Useful & 33.3\% & 15.6\% & 20\% & 19.6\% & 24.4\% & 19.6\% & 40\% & 25.4\% & 40\% & 19.6\% \\
 & Rarely Useful & 6.6\% & 1.9\% & 2.2\% & 1.9\% & 8.8\% & 5.8\% & 0\% & 1.9\% & 11.1\% & 11.7\% \\
 & Never Useful & 0\% & 3.9\% & 0\% & 3.9\% & 0\% & 3.9\% & 0\% & 5.8\% & 0\% & 9.8\% \\ \hline
**Trustworthiness** & Always Trust & 66.6\% & 74.5\% & 68.8\% & 78.4\% & 64.4\% & 68.6\% & 57.7\% & 66.6\% & 51.1\% & 58.8\% \\
 & Sometimes Trust & 31.1\% & 19.6\% & 24.4\% & 17.6\% & 33.3\% & 23.5\% & 33.3\% & 23.5\% & 42.2\% & 25.4\% \\
 & Rarely Trust & 2.2\% & 1.9\% & 6.6\% & 0\% & 2.2\% & 3.9\% & 8.8\% & 5.8\% & 6.6\% & 9.8\% \\
 & Never Trust & 0\% & 3.9\% & 0\% & 3.9\% & 0\% & 3.9\% & 0\% & 3.9\% & 0\% & 5.8\% \\ \hline \hline
\end{tabular}
\end{table}
Table 5. Results showing the proportion of participants for the self-reported responses to 4-point Likert scale questions about understandability, usefulness, and trustworthiness.
Figure 6. Results obtained from the responses of the patient participants. (left) Responses are categorized according to the grading rubric in Table 4 for the tasks supported. (right) Chart showing the count of the participants using the visual components to answer the task-based questions.
Figure 7. Diverging bar-charts summarizing the self-reported Likert-scale responses of the patient group
of diabetes. The 'Patient Summary' section showed me areas where my behavior, such as physical exercise, was too low in certain areas"_.
We observed that 48 patients (94.11%) responded positively when asked about their motivation to use this dashboard. Most of them mentioned interactive explanations increased their motivation as they could see how the risk changes on changing the values of the health variables.
## 6. Discussion
### Key Takeaways From Our User Studies
From our first user study, we collected feedback on the design of our visually directive explanation methods from HCPs. We used their feedback to make design changes discussed in Section 5.1 for our web application prototype. We also collected an early indication of the usefulness, understandability, actionability and trustworthiness of the visual components from our participants.
In our second study, we collected data from a larger pool of participants to validate our observations from the previous study. The web application used in our second study was interactive and enabled our participants to give better feedback. Also, the feedback collected from patients along with HCPs helped us to justify the effectiveness of our explanation methods included in our dashboard.
Results from our second study indicate that a significant proportion of HCPs could successfully perform all the given tasks. Most of them could provide sufficient justifications for their correct responses irrespective of their age group, domain expertise, or years of experience in patient care. However, we observed older patients (> 50 years) struggled with tasks for suggesting actions to reduce risk (**T2**) and comparing their health measures with that of other patients (**T4**). Further simplifications may be needed for this user group.
Overall, our participants used data-centric explanation based _patient summary_ (**VC2**) and _patient information_ (**VC1**) more than other visuals for all the tasks. While explaining their responses, most of them compared the current health measures with the recommended values of health variables. Color-coded representations were considered more useful than graphical representations of the data. This indicates that color-coded and interactive data-centric explanations form a vital part of our explanation dashboard for both HCPs and patients.
Unlike the HCPs, we observed a high proportion of patients (68.6%) using the counterfactual-based recommendations (**VC4**) for suggesting actions to minimize risk (**T2**). Some of them mentioned that for getting quick suggestions, they preferred the recommendation list. But to get more detail on how these recommendations are helpful, they relied on data-centric explanations.
Moreover, both HCPs and patients have mentioned combining multiple visual components, especially for suggesting actions to minimize high risk and understanding the rationale behind the predicted risk. This suggests that the limitations of any explanation method included in our dashboard can be complemented by the other methods.
### Addressing the Research Questions
**RQ1. In what ways do patients and HCPs find our visually directive explanation dashboard useful for monitoring and evaluating the risk of diabetes onset?** - Our results indicate that both HCPs and patients found our explanation dashboard very useful for monitoring and evaluating the risk of diabetes onset. As inferred from our user studies, the interactive visual components enabled our participants to explore and unravel the rationale behind the predicted risk. They mentioned that interactive and directive visuals, that show how the predicted risk changes when changing the health variables, are useful to create more awareness for the patients and drive change in their behavior.
**RQ2. In what ways do HCP and patients perceive data-centric, model-centric, and example-based visually directive explanations in terms of usefulness, understandability, and trustworthiness in the context of healthcare?** - It was observed that most of our participants justified the rationale behind the predicted risk by referring to the underlying patient data used for training the ML model and its color-coded visual representations. They mentioned about trusting the predictions as they could easily relate the prediction with the underlying data. Thus, we get an indication that data-centric explanations are more trustworthy and useful than commonly adopted model-centric feature-importance explanations, especially in healthcare.
However, as they mentioned using a combination of visual components together, the significance of feature-importance explanations cannot be neglected. Additionally, our participants have mentioned that the what-if interactions enabled them to explore our dashboard and develop a better understanding of the visual explanations. Also, our participants found the example-based counterfactual explanations important and useful when they wanted explanations in the form of recommendations.
Furthermore, our participants have shown a higher usage of our representation of data-centric explanations through the _patient summary_ (**VC2**) for performing given tasks and actions over other explanation methods as they found them more informative. However, in general, it was observed that more visual, interactive explanations with reference to the recommended range of data having concise textual descriptions are more useful to both HCPs and patients.
**RQ3. In what ways do visually directive explanations facilitate patients and HCPs to take action for improving patient conditions?** - Patients reported that interactive explanations increased their motivation of using our dashboard as a self-screening tool as they could see how the risk changes on changing the health variables. While HCPs wanted to use this dashboard for better communication with patients during their consultations. They found our dashboard to be actionable and useful as they can utilize it for guiding patients in improving their health by showing the high-risk factors. From the qualitative data collected through our user studies, we inferred that the interactive visual explanations enabled both HCPs and patients to explore how to alter the predicted outcome along with explaining the factors that could affect the predictions.
### Tailoring Directive Explanations for Healthcare Experts
We share our design implications for tailoring the visual representation of directive explanations for healthcare experts from our
observations and results. Our design implications are aligned with the recommendations from Wang et al.'s framework (Wang et al., 2018).
_Increasing actionability through interactive what-if analysis_: During the evaluation of our low-fidelity prototype, our participants highlighted that the conventional representation of feature-importance explanations (as illustrated by the _factors contributing to risk_ visual in Figure 1) was less actionable than the data-centric explanations presented through the _patient summary_ (**VC2**), as it was difficult for them to understand how the risk factors affected the predicted risk. Our modified design of this visual component (**VC3**) used in our high-fidelity prototype enabled them to perform interactive what-if analysis, i.e., it allowed them to change the feature values and observe the change in the overall prediction. Hence, we recommend the usage of interactive design elements that allow what-if analysis for representing directive explanations for HCPs. This recommendation also _supports hypothesis generation_ as proposed by Wang et al. (Wang et al., 2018).
_Explanations through actionable features instead of non-actionable features_: In our approach, we included only _actionable variables_ for visual components which supports what-if interactions and better _identification of coherent factors_(Wang et al., 2018). We anticipated that allowing the ability to alter values of non-actionable variables can create confusion for HCPs, especially for representing counterfactual explanations. Thus, we propose providing explanations through actionable variables for suggesting actions that the user can perform to obtain their desired outcomes.
_Color-coded visual indicators_: HCPs indicated that the color-coded representations of risk factors were very useful for getting quick insights. Hence, we recommend the usage of color-coded representations and visual indicators to highlight factors that can increase or decrease the predictor variable. This suggestion further facilitates Wang et al.'s recommendation (Wang et al., 2018) of _identifying coherent factors_.
_Data-centric directive explanations_: HCPs indicated that our representation of data-centric explainability through the patient summary was very informative. They could easily identify how good or bad the risk factors are for a specific patient. Additionally, they could get an overview of how other patients are doing as compared to a specific patient through the data-distribution charts. Thus, our representation of data-centric explainability provided a local explanation but with a global perspective. This suggestion is also aligned with the recommendations from Wang et al. (Wang et al., 2018) as data-centric directive explanations support _forward reasoning_ by providing _access to source and situational data_ and yet can be _easily integrated with multiple explanation methods_.
### Limitations and Future Work
In this section, we articulate some limitations of this work: (1) Our prototype used offline predictions about the overall risk generated by our model instead of real-time predictions. The use of other ML algorithms with real-time inference processes might impact the perceived utility of the tool. (2) The prototype was evaluated with HCPs with backgrounds in diverse specializations. Even though we were able to reach a wider population, it would be more helpful to evaluate this prototype with HCPs who are dedicated to taking care of patients with type-2 diabetes. (3) The importance of different explanation methods was examined jointly as part of the dashboard and not analyzed independently. Consequently, the limitations of some of the explanation methods could be concealed by the benefits of other methods. (4) Since the prototype was personalized for monitoring the risk of diabetes onset, the findings from this research may not be applicable to monitoring other diseases, as the user needs for other diseases can be very distinct.
In our future studies, we aim to focus on personalizing directive counterfactual explanations, as our participants had expressed a need for a better representation of such explanations. Additionally, we plan to analyze the utility of different explanation methods used in the dashboard in isolation.
## 7. Conclusion
In this research work, we present a directive explanation dashboard that combines visually represented data-centric, feature-importance, and counterfactual explanations for monitoring the risk of diabetes onset. Our research compared the different visual explanations in terms of understandability, usefulness, actionability, and trustworthiness with healthcare experts and patients. Our participants showed a higher preference for the visually represented data-centric explanations, which provided local explanations with a global overview, over the other methods. In particular, we observed that the color-coded risk factors and data-distribution charts in our visually directive data-centric explanations assisted healthcare experts in suggesting actions to reduce risk by easily identifying high-risk factors. Based on our results, we suggest using such data-centric explanations combined with other explanation methods. We hope our results will inspire other researchers to apply such visually directive explanations in other specialization areas of healthcare as well.
###### Acknowledgements.
We would like to thank Oscar Alvarado, Robin De Croon, Maxwell Szymanski, Houda Lamqaddam and Diego Rojo for providing helpful comments that improved this text. We also thank Lucija Gosak for helping us with participant recruitment for our first user study. This work was supported by Research Foundation-Flanders (FWO, grant G0A3319N) and KU Leuven Internal Funds (grant C14/21/072).
# On the maxima of nonstationary random fields subject to missing observations†
Footnote †: Research supported by Innovation of Jiaxing City: a program to support the talented persons, National Bureau of Statistics of China (No. 2020LY031) and Project of new economy research center of Jiaxing City (No. WYZB202254, WYZB202257).
Shengchao Zheng, Zhongquan Tan
E-mail address: [email protected]
_College of Data Science, Jiaxing University, Jiaxing 314001, PR China_
**Abstract:** Motivated by the papers of Mladenovic and Piterbarg (2006), Krajka (2011) and Pereira and Tan (2017), we study the limit properties of the maxima of nonstationary random fields subject to missing observations and obtain weak convergence and almost sure convergence results for these maxima. Some examples such as Gaussian random fields, \(\chi\)-random fields and Gaussian order statistics fields are given to illustrate the obtained results.
**Key Words:** extreme value theory; nonstationary random fields; missing observations; almost sure central limit theorem
**AMS Classification:** Primary 60G70; secondary 60G60
## 1 Introduction
Suppose that \(\{X_{n},n\geq 1\}\) is a sequence of stationary random variables with the marginal distribution function \(F(x)\) and satisfies the weak dependence condition \(D(u_{n})\) and the local dependence condition \(D^{\prime}(u_{n})\) (see e.g., Leadbetter et al. (1983) for the definitions), where \(u_{n}=u_{n}(x)=a_{n}^{-1}x+b_{n}\) is a sequence of constants with \(a_{n}>0\) and \(b_{n}\in\mathbb{R}\). If \(u_{n}(x)\) satisfies \(n(1-F(u_{n}(x)))\to-\log G(x)\), as \(n\to\infty\), then for any \(x\in\mathbb{R}\), we have
\[\lim_{n\to\infty}P\left(M_{n}\leq u_{n}(x)\right)=G(x), \tag{1}\]
where \(M_{n}=\max\{X_{k},k=1,2,\ldots,n\}\) and \(G(x)\) is one of the three types of extreme value distributions. More details on (1) can be found in the monographs (Leadbetter et al., 1983; Piterbarg, 1996). Missing observations may occur randomly and can have serious consequences in practical applications, so it is important to study their impact on the maxima in extreme value theory. Suppose \(\varepsilon_{k}\) is the indicator of the event that the random variable \(X_{k}\) is observed (in other words, \(\varepsilon_{k}\) is a Bernoulli random variable), and let \(S_{n}=\sum_{k\leq n}\varepsilon_{k}\). Furthermore, we assume that \(\{\varepsilon_{n},n\geq 1\}\) is independent of \(\{X_{n},n\geq 1\}\) with \(S_{n}\) satisfying
\[\frac{S_{n}}{n}\stackrel{{ P}}{{\longrightarrow}}\lambda,\ \ \mbox{as}\ \ \ n\to\infty,\]
where \(\lambda\) is a random or nonrandom variable.
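As a simple illustration of the random-\(\lambda\) setting (a minimal sketch of ours, not taken from the cited papers): if, conditionally on a random variable \(\Lambda\sim U(0,1)\), the indicators \(\varepsilon_{k}\) are i.i.d. Bernoulli(\(\Lambda\)), then \(S_{n}/n\to\Lambda\) by the conditional law of large numbers. The names and sizes below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def indicators_with_random_lambda(n, rng):
    """Draw Lambda ~ U(0,1); given Lambda, the indicators are i.i.d.
    Bernoulli(Lambda), so S_n / n -> Lambda (conditional LLN)."""
    lam = rng.uniform()            # the random limit lambda
    eps = rng.random(n) < lam      # epsilon_1, ..., epsilon_n
    return lam, eps

lam, eps = indicators_with_random_lambda(10**6, rng)
print(lam, eps.mean())             # the empirical frequency is close to lambda
```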
For a constant \(\lambda\in(0,1)\), Mladenovic and Piterbarg (2006) studied the joint asymptotic distribution of the maximum of a stationary sequence and of the maximum of the same sequence subject to random missing observations, under the conditions \(D(u_{n},v_{n})\) and \(D^{\prime}(u_{n})\), and derived the following result for any \(x<y\in\mathbb{R}\),
\[\lim_{n\to\infty}P\left(\widetilde{M}_{n}\leq u_{n}(x),M_{n}\leq v_{n}(y) \right)=G^{\lambda}(x)G^{1-\lambda}(y), \tag{2}\]
where \(\widetilde{M}_{n}=\max\{X_{k},\varepsilon_{k}=1,k=1,2,\ldots,n\}\) denotes the maximum subject to random missing observations. Cao and Peng (2011) and Peng et al. (2010) extended the result (2) to Gaussian cases. Generalizations of (2) to autoregressive processes and linear processes can be found in Glavas et al. (2017) and Glavas and Mladenovic (2020), respectively, and to nonstationary random fields in Panga and Pereira (2018). Tong and Peng (2011) also studied the almost sure limit theorem for these maxima and obtained
\[\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\mathbbm{1}_{\{ \widetilde{M}_{n}\leq u_{n}(x),M_{n}\leq v_{n}(y)\}}=G^{\lambda}(x)G^{1- \lambda}(y)\quad\text{ \emph{a.s.}}, \tag{3}\]
for any \(x<y\in\mathbb{R}\).
When \(\lambda\) is a random variable, Krajka (2011) obtained the following result for any \(x<y\in\mathbb{R}\)
\[\lim_{n\to\infty}P\left(\widetilde{M}_{n}\leq u_{n}(x),M_{n}\leq v_{n}(y) \right)=E[G^{\lambda}(x)G^{1-\lambda}(y)]. \tag{4}\]
Hashorva et al. (2013) extended the results of (4) to weakly and strongly dependent Gaussian sequences.
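A quick Monte Carlo sanity check of (4) can be carried out in the simplest setting. The following sketch rests on simplifying assumptions that are ours, not the cited authors': \(X_{k}\) i.i.d. standard Gumbel, so that \(u_{n}(x)=x+\log n\) and \(G(x)=\exp(-e^{-x})\) hold exactly, and \(\lambda\sim U(0,1)\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, x, y = 2000, 4000, 0.5, 1.5          # illustrative sizes, x < y
u = lambda z: z + np.log(n)                   # u_n(z) for i.i.d. standard Gumbel

hits, lams = 0, rng.uniform(size=reps)        # one random lambda per replication
for lam in lams:
    X = rng.gumbel(size=n)                    # G(x) = exp(-e^{-x}) is exact here
    eps = rng.random(n) < lam                 # observation indicators
    M_tilde = X[eps].max() if eps.any() else -np.inf
    hits += (M_tilde <= u(x)) and (X.max() <= u(y))

G = lambda z: np.exp(-np.exp(-z))
print(hits / reps, np.mean(G(x) ** lams * G(y) ** (1 - lams)))  # ~ E[G^l(x) G^{1-l}(y)]
```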
To our knowledge, the studies on the maxima subject to missing observations mainly focus on the case when \(\lambda\) is a constant. In this paper, we will extend the above limit results on the maxima subject to missing observations to nonstationary random fields when \(\lambda\) is a random variable.
We first generalize the dependence condition \(D(u_{n})\) and the local dependence condition \(D^{\prime}(u_{n})\) to nonstationary random fields. These conditions strongly rely on the random variables, the partition of the index set and the levels \(u_{n}\).
For the sake of simplicity we only deal with the two-dimensional case; the results for higher dimensions can be derived by similar arguments. We first introduce some notation and notions used in the paper. For \(\mathbf{i}=(i_{1},i_{2})\) and \(\mathbf{j}=(j_{1},j_{2})\), \(\mathbf{i}\leq\mathbf{j}\) means \(i_{k}\leq j_{k},k=1,2\), and \(\mathbf{n}=(n_{1},n_{2})\to\infty\) means \(n_{k}\to\infty,k=1,2\). Let \(\mathbf{X}=\{X_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) be a nonstationary random field and let \(\{u_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq\mathbf{1}}\) and \(\{v_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq\mathbf{1}}\) be two sequences of real numbers satisfying \(v_{\mathbf{n},\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\) for all \(\mathbf{i}\leq\mathbf{n}\). Suppose that \(P(X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}})=O(P(X_{\mathbf{j}}\leq u_{\mathbf{n},\mathbf{j}}))\) and \(P(X_{\mathbf{i}}\leq v_{\mathbf{n},\mathbf{i}})=O(P(X_{\mathbf{j}}\leq v_{\mathbf{n},\mathbf{j}}))\) for all \(\mathbf{1}\leq\mathbf{i}\neq\mathbf{j}\leq\mathbf{n}\) as \(\mathbf{n}\to\infty\). This makes the random field \(\mathbf{X}\) behave like a stationary one; the condition is necessary to deal with randomly missing data (see Remark 2.1 for details). Let \(\mathbf{R}_{\mathbf{n}}=\{1,\cdots,n_{1}\}\times\{1,\cdots,n_{2}\}\) and \(\mathbf{1}=(1,1)\), and subdivide \(\mathbf{R}_{\mathbf{n}}\) into \(k_{n_{1}}k_{n_{2}}\) disjoint rectangular subsets \(\mathbf{K}_{\mathbf{s}}=\mathbf{K}_{(s_{1},s_{2})}\), \(s_{1}=1,2,\ldots,k_{n_{1}}\), \(s_{2}=1,2,\ldots,k_{n_{2}}\), such that
\[\sum_{i\in\mathbf{K}_{\mathbf{s}}}P\left(X_{\mathbf{i}}>w_{\mathbf{n},\mathbf{ i}}\right)=\frac{1}{k_{n_{1}}k_{n_{2}}}\sum_{\mathbf{i}\in\mathbf{R}_{ \mathbf{n}}}P\left(X_{\mathbf{i}}>w_{\mathbf{n},\mathbf{i}}\right)+o(1), \tag{5}\]
as \(\mathbf{n}\to\infty\), where \(w_{\mathbf{n},\mathbf{i}}\) equals \(v_{\mathbf{n},\mathbf{i}}\) or \(u_{\mathbf{n},\mathbf{i}}\) for all \(\mathbf{i}\leq\mathbf{n}\). Note that average partitions of \(\mathbf{R}_{\mathbf{n}}\) satisfy (5) under the above assumptions.
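A minimal sketch of such an average partition (our illustration; when the sizes do not divide evenly the blocks are taken nearly equal):

```python
import numpy as np

def average_partition(n1, n2, k1, k2):
    """Split R_n = {1..n1} x {1..n2} into k1 * k2 nearly equal rectangles
    K_s; with comparable exceedance probabilities this balances the
    block sums in the sense of (5)."""
    rows = np.array_split(np.arange(1, n1 + 1), k1)
    cols = np.array_split(np.arange(1, n2 + 1), k2)
    return [(r, c) for r in rows for c in cols]

blocks = average_partition(100, 120, 5, 4)
print(len(blocks))                 # 20 rectangles of (almost) equal size
```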
**Definition 1.1.** Let \(\mathcal{F}\) be a family of rectangle subsets \(\mathbf{K_{s}}\). The nonstationary random field \(\mathbf{X}\) on \(\mathbb{Z}_{+}^{2}\) satisfies the condition \(\mathbf{D}(u_{\mathbf{n},\mathbf{i}},v_{\mathbf{n},\mathbf{i}})\) over \(\mathcal{F}\) if there exist sequences of integer valued constants \(\{k_{n_{i}}\}_{n_{i}\geq 1}\),\(\{m_{n_{i}}\}_{n_{i}\geq 1}\), \(i=1,2\) with
\[(k_{n_{1}},k_{n_{2}})\rightarrow\infty,\ \ \left(\frac{k_{n_{1}}m_{n_{1}}}{n_{1}},\frac{k_{n_{2}}m_{n_{2}}}{n_{2}}\right)\rightarrow\mathbf{0} \tag{6}\]
such that \(k_{n_{1}}k_{n_{2}}\alpha_{\mathbf{n},m_{n_{1}},m_{n_{2}}}\to 0\), as \(\mathbf{n}=(n_{1},n_{2})\rightarrow\infty\), where \(\alpha_{\mathbf{n},m_{n_{1}},m_{n_{2}}}\) is defined as
\[\begin{split}&\alpha_{\mathbf{n},m_{n_{1}},m_{n_{2}}}=\sup_{\begin{subarray}{c}(\mathbf{I}_{1}\bigcup\mathbf{I}_{2},\mathbf{J}_{1}\bigcup\mathbf{J}_{2})\\ \in\mathcal{S}(m_{n_{1}},m_{n_{2}})\end{subarray}}\left|P\left(\bigcap_{\mathbf{i}\in\mathbf{I}_{1}\bigcup\mathbf{J}_{1}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{I}_{2}\bigcup\mathbf{J}_{2}}\{X_{\mathbf{j}}\leq v_{\mathbf{n},\mathbf{j}}\}\right)-\right.\\ &\left.P\left(\bigcap_{\mathbf{i}\in\mathbf{I}_{1}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{I}_{2}}\{X_{\mathbf{j}}\leq v_{\mathbf{n},\mathbf{j}}\}\right)P\left(\bigcap_{\mathbf{i}\in\mathbf{J}_{1}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{J}_{2}}\{X_{\mathbf{j}}\leq v_{\mathbf{n},\mathbf{j}}\}\right)\right|\end{split} \tag{7}\]
with \(\mathcal{S}(m_{n_{1}},m_{n_{2}})=\{(\mathbf{I},\mathbf{J})\in\mathcal{F}^{2}: d(\pi_{1}(\mathbf{I}),\pi_{1}(\mathbf{J}))\geq m_{n_{1}}\bigvee d(\pi_{2}(\mathbf{I}),\pi_{2}(\mathbf{J}))\geq m_{n_{2}}\}\), where \(\mathbf{I}=\mathbf{I}_{1}\bigcup\mathbf{I}_{2}\), \(\mathbf{J}=\mathbf{J}_{1}\bigcup\mathbf{J}_{2}\), \(\mathbf{I}_{1}\bigcap\mathbf{I}_{2}=\varnothing\), \(\mathbf{J}_{1}\bigcap\mathbf{J}_{2}=\varnothing\). Here \(\pi_{i},i=1,2\), denote the Cartesian projections on the \(x\)-axis and \(y\)-axis, and \(d(A,B)\) denotes the distance between the sets \(A\) and \(B\).
The next local dependent condition is taken from Pereira and Tan (2017).
**Definition 1.2.** The condition \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) holds for the nonstationary random field \(\mathbf{X}\), if for each \(\mathbf{I}\in\mathcal{F}\), we have
\[k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i}\neq\mathbf{j}\in\mathbf{I}}P\left(X_{ \mathbf{i}}>v_{\mathbf{n},\mathbf{i}},X_{\mathbf{j}}>v_{\mathbf{n},\mathbf{j}} \right)\to 0,\ \ as\ \ \mathbf{n}\rightarrow\infty. \tag{8}\]
The local dependence condition \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) is an anti-clustering condition, which bounds the probability of more than one exceedance of the levels \(v_{\mathbf{n},\mathbf{i}}\) within a rectangle containing few indices.
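To see how \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) can be checked, consider the independent case (cf. Example 2.1 below). Writing \(C\) for a bound on \(n_{1}n_{2}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})\) and noting that an average partition gives \(\#(\mathbf{I})\approx n_{1}n_{2}/(k_{n_{1}}k_{n_{2}})\), a sketch of the verification is

\[k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i}\neq\mathbf{j}\in\mathbf{I}}P\left(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}}\right)P\left(X_{\mathbf{j}}>v_{\mathbf{n},\mathbf{j}}\right)\leq k_{n_{1}}k_{n_{2}}\left(\frac{n_{1}n_{2}}{k_{n_{1}}k_{n_{2}}}\right)^{2}\left(\frac{C}{n_{1}n_{2}}\right)^{2}=\frac{C^{2}}{k_{n_{1}}k_{n_{2}}}\to 0.\]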
## 2 Main results
Assume that some of the random variables in the random field \(\mathbf{X}\) can be observed, and let the sequence of random variables \(\varepsilon=\{\varepsilon_{\mathbf{i}},\mathbf{i}\geq\mathbf{1}\}\) indicate which variables in the random field \(\mathbf{X}\) are observed. Let \(S_{\mathbf{n}}=\sum_{\mathbf{i}\leq\mathbf{n}}\varepsilon_{\mathbf{i}}\) and suppose that
\[\frac{S_{\mathbf{n}}}{n_{1}n_{2}}\overset{P}{\longrightarrow}\lambda,\ \ \mbox{as}\ \ \mathbf{n} \rightarrow\infty, \tag{9}\]
where \(\lambda\) is a random variable satisfying \(0\leq\lambda\leq 1\) a.s. For any sequence of random variables \(\{\xi_{\mathbf{i}},\mathbf{i}\geq\mathbf{1}\}\), we define
\[\xi_{\mathbf{i}}(\varepsilon)=(1-\varepsilon_{\mathbf{i}})\gamma(\xi_{ \mathbf{i}})+\varepsilon_{\mathbf{i}}\xi_{\mathbf{i}},\ \ \mathbf{i}\geq\mathbf{1}, \tag{10}\]
where \(\gamma(\xi_{\mathbf{i}})=\inf\{x\in\mathbb{R}:P(\xi_{\mathbf{i}}\leq x)>0\}\).
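The effect of (10) is that unobserved coordinates are pushed down to the lower endpoint of the support, so the maximum of \(\{\xi_{\mathbf{i}}(\varepsilon)\}\) is the maximum over the observed sites only. A minimal numerical sketch of this construction (our illustration; for Gaussian variables \(\gamma=-\infty\)):

```python
import numpy as np

def censored_field(X, eps, gamma=-np.inf):
    """X_i(eps) = (1 - eps_i) * gamma + eps_i * X_i: unobserved entries
    are replaced by the lower endpoint gamma of the support, so
    max_i X_i(eps) equals the maximum over the observed sites."""
    return np.where(eps, X, gamma)

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 60))        # a toy 2-D field
eps = rng.random((50, 60)) < 0.7         # observation indicators
assert censored_field(X, eps).max() == X[eps].max()
```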
**Theorem 2.1**. Suppose that the nonstationary random field \(\mathbf{X}=\{X_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) satisfies \(\mathbf{D}(u_{\mathbf{n},\mathbf{i}},v_{\mathbf{n},\mathbf{i}})\) and \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) over \(\mathcal{F}\) and \(\sup_{\mathbf{n}\geq\mathbf{1}}\{n_{1}n_{2}P(X_{\mathbf{i}}\geq v_{\mathbf{n}, \mathbf{i}}),\mathbf{i}\leq\mathbf{n}\}\) is bounded. Assume that \(\varepsilon=\{\varepsilon_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is a sequence of indicators that is independent of \(\mathbf{X}=\{X_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) and such that (9) holds. If \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P\left(X_{\mathbf{i}}>u_{\mathbf{n}, \mathbf{i}}\right)\rightarrow\tau>0\) and \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P\left(X_{\mathbf{i}}>v_{\mathbf{n}, \mathbf{i}}\right)\rightarrow\kappa>0\) hold, then
\[\lim_{\mathbf{n}\rightarrow\infty}P\left(\bigcap_{\mathbf{i}\in\mathbf{R}_{ \mathbf{n}}}\{X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{n},\mathbf{i}}\}, \bigcap_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}\{X_{\mathbf{i}}\leq u_{ \mathbf{n},\mathbf{i}}\}\right)=E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}]. \tag{11}\]
**Remark 2.1**. Let us divide \({\bf R_{n}}\) into two parts \({\bf R_{n,1}}\) and \({\bf R_{n,2}}\) and let \(P(X_{\bf i}\leq u_{{\bf n,i}})=o(P(X_{\bf j}\leq u_{{\bf n,j}}))\) and \(P(X_{\bf i}\leq v_{{\bf n,i}})=o(P(X_{\bf j}\leq v_{{\bf n,j}}))\) for \({\bf i}\in{\bf R_{n,1}}\) and \({\bf j}\in{\bf R_{n,2}}\), respectively. Then \(\sum_{{\bf i}\in{\bf R_{n,2}}}P\left(X_{\bf i}>u_{{\bf n,i}}\right)\to\tau>0\) and \(\sum_{{\bf i}\in{\bf R_{n,2}}}P\left(X_{\bf i}>v_{{\bf n,i}}\right)\to\kappa>0\). Since we do not know the limit of \(\frac{S_{{\bf R_{n,2}}}}{n_{1}n_{2}}\), where \(S_{{\bf R_{n,2}}}=\sum_{{\bf i}\in{\bf R_{n,2}}}\varepsilon_{\bf i}\), the limit of the probability in (11) may fail to exist. This shows that the condition that \(P(X_{\bf i}\leq u_{{\bf n,i}})=O(P(X_{\bf j}\leq u_{{\bf n,j}}))\) and \(P(X_{\bf i}\leq v_{{\bf n,i}})=O(P(X_{\bf j}\leq v_{{\bf n,j}}))\) for all \({\bf 1}\leq{\bf i}\neq{\bf j}\leq{\bf n}\) as \({\bf n}\to\infty\) is necessary.
Next, we extend the above weak convergence results to an almost sure version. As usual, \(a_{\bf n}\ll b_{\bf n}\) means \(a_{\bf n}=O(b_{\bf n})\). In order to formulate the results, we need the following additional condition \({\bf D^{*}}(u_{{\bf k,j}},v_{{\bf k,i}},u_{{\bf n,j}},v_{{\bf n,i}})\).
**Definition 2.1**. The random field \({\bf X}\) on \({\mathbb{Z}}_{+}^{2}\) satisfies the condition \({\bf D^{*}}(u_{{\bf k,j}},v_{{\bf k,i}},u_{{\bf n,j}},v_{{\bf n,i}})\) if there exist sequences of integer valued constants \(\{m_{n_{i}}\}_{n_{i}\geq 1},i=1,2\), and
\[(m_{n_{1}},m_{n_{2}})\to\infty,\ \ \left(\frac{m_{n_{1}}}{n_{1}},\frac{m_{n_{2}}} {n_{2}}\right)\to{\bf 0}\]
such that for some \(\epsilon>0\)
\[\sup_{k_{1}k_{2}<n_{1}n_{2}}\alpha_{{\bf n,k},m_{n_{1}},m_{n_{2}}}^{*}\ll( \log\log n_{1}\log\log n_{2})^{-(1+\epsilon)} \tag{12}\]
as \({\bf n}=(n_{1},n_{2})\to\infty\), where \(\alpha_{{\bf n,k},m_{n_{1}},m_{n_{2}}}^{*}\) is defined as following. For any \({\bf k\neq n}\) such that \(k_{1}k_{2}<n_{1}n_{2}\) define
\[\begin{split}&\alpha_{\mathbf{n},\mathbf{k},m_{n_{1}},m_{n_{2}}}^{*}\\ &=\sup_{\mathbf{I}_{1}\subseteq\mathbf{R_{k}},\,\mathbf{I}_{2}\subseteq\mathbf{R_{n}}\backslash\mathbf{M_{kn}}}\left|P\bigg(\bigcap_{\mathbf{i}\in\mathbf{I}_{1}}\{X_{\mathbf{i}}\leq v_{\mathbf{k},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{J}_{1}}\{X_{\mathbf{j}}\leq u_{\mathbf{k},\mathbf{j}}\}\cap\bigcap_{\mathbf{i}\in\mathbf{I}_{2}}\{X_{\mathbf{i}}\leq v_{\mathbf{n},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{J}_{2}}\{X_{\mathbf{j}}\leq u_{\mathbf{n},\mathbf{j}}\}\bigg)\right.\\ &\left.\quad-P\bigg(\bigcap_{\mathbf{i}\in\mathbf{I}_{1}}\{X_{\mathbf{i}}\leq v_{\mathbf{k},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{J}_{1}}\{X_{\mathbf{j}}\leq u_{\mathbf{k},\mathbf{j}}\}\bigg)P\bigg(\bigcap_{\mathbf{i}\in\mathbf{I}_{2}}\{X_{\mathbf{i}}\leq v_{\mathbf{n},\mathbf{i}}\}\cap\bigcap_{\mathbf{j}\in\mathbf{J}_{2}}\{X_{\mathbf{j}}\leq u_{\mathbf{n},\mathbf{j}}\}\bigg)\right|\end{split}\]
where \(\mathbf{I}_{1}\subseteq\mathbf{J}_{1}=\mathbf{R_{k}}\), \(\mathbf{I}_{2}\subseteq\mathbf{J}_{2}=\mathbf{R_{n}}\backslash\mathbf{M_{kn}}\) with \(\mathbf{M_{kn}}=\{(j_{1},j_{2})\in\mathbb{N}^{2}:0\leq j_{i}\leq\#(\pi_{i}(\mathbf{M_{kn}^{*}}))+m_{n_{i}},i=1,2\}\) and \(\mathbf{M_{kn}^{*}}=\mathbf{R_{k}}\bigcap\mathbf{R_{n}}\). Here \(\#(A)\) denotes the cardinality of the set \(A\).
**Theorem 2.2**. Suppose that the nonstationary random field \({\bf X}=\{X_{\bf n},{\bf n}\geq{\bf 1}\}\) satisfies \({\bf D}(u_{{\bf n,i}},v_{{\bf n,i}})\), \({\bf D^{*}}(u_{{\bf k,j}},v_{{\bf k,i}},u_{{\bf n,j}},v_{{\bf n,i}})\) and \({\bf D^{\prime}}(v_{{\bf n,i}})\) over \({\cal F}\) and \(\sup_{{\bf n}\geq{\bf 1}}\{n_{1}n_{2}P(X_{\bf i}\geq v_{{\bf n,i}}),{\bf i}\leq{\bf n}\}\) is bounded. Assume that \(\varepsilon=\{\varepsilon_{\bf n},{\bf n}\geq{\bf 1}\}\) is a sequence of independent indicators that is independent of \({\bf X}=\{X_{\bf n},{\bf n}\geq{\bf 1}\}\) and such that (9) holds. If \(\sum_{{\bf i}\in{\bf R_{n}}}P\left(X_{\bf i}>u_{{\bf n,i}}\right)\to\tau>0, \sum_{{\bf i}\in{\bf R_{n}}}P\left(X_{\bf i}>v_{{\bf n,i}}\right)\to\kappa>0\), and \(n_{1}=O(n_{2})\) hold, then
\[\lim_{\mathbf{n}\to\infty}\frac{1}{\log n_{1}\log n_{2}}\sum_{\mathbf{k}\in\mathbf{R_{n}}}\frac{1}{k_{1}k_{2}}\mathbbm{1}\bigg\{\bigcap_{\mathbf{i}\in\mathbf{R_{k}}}\{X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}},X_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}}\}\bigg\}=E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}],\ \ a.s. \tag{13}\]
**Remark 2.2**. Theorem 2.1 of Pereira and Tan (2017) derived the almost sure limit theorem for the maxima of nonstationary random fields, but the condition \({\bf D^{*}}(u_{{\bf n,i}})\) in their paper cannot imply their main result. More precisely, for the case \(k_{1}\leq l_{1},k_{2}\leq l_{2}\), the term \(I_{2}\) in the proof of Lemma 4.1 in their paper cannot be bounded by \(\alpha_{{\bf l},m_{l_{1}},m_{l_{2}}}\), since the sets \({\bf R_{k}}\) and \({\bf R_{l}}-{\bf M_{kl}}\) do not satisfy the conditions of \({\bf D^{*}}(u_{{\bf n,i}})\). We need the technical condition \({\bf D^{*}}(u_{{\bf k,j}},v_{{\bf k,i}},u_{{\bf n,j}},v_{{\bf n,i}})\) to solve this problem. This condition is not very strict, but it involves more levels and different index sets. We will give several examples to illustrate it.
We end this section with two examples which satisfy the conditions of Theorems 2.1 and 2.2. Examples for the Gaussian case and its functions will be given as applications in Section 3.
**Example 2.1.** Suppose that \(\mathbf{X}=\{X_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is a field of independent random variables. Then the conditions \(\mathbf{D}(u_{\mathbf{n},\mathbf{i}},v_{\mathbf{n},\mathbf{i}})\), \(\mathbf{D}^{*}(u_{\mathbf{k},\mathbf{j}},v_{\mathbf{k},\mathbf{i}},u_{\mathbf{n},\mathbf{j}},v_{\mathbf{n},\mathbf{i}})\) and \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) hold.
**Example 2.2.** Suppose that \(\mathbf{X}=\{X_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is an \(m\)-dependent or strongly mixing random field. Then condition \(\mathbf{D}(u_{\mathbf{n},\mathbf{i}},v_{\mathbf{n},\mathbf{i}})\) holds.
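Since an independent field (Example 2.1) satisfies all three conditions, the limit (11) of Theorem 2.1 can be checked numerically in that case. A minimal Monte Carlo sketch, under our simplifying assumptions: i.i.d. standard Gumbel entries, levels \(v_{\mathbf{n},\mathbf{i}}=x+\log(n_{1}n_{2})\) and \(u_{\mathbf{n},\mathbf{i}}=y+\log(n_{1}n_{2})\) (so that \(\kappa=e^{-x}\) and \(\tau=e^{-y}\) in the limit), and \(\lambda\sim U(0,1)\).

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, reps, x, y = 60, 50, 4000, 0.3, 1.2   # x <= y, so v <= u
N = n1 * n2
v, u = x + np.log(N), y + np.log(N)           # N * P(X > v) -> kappa = e^{-x}, etc.

hits, lams = 0, rng.uniform(size=reps)
for lam in lams:
    X = rng.gumbel(size=(n1, n2))             # independent field, as in Example 2.1
    eps = rng.random((n1, n2)) < lam
    hits += (X[eps] <= v).all() and (X <= u).all()   # both events in (11)

kappa, tau = np.exp(-x), np.exp(-y)
print(hits / reps, np.mean(np.exp(-lams * kappa - (1 - lams) * tau)))
```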
## 3 Applications
Applying our main results, we can derive the weak and almost sure convergence results of the maxima for complete and incomplete samples for nonstationary Gaussian random fields and their functions.
Suppose in this section that \(\mathbf{Y}=\{Y_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is a sequence of standard (with mean 0 and variance 1) Gaussian random fields with covariances \(r_{\mathbf{i},\mathbf{j}}=Cov(Y_{\mathbf{i}},Y_{\mathbf{j}})\). Assume that \(r_{\mathbf{i},\mathbf{j}}\) satisfies \(|r_{\mathbf{i},\mathbf{j}}|<\rho_{|\mathbf{i}-\mathbf{j}|}\) when \(\mathbf{i}\neq\mathbf{j}\), for some sequence \(\rho_{\mathbf{n}}<1\), \(\mathbf{n}\neq\mathbf{0}\), such that
\[\rho_{(n_{1},0)}\log n_{1}\text{ and }\rho_{(0,n_{2})}\log n_{2}\text{ are bounded, and }\lim_{\mathbf{n}\to\infty}\rho_{\mathbf{n}}\log(n_{1}n_{2})=0 \tag{14}\]
or
\[\rho_{(n_{1},0)}\log n_{1} \ll(\log\log n_{1})^{-(1+\epsilon)},\ \rho_{(0,n_{2})}\log n_{2}\ll(\log\log n_{2})^{-(1+\epsilon)}\ \text{ and }\] \[\rho_{\mathbf{n}}\log(n_{1}n_{2}) \ll(\log\log n_{1}\log\log n_{2})^{-(1+\epsilon)}. \tag{15}\]
Let \(\{u_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq \mathbf{1}}\) and \(\{v_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq \mathbf{1}}\) be two sequences of real numbers satisfying that \(v_{\mathbf{n},\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\) for all \(\mathbf{i}\leq\mathbf{n}\) and \(\Phi(u_{\mathbf{n},\mathbf{i}})=O(\Phi(u_{\mathbf{n},\mathbf{j}}))\), \(\Phi(v_{\mathbf{n},\mathbf{i}})=O(\Phi(v_{\mathbf{n},\mathbf{j}}))\) for all \(\mathbf{1}\leq\mathbf{i}\neq\mathbf{j}\leq\mathbf{n}\) and sufficiently large \(\mathbf{n}\).
**Theorem 3.1.** Suppose that \(\mathbf{Y}=\{Y_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is a sequence of standard Gaussian random fields with covariances satisfying (14). Assume that \(\varepsilon=\{\varepsilon_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is a sequence of indicators that is independent of \(\mathbf{Y}\) and such that (9) holds. Let the constants \(\{u_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq\mathbf{1}}\) and \(\{v_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq\mathbf{1}}\) be such that \(\sup_{\mathbf{n}\geq\mathbf{1}}\{n_{1}n_{2}(1-\Phi(v_{\mathbf{n},\mathbf{i}})),\mathbf{i}\leq\mathbf{n}\}\) is bounded, \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}[1-\Phi(u_{\mathbf{n},\mathbf{i}})]\to\tau>0\) and \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}[1-\Phi(v_{\mathbf{n},\mathbf{i}})]\to\kappa>0\). Then, we have
\[\lim_{\mathbf{n}\to\infty}P\left(\bigcap_{\mathbf{i}\in\mathbf{R}_{\mathbf{n} }}\{Y_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{ \mathbf{i}\in\mathbf{R}_{\mathbf{n}}}\{Y_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{ i}}\}\right)=E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}].\]
Furthermore, if \(\varepsilon=\{\varepsilon_{\mathbf{n}},\mathbf{n}\geq\mathbf{1}\}\) is a sequence of independent indicators, \(n_{1}=O(n_{2})\) and (15) holds, we have
\[\lim_{\mathbf{n}\to\infty}\frac{1}{\log n_{1}\log n_{2}}\sum_{\mathbf{k}\in \mathbf{R}_{\mathbf{n}}}\frac{1}{k_{1}k_{2}}\mathbb{1}\left\{\bigcap_{i\in \mathbf{R}_{\mathbf{k}}}\{Y_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k}, \mathbf{i}},Y_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}}\}\right\}=E[e^{- \lambda\kappa}e^{-(1-\lambda)\tau}],\ \ a.s.\]
**Remark 3.1.** 1) Under the same conditions as Theorem 3.1, Tan and Wang (2014) derived the almost sure limit theorem for the maxima of complete samples of nonstationary Gaussian random fields. Theorem 3.1 extends their results to the maxima of complete and incomplete samples.
2) For the one-dimensional case, i.e., for Gaussian sequences, the condition that \(\varepsilon=\{\varepsilon_{n},n\geq 1\}\) is a sequence of independent indicators can be weakened to \(\varepsilon=\{\varepsilon_{n},n\geq 1\}\) being a sequence of strongly mixing indicators with mixing coefficient \(\alpha(n)\ll(\log\log n)^{-(1+\epsilon)}\) for some \(\epsilon>0\).
3) The one-dimensional case of Theorem 3.1 extends the main results of Tong and Peng (2011), which derived the almost sure limit theorem of the maxima for complete and incomplete samples in the stationary Gaussian case when \(\lambda\) is a constant.
Let \(a_{\mathbf{n}}=\sqrt{2\log(n_{1}n_{2})}\) and \(b_{\mathbf{n}}=a_{\mathbf{n}}-\frac{\log\log(n_{1}n_{2})+\log(4\pi)}{2a_{\mathbf{ n}}}\).
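A quick numerical check of these normalizing constants (a sketch assuming SciPy is available; convergence for Gaussian maxima is known to be slow, so the agreement is only rough):

```python
import numpy as np
from scipy.stats import norm

n1, n2 = 500, 400
N = n1 * n2
a = np.sqrt(2 * np.log(N))
b = a - (np.log(np.log(N)) + np.log(4 * np.pi)) / (2 * a)

# N * (1 - Phi(b + x / a)) should be roughly e^{-x}
for x in (0.0, 1.0, 2.0):
    print(x, N * norm.sf(b + x / a), np.exp(-x))
```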
**Corollary 3.1**. Let \(\mathbf{Z}=\{Y_{\mathbf{n}}+m_{\mathbf{n}},\mathbf{n}\geq 1\}\), where \(\{Y_{\mathbf{n}},\mathbf{n}\geq 1\}\) is defined as above and \(\{m_{\mathbf{n}},\mathbf{n}\geq 1\}\) satisfies
\[\beta_{\mathbf{n}}=\max_{\mathbf{k}\in\mathbf{R}_{\mathbf{n}}}|m_{\mathbf{k}}| =o(\sqrt{n_{1}n_{2}}), \tag{16}\]
and let \(m_{\mathbf{n}}^{*}\) be such that
\[|m_{\mathbf{n}}^{*}|\leq\beta_{\mathbf{n}} \tag{17}\]
and
\[\frac{1}{n_{1}n_{2}}\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}\exp\left(a_{ \mathbf{n}}^{*}(m_{\mathbf{i}}-m_{\mathbf{n}}^{*})-\frac{1}{2}(m_{\mathbf{i}} -m_{\mathbf{n}}^{*})^{2}\right)\to 1 \tag{18}\]
as \(\mathbf{n}\rightarrow\infty\), where \(a_{\mathbf{n}}^{*}=a_{\mathbf{n}}-\log\log(n_{1}n_{2})/2a_{\mathbf{n}}\). Assume that \(\varepsilon=\{\varepsilon_{\mathbf{n}},\mathbf{n}\geq 1\}\) is a sequence of indicators that is independent of \(\mathbf{Y}\) and such that (9) holds. If (14) holds, then for any \(x\leq y\in\mathbb{R}\)
\[\lim_{\mathbf{n}\rightarrow\infty}P\bigg{(}a_{\mathbf{n}}\big{(} M_{\mathbf{n}}(Z(\varepsilon))-b_{\mathbf{n}}-m_{\mathbf{n}}^{*}\big{)}\leq x,a_{\mathbf{n}}\big{(}M_{\mathbf{n}}(Z)-b_{\mathbf{n}}-m_{\mathbf{n}}^{*}\big{)} \leq y\bigg{)}\] \[=E[\exp(-\lambda e^{-x})\exp(-(1-\lambda)e^{-y})]; \tag{19}\]
Furthermore, if \(\varepsilon=\{\varepsilon_{\mathbf{n}},\mathbf{n}\geq 1\}\) is a sequence of independent indicators, \(n_{1}=O(n_{2})\), (15) holds and
\[a_{\mathbf{n}}\big{(}\max_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}m_{\mathbf{i} }-m_{\mathbf{n}}^{*}\big{)}\ \ \text{is bounded}, \tag{20}\]
then for any \(x\leq y\in\mathbb{R}\)
\[\lim_{\mathbf{n}\rightarrow\infty}\frac{1}{\log n_{1}\log n_{2}} \sum_{\mathbf{k}\in\mathbf{R}_{\mathbf{n}}}\frac{1}{k_{1}k_{2}}\mathbbm{1}_{ \big{\{}a_{\mathbf{n}}\big{(}M_{\mathbf{k}}(Z(\varepsilon))-b_{\mathbf{n}}-m_ {\mathbf{k}}^{*}\big{)}\leq x,a_{\mathbf{n}}\big{(}M_{\mathbf{k}}(Z)-b_{ \mathbf{n}}-m_{\mathbf{k}}^{*}\big{)}\leq y\big{\}}}\] \[=E[\exp(-\lambda e^{-x})\exp(-(1-\lambda)e^{-y})],\ \ a.s., \tag{21}\]
where \(M_{\mathbf{n}}(Z(\varepsilon))=\max_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}Z _{\mathbf{i}}(\varepsilon)\) and \(M_{\mathbf{n}}(Z)=\max_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}Z_{\mathbf{i}}\).
Next, we deal with two types of functions of Gaussian random fields. Let \(\{Y_{\mathbf{n}j},\mathbf{n}\geq 1\},j=1,2,\cdots,d\), \(d\geq 1\), be independent copies of \(\{Y_{\mathbf{n}},\mathbf{n}\geq 1\}\). Define
\[\chi_{\mathbf{n}}=(\sum_{j=1}^{d}Y_{\mathbf{n}j}^{2})^{1/2},\ \ \mathbf{n}\geq 1\]
and for \(r\in\{1,2,\ldots,d\}\)
\[O_{\mathbf{n}}^{(d)}:=\min_{1\leq j\leq d}Y_{\mathbf{n}j}\leq\cdots\leq O_{\mathbf{n}}^{(r)}\leq\cdots\leq O_{\mathbf{n}}^{(1)}:=\max_{1\leq j\leq d}Y_{\mathbf{n}j},\ \ \mathbf{n}\geq 1.\]
It is worth pointing out that \(\{\chi_{\mathbf{n}},\mathbf{n}\geq 1\}\) is a \(\chi\) random field and \(\{O_{\mathbf{n}}^{(r)},\mathbf{n}\geq 1\}\) is a Gaussian order statistics random field. We refer to Tan and Hashorva (2013a,b), Ling and Tan (2016) and Shao and Tan (2022) for recent work on extremes of \(\chi\) variables, and to Debicki et al. (2015, 2017) and Tan (2018) for recent work on extremes of Gaussian order statistics variables.
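A minimal sketch of the two constructions on a toy grid (our illustration; the sizes \(d,n_{1},n_{2}\) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n1, n2 = 3, 40, 30
Y = rng.standard_normal((d, n1, n2))   # d independent copies of the field

chi = np.sqrt((Y ** 2).sum(axis=0))    # chi-field: (sum_j Y_{nj}^2)^{1/2}
order = np.sort(Y, axis=0)[::-1]       # order[r - 1] is O_n^{(r)}, the r-th largest

assert np.allclose(order[0], Y.max(axis=0))      # O^{(1)} = max_j Y_{nj}
assert np.allclose(order[d - 1], Y.min(axis=0))  # O^{(d)} = min_j Y_{nj}
```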
**Theorem 3.2**. Suppose that \(\{Y_{\mathbf{n}},\mathbf{n}\geq 1\}\) is a sequence of standard Gaussian random fields with covariances satisfying (14). Assume that \(\varepsilon=\{\varepsilon_{\mathbf{n}},\mathbf{n}\geq 1\}\) is a sequence of indicators that is independent of \(\mathbf{Y}\) and such that (9) holds. Let the constants \(\{u_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq 1}\) and \(\{v_{\mathbf{n},\mathbf{i}},\mathbf{i}\leq\mathbf{n}\}_{\mathbf{n}\geq 1}\) be such that \(\sup_{\mathbf{n}\geq 1}\{n_{1}n_{2}P(\chi_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}}),\mathbf{i}\leq\mathbf{n}\}\) is bounded, \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P\left(\chi_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}}\right)\rightarrow\tau>0\) and \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P\left(\chi_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}}\right)\rightarrow\kappa>0\) hold. Then,
\[\lim_{\mathbf{n}\rightarrow\infty}P\left(\bigcap_{\mathbf{i}\in\mathbf{R}_{ \mathbf{n}}}\{\chi_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{n},\mathbf{i}}\}, \bigcap_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}\{\chi_{\mathbf{i}}\leq u_{ \mathbf{n},\mathbf{i}}\}\right)=E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}].\]
Furthermore, if \(\varepsilon=\{\varepsilon_{\bf n},{\bf n}\geq{\bf 1}\}\) is a sequence of independent indicators, \(n_{1}=O(n_{2})\) and (15) holds, we have
\[\lim_{\mathbf{n}\to\infty}\frac{1}{\log n_{1}\log n_{2}}\sum_{\mathbf{k}\in\mathbf{R_{n}}}\frac{1}{k_{1}k_{2}}\mathbbm{1}\big\{\bigcap_{\mathbf{i}\in\mathbf{R_{k}}}\{\chi_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}},\chi_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}}\}\big\}=E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}],\ \ a.s.\]
**Theorem 3.3.** Suppose that \(\{Y_{\bf n},{\bf n}\geq{\bf 1}\}\) is a sequence of standard Gaussian random fields with covariances satisfying (14). Assume that \(\varepsilon=\{\varepsilon_{\bf n},{\bf n}\geq{\bf 1}\}\) is a sequence of indicators that is independent of \({\bf Y}\) and such that (9) holds. Let the constants \(\{u_{{\bf n},{\bf i}},{\bf i}\leq{\bf n}\}_{{\bf n}\geq{\bf 1}}\) and \(\{v_{{\bf n},{\bf i}},{\bf i}\leq{\bf n}\}_{{\bf n}\geq{\bf 1}}\) be such that \(\sup_{{\bf n}\geq{\bf 1}}\{n_{1}n_{2}P(O_{\bf i}^{(r)}>v_{{\bf n},{\bf i}}),{\bf i }\leq{\bf n}\}\) is bounded, \(\sum_{{\bf i}\in{\bf R}_{\bf n}}P\left(O_{\bf i}^{(r)}>u_{{\bf n},{\bf i}}\right) \to\tau>0\) and \(\sum_{{\bf i}\in{\bf R}_{\bf n}}P\left(O_{\bf i}^{(r)}>v_{{\bf n},{\bf i}} \right)\to\kappa>0\) hold. Then,
\[\lim_{{\bf n}\to\infty}P\left(\bigcap_{{\bf i}\in{\bf R}_{\bf n}} \{O_{\bf i}^{(r)}(\varepsilon)\leq v_{{\bf n},{\bf i}}\},\bigcap_{{\bf i}\in {\bf R}_{\bf n}}\{O_{\bf i}^{(r)}\leq u_{{\bf n},{\bf i}}\}\right)=E[e^{- \lambda\kappa}e^{-(1-\lambda)\tau}].\]
Furthermore, if \(\varepsilon=\{\varepsilon_{\bf n},{\bf n}\geq{\bf 1}\}\) is a sequence of independent indicators, \(n_{1}=O(n_{2})\) and (15) holds, we have
\[\lim_{{\bf n}\to\infty}\frac{1}{\log n_{1}\log n_{2}}\sum_{{\bf k }\in{\bf R}_{\bf n}}\frac{1}{k_{1}k_{2}}\mathbbm{1}\big{\{}\bigcap_{i\in{\bf R }_{\bf k}}\{O_{\bf i}^{(r)}(\varepsilon)\leq v_{{\bf k},i},O_{\bf i}^{(r)} \leq u_{{\bf k},{\bf i}}\}\big{\}}=E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}], \ \ a.s.\]
## 4 Auxiliary results and proofs
In this section, we first state and prove several lemmas which will be used in the proofs of our main results, and then we give the proofs of the main results. For any \(\mathbf{I}\subseteq\mathbf{R_{n}}\), let \(\mathcal{B}_{\mathbf{k}}(\mathbf{I})=\bigcap_{\mathbf{i}\in\mathbf{I}}\{X_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}}\}\) and \(\overline{\mathcal{B}}_{\mathbf{k}}(\mathbf{I})=\bigcup_{\mathbf{i}\in\mathbf{I}}\big(\{X_{\mathbf{i}}>u_{\mathbf{k},\mathbf{i}}\}\bigcup\{X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{k},\mathbf{i}}\}\big)\). Let \(m_{l_{i}}=\log l_{i}\), \(i=1,2\). For a random variable \(\lambda\) such that \(0\leq\lambda\leq 1\) a.s., we put
\[B_{r,\mathbf{l}}=\left\{\omega:\lambda(\omega)\in\begin{cases}\left[0,\ \frac{1}{2^{l_{1}l_{2}}}\right],&r=0;\\[1ex] \left(\frac{r}{2^{l_{1}l_{2}}},\ \frac{r+1}{2^{l_{1}l_{2}}}\right],&0<r\leq 2^{l_{1}l_{2}}-1,\end{cases}\right\}\]
and
\[B_{r,{\bf l},\alpha,{\bf n}}=\{\omega:\varepsilon_{\bf j}(\omega)=\alpha_{ \bf j},{\bf 1}\leq{\bf j}\leq{\bf n}\}\bigcap B_{r,{\bf l}}.\]
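In words, \([0,1]\) is cut into \(2^{l_{1}l_{2}}\) dyadic cells, and \(B_{r,\mathbf{l}}\) collects the \(\omega\) whose \(\lambda(\omega)\) falls into the \(r\)-th cell. A small sketch of this bookkeeping (our illustration):

```python
import numpy as np

def lambda_bin(lam, l1, l2):
    """Index r of the dyadic cell B_{r,l} containing lam: r = 0 covers
    [0, 2^{-l1 l2}], and r covers (r 2^{-l1 l2}, (r + 1) 2^{-l1 l2}]."""
    K = 2 ** (l1 * l2)
    return max(int(np.ceil(lam * K)) - 1, 0)

print(lambda_bin(0.3, 2, 2))   # 0.3 lies in cell r = 4 of the 16 cells
```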
**Lemma 4.1**. Under the conditions of Theorem 2.2, for \({\bf k},{\bf l}\in{\bf R}_{\bf n}\) such that \({\bf k}\neq{\bf l}\) and \(l_{1}l_{2}\geq k_{1}k_{2}\), we have
\[\left|Cov\bigg(\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}}\}\}},\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}\}}\bigg)\right|\]
\[\ll\left\{\begin{array}{ll}\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}+\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},&k_{1}\leq l_{1},k_{2}\leq l_{2};\\[1ex] \alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}+\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},&k_{1}<l_{1},l_{2}<k_{2},k_{1}k_{2}<l_{1}l_{2};\\[1ex] \alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}+\frac{l_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},&l_{1}<k_{1},k_{2}<l_{2},k_{1}k_{2}<l_{1}l_{2}.\end{array}\right.\]
**Proof.** Recall that \({\bf M}_{{\bf k}{\bf n}}=\{(j_{1},j_{2}):(j_{1},j_{2})\in\mathbb{N}^{2},0\leq j _{i}\leq\#(\pi_{i}({\bf M}_{{\bf k}{\bf n}}^{*}))+m_{n_{i}},i=1,2\}\) with \({\bf M}_{{\bf k}{\bf n}}^{*}={\bf R}_{\bf k}\bigcap{\bf R}_{\bf n}\). Write
\[\left|Cov\bigg(\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}}\}\}},\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}\}}\bigg)\right|\]
\[=\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{R_{k}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{R_{k}})\bigg)\right|\]
\[\leq\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{R_{k}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\right|\]
\[\quad+\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\right|\]
\[\quad+\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{R_{k}})\bigg)\right|\]
\[=I_{1}+I_{2}+I_{3}.\]
Noting that \(k_{1}k_{2}\leq l_{1}l_{2}\), we will estimate the above terms in three cases: case 1, \(k_{1}\leq l_{1},k_{2}\leq l_{2}\); case 2, \(k_{1}<l_{1},k_{2}>l_{2}\) but \(k_{1}k_{2}\leq l_{1}l_{2}\); case 3, \(k_{1}>l_{1},k_{2}\leq l_{2}\) but \(k_{1}k_{2}\leq l_{1}l_{2}\).
For the first case, we have
\[\begin{split} I_{1}&=\bigg|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{R_{k}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\bigg|\\ &\leq P\left(\overline{\mathcal{B}}_{\mathbf{l}}(\mathbf{M_{kl}}-\mathbf{R_{k}})\right)\\ &\leq P\bigg(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\}\bigg)+P\bigg(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{l},\mathbf{i}}\}\bigg)\\ &=I_{11}+I_{12},\end{split}\]
where
\[\begin{split} I_{11}&=P\bigg(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\}\bigg)\leq\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}P\left(X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\right)\\ &\leq(k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}})\max\left\{P\left(X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}\\ &\leq\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}\,l_{1}l_{2}\max\left\{P\left(X_{\mathbf{i}}>v_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}\\ &\ll\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}\end{split}\]
by using the condition that \(\sup_{\mathbf{n}\geq\mathbf{1}}\{n_{1}n_{2}P(X_{\mathbf{i}}\geq v_{\mathbf{n}, \mathbf{i}}),\mathbf{i}\leq\mathbf{n}\}\) is bounded, and similarly,
\[\begin{split} I_{12}&=P\bigg(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{l},\mathbf{i}}\}\bigg)\\ &=\sum_{t=0}^{\#(\mathbf{M_{kl}}-\mathbf{R_{k}})}P\left(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{l},\mathbf{i}}\,\bigg|\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}=t\right)P\left(\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}=t\right)\\ &\leq\sum_{t=0}^{\#(\mathbf{M_{kl}}-\mathbf{R_{k}})}t\max\left\{P\left(X_{\mathbf{i}}>v_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}P\left(\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}=t\right)\\ &=E\left(\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}\right)\max\left\{P\left(X_{\mathbf{i}}>v_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}\\ &=\frac{E\left(\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}\right)}{l_{1}l_{2}}\,l_{1}l_{2}\max\left\{P\left(X_{\mathbf{i}}>v_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}\\ &\ll\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\end{split}\]
Thus
\[I_{1}\ll\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\]
By the similar arguments as for \(I_{1}\), we have
\[I_{3}\ll\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\]
Let \(\varepsilon(\mathbf{R_{k}})=\{\varepsilon_{\mathbf{i}},\mathbf{i}\in\mathbf{R_{k}}\}\) and \(\varepsilon(\mathbf{R_{l}}-\mathbf{M_{kl}})=\{\varepsilon_{\mathbf{i}},\mathbf{i}\in\mathbf{R_{l}}-\mathbf{M_{kl}}\}\). By condition \(\mathbf{D}^{*}\left(u_{\mathbf{k},\mathbf{j}},v_{\mathbf{k},\mathbf{i}},u_{\mathbf{n},\mathbf{j}},v_{\mathbf{n},\mathbf{i}}\right)\), we have
\[\left|E\left[P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\,\Big|\,(\varepsilon(\mathbf{R_{k}}),\varepsilon(\mathbf{R_{l}}-\mathbf{M_{kl}}))\bigg)\right]\right.\]
\[\left.\quad-E\left[P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\,\Big|\,\varepsilon(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\,\Big|\,\varepsilon(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\right]\right|\leq\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}. \tag{22}\]
By the independence of \(\{\varepsilon_{\mathbf{i}},\mathbf{i}\geq\mathbf{1}\}\), we have
\[E\left[P\bigg{(}\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})| \varepsilon(\mathbf{R_{k}})\bigg{)}P\bigg{(}\mathcal{B}_{\mathbf{l}}(\mathbf{ R_{l}}-\mathbf{M_{kl}})|\varepsilon(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg{)}\right]\] \[=E\left[P\bigg{(}\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})| \varepsilon(\mathbf{R_{k}})\bigg{)}\right]E\left[P\bigg{(}\mathcal{B}_{ \mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})|\varepsilon(\mathbf{R_{l}}- \mathbf{M_{kl}})\bigg{)}\right]\] \[=P\bigg{(}\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg{)}P\bigg{(} \mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg{)}. \tag{23}\]
Therefore (22) together with (23) implies
\[I_{2}=\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\right|\ll\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}.\]
Thus, we have
\[\left|Cov\bigg(\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}}\}\}},\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}\}}\bigg)\right|\]
\[\ll\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}+\frac{k_{1}m_{l_{2}}+k_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\]
Next, we deal with the second case: \(k_{1}<l_{1}\), \(l_{2}<k_{2}\), but \(k_{1}k_{2}<l_{1}l_{2}\). As in the first case, we have
\[\begin{split} I_{1}&=\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{R_{k}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\right|\\ &\leq P\bigg(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\}\bigg)+P\bigg(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{l},\mathbf{i}}\}\bigg)\\ &=I_{13}+I_{14},\end{split}\]
where
\[\begin{split} I_{13}&\leq(l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}})\max\left\{P(X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}),\mathbf{i}\leq\mathbf{l}\right\}\\ &\ll\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},\end{split}\]
by using the condition that \(\sup_{\mathbf{n}\geq\mathbf{1}}\{n_{1}n_{2}P(X_{\mathbf{i}}\geq v_{\mathbf{n}, \mathbf{i}}),\mathbf{i}\leq\mathbf{n}\}\) is bounded again, and similarly,
\[\begin{split} I_{14}&=\sum_{t=0}^{\#(\mathbf{M_{kl}}-\mathbf{R_{k}})}P\left(\bigcup_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{l},\mathbf{i}}\,\bigg|\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}=t\right)P\left(\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}=t\right)\\ &\leq E\left(\sum_{\mathbf{i}\in\mathbf{M_{kl}}-\mathbf{R_{k}}}\varepsilon_{\mathbf{i}}\right)\max\left\{P\left(X_{\mathbf{i}}>v_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}\\ &\ll\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\end{split}\]
Thus
\[I_{1}\ll\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\]
By the similar arguments as for \(I_{1}\), we have
\[I_{3}\ll\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\]
By condition \(\mathbf{D}^{*}\left(u_{\mathbf{k,j}},v_{\mathbf{k,i}},u_{\mathbf{n,j}},v_{ \mathbf{n,i}}\right)\) and the independence of \(\{\varepsilon_{\mathbf{i}},\mathbf{i}\geq\mathbf{1}\}\) again, we get
\[I_{2}=\left|P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigcap\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)-P\bigg(\mathcal{B}_{\mathbf{k}}(\mathbf{R_{k}})\bigg)P\bigg(\mathcal{B}_{\mathbf{l}}(\mathbf{R_{l}}-\mathbf{M_{kl}})\bigg)\right|\ll\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}.\]
Thus, we have
\[\left|Cov\bigg(\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{k},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{k},\mathbf{i}}\}\}},\mathbbm{1}_{\{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}\}}\bigg)\right|\]
\[\ll\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}+\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}.\]
By similar arguments as for the second case, we can get the desired bound for the third case, so we omit the details. The proof of the lemma is complete.
**Lemma 4.2**. Under the conditions of Theorem 2.2, for \(\mathbf{k},\mathbf{l}\in\mathbf{R_{n}}\) such that \(\mathbf{k}\neq\mathbf{l}\) and \(k_{1}k_{2}\leq l_{1}l_{2}\), we have
\[E\left|\mathbbm{1}_{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}}-\mathbbm{1}_{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}}\right|\ll\frac{l_{1}l_{2}-\#\left(\mathbf{R_{l}}-\mathbf{R_{k}}\right)}{l_{1}l_{2}}. \tag{24}\]
**Proof.** We have
\[\begin{split}&E\left|\mathbbm{1}_{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}}-\mathbbm{1}_{\bigcap_{\mathbf{i}\in\mathbf{R_{l}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}}\right|\\ &=P\left(\bigcap_{\mathbf{i}\in\mathbf{R_{l}}-\mathbf{R_{k}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}\right)-P\left(\bigcap_{\mathbf{i}\in\mathbf{R_{l}}}\{X_{\mathbf{i}}\leq u_{\mathbf{l},\mathbf{i}},X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{l},\mathbf{i}}\}\right)\\ &\leq\sum_{\mathbf{i}\in\mathbf{R_{l}}-(\mathbf{R_{l}}-\mathbf{R_{k}})}P\left(\{X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\}\bigcup\{X_{\mathbf{i}}(\varepsilon)>v_{\mathbf{l},\mathbf{i}}\}\right)\\ &\leq[l_{1}l_{2}-\#\left(\mathbf{R_{l}}-\mathbf{R_{k}}\right)]\left[\max\left\{P\left(X_{\mathbf{i}}>u_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}+\max\left\{P\left(X_{\mathbf{i}}>v_{\mathbf{l},\mathbf{i}}\right),\mathbf{i}\leq\mathbf{l}\right\}\right]\\ &\ll\frac{l_{1}l_{2}-\#\left(\mathbf{R_{l}}-\mathbf{R_{k}}\right)}{l_{1}l_{2}},\end{split}\]
by using the condition that \(\sup_{\mathbf{n}\geq\mathbf{1}}\{n_{1}n_{2}P(X_{\mathbf{i}}\geq v_{\mathbf{n}, \mathbf{i}}),\mathbf{i}\leq\mathbf{n}\}\) is bounded.
The following lemma is from Tan and Wang (2013), which plays a crucial role in the proof of Theorem 2.2.
**Lemma 4.3**. Let \(\eta_{\mathbf{i}},\mathbf{i}\in\mathbb{Z}_{+}^{2}\), be uniformly bounded random variables. Assume that
\[Var\left(\frac{1}{\log n_{1}\log n_{2}}\sum_{\mathbf{k}\in\mathbf{R}_{\mathbf{ n}}}\frac{1}{k_{1}k_{2}}\eta_{\mathbf{k}}\right)\ll\frac{1}{\left(\log\log n _{1}\log\log n_{2}\right)^{1+\epsilon}}\]
Then
\[\frac{1}{\log n_{1}\log n_{2}}\sum_{\mathbf{k}\in\mathbf{R}_{\mathbf{n}}}\frac {1}{k_{1}k_{2}}\left(\eta_{\mathbf{k}}-E\left(\eta_{\mathbf{k}}\right)\right) \to 0\ \ \ \ a.s. \tag{25}\]
**Proof.** See Lemma 3.2 of Tan and Wang (2013).
**Proof of Theorem 2.1.** Recall that \(X_{\mathbf{i}}(\alpha)=(1-\alpha_{\mathbf{i}})\gamma(X_{\mathbf{i}})+\alpha_{\mathbf{i}}X_{\mathbf{i}}\). Let \(w(s_{1},s_{2})=\sharp(\mathbf{K_{s}})\). It is easy to see that \(w(s_{1},s_{2})\rightarrow\infty\) as \(\mathbf{n}\rightarrow\infty\). By the law of total probability and the triangle inequality, we get
\[\left|P\left(\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}(\varepsilon)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-E[e^{-\lambda\kappa}e^{-(1-\lambda)\tau}]\right|\]
\[\leq\sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}E\left|P\left(\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\right|\mathbbm{1}_{\{B_{r,\mathbf{k_{n}},\alpha,\mathbf{n}}\}}\]
\[\quad+\sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}E\left|\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}\left[1-\frac{\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+\left(1-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]\right|\mathbbm{1}_{\{B_{r,\mathbf{k_{n}},\alpha,\mathbf{n}}\}}\]
\[\quad+\sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}E\left|\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}\left[1-\frac{\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+\left(1-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]-\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}\left[1-\frac{\lambda\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+(1-\lambda)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]\right|\mathbbm{1}_{\{B_{r,\mathbf{k_{n}},\alpha,\mathbf{n}}\}}\]
\[\quad+E\left|\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}\left[1-\frac{\lambda\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+(1-\lambda)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]-e^{-\lambda\kappa}e^{-(1-\lambda)\tau}\right|\]
\[=J_{1}+J_{2}+J_{3}+J_{4},\]
where \(\mathbf{k_{n}}=(k_{n_{1}},k_{n_{2}})\).
To bound the first term \(J_{1}\), we will divide each rectangular subset \(\mathbf{K_{s}}=\mathbf{K}_{(s_{1},s_{2})}\), \(s_{1}=1,2,\ldots,k_{n_{1}}\), \(s_{2}=1,2,\ldots,k_{n_{2}}\), into two parts. Without loss of generality, suppose the coordinates of the four vertices of the rectangle \(\mathbf{K_{s}}\) are \((s_{11},s_{21}),(s_{11},s_{22}),(s_{12},s_{21})\) and \((s_{12},s_{22})\) with \(s_{11}<s_{12}\) and
\(s_{21}<s_{22}\). Let \({\bf K}_{\bf s}^{*}=[s_{11},s_{12}-m_{n_{1}}]\times[s_{21},s_{22}-m_{n_{2}}]\) and \({\bf K}_{\bf s}^{**}={\bf K}_{\bf s}-{\bf K}_{\bf s}^{*}\). It is easy to see that
\[\sharp\left(\bigcup_{s_{1}=1,2,\ldots,k_{n_{1}},\,s_{2}=1,2,\ldots,k_{n_{2}}}\mathbf{K_{s}^{**}}\right)<m_{n_{1}}k_{n_{1}}n_{2}+m_{n_{2}}k_{n_{2}}n_{1}.\]
Obviously,
\[\left|P\left(\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\right|\]
\[\leq\left|P\left(\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{R_{n}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-P\left(\bigcap_{\mathbf{i}\in\cup\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\cup\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\right|\]
\[\quad+\left|P\left(\bigcap_{\mathbf{i}\in\cup\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\cup\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\right|\]
\[\quad+\left|\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}^{*}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)-\prod_{s_{1}=1}^{k_{n_{1}}}\prod_{s_{2}=1}^{k_{n_{2}}}P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\right|\]
\[=J_{11}+J_{12}+J_{13}.\]
It is straightforward to check that
\[J_{11} \leq P\left(\bigcup_{{\bf i}\in{\rm U}{\bf K}_{\bf s}^{**}}\{X_{\bf i }(\alpha)>v_{\bf n,i}\}\right)+P\left(\bigcup_{{\bf i}\in{\rm U}{\bf K}_{\bf s }^{**}}\{X_{\bf i}>u_{\bf n,i}\}\right)\] \[\leq 2\sharp\left(\cup{\bf K}_{\bf s}^{**}\right)\max_{{\bf i}\in{ \bf R}_{\bf n}}P(X_{\bf i}>v_{\bf n,i})<2[m_{n_{1}}k_{n_{1}}n_{2}+m_{n_{2}}k_{n _{2}}n_{1}]\max_{{\bf i}\in{\bf R}_{\bf n}}P(X_{\bf i}>v_{\bf n,i}).\]
Noting that
\[\left|\prod_{s=1}^{k}a_{s}-\prod_{s=1}^{k}b_{s}\right|\leq\sum_{s=1}^{k}|a_{s }-b_{s}|, \tag{26}\]
for all \(a_{s},b_{s}\in[0,1]\), we have similarly
\[J_{13} \leq \sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}\left[P\left( \bigcup_{{\bf i}\in{\bf K}_{\bf s}^{**}}\{X_{\bf i}(\alpha)>v_{\bf n,i}\} \right)+P\left(\bigcup_{{\bf i}\in{\bf K}_{\bf s}^{**}}\{X_{\bf i}>u_{\bf n,i} \}\right)\right]\] \[\leq 2\sharp\left(\cup{\bf K}_{\bf s}^{**}\right)\max_{{\bf i}\in{ \bf R}_{\bf n}}P(X_{\bf i}>v_{\bf n,i})<2[m_{n_{1}}k_{n_{1}}n_{2}+m_{n_{2}}k_{n _{2}}n_{1}]\max_{{\bf i}\in{\bf R}_{\bf n}}P(X_{\bf i}>v_{\bf n,i})\]
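For completeness, the elementary inequality (26) used here follows from the telescoping identity (a standard step, spelled out for the reader's convenience):

\[\prod_{s=1}^{k}a_{s}-\prod_{s=1}^{k}b_{s}=\sum_{s=1}^{k}\Big(\prod_{t<s}a_{t}\Big)(a_{s}-b_{s})\Big(\prod_{t>s}b_{t}\Big),\]

since every partial product lies in \([0,1]\), each summand is at most \(|a_{s}-b_{s}|\) in absolute value.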
By induction and the condition \({\bf D}(u_{\bf n,i},v_{\bf n,i})\), we get
\[J_{12} \leq (k_{n_{1}}k_{n_{2}}-1)\alpha_{{\bf n},m_{n_{1}},m_{n_{2}}}.\]
Thus, we have
\[J_{1} \leq (k_{n_{1}}k_{n_{2}}-1)\alpha_{{\bf n},m_{n_{1}},m_{n_{2}}}+\frac {4[k_{n_{1}}m_{n_{1}}n_{2}+k_{n_{2}}m_{n_{2}}n_{1}]}{n_{1}n_{2}}n_{1}n_{2}\max_ {{\bf i}\in{\bf R}_{\bf n}}P(X_{\bf i}>v_{\bf n,i}).\]
Further, noting that \(\sup_{{\bf n}\geq{\bf 1}}\{n_{1}n_{2}P(X_{\bf i}\geq v_{\bf n,i}),{\bf i}\leq{\bf n}\}\) is bounded and using condition \({\bf D}(u_{\bf n,i},v_{\bf n,i})\), we have
\[J_{1}=o(1)\ \ \mbox{as}\ \ \ {\bf n}\to\infty.\]
For the second term, for any \(0\leq r\leq 2^{k_{n_{1}}k_{n_{2}}}-1\), it is not hard to check from (5) and a Bonferroni-type inequality that
\[\left[1-\frac{\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+\left(1-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]+\sum_{\mathbf{i}\in\mathbf{K_{s}}}\left[\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}-\alpha_{\mathbf{i}}\right]\left(P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})-P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})\right)+o(1)\]
\[\leq P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\]
\[\leq\left[1-\frac{\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+\left(1-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]+\sum_{\mathbf{i}\in\mathbf{K_{s}}}\left[\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}-\alpha_{\mathbf{i}}\right]\left(P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})-P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})\right)\]
\[\quad+\sum_{\mathbf{i},\mathbf{j}\in\mathbf{K_{s}},\mathbf{i}\neq\mathbf{j}}P\left(X_{\mathbf{i}}\geq v_{\mathbf{n},\mathbf{i}},X_{\mathbf{j}}\geq v_{\mathbf{n},\mathbf{j}}\right)+o(1),\]
as \(\mathbf{n}\rightarrow\infty\), which together with (26) implies
\[\begin{split} J_{2}&\leq\sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}\sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}E\left|P\left(\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}(\alpha)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i}\in\mathbf{K_{s}}}\{X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\}\right)\right.\\ &\left.\quad-\left[1-\frac{\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+\left(1-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right)\sum\limits_{\mathbf{i}\in\mathbf{R_{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]\right|\mathbbm{1}_{\{B_{r,\mathbf{k_{n}},\alpha,\mathbf{n}}\}}\\ &\leq\sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}\sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}E\left|\sum_{\mathbf{i}\in\mathbf{K_{s}}}\left[\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}-\alpha_{\mathbf{i}}\right]\left(P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})-P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})\right)\right|\mathbbm{1}_{\{B_{r,\mathbf{k_{n}},\alpha,\mathbf{n}}\}}\\ &\quad+\sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}\sum_{\mathbf{i},\mathbf{j}\in\mathbf{K_{s}},\mathbf{i}\neq\mathbf{j}}P\left(X_{\mathbf{i}}\geq v_{\mathbf{n},\mathbf{i}},X_{\mathbf{j}}\geq v_{\mathbf{n},\mathbf{j}}\right)+o(1)\\ &=J_{21}+J_{22}+o(1).\end{split}\]
For the first term \(J_{21}\), suppose again that the coordinates of the four apexes of the rectangle \(\mathbf{K_{\mathbf{s}}}\) are \((s_{11},s_{21}),(s_{11},s_{22}),(s_{12},s_{21})\) and \((s_{12},s_{22})\) with \(s_{11}<s_{12}\) and \(s_{21}<s_{22}\). It follows from the facts that \(P(X_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}})=O(P(X_{\mathbf{j}}\leq u_{ \mathbf{n},\mathbf{j}}))\) and \(P(X_{\mathbf{i}}\leq v_{\mathbf{n},\mathbf{i}})=O(P(X_{\mathbf{j}}\leq v_{ \mathbf{n},\mathbf{j}}))\) for all \(\mathbf{1}\leq\mathbf{i}\neq\mathbf{j}\leq\mathbf{n}\) that
\[\sum\limits_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}E\left|\sum\limits_{\mathbf{i}\in \mathbf{K}_{\mathbf{s}}}\left[\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}-\varepsilon_{ \mathbf{i}}\right]\left(P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})-P(X_{ \mathbf{i}}>u_{\mathbf{n},\mathbf{i}})\right)\right|\mathbbm{1}_{\{B_{r, \mathbf{k}_{\mathbf{n}}}\}}\]
\[\ll E\left|\sum_{\mathbf{i}\in\mathbf{K}_{\mathbf{s}}}[\varepsilon_{ \mathbf{i}}-\lambda]\right|(P(X_{\mathbf{1}}\leq u_{\mathbf{n},\mathbf{1}})-P(X _{\mathbf{1}}\leq v_{\mathbf{n},\mathbf{1}}))\] \[\quad+\sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}-1}}(P(X_{\mathbf{1}}\leq u _{\mathbf{n},\mathbf{1}})-P(X_{\mathbf{1}}\leq v_{\mathbf{n},\mathbf{1}}))E \left|\sum_{\mathbf{i}\in\mathbf{K}_{\mathbf{s}}}\left[\lambda-\frac{r}{2^{k_{ n_{1}}k_{n_{2}}}}\right]\right|\mathbbm{1}_{\{B_{r,\mathbf{k}_{\mathbf{n}}}\}}\] \[\leq\frac{1}{k_{n_{1}}k_{n_{2}}}E\left|\sum_{\mathbf{i}\in \mathbf{K}_{\mathbf{s}}}\frac{\varepsilon_{\mathbf{i}}}{w(s_{1},s_{2})}- \lambda\right|n_{1}n_{2}(P(X_{\mathbf{1}}\leq u_{\mathbf{n},\mathbf{1}})-P(X _{\mathbf{1}}\leq v_{\mathbf{n},\mathbf{1}}))+\frac{1}{2^{k_{n_{1}}k_{n_{2}}} }\frac{n_{1}n_{2}(P(X_{\mathbf{1}}\leq u_{\mathbf{n},\mathbf{1}})-P(X_{ \mathbf{1}}\leq v_{\mathbf{n},\mathbf{1}}))}{k_{n_{1}}k_{n_{2}}},\]
where
\[\begin{split} E\left|\sum_{\mathbf{i}\in\mathbf{K_{s}}}\frac{\varepsilon_{\mathbf{i}}}{w(s_{1},s_{2})}-\lambda\right|&=E\left|\frac{S_{(s_{12},s_{22})}-S_{(s_{11},s_{22})}-S_{(s_{12},s_{21})}+S_{(s_{11},s_{21})}}{w(s_{1},s_{2})}-\lambda\right|\\ &\leq E\left|s_{12}s_{22}\left(\frac{S_{(s_{12},s_{22})}}{s_{12}s_{22}w(s_{1},s_{2})}-\lambda\right)\right|+E\left|s_{11}s_{22}\left(\frac{S_{(s_{11},s_{22})}}{s_{11}s_{22}w(s_{1},s_{2})}-\lambda\right)\right|\\ &\quad+E\left|s_{12}s_{21}\left(\frac{S_{(s_{12},s_{21})}}{s_{12}s_{21}w(s_{1},s_{2})}-\lambda\right)\right|+E\left|s_{11}s_{21}\left(\frac{S_{(s_{11},s_{21})}}{s_{11}s_{21}w(s_{1},s_{2})}-\lambda\right)\right|.\end{split}\]
Taking into account (9), we have
\[\begin{split}\lim_{w(s_{1},s_{2})\to\infty}&\left[E\left|s_{12}s_{22}\left(\frac{S_{(s_{12},s_{22})}}{s_{12}s_{22}w(s_{1},s_{2})}-\lambda\right)\right|+E\left|s_{11}s_{22}\left(\frac{S_{(s_{11},s_{22})}}{s_{11}s_{22}w(s_{1},s_{2})}-\lambda\right)\right|\right.\\ &\left.\quad+E\left|s_{12}s_{21}\left(\frac{S_{(s_{12},s_{21})}}{s_{12}s_{21}w(s_{1},s_{2})}-\lambda\right)\right|+E\left|s_{11}s_{21}\left(\frac{S_{(s_{11},s_{21})}}{s_{11}s_{21}w(s_{1},s_{2})}-\lambda\right)\right|\right]=0.\end{split}\]
Note that \(w(s_{1},s_{2})\to\infty\) as \(\mathbf{n}\to\infty\). Letting \(\mathbf{n}\to\infty\), we have
\[J_{21} \leq \sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}\frac{1}{2^{k _{n_{1}}k_{n_{2}}}}\frac{\kappa-\tau}{k_{n_{1}}k_{n_{2}}}=O(\frac{1}{2^{k_{n_{1} }k_{n_{2}}}}).\]
For the second term \(J_{22}\), by the condition \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\), we obtain
\[J_{22} = \frac{1}{k_{n_{1}}k_{n_{2}}}\sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1 }^{k_{n_{2}}}\left(k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i},\mathbf{j}\in\mathbf{K}_ {\mathbf{s}},\mathbf{i}\neq\mathbf{j}}P\left(X_{\mathbf{i}}\geq v_{\mathbf{n}, \mathbf{i}},X_{\mathbf{j}}\geq v_{\mathbf{n},\mathbf{j}}\right)\right)=o(1),\]
as \(\mathbf{n}\to\infty\). We thus have
\[J_{2}=O\Big(\frac{1}{2^{k_{n_{1}}k_{n_{2}}}}\Big)\ \ \text{as}\ \ \mathbf{n}\to\infty.\]
For the term \(J_{3}\), applying the facts that \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P\left(X_{\mathbf{i}}>u_{\mathbf{n}, \mathbf{i}}\right)\to\tau>0\) and \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P\left(X_{\mathbf{i}}>v_{\mathbf{n}, \mathbf{i}}\right)\to\kappa>0\) again, we have
\[J_{3} \leq \sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}\sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}E\left|\left[1-\frac{\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\sum\limits_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+\left(1-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right)\sum\limits_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]\right.\] \[- \left.\left[1-\frac{\lambda\sum\limits_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+(1-\lambda)\sum\limits_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}})}{k_{n_{1}}k_{n_{2}}}\right]\right|\mathbbm{1}_{\{B_{r,\mathbf{k}_{\mathbf{n}},\alpha,\mathbf{n}}\}}\] \[\leq \sum_{r=0}^{2^{k_{n_{1}}k_{n_{2}}}-1}\sum_{\alpha\in\{0,1\}^{n_{1}n_{2}}}\sum_{s_{1}=1}^{k_{n_{1}}}\sum_{s_{2}=1}^{k_{n_{2}}}E\left|\lambda-\frac{r}{2^{k_{n_{1}}k_{n_{2}}}}\right|\mathbbm{1}_{\{B_{r,\mathbf{k}_{\mathbf{n}},\alpha,\mathbf{n}}\}}\frac{\sum\limits_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}(P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}}))}{k_{n_{1}}k_{n_{2}}}\] \[\leq \frac{\sum\limits_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}(P(X_{\mathbf{i}}>v_{\mathbf{n},\mathbf{i}})+P(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}}))}{2^{k_{n_{1}}k_{n_{2}}}}\]
\[E\left|\eta_{\mathbf{k}}\eta_{\mathbf{l}}\right|\leq E\left|\mathbb{1}_{\left\{\bigcap_{k\in\mathbf{R}_{1}}\{X_{i}\leq u_{i,1},X_{i}(\varepsilon)\leq v_{i,1}\}\right\}}-\mathbb{1}_{\left\{\bigcap_{k\in\mathbf{R}_{1}-\mathbf{R}_{k}}\{X_{i}\leq u_{i,1},X_{i}(\varepsilon)\leq v_{i,1}\}\right\}}\right|\]
\[+\left|Cov\left(\mathbb{1}_{\left\{\bigcap_{k\in\mathbf{R}_{k}}\{X_{i}\leq u_{i,1},X_{i}(\varepsilon)\leq v_{i,1}\}\right\}},\mathbb{1}_{\left\{\bigcap_{k\in\mathbf{R}_{1}-\mathbf{R}_{k}}\{X_{i}\leq u_{i,1},X_{i}(\varepsilon)\leq v_{i,1}\}\right\}}\right)\right|.\]
By the Lemmas 4.2 and 4.1, we get
\[E\left|\mathbb{1}_{\left\{\bigcap_{k\in\mathbf{R}_{1}}\{X_{i} \leq u_{i,1},X_{i}(\varepsilon)\leq v_{i,1}\}\right\}}-\mathbb{1}_{\left\{ \bigcap_{k\in\mathbf{R}_{1}-\mathbf{R}_{k}}\{X_{i}\leq u_{i,1},X_{i}( \varepsilon)\leq v_{i,1}\}\right\}}\right|\ll\frac{l_{1}l_{2}-\#(\mathbf{R}_{ 1}-\mathbf{R}_{k})}{l_{1}l_{2}}\]
and
\[\left|Cov\left(\mathbb{1}_{\left\{\bigcap_{k\in\mathbf{R}_{k}}\{X_{i}\leq u_ {i,1},X_{i}(\varepsilon)\leq v_{i,1}\}\right\}},\mathbb{1}_{\left\{\bigcap_{k \in\mathbf{R}_{1}-\mathbf{R}_{k}}\{X_{i}\leq u_{i,1},X_{i}(\varepsilon)\leq v _{i,1}\}\right\}}\right)\right|\]
\[\ll\left\{\begin{array}{ll}\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{\ast}+\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},&k_{1}<l_{1},k_{2}<l_{2};\\ \alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{\ast}+\frac{l_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},&k_{1}<l_{1},l_{2}<k_{2},k_{1}k_{2}<l_{1}l_{2};\\ \alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{\ast}+\frac{l_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}},&l_{1}<k_{1},k_{2}<l_{2},k_{1}k_{2}<l_{1}l_{2},\end{array}\right.\]
respectively.
If \(k_{1}<l_{1},k_{2}<l_{2}\), we have
\[E\left|\eta_{\mathbf{k}}\eta_{\mathbf{l}}\right|\leq\frac{l_{1}l_{2}-\#(\mathbf{R}_{\mathbf{l}}-\mathbf{R}_{\mathbf{k}})}{l_{1}l_{2}}+\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{\ast}+\frac{k_{2}m_{l_{1}}+k_{1}m_{l_{2}}+m_{l_{1}}m_{l_{2}}}{l_{1}l_{2}}\]
and
\[T_{2} \ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{\mathbf{n}}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{l_{1}l_{2}-\#(\mathbf{R}_{\mathbf{l}}-\mathbf{R}_{\mathbf{k}})}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}+\frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{\mathbf{n}}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{\ast}}{k_{1}k_{2}l_{1}l_{2}}\] \[+\frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{\mathbf{n}}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{k_{1}m_{l_{2}}+k_{2}m_{l_{1}}+m_{l_{1}}m_{l_{2}}}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[= T_{21}+T_{22}+T_{23},\]
where
\[T_{21} = \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{\mathbf{n}}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{l_{1}l_{2}-\#(\mathbf{R}_{\mathbf{l}}-\mathbf{R}_{\mathbf{k}})}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{1\leq k_{1}\leq l_{1}\leq n_{1}}\sum_{1\leq k_{2}\leq l_{2}\leq n_{2}}\frac{k_{1}k_{2}}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}}\sum_{k_{1}=1}^{l_{1}}\sum_{l_{2}=1}^{n_{2}}\sum_{k_{2}=1}^{l_{2}}\frac{1}{l_{1}^{2}l_{2}^{2}}\] \[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}}\sum_{l_{2}=1}^{n_{2}}\frac{1}{l_{1}l_{2}}\ll\frac{1}{\log n_{1}\log n_{2}},\]
\[T_{22} = \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{\mathbf{n}}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{\ast}}{k_{1}k_{2}l_{1}l_{2}}\] \[\ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}}\sum_{k_{1}=1}^{l_{1}}\sum_{l_{2}=1}^{n_{2}}\sum_{k_{2}=1}^{l_{2}}\frac{1}{k_{1}k_{2}l_{1}l_{2}\left(\log\log l_{1}\log\log l_{2}\right)^{1+\epsilon}}\] \[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}}\sum_{l_{2}=1}^{n_{2}}\frac{\log l_{1}\log l_{2}}{l_{1}l_{2}\left(\log\log l_{1}\right)^{1+\epsilon}\left(\log\log l_{2}\right)^{1+\epsilon}}\] \[\ll \frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1+\epsilon}}\]
and
\[T_{23} = \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray} {c}{\bf k},{\bf l}\in{\bf R_{n}}\\ {\bf k}\neq{\bf l}\end{subarray}}\frac{k_{1}m_{l_{2}}+k_{2}m_{l_{1}}+m_{l_{1}}m _{l_{2}}}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_ {1}}\sum_{k_{1}=1}^{l_{1}}\sum_{l_{2}=1}^{n_{2}}\sum_{k_{2}=1}^{l_{2}}\frac{k_{1} \log l_{2}+k_{2}\log l_{1}+\log l_{1}\log l_{2}}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\]
\[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}} \sum_{k_{1}=1}^{l_{1}}\sum_{l_{2}=1}^{n_{2}}\sum_{k_{2}=1}^{l_{2}}\frac{\log l _{2}}{k_{2}l_{1}^{2}l_{2}^{2}}+\frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}} \sum_{l_{1}=1}^{n_{1}}\sum_{k_{1}=1}^{l_{1}}\sum_{l_{2}=1}^{n_{2}}\sum_{k_{2}= 1}^{l_{2}}\frac{\log l_{1}}{k_{1}l_{1}^{2}l_{2}^{2}}\] \[+\frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n _{1}}\sum_{k_{1}=1}^{l_{1}}\sum_{l_{2}=1}^{n_{2}}\sum_{k_{2}=1}^{l_{2}}\frac{ \log l_{1}\log l_{2}}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[\ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\left(\log n_{1}+ \log n_{2}\right)=o\left(\frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1 +\epsilon}}\right).\]
Thus
\[T_{2}\ll\frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1+\epsilon}}.\]
If \(k_{1}<l_{1},l_{2}<k_{2}\) and \(k_{1}k_{2}<l_{1}l_{2}\)
\[T_{2} \ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray} {c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{n}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{l_{1}l_{2}-\#(\mathbf{R}_{ \mathbf{l}}-\mathbf{R}_{\mathbf{k}})}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}+\frac{1}{ \left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray}{c}\mathbf{k}, \mathbf{l}\in\mathbf{R}_{n}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{\alpha_{\mathbf{l},\mathbf{k},m_{ l_{1}},m_{l_{2}}}^{*}}{k_{1}k_{2}l_{1}l_{2}}\] \[+\frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{ \begin{subarray}{c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{n}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{m_{l_{1}}l_{2}+m_{l_{1}}m_{l_{2}}} {k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[= T_{24}+T_{25}+T_{26}.\]
For the first and second term, we have
\[T_{24} = \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{\begin{subarray} {c}\mathbf{k},\mathbf{l}\in\mathbf{R}_{n}\\ \mathbf{k}\neq\mathbf{l}\end{subarray}}\frac{l_{1}l_{2}-\left(l_{1}l_{2}-k_{1}l _{2}\right)}{k_{1}k_{2}l_{1}^{2}l_{2}^{2}}\] \[= \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n _{1}}\sum_{k_{1}=1}^{l_{1}l_{2}/k_{2}}\sum_{k_{2}=1}^{n_{2}}\sum_{l_{2}=1}^{k_ {2}}\frac{1}{k_{2}l_{1}^{2}l_{2}}\] \[= \frac{1}{\log n_{1}\log n_{2}}=o\left(\frac{1}{\left(\log\log n_{ 1}\log\log n_{2}\right)^{1+\epsilon}}\right)\]
and
\[T_{25} = \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n _{1}}\sum_{k_{1}=1}^{l_{1}l_{2}/k_{2}}\sum_{k_{2}=1}^{n_{2}}\sum_{l_{2}=1}^{k_ {2}}\frac{\alpha_{\mathbf{l},\mathbf{k},m_{l_{1}},m_{l_{2}}}^{*}}{k_{1}k_{2}l _{1}l_{2}}\] \[\ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n _{1}}\sum_{k_{1}=1}^{l_{1}l_{2}/k_{2}}\sum_{k_{2}=1}^{n_{2}}\sum_{l_{2}=1}^{k_ {2}}\frac{1}{k_{1}k_{2}l_{1}l_{2}\left(\log\log l_{1}\log\log l_{2}\right)^{1+ \epsilon}}\] \[\ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n _{1}}\sum_{k_{2}=1}^{n_{2}}\sum_{l_{2}=1}^{k_{2}}\frac{\log l_{1}+\log l_{2}- \log k_{2}}{k_{2}l_{1}l_{2}\left(\log\log l_{1}\log\log l_{2}\right)^{1+\epsilon}}\] \[< \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n _{1}}\sum_{l_{2}=1}^{n_{2}}\frac{\log n_{1}\log n_{2}}{l_{1}l_{2}\left(\log \log l_{1}\log\log l_{2}\right)^{1+\epsilon}}\] \[\leq \frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1+\epsilon}}.\]
Noting that \(n_{1}=O(n_{2})\), we have
\[T_{26} \ll \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}}\sum_{k_{1}=1}^{l_{1}l_{2}/k_{2}}\sum_{k_{2}=l_{2}}^{n_{2}}\sum_{l_{2}=1}^{n_{2}}\frac{m_{l_{1}}}{k_{1}k_{2}l_{1}^{2}l_{2}}\]
\[< \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_{1}} \sum_{k_{2}=l_{2}}^{n_{2}}\sum_{l_{2}=1}^{n_{2}}\frac{\log l_{1}(\log l_{1}+ \log l_{2})}{l_{1}^{2}l_{2}k_{2}}\] \[< \frac{1}{\left(\log n_{1}\log n_{2}\right)^{2}}\sum_{l_{1}=1}^{n_ {1}}\sum_{l_{2}=1}^{n_{2}}\frac{\log l_{1}(\log l_{1}+\log l_{2})\log n_{2}}{l _{1}^{2}l_{2}}\] \[\ll \frac{\log n_{2}}{\left(\log n_{1}\log n_{2}\right)^{2}}\left( \log n_{2}+\sum_{l_{2}=1}^{n_{2}}\frac{\log l_{2}}{l_{2}}\right)\] \[\ll \frac{1}{\left(\log n_{1}\right)^{2}}+\frac{\log n_{2}}{(\log n _{1})^{2}}=o\left(\frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1+ \epsilon}}\right).\]
Thus
\[T_{2}\ll\frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1+\epsilon}}.\]
If \(l_{1}<k_{1},k_{2}<l_{2}\) and \(k_{1}k_{2}<l_{1}l_{2}\), by the same arguments as for the second case, we also have
\[T_{2}\ll\frac{1}{\left(\log\log n_{1}\log\log n_{2}\right)^{1+\epsilon}}.\]
Therefore
\[Var\left(\frac{1}{\log n_{1}\log n_{2}}\sum_{\mathbf{k}\in\mathbf{R}_{n}} \frac{1}{k_{1}k_{2}}\eta_{\mathbf{k}}\right)\ll\frac{1}{\left(\log\log n_{1} \log\log n_{2}\right)^{1+\epsilon}}.\]
Now, the result follows from Theorem 2.1 and Lemma 4.3.
**Proof of Theorem 3.1**: To prove the theorem, it suffices to check that \(\mathbf{D}(u_{\mathbf{n},\mathbf{i}},v_{\mathbf{n},\mathbf{i}})\), \(\mathbf{D}^{*}(u_{\mathbf{k},\mathbf{j}},v_{\mathbf{k},\mathbf{i}},u_{\mathbf{n},\mathbf{j}},v_{\mathbf{n},\mathbf{i}})\) and \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) hold. Recall that \(v_{\mathbf{n},\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\) for all \(\mathbf{i}\leq\mathbf{n}\). By the Normal Comparison Lemma (see e.g., Leadbetter et al. (1983)), we have
\[k_{n_{1}}k_{n_{2}}\alpha_{\mathbf{n},m_{n_{1}},m_{n_{2}}}\ll\sum_{\mathbf{1} \leq\mathbf{i}\neq\mathbf{j}\leq\mathbf{n}}|r_{\mathbf{i},\mathbf{j}}|\exp \left(-\frac{v_{\mathbf{n},\mathbf{i}}^{2}+v_{\mathbf{n},\mathbf{j}}^{2}}{2(1+ |r_{\mathbf{i},\mathbf{j}}|)}\right),\]
which tends to \(0\) as \(\mathbf{n}\rightarrow\infty\), by Lemmas 3.3 and 3.4 of Tan and Wang (2014). Thus, \(\mathbf{D}(u_{\mathbf{n},\mathbf{i}},v_{\mathbf{n},\mathbf{i}})\) holds. Applying Normal Comparison Lemma again, we have
\[\sup_{k_{1}k_{2}<n_{1}n_{2}}\alpha_{\mathbf{n},\mathbf{k},m_{n_{1}},m_{n_{2}}}^{*} \ll \sup_{k_{1}k_{2}<n_{1}n_{2}}\sup_{\mathbf{I}_{1}\subseteq\mathbf{R}_{\mathbf{k}},\mathbf{I}_{2}\subseteq\mathbf{R}_{\mathbf{n}}\setminus\mathbf{M}_{\mathbf{n},\mathbf{n}}}\sum_{\mathbf{i}\in\mathbf{I}_{1},\mathbf{j}\in\mathbf{I}_{2}}|r_{\mathbf{i},\mathbf{j}}|\exp\left(-\frac{v_{\mathbf{k},\mathbf{i}}^{2}+v_{\mathbf{n},\mathbf{j}}^{2}}{2(1+|r_{\mathbf{i},\mathbf{j}}|)}\right) \tag{27}\] \[\ll \sup_{k_{1}k_{2}<n_{1}n_{2}}\sum_{\begin{subarray}{c}\mathbf{i}\in\mathbf{R}_{\mathbf{k}},\mathbf{j}\in\mathbf{R}_{\mathbf{n}}\\ \mathbf{i}\neq\mathbf{j}\end{subarray}}|r_{\mathbf{i},\mathbf{j}}|\exp\left(-\frac{v_{\mathbf{k},\mathbf{i}}^{2}+v_{\mathbf{n},\mathbf{j}}^{2}}{2(1+|r_{\mathbf{i},\mathbf{j}}|)}\right)\] \[\ll \sup_{k_{1}k_{2}<n_{1}n_{2}}k_{1}k_{2}\sum_{\mathbf{0}\leq\mathbf{j}\leq\mathbf{n},\mathbf{j}\neq\mathbf{0}}|\rho_{\mathbf{j}}|\exp\left(-\frac{v_{\mathbf{k},\mathbf{i}}^{2}+v_{\mathbf{n},\mathbf{j}}^{2}}{2(1+|\rho_{\mathbf{j}}|)}\right).\]
Now, by a similar arguments as for the proof of Lemmas 3.3 and 3.4 of Tan and Wang (2014), we can show the term in (27) tends to \(0\) as \(\mathbf{n}\rightarrow\infty\). This proves that \(\mathbf{D}^{*}(u_{\mathbf{k},\mathbf{j}},v_{\mathbf{k},\mathbf{i}},u_{\mathbf{ n},\mathbf{j}},v_{\mathbf{n},\mathbf{i}})\) holds. Note that by Normal Comparison Lemma again
\[|P\left(X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}},X_{\mathbf{j}}>u_{\mathbf{n}, \mathbf{j}}\right)-(1-\Phi(u_{\mathbf{n},\mathbf{i}}))(1-\Phi(u_{\mathbf{n},\mathbf{j}}))|\ll|r_{\mathbf{i},\mathbf{j}}|\exp\left(-\frac{u_{\mathbf{n}, \mathbf{i}}^{2}+u_{\mathbf{n},\mathbf{j}}^{2}}{2(1+|r_{\mathbf{i},\mathbf{j}}| )}\right),\]
which combined with (5) implies
\[k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i}\neq\mathbf{j}\in\mathbf{I}}P\left( X_{\mathbf{i}}>u_{\mathbf{n},\mathbf{i}},X_{\mathbf{j}}>u_{\mathbf{n},\mathbf{j}}\right)\] \[\ll k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i}\neq\mathbf{j}\in\mathbf{I}} \left(1-\Phi(u_{\mathbf{n},\mathbf{i}})\right)(1-\Phi(u_{\mathbf{n},\mathbf{j} }))+k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i}\neq\mathbf{j}\in\mathbf{I}}|r_{\mathbf{ i},\mathbf{j}}|\exp\left(-\frac{u_{\mathbf{n},\mathbf{i}}^{2}+u_{\mathbf{n}, \mathbf{j}}^{2}}{2(1+|r_{\mathbf{i},\mathbf{j}}|)}\right)\] \[\ll k_{n_{1}}k_{n_{2}}\left[\sum_{\mathbf{i}\in\mathbf{I}}(1- \Phi(u_{\mathbf{n},\mathbf{i}}))\right]^{2}+k_{n_{1}}k_{n_{2}}\sum_{\mathbf{i} \neq\mathbf{j}\in\mathbf{I}}|r_{\mathbf{i},\mathbf{j}}|\exp\left(-\frac{u_{ \mathbf{n},\mathbf{i}}^{2}+u_{\mathbf{n},\mathbf{j}}^{2}}{2(1+|r_{\mathbf{i}, \mathbf{j}}|)}\right)\] \[\ll\frac{1}{k_{n_{1}}k_{n_{2}}}\left[\sum_{\mathbf{i}\in\mathbf{ R}_{\mathbf{n}}}\left(1-\Phi(u_{\mathbf{n},\mathbf{i}})\right)\right]^{2}+\sum_{ \mathbf{i}\neq\mathbf{j}\in\mathbf{R}_{\mathbf{n}}}|r_{\mathbf{i},\mathbf{j}} |\exp\left(-\frac{u_{\mathbf{n},\mathbf{i}}^{2}+u_{\mathbf{n},\mathbf{j}}^{2}} {2(1+|r_{\mathbf{i},\mathbf{j}}|)}\right). \tag{28}\]
Since \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}[1-\Phi(u_{\mathbf{n},\mathbf{i}})] \rightarrow\tau>0\), the first term in (28) tends to \(0\) as \(\mathbf{n}\rightarrow\infty\). By the same arguments as those in the proof of Lemmas 3.3 and 3.4 of Tan and Wang (2014), the second term in (28) also tends to \(0\) as \(\mathbf{n}\rightarrow\infty\). Thus, \(\mathbf{D}^{\prime}(v_{\mathbf{n},\mathbf{i}})\) holds, which completes the proof.
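For the reader's convenience, we recall the form of the Normal Comparison Lemma invoked repeatedly above (our paraphrase of the standard statement, as in Theorem 4.2.1 of Leadbetter et al. (1983), with notation adapted): if \((\xi_{j})\) and \((\eta_{j})\) are standardized Gaussian vectors with covariances \(\Lambda_{ij}^{1}\) and \(\Lambda_{ij}^{0}\), respectively, and \(\rho_{ij}=\max(|\Lambda_{ij}^{1}|,|\Lambda_{ij}^{0}|)\), then

\[\left|P\left(\xi_{j}\leq u_{j},\,j\leq n\right)-P\left(\eta_{j}\leq u_{j},\,j\leq n\right)\right|\leq\frac{1}{2\pi}\sum_{1\leq i<j\leq n}\frac{|\Lambda_{ij}^{1}-\Lambda_{ij}^{0}|}{(1-\rho_{ij}^{2})^{1/2}}\exp\left(-\frac{u_{i}^{2}+u_{j}^{2}}{2(1+\rho_{ij})}\right).\]

This is the source of the exponential bounds appearing in (27) and (28).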
**Proof of Corollary 3.1**: Letting \(v_{\mathbf{n},\mathbf{i}}=x/a_{\mathbf{n}}+b_{\mathbf{n}}+m_{\mathbf{n}}^{*}- m_{\mathbf{i}}\) and \(u_{\mathbf{n},\mathbf{i}}=y/a_{\mathbf{n}}+b_{\mathbf{n}}+m_{\mathbf{n}}^{*}- m_{\mathbf{i}}\), we have
\[\bigg{(}a_{\mathbf{n}}\big{(}M_{\mathbf{n}}(Z(\varepsilon))-b_{ \mathbf{n}}-m_{\mathbf{n}}^{*}\big{)}\leq x,a_{\mathbf{n}}\big{(}M_{\mathbf{ n}}(Z)-b_{\mathbf{n}}-m_{\mathbf{n}}^{*}\big{)}\leq y\bigg{)}\] \[=\bigg{(}\bigcap_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}\{Y_{ \mathbf{i}}(\varepsilon)\leq v_{\mathbf{n},\mathbf{i}}\},\bigcap_{\mathbf{i} \in\mathbf{R}_{\mathbf{n}}}\{Y_{\mathbf{i}}\leq u_{\mathbf{n},\mathbf{i}}\} \bigg{)}.\]
Thus, to prove Corollary 3.1, it is sufficient to show the conditions of Theorem 3.1 hold. More precisely, we only need to show that \(\sup_{\mathbf{n}\geq\mathbf{1}}\{n_{1}n_{2}(1-\Phi(v_{\mathbf{n},\mathbf{i}})),\mathbf{i}\leq\mathbf{n}\}\) is bounded, \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}[1-\Phi(u_{\mathbf{n},\mathbf{i}})]\rightarrow\tau>0\), \(\sum_{\mathbf{i}\in\mathbf{R}_{\mathbf{n}}}[1-\Phi(v_{\mathbf{n},\mathbf{i}})]\rightarrow\kappa>0\) and \(\Phi(u_{\mathbf{n},\mathbf{i}})=O(\Phi(u_{\mathbf{n},\mathbf{j}}))\), \(\Phi(v_{\mathbf{n},\mathbf{i}})=O(\Phi(v_{\mathbf{n},\mathbf{j}}))\) for \(\mathbf{1}\leq\mathbf{i}\neq\mathbf{j}\leq\mathbf{n}\). It has been done in the proofs of Corollaries 2.2 and 2.3 in Tan and Wang (2014). The proof is complete.
**Proof of Theorem 3.2**: The proof is similar to that of Theorem 3.1, using a comparison lemma for \(\chi\)-random variables in place of the Normal Comparison Lemma; see e.g., Song and Tan (2022).
**Proof of Theorem 3.3**: The proof is similar to that of Theorem 3.1, using a comparison lemma for Gaussian order statistics in place of the Normal Comparison Lemma; see e.g., Song and Tan (2022).
|
2303.07024 | Addressing Biases in the Texts using an End-to-End Pipeline Approach | The concept of fairness is gaining popularity in academia and industry.
Social media is especially vulnerable to media biases and toxic language and
comments. We propose a fair ML pipeline that takes a text as input and
determines whether it contains biases and toxic content. Then, based on
pre-trained word embeddings, it suggests a set of new words by substituting the
biased words; the idea is to lessen the effects of those biases by replacing
them with alternative words. We compare our approach to existing fairness
models to determine its effectiveness. The results show that our proposed
pipeline can detect, identify, and mitigate biases in social media data.
###### Abstract
The concept of fairness is gaining popularity in academia and industry. Social media is especially vulnerable to media biases and toxic language and comments. We propose a fair ML pipeline that takes a text as input and determines whether it contains biases and toxic content. Then, based on pre-trained word embeddings, it suggests a set of new words by substituting the biased words, the idea is to lessen the effects of those biases by replacing them with alternative words. We compare our approach to existing fairness models to determine its effectiveness. The results show that our proposed pipeline can detect, identify, and mitigate biases in social media data.
Keywords: Bias, fairness, Transformer model, pipeline, machine learning.
## 1 Introduction
Social media platforms allow users to interact with one another in a variety of ways, such as messaging, sharing photos and videos, and leaving comments. This functionality is vulnerable to several internet crimes, including personal insults and threats, propaganda, fraud, and the advertisement of illegal goods and services. It is critical to identify and eliminate these toxic, bias-reflecting comments from social media.
The Conversation AI team, a joint venture between Jigsaw and Google, develops technology to protect human voices during a conversation [1]. They are particularly interested in developing machine learning (ML) models that can detect toxicity in online conversations, with toxicity defined as anything biased, rude, disrespectful, offensive, or otherwise likely to cause someone to leave a discussion. This initiative has generated a substantial number of published works and competitions [2, 3].
In this paper, we propose a novel ML pipeline that ingests data and identifies toxic words early in the pre-processing stage; the identified words are then replaced with
substitute words that retain the lexical meaning of the word but reduce or eliminate its effect. The main contribution of this work is to identify and mitigate biases during the pre-processing stage and prevent these biases from replicating in the ML predictions. In this work, the term bias refers to any behavior, attitude, or expression that negatively affects a specific identity group, including actions that are hurtful, disrespectful, or disruptive. This definition is consistent with the bias definitions found in relevant literature [4, 5, 6, 7]. The specific contributions of this work are as follows:
* We propose a fair ML pipeline that takes any data, detects whether biases exist, and, if so, mitigates those biases.
* We annotate the dataset with bias-bearing words, which are generally biased words used in toxic contexts to refer to specific identities (race, ethnicity, religion, gender), and are taken from various literature sources [8], [9], and [10].
* We test each pipeline component individually to determine the method's effectiveness, and we also quantify fairness (i.e., non-biased words in this context) for each sub-group based on identities (race, gender, etc.).
## 2 Related Work
Fairness [11] is a multi-faceted concept that varies by culture and context. Bias mitigation or fairness methods are categorized into three broad types: (1) pre-processing; (2) in-processing; and (3) post-processing algorithms. The pre-processing algorithms [12] attempt to learn a new representation of the data by removing biases prior to algorithm training. In-processing algorithms influence the loss function during model training to mitigate biases [13]. Post-processing algorithms [14] manipulate output predictions after training to reduce bias.
Several models have been proposed to address the issue of bias in ML algorithms and data. For example Fairness GAN [15] is a Generative Adversarial Network that learns to generate synthetic data samples to ensure demographic parity. Aequitas [16] is a toolkit for assessing and mitigating bias in predictive models. Themis-ML [17] is a library for creating fair ML models that utilizes algorithms for fairness-aware classification, regression, and clustering. Fairlearn [18] is another library for ML fairness built on top of the popular scikit-learn library. It provides metrics and algorithms for fairness evaluation and mitigation. Google's What-If Tool [19] is a visual interface for exploring ML models. It allows users to see the impact of changes to inputs and models on predictions and fairness metrics. AI Fairness 360 [20] is an open-source software toolkit that contains a comprehensive set of metrics, algorithms, and tutorials for detecting and mitigating bias in ML models. Other models such as Counterfactual Fairness [21], and Disentangled Representation Learning [22], also tackle the problem of bias mitigation in ML. It is important to note that while these models have shown promise in reducing bias, more research is needed to ensure the generalizability and effectiveness of these techniques. Each of the previous works is valuable and incremental, focusing on task fairness (pre / in / post-processing). Unlike previous works, we detect and mitigate many biases from text, and we build a pipeline to achieve fairness.
Proposed Methodology
We develop a fair ML pipeline (Figure 1) that accepts raw text, detects if the text is biased or not (detection task), then identifies the bias-bearing words in the text (recognition task), and finally substitutes those words with alternative words (mitigation task). We explain each phase next.
_Bias detection:_ The problem of bias detection involves identifying whether a given document, such as a news article or social media post, contains any biased language or perspectives. To address this problem, we have treated it as a multi-label classification task. Specifically, we have used a Transformer-based model called ELECTRA [23] and fine-tune the model for bias detection. We have used the labeled data provided by the Jigsaw Toxic Comment Classification [1] competition. This competition involved identifying whether comments on online forums were toxic or not. The dataset is also used in the competition to identify different types of language biases [1, 2]. By fine-tuning the ELECTRA model on this labeled data, we are able to adapt it to the specific task of bias detection. The output of the detection model is a sentence or text that has been labeled with one or more bias labels. These labels can indicate various types of biases, such as political bias, gender bias, or racial bias.
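To make the setup concrete, below is a minimal sketch of how such multi-label fine-tuning could be wired up with the Hugging Face transformers library. This is our illustration, not the authors' released code: the model name and the six Jigsaw labels follow the paper, while the PyTorch API (the paper reports a TensorFlow implementation), the decision threshold, and the helper function are our assumptions.

```python
# Sketch: ELECTRA as a multi-label bias detector over the six Jigsaw labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

tok = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/electra-base-discriminator",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # trains with BCE-with-logits
)

def detect_bias(texts, threshold=0.5):
    """Return, for each text, the bias labels whose score passes the threshold."""
    batch = tok(texts, truncation=True, max_length=512,
                padding=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.sigmoid(model(**batch).logits)
    return [[LABELS[j] for j, p in enumerate(row) if p > threshold] for row in probs]
```

After fine-tuning on the labeled Jigsaw comments, `detect_bias` would flag each comment with zero or more of the six labels.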
_Bias Identification:_ The second step in the pipeline involves a module designed to identify biases within the dataset, which we refer to as the bias identification module. To create this module, we compiled a comprehensive list of biases that includes gender, race, religion, mental health, and disability. We also incorporated biases from sources such as [24], [8]; [9], and [10] to ensure that our list is comprehensive and up-to-date. Using this list of biases, we tag each comment in the dataset with relevant biases. Once the comments are tagged, we fine-tune the BERT model for named entity recognition (NER) to identify the biased words within the text. This fine-tuned model is then used to identify instances of bias in the comments, allowing us to analyze the extent and nature of biases present in the dataset.
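A sketch of this identification step is shown below. The BIO tagging scheme, the `pipeline` wrapper, and the checkpoint name `bias-ner` (a hypothetical stand-in for the BERT model fine-tuned on our bias-tagged comments) are illustrative assumptions rather than the authors' released artifacts.

```python
# Sketch: token-level recognition of bias-bearing words with a fine-tuned BERT.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ner = AutoModelForTokenClassification.from_pretrained(
    "bias-ner",      # hypothetical checkpoint fine-tuned on the tagged comments
    num_labels=3,    # O / B-BIAS / I-BIAS
)
tagger = pipeline("token-classification", model=ner, tokenizer=tok,
                  aggregation_strategy="simple")

spans = tagger("That comment was written by a crazy person.")
# -> a list of dicts giving each flagged word, its character span, and a score
```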
Figure 1: Fair ML pipeline

In the bias identification task, certain categories of bias may be detected more easily than others, depending on the nature of the biases and the dataset being analyzed. For example, some biases may be more explicit, while others may be more subtle or implicit. Similarly, some biases may be more prevalent in certain types of texts or domains, such as gender bias in job postings or racial bias in news articles. Based on our initial assessment of the data, we find that we are able to cover a range of topics, including online toxicity, hate speech, and misinformation.
_Bias mitigation:_ After identifying the biased words in the text, our next step is to mitigate these biases by recommending alternative words that can be used in place of the biased words. We typically recommend between 5 to 10 substitute words per biased word, based on their similarity and appropriateness in the context of the text.
To generate these substitute words, we utilize publicly available pre-trained word embeddings, specifically Word2Vec [25], which operates in a 300-dimensional space. BERT can also be used to understand the contextual meaning of words and phrases in text data and to fill in substitute words. However, BERT can be computationally expensive and may require extensive training data to perform well, so we choose to work with Word2Vec in this paper.
Our method for identifying appropriate substitute words is based on semantic similarity and word analogy benchmarks [26]. By using this method, we aim to preserve the semantic information present in word embeddings while removing any biases that may be present in the text. The idea behind using Word2Vec here is to offer suitable substitutes that can help ensure a more equitable and inclusive representation of the target groups through words/ phrases.
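A minimal sketch of this substitution step with the gensim distribution of the pre-trained 300-dimensional Word2Vec vectors is given below; the small excerpt of the bias lexicon and the filtering heuristic are our assumptions.

```python
# Sketch: propose 5-10 non-biased substitutes for a flagged word.
import gensim.downloader as api

w2v = api.load("word2vec-google-news-300")     # pre-trained 300-d embeddings
BIAS_LEXICON = {"crazy", "insane", "lunatic"}  # excerpt; the compiled list is larger

def substitutes(word, k=10):
    """Nearest neighbors in embedding space, skipping words that are themselves biased."""
    candidates = w2v.most_similar(word, topn=3 * k)
    return [w for w, _ in candidates if w.lower() not in BIAS_LEXICON][:k]
```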
## 4 Experimental setup
In this work, we use Google's Jigsaw Multilingual Toxic Comment Classification [1] dataset. It includes 223,549 annotated user comments collected from Wikipedia talk pages. These comments were annotated by human raters with six labels: 'toxic', 'severe toxic', 'insult', 'threat', 'obscene', and 'identity hate'.
We use the F1-score (F1) for the accuracy, and a bias metric ROC-AUC (b-AUC) [2] to evaluate fairness. This bias metric combines several sub-metrics to balance overall performance. We also use the disparate impact ratio [27] to quantify fairness.
For our experiments, we utilized a standard set of hyperparameters to train our models, including a batch size of 16, a sequence length of 512, and 6 labels for the classification task. We trained the models for 10 epochs and optimized the learning rate in the range of 0.0001-0.001, the dropout rate in the range of 0.1-0.5, and the weight decay in the range of 0.0001-0.001. Our experiments were conducted on an NVIDIA P100 GPU with 32 GB RAM, and we implemented our models using TensorFlow. We fine-tuned our models using pre-trained weights from Huggingface.co. These settings ensured that our models were optimized for performance and accuracy.
## 5 Results
_Evaluation of bias detection task_: We evaluate our multi-label classifier against baseline methods: Logistic Regression with TFIDF (LG-TFIDF), LG with ELMO [28], BERT-base, and DistilBERT.
We observe in Table 1 that LG-TFIDF model has the lowest performance, achieving a b-AUC score of 0.547 and an F1-score of 0.585. The LG-ELMO model has an improved b-AUC score of 0.684 and F1 score of 0.625. The BERT-base model achieves a higher b-AUC score of 0.692, but its F1 score is comparatively lower at 0.687. DistilBERT model achieves the b-AUC score of 0.742 and the F1 score of 0.753. Our model outperforms all other models, achieving the highest b-AUC score of 0.837 and the highest F1 score of 0.812. The significant improvement in the performance of our model suggests that it is effective in detecting bias in text data.
_Effectiveness of bias identification task:_ We compare different configurations of NER: Spacy core web small (core-sm), core web medium (core-md), and core web large (core-lg) methods (that are based on RoBERTa [29]) against our NER.
Based on the performance metrics reported in Table 2, it is clear that our model outperformed the three baseline models (Core-sm, Core-md, Core-lg) by a significant margin in both AUC and F1 score. Our model achieved an AUC of 0.832 and F1 score of 0.828, while the best-performing baseline (Core-lg) achieved an AUC of 0.643 and F1 score of 0.637. This indicates that our model is fine-tuned properly on the biased labels and is more effective in recognizing bias in the dataset than the baseline models. It is also worth noting that the performance of the baseline models improved as the model size increased, with Core-lg performing better than Core-md and Core-sm. This also suggests that the size of the model can have a significant impact on its performance.
_Overall Performance comparison_: To evaluate the pipeline as a whole, we use the adversarial debiasing (AD) [13] and meta-fair (MF) classifier [30] methods as the baselines. AD is a fairness method that addresses fairness at the data pre-processing stage, and MF is an in-processing method that addresses biases during the optimization phase.

In this experiment, we provide the labeled data to each method. First, we use our detection module to find if a text is biased or not, and then use each method's debiasing technique to introduce fairness in the data. The new data that is produced is the transformed data. These methods calculate fairness based on the ratio of fair outcomes (non-biased words) for each sub-group (e.g., gender and other identities).
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Model** & **b-AUC** & **F1** \\ \hline LG-TFIDF & 0.547 & 0.585 \\ \hline LG- ELMO & 0.684 & 0.625 \\ \hline BERT-base & 0.692 & 0.687 \\ \hline DistilBERT & 0.742 & 0.753 \\ \hline Our model & **0.837** & **0.812** \\ \hline \end{tabular}
\end{table}
Table 1: **Performance of bias detection task. Bold means best performance.**
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Model** & **AUC** & **F1** \\ \hline Core-sm & 0.427 & 0.432 \\ \hline Core-md & 0.532 & 0.524 \\ \hline Core-lg & 0.643 & 0.637 \\ \hline Our model & **0.832** & **0.828** \\ \hline \end{tabular}
\end{table}
Table 2: **Performance of bias recognition task**
For example, these methods see how many biased or unbiased words are associated with each identity group and then remove the biases for the subgroup that is more prone to negative outcomes.
We consider the sub-groups based on gender and race as the use cases. For gender, we consider the privileged class to be "male," while the unprivileged class is "female". For race, we consider "Asians" and "African-Americans" to be unprivileged, and "white" to be privileged. These groups are chosen based on an initial analysis of the data. We use the disparate impact ratio evaluation metric to quantify fairness. A good range of the DI ratio is between 0.8 and 1.25 [27], with scores lower than 0.8 showing favorable outcomes for the privileged sub-group and values above 1.25 favoring the unprivileged class. The results are shown in Figure 2.
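For concreteness, a minimal sketch of the disparate impact computation is shown below (our illustration; the grouping of word-level outcomes follows the description above):

```python
# Sketch: disparate impact = rate of favorable (non-biased) outcomes for the
# unprivileged group divided by the rate for the privileged group.
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: 1 = non-biased, 0 = biased; groups: identity label per outcome."""
    def favorable_rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

di = disparate_impact([1, 0, 1, 1, 0, 1],
                      ["female", "male", "female", "male", "female", "male"],
                      unprivileged="female", privileged="male")
# values inside [0.8, 1.25] are generally considered fair [27]
```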
The results in Figure 2 show that the DI score in the original data is lower than 0.8, which means biased outcomes toward unprivileged identities. Before mitigation, the DI score is the same for all methods, since it is calculated based on the original data. The DI score after applying the fairness methods is above 0.8 for all methods. Our approach gives us a DI score close to 1, which shows that we achieve a balance between unprivileged and privileged groups. Other methods also get fairer on transformed data, but they seem to be skewed toward privileged groups (score close to 0.8).
Figure 2: Disparate impact scores to quantify fairness using different methods.

## 6 Discussion

The main implications of the proposed model are in applications where bias in textual data can have significant real-world impacts, such as in hiring and admissions decisions, financial lending, and predictive policing. Prior work [15][16][17][18][19][20][31][13][21] in this area has also explored various techniques for detecting and mitigating text biases. However, the proposed method has the advantage of being a scalable and easy-to-implement solution that does not require additional annotation or training data. There are some limitations of this study, which are also suggestions for future work. First, the current work assumes that the biased words can be easily identified and replaced with alternative words, which may not always be the case. We need to consider the epistemology and tone of the biased language as well. The method also relies on pre-trained embeddings, which may contain biases and affect the quality of the mitigation. Further, the effectiveness of the method may vary across different domains and languages, which needs to be further investigated.
## 7 Conclusion
The proposed fair ML pipeline for detecting and mitigating text biases is an important step towards developing more equitable and just AI models. However, there is still much to learn about applying and interpreting fairness in our study. We plan to further evaluate the pipeline using other evaluation metrics and extend it to other domains such as health and life sciences. The bias mitigation task will also be evaluated to enhance the effectiveness of the pipeline. Additionally, we will explore other datasets to see how the pipeline works in other contexts. Through continuous evaluation and refinement, we aim to develop a more sophisticated and effective fairness approach.
|
2307.12860 | Non-thermal photons and a Fermi-Dirac spectral distribution | Although non-intuitive, an accelerated electron along a particular trajectory
can be shown to emit classical electromagnetic radiation in the form of a
Fermi-Dirac spectral distribution when observed in a particular angular regime.
We investigate the relationship between the distribution, spectrum, and
particle count. The result for the moving point charge is classical, as it
accelerates along an exactly known trajectory. We map to the semi-classical
regime of the moving mirror model with a quantized spin-0 field. The scalars
also possess a $\beta$ Bogoliubov coefficient distribution with Fermi-Dirac
form in the respective frequency regime. | Evgenii Ievlev, Michael R. R. Good | 2023-07-24T14:58:48Z | http://arxiv.org/abs/2307.12860v1 | # Non-thermal photons and a Fermi-Dirac spectral distribution
###### Abstract
Although non-intuitive, an accelerated electron along a particular trajectory can be shown to emit classical electromagnetic radiation in the form of a Fermi-Dirac spectral distribution when observed in a particular angular regime. We investigate the relationship between the distribution, spectrum, and particle count. The result for the moving point charge is classical, as it accelerates along an exactly known trajectory. We map to the semi-classical regime of the moving mirror model with a quantized spin-0 field. The scalars also possess a \(\beta\) Bogoliubov coefficient distribution with Fermi-Dirac form in the respective frequency regime.
moving mirrors, black hole evaporation, acceleration radiation, Fermi-Dirac statistics pacs: 41.60.-m (Radiation by moving charges), 04.70.Dy (Quantum aspects of black holes)
## I Introduction
It is well-known that bosons obey Bose-Einstein statistics, and fermions obey Fermi-Dirac statistics. Interestingly, Haro and Elizalde [1] found a result for the \(\beta\)-Bogolubov coefficient of a semitransparent mirror demonstrating a flux of scalar particles obeying a Fermi-Dirac distribution form in the large \(\omega^{\prime}\) limit. Nicolaevici [2] confirmed the Fermi-Dirac form with respect to the energy \(\omega\) but made special note that this does not establish the number of particles since it only applies in the large \(\omega^{\prime}\) limit. In a follow-up, Elizalde and Haro [3] recommended further investigation into the Fermi-Dirac form and its relationship to the sign in the \(\beta\)-Bogolubov coefficient; in particular, the connection with the number of particles emitted per mode.
Here we investigate the situation using an ordinary moving point charge in classical electrodynamics [4]. We demonstrate the phenomenon without appealing to quantum field theory; i.e., one does not need to use moving mirrors or semi-transparency to understand the situation. Nevertheless, we find the perfectly reflecting accelerating boundary corresponding to the moving point charge and examine its spectral statistics for clarity.
The functional mapping between moving mirrors and moving point charges [5; 6; 7; 8; 9; 10; 11; 12] is leveraged to understand the problem in both contexts. The situation has different physical meanings when examined for the classical electromagnetic field or quantized scalar field [13; 14]; and thus, different implications for the classical radiation in ordinary 3+1 dimensions and quantum radiation in 1+1 dimensions.
We use natural units, setting \(\hbar=c=k_{B}=\mu_{0}=1\); the electron's charge is then a dimensionless number \(e^{2}=4\pi\alpha_{\rm fs}\approx 0.092\).
## II Fermi-Dirac trajectory
### Dynamics and total energy
Let us start with a simple illustration of the situation. Consider an electron moving in a straight line along the \(z\)-axis. We take the trajectory defined implicitly as
\[t(z)=\frac{\kappa}{4}z^{2}+\frac{2}{\kappa}\ln(\kappa z)+z\zeta, \tag{1}\]
where \(\kappa>0\) is the acceleration scale and \(-1<\zeta<1\). The inverse velocity along the trajectory and the maximum velocity are, respectively,
\[\frac{1}{v}=\frac{\mathrm{d}t(z)}{\mathrm{d}z}=\frac{\kappa z}{2}+\frac{2}{ \kappa z}+\zeta\,,\quad v_{\rm max}=\frac{1}{2+\zeta}\,. \tag{2}\]
From Eq. (2), it is evident that for \(\zeta>-1\) this trajectory travels along a time-like, relativistic worldline. This trajectory is asymptotically static; see Fig. 1 for a spacetime diagram and Fig. 2 for a Penrose diagram.
The total energy emitted can be calculated with the Larmor formula; this energy is finite for \(\zeta>-1\). For example, when \(\zeta=0\) it takes the analytic form:
\[E=\frac{e^{2}\kappa}{36}\left(\frac{1}{3\sqrt{3}}-\frac{1}{4\pi}\right). \tag{3}\]
For other values of the parameter \(\zeta\), the analytic formula for the total energy exists but is complicated; nevertheless, it is simple to illustrate numerically, see Fig. 3. One can use the total energy, e.g. Eq. (3), to check the consistency of the spectral results (see the Appendix for detail).
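As an illustration, the following short numerical sketch carries out this check for \(\zeta=0\). It is our own illustration, not the authors' code; it assumes the relativistic Larmor power \(P=e^{2}\gamma^{6}\dot{v}^{2}/(6\pi)\) for rectilinear motion in the Heaviside-Lorentz natural units used here, and integrates it along the trajectory of Eq. (1) to compare against Eq. (3).

```python
# Numerical consistency check (ours) of the total radiated energy, Eq. (3).
import numpy as np
from scipy.integrate import quad

kappa, zeta = 1.0, 0.0
e2 = 4 * np.pi / 137.036          # e^2 = 4*pi*alpha_fs ~ 0.092

def dt_dz(z):                     # inverse velocity along the trajectory, Eq. (2)
    return kappa * z / 2 + 2 / (kappa * z) + zeta

def dE_dz(z):                     # Larmor power times dt/dz
    v = 1.0 / dt_dz(z)
    dv_dz = -(kappa / 2 - 2 / (kappa * z**2)) * v**2
    a = dv_dz * v                 # dv/dt = (dv/dz)(dz/dt)
    gamma2 = 1.0 / (1.0 - v**2)
    return e2 * gamma2**3 * a**2 / (6 * np.pi) * dt_dz(z)

E_num, _ = quad(dE_dz, 1e-8, np.inf, limit=500)
E_exact = e2 * kappa / 36 * (1 / (3 * np.sqrt(3)) - 1 / (4 * np.pi))
print(E_num, E_exact)             # the two values should agree
```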
### Spectral distribution of the accelerating electron's radiation
To find the spectral distribution for Eq. (1), we use the standard approach in classical electrodynamics [4]. Here the energy \(E\) can be found by the spectrum \(I(\omega)\), or spectral distribution \(\mathrm{d}I/\,\mathrm{d}\Omega\)[15],
\[E=\int\mathrm{d}\omega I(\omega)=\int\mathrm{d}\omega\int\mathrm{d}\Omega \frac{\mathrm{d}I(\omega)}{\mathrm{d}\Omega}. \tag{4}\]
For example, by the use of Eq. 13 in [13],
\[\frac{\mathrm{d}I(\omega)}{\mathrm{d}\Omega}=\frac{e^{2}\omega^{2}}{16\pi^{3}} \left|\sin\theta\int\limits_{0}^{\infty}\mathrm{d}ze^{i\phi(z)}\right|^{2}, \tag{5}\]
where \(\phi=\omega(t-z\cos\theta)\), we write
\[\frac{1}{\sin^{2}\theta}\frac{\mathrm{d}I(\omega)}{\mathrm{d}\Omega}=\frac{e^ {2}\omega^{2}}{16\pi^{3}}\left|\int\limits_{0}^{\infty}\mathrm{d}ze^{i\phi(z )}\right|^{2}. \tag{6}\]
This integral can be solved exactly as is (see the Appendix), but for simplicity, consider the spectral distribution at a particular angle \(\theta_{0}\) instead. Specialize to \(\theta\to\theta_{0}\) such that the phase is
\[\phi(z)=\frac{\kappa\omega}{4}z^{2}+\frac{2}{\kappa}\omega\ln z+\omega z( \zeta-\cos\theta_{0}). \tag{7}\]
It is now straightforward to integrate Eq. (6) when \(\cos\theta_{0}=\zeta\). One obtains the spectral distribution,
\[\frac{1}{\sin^{2}\theta_{0}}\left.\frac{\mathrm{d}I(\omega)}{\mathrm{d}\Omega }\right|_{\theta_{0}}=\frac{e^{2}}{8\pi^{2}}\frac{\omega/\kappa}{e^{2\pi\omega /\kappa}+1}, \tag{8}\]
where the spectral frequency content of the radiation has a Fermi-Dirac form. Before examining this form and its relationship to its particle spectrum \(N(\omega)\), let us first look at its quantum dual in the moving mirror model [16; 17; 18], in the spirit of previous moving mirror studies on the Fermi-Dirac result [1; 2; 3].
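For completeness, we sketch how Eq. (8) arises (our own filling-in of the intermediate steps). With \(\cos\theta_{0}=\zeta\) the linear term in the phase Eq. (7) cancels, and the standard integral \(\int_{0}^{\infty}z^{s-1}e^{iaz^{2}}\,\mathrm{d}z=\frac{1}{2}a^{-s/2}\Gamma(s/2)e^{i\pi s/4}\) with \(s=1+2i\omega/\kappa\) and \(a=\kappa\omega/4\), together with \(|\Gamma(\frac{1}{2}+ix)|^{2}=\pi/\cosh(\pi x)\), gives

\[\left|\int_{0}^{\infty}\mathrm{d}z\,e^{i\phi(z)}\right|^{2}=\frac{1}{4}\cdot\frac{4}{\kappa\omega}\cdot\frac{\pi}{\cosh(\pi\omega/\kappa)}\,e^{-\pi\omega/\kappa}=\frac{2\pi}{\kappa\omega}\,\frac{1}{e^{2\pi\omega/\kappa}+1},\]

and inserting this into Eq. (6) reproduces Eq. (8).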
Figure 1: The position \(z(t)\) trajectories, plotted to demonstrate various \(\zeta\) values of Eq. (1). The key takeaway is that the motions are asymptotically at rest and are limited to the half-plane \(z>0\).
Figure 3: Total radiated energy \(E/e^{2}\kappa\) as a function of parameter \(\zeta\). The energy blows up as \(\zeta\to-1\); as this corresponds to the speed of light for the trajectory’s maximum velocity, e.g. see the red worldline in Fig. 1 and Fig. 2.
Figure 2: The position \(z(t)\) trajectories, plotted in a Penrose diagram to demonstrate the \(\zeta=(-1,0,+1)\) values of Eq. (1). The key takeaway is that the motions are asymptotically at rest and are limited to the half-plane \(z>0\). The same color scheme of Fig. 1 is used; here, \(\kappa=2\) for illustration.
### Corresponding Bogolubov Coefficients
While the above result is classical radiation from the 3+1 dimensional electromagnetic field, we can investigate the quantum radiation from the 1+1 dimensional scalar field of the moving mirror model. The mapping recipe [13] between electron and mirror links the spectral distribution on the electron side and the Bogolubov coefficient squared on the mirror side, c.f. [14]:
\[\frac{\mathrm{d}I}{\mathrm{d}\Omega}(\omega,\cos\theta)=\frac{e^{2 }\omega^{2}}{4\pi}|\beta_{pq}|^{2}, \tag{9}\] \[p+q=\omega\,,\quad p-q=\omega\cos\theta.\]
Using the full spectral distribution from Eq. (10) one can use this recipe to obtain the corresponding Bogolubov coefficients. Let us however consider setting \(\theta=\theta_{0}\), \(\zeta=\cos\theta_{0}\). In this case, the electron's spectral distribution has a simple form Eq. (8). In terms of the scalar frequencies \(p,q\), the corresponding condition reads
\[\theta=\theta_{0}\,,\;\zeta=\cos\theta_{0}\Longleftrightarrow p=\omega\frac{ 1+\zeta}{2}\;,\;q=\omega\frac{1-\zeta}{2} \tag{10}\]
So, the scalar frequencies \(p\) and \(q\) are not independent, but related to each other through this condition. The recipe Eq. (9) gives particular beta Bogolubov coefficients:
\[|\beta_{pq}|^{2}=\frac{1-\zeta^{2}}{2\pi(p+q)\kappa}\frac{1}{e^{2\pi(p+q)/ \kappa}+1}\,,\quad\frac{p}{q}\equiv\frac{1+\zeta}{1-\zeta}\,. \tag{11}\]
This result demonstrates the Fermi-Dirac form for the \(\beta\)-Bogolubov coefficients of the quantum scalars.
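Explicitly (our intermediate step), inverting the recipe Eq. (9) and inserting Eq. (8) with \(\sin^{2}\theta_{0}=1-\zeta^{2}\) and \(\omega=p+q\),

\[|\beta_{pq}|^{2}=\frac{4\pi}{e^{2}\omega^{2}}\left.\frac{\mathrm{d}I}{\mathrm{d}\Omega}\right|_{\theta_{0}}=\frac{4\pi}{e^{2}\omega^{2}}\cdot\frac{e^{2}(1-\zeta^{2})}{8\pi^{2}}\,\frac{\omega/\kappa}{e^{2\pi\omega/\kappa}+1}=\frac{1-\zeta^{2}}{2\pi(p+q)\kappa}\,\frac{1}{e^{2\pi(p+q)/\kappa}+1}.\]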
By tuning the value of \(\zeta\) one can obtain a trajectory that gives the Fermi-Dirac form at any pre-assigned angle (or a bespoke frequency regime). For instance, the 'high-frequency' regime [19], \(p\sim 0\) (\(q\gg p\)), corresponds to \(\zeta\sim-1\); Eq. (11) becomes, to leading order,
\[|\beta_{pq}|^{2}=\frac{1+\zeta}{\pi q\kappa}\frac{1}{e^{2\pi q/ \kappa}+1}\,. \tag{12}\]
Using the duality to map back to the electron, the choice \(\zeta=\cos\theta_{0}=-1\) corresponds to a viewpoint behind the accelerating electron \(\theta_{0}\sim\pi\).
### Connection to Particle Count
The notion of discrete radiation energy \(\hbar\omega\) allows an introduction of a particle spectrum \(N(\omega)\). The connection between the spectral distribution of electromagnetic waves and the particle spectrum is [4]:
\[N(\omega)=\frac{1}{\omega}I(\omega)=\frac{1}{\omega}\int d\Omega\frac{ \mathrm{d}I}{\mathrm{d}\Omega}, \tag{13}\]
which must be consistent with the total energy emission as computed by the spectral distribution,
\[E=\int\mathrm{d}\omega\;\omega N(\omega)=\int\mathrm{d}\omega\int\mathrm{d} \Omega\frac{\mathrm{d}I}{\mathrm{d}\Omega}. \tag{14}\]
Therefore, the Fermi-Dirac distribution does not correspond to the particle spectrum \(N(\omega)\), or the energy spectrum \(I(\omega)\). It does not even correspond to the spectral distribution \(\mathrm{d}I/\mathrm{d}\Omega\) at an arbitrary observation angle \(\theta\); but only in a specific angular regime \(\theta\rightarrow\theta_{0}\): \(\left.\mathrm{d}I/\mathrm{d}\Omega\right|_{\theta_{0}}\) using the corresponding trajectory, Eq. (1). Nevertheless, the interesting question remains if it is possible to observe such radiation measured in such a specified angular regime \(\theta_{0}\). Does an observer see Fermionic electromagnetic radiation? Is the spectral content congruent with Fermi-Dirac statistics?
We stress again that the trajectory is easily generalized in a number of different ways to illustrate the robust Fermi-Dirac form of the spectral distribution at particular angles. For instance, a particular choice of \(\zeta\) in Eq. (1) results in a new trajectory form, capable of a new observation angle \(\theta_{0}\), which gives the Fermi-Dirac result. This means, depending on the particular bespoke trajectory of interest, the relevant zeta-angle (\(\zeta,\theta_{0}\)) could be in any desired direction, such as to the side (\(0,\pi/2\)), in front (\(+1,0\)), or behind (\(-1,\pi\)) the accelerating electron.
## III Discussion
The particular physics of this result depends in subtle ways on dimension (3+1 vs 1+1), source (electron vs. mirror), and regime (angle vs. frequency). Two other notable examples in the literature confirm this, namely, scalar charges and even-odd dimensional dependence. Let us consider these examples now.
For an example of a source other than an electric charge or mirror, Nikishov and Ritus found [10] that in the case of a scalar charge, the emitted scalar radiation along a particular trajectory will obey Fermi-Dirac statistics; in contrast to an electric charge following the same particular trajectory whose spin-1 radiation field obeys Bose-Einstein statistics.
As an example of dimensional dependence, scalar field radiation measured by a uniformly accelerated DeWitt detector obeys Bose-Einstein statistics when the dimension of the spacetime is even, but when the dimension is odd, one obtains Fermi-Dirac statistics [20].
Taken as a whole, it is clear the subtleties involved make it especially important to precisely define the context and the regime of applicability and explicitly examine the form of the computed observables.
We note that Hawking radiation [19] and its Schwarzschild moving mirror analog [21] utilize the high-frequency regime, lending support to the notion of thermality and Bose-Einstein distributed scalars at late times. However, in the Schwarzschild case (as opposed to extremal cases [22] or asymptotically inertial situations [23]), there is no finite total energy check corresponding to the Bogoliubov coefficients. Moreover, the total particle count is infinite. This is ultimately due in part to the horizon; and in the analog moving mirror situation, the fact that the proper acceleration is asymptotically infinite. The same goes for the eternal black hole analog of Carlitz-Willey [24].
In this work, we have shown that the scalars can possess Fermi-Dirac distributed \(\beta\)-Bogoliubov coefficients. If one considers \(\beta\)-Bogoliubov coefficients sufficient evidence of a thermal Bose-Einstein distribution for the Schwarzschild or Carlitz-Willey trajectories; then the result Eq. (12) is also sufficient evidence of a thermal Fermi-Dirac distribution for the trajectories Eq. (1). Moreover, we have an additional check of total finite energy and finite particle emission (see the Appendix for more detail).
This result demonstrates that, although the high-frequency approximation (or the low-frequency approximation, if one prefers) is frequently used in the literature, one should clearly understand whether this approximation represents the physical system under consideration. The result Eq. (12) does not reveal e.g. the particle spectrum, as the high-frequency region does not dominate the corresponding contribution from the beta Bogolubov coefficients. In other words, one should be careful when applying the high-frequency (or the low-frequency) approximation; in each case, this approximation should be well-motivated; otherwise, peculiar results may arise.
## IV Conclusion
We have shown that moving point charge radiation can possess a Fermi-Dirac spectral distribution form. A particular trajectory and corresponding angular regime demonstrate the result. By appealing to classical electrodynamics, we have analyzed the physical reason for the resulting unexpected spectral-statistics form.
The spectral-statistics (as explicitly derived from the spectral distribution in a particular angular regime for the radiation from a moving point charge) do not necessarily characterize the spin-statistics of the electromagnetic field in question. Instead, they depend crucially on the observation angle and the specific electron trajectory interaction with the radiation field.
## V Acknowledgements
Funding comes in part from the FY2021-SGP-1-STMM Faculty Development Competitive Research Grant No. 021220FD3951 at Nazarbayev University.
## Appendix A Partial contribution from the FD particles
The partial energy contribution when \(\zeta=\cos\theta_{0}\) can be found from the Fermi-Dirac form Eq. (8) of the electron,
\[E_{\text{electron}}^{fd}=\int_{0}^{\infty}\text{d}\omega\int_{0}^{2\pi}\text{ d}\varphi\left.\frac{\text{d}I(\omega)}{\text{d}\Omega}\right|_{\theta_{0}}= \frac{e^{2}\kappa(1-\zeta^{2})}{192\pi} \tag{14}\]
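As a quick consistency check (ours), using the standard integral \(\int_{0}^{\infty}x\,(e^{x}+1)^{-1}\,\text{d}x=\pi^{2}/12\),

\[E_{\text{electron}}^{fd}=2\pi\sin^{2}\theta_{0}\,\frac{e^{2}}{8\pi^{2}}\int_{0}^{\infty}\frac{\omega/\kappa}{e^{2\pi\omega/\kappa}+1}\,\text{d}\omega=\frac{e^{2}\sin^{2}\theta_{0}}{4\pi}\cdot\frac{\kappa}{48}=\frac{e^{2}\kappa(1-\zeta^{2})}{192\pi}.\]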
The corresponding contribution from the mirror can be derived from this result in the following way. We insert into Eq. (14) the integral \(1=\int_{-1}^{1}\text{d}(\cos\theta)\,\delta(\cos\theta-\zeta)\) and change the variables according to Eq. (9). The Jacobian is \(2/(p+q)\), and the resulting contribution is
\[\begin{split}& E_{\text{mirror}}^{fd}\\ &=\int_{0}^{\infty}\text{d}p\int_{0}^{\infty}\text{d}q\;\delta \left(\frac{p-q}{p+q}-\zeta\right)\,(p+q)|\beta_{pq}|^{2}\\ &=\frac{\kappa(1-\zeta^{2})}{192\pi}.\end{split} \tag{15}\]
This gives the partial contribution to the energy emitted to both sides of the corresponding mirror, counting only the FD particles.
Let us look at how the analogy extends to finite particle count. For the moving mirror, the total number of scalars emitted to the right side of the mirror is:
\[\begin{split}& N_{\text{mirror}}\\ &=\int_{0}^{\infty}\text{d}p\int_{0}^{\infty}\text{d}q\;\delta \left(\frac{p-q}{p+q}-\zeta\right)\,|\beta_{pq}|^{2}\\ &=\frac{(1-\zeta^{2})\ln 2}{8\pi^{2}}\,.\end{split} \tag{16}\]
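Explicitly (our intermediate step), performing the \(\delta\)-function integration with the Jacobian noted above and using \(\int_{0}^{\infty}(e^{x}+1)^{-1}\,\text{d}x=\ln 2\),

\[N_{\text{mirror}}=\frac{1-\zeta^{2}}{4\pi\kappa}\int_{0}^{\infty}\frac{\text{d}\omega}{e^{2\pi\omega/\kappa}+1}=\frac{1-\zeta^{2}}{4\pi\kappa}\cdot\frac{\kappa\ln 2}{2\pi}=\frac{(1-\zeta^{2})\ln 2}{8\pi^{2}}.\]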
For the case of the accelerating electron, we may integrate over all frequencies \(\omega\) on Eq. (13) which gives
\[\begin{split}\int_{0}^{\infty}\text{d}\omega\;N(\omega)& =\int_{0}^{\infty}\frac{\text{d}\omega}{\omega}\int_{0}^{2\pi} \text{d}\psi\left.\frac{\text{d}I(\omega)}{\text{d}\Omega}\right|_{\theta_{0} },\\ &=e^{2}\frac{(1-\zeta^{2})\ln 2}{8\pi^{2}},\end{split} \tag{17}\]
in agreement with Eq. (16). This highlights the dual consistency between the mirror and electron but also demonstrates the advantage of a finite energy and finite particle count with an exact analytic solution, Eq. (1).
## Appendix B Exact spectrum
To demonstrate the difference between a particular choice of observation angle and the spectrum \(I(\omega)\) which results from integration over the solid angle, we briefly look at an exact answer for the integral of Eq. (6), setting \(\zeta=0\) and leaving \(\theta\) unset. Integrating Eq. (6) gives
\[\begin{split}&\frac{\text{d}I(\omega)}{\text{d}\Omega}=\frac{e^{2 }\omega\sin^{2}\theta}{16\pi^{3}\kappa}e^{-\frac{\kappa\omega}{\kappa}}\times \\ &\times\left|\Gamma\left(\frac{1}{2}-\frac{i\omega}{\kappa}\right) A+2\cos\theta\sqrt{\frac{i\omega}{\kappa}}\Gamma\left(1-\frac{i\omega}{\kappa} \right)B\right|^{2}\,,\end{split} \tag{18}\]
where
\[\begin{split} A&=\,_{1}F_{1}\left(\frac{1}{2}-\frac{i \omega}{\kappa};\frac{1}{2};\frac{i\omega\cos^{2}\theta}{\kappa}\right)\,,\\ B&=\,_{1}F_{1}\left(1-\frac{i\omega}{\kappa}; \frac{3}{2};\frac{i\omega\cos^{2}\theta}{\kappa}\right)\,.\end{split} \tag{19}\]
This spectral distribution, Eq. (10), gives the total energy, Eq. (3) by numerical integration,
\[E =\int_{0}^{\infty}\mathrm{d}\omega\int_{0}^{2\pi}\mathrm{d}\phi\int_ {0}^{\pi}\mathrm{d}\theta\sin\theta\frac{\mathrm{d}I(\omega)}{\mathrm{d}\Omega} \tag{11}\] \[=\frac{e^{2}\kappa}{36}\left(\frac{1}{3\sqrt{3}}-\frac{1}{4\pi} \right)\,.\]
We cannot integrate Eq. (10) exactly over the solid angle to obtain an analytic form of \(I(\omega)\). However, it is clear that Eq. (10) will not result in a Fermi-Dirac form for the spectrum \(I(\omega)\), even though at \(\theta_{0}=\pi/2\), Eq. (10) gives
\[\left.\frac{\mathrm{d}I(\omega)}{\mathrm{d}\Omega}\right|_{\theta_{0}}=\frac {e^{2}}{8\pi^{2}}\frac{\omega/\kappa}{e^{2\pi\omega/\kappa}+1}, \tag{12}\]
which is the result Eq. (8). For this reason, one cannot say the particle count \(N(\omega)\) follows a Fermi-Dirac distribution, which is ultimately consistent with the horizonless, globally defined motion, Eq. (1), evolving to an asymptotic stop.
|